    AI Governance & Policy

    How to Create an AI Acceptable Use Policy for Staff and Volunteers

    Protect your organization's data, mitigate AI risks, and enable responsible innovation with a clear, practical acceptable use policy that every team member can understand and follow.

    Published: January 18, 2026 · 16 min read

    While 82% of nonprofits now use AI tools in some capacity, less than 10% have formal policies governing that use. This gap between adoption and governance creates significant risks—from inadvertent data breaches when staff upload sensitive client information to free AI tools, to bias amplification when automated systems make decisions without human oversight. Yet many organizations delay creating policies, uncertain where to start or worried about stifling innovation with overly restrictive rules.

    An AI acceptable use policy serves as your organization's guardrails for responsible AI adoption. It clarifies what AI tools staff and volunteers can use, what data they can share, what activities require approval, and what practices are prohibited. A well-designed policy protects beneficiary privacy, ensures compliance with data protection regulations, prevents reputational damage from AI misuse, and maintains donor trust. Importantly, it also enables innovation by providing clear guidance rather than blanket prohibitions—helping staff leverage AI's benefits while avoiding its pitfalls.

    The challenge lies in creating a policy that's comprehensive enough to address genuine risks yet practical enough that busy staff will actually follow it. Overly technical policies gather dust in shared drives, ignored by team members who don't understand their relevance. Overly restrictive policies get circumvented when staff use AI tools without reporting them because official approval processes are too burdensome. The most effective policies strike a balance—establishing clear principles, providing practical guidelines, and creating processes that support responsible use rather than block legitimate needs.

    This article guides you through creating an AI acceptable use policy tailored to your nonprofit's context, size, and risk tolerance. We'll cover the essential components every policy should include, how to adapt frameworks to your organization, strategies for implementation and training, methods for monitoring compliance without creating surveillance culture, and approaches to keeping policies current as AI capabilities evolve. Whether you're drafting your first AI policy or strengthening an existing one, you'll find practical guidance for protecting your organization while enabling teams to benefit from AI tools.

    By the end, you'll understand not just what to include in an AI acceptable use policy, but how to make it a living document that shapes organizational culture around responsible, ethical, and effective AI use.

    Why Your Nonprofit Needs an AI Policy Now

    The gap between AI adoption and governance in nonprofits isn't just a compliance issue—it represents real, immediate risks to the people you serve and the mission you advance. Every day without clear guidelines, well-intentioned staff members may inadvertently expose sensitive information, amplify biases in program delivery, or create legal liabilities that could threaten organizational sustainability.

    Consider these scenarios happening in nonprofits right now: A case manager uploads client intake forms containing personally identifiable health information to ChatGPT to help draft a service plan summary, unaware that this data now exists in OpenAI's systems and may be used for model training. A fundraising coordinator uses a free AI tool to analyze donor data, not realizing the service's terms allow third-party data sharing. A program director asks an AI to help allocate scarce resources among program participants, unknowingly perpetuating historical biases present in the training data. Each action seems harmless in isolation, but collectively they expose organizations to data breaches, regulatory violations, discrimination claims, and erosion of stakeholder trust.

    The urgency increases when you consider that donors care deeply about AI use. Research shows that 31% of donors report they would give less to organizations using AI in certain ways, particularly when AI feels like it's replacing human connection or when data privacy concerns aren't adequately addressed. Without clear policies and transparent communication about responsible AI use, nonprofits risk losing the trust that fundraising depends on. As we've discussed in our article on communicating AI use to donors, proactive transparency supported by strong policies actually builds confidence rather than raising concerns.

    Regulatory pressure is also mounting. While comprehensive AI regulations are still developing, existing data protection laws like HIPAA, FERPA, and GDPR already govern how nonprofits must handle information, and those requirements don't disappear when AI tools enter the picture. In fact, data regulators increasingly scrutinize AI use because these tools create new pathways for data exposure. An AI acceptable use policy demonstrates due diligence, showing that your organization takes data protection seriously and has implemented appropriate safeguards.

    Beyond risk mitigation, policies serve an enabling function. Clear guidelines give staff confidence to experiment with AI tools within defined boundaries, knowing what's allowed rather than avoiding AI altogether out of uncertainty. Policies also streamline decision-making—instead of ad hoc judgments about each AI tool request, you have consistent criteria for evaluation. This is particularly valuable for organizations building AI champions who need a framework to guide others through responsible adoption.

    Critical Risks Without an AI Policy

    • Data breaches: Confidential beneficiary, donor, or organizational information exposed through AI tools with inadequate security
    • Regulatory violations: Non-compliance with HIPAA, FERPA, GDPR, or other data protection requirements resulting in fines and legal action
    • Bias and discrimination: AI systems perpetuating or amplifying inequities in program access, resource allocation, or service delivery
    • Donor trust erosion: Loss of funding when donors discover AI use without clear policies or ethical guardrails
    • Reputational damage: Public incidents involving AI misuse that undermine years of community trust-building
    • Mission drift: Gradual erosion of human-centered values as efficiency metrics replace relationship-based approaches

    Essential Components of an AI Acceptable Use Policy

    An effective AI acceptable use policy consists of several interconnected components, each addressing different aspects of responsible AI adoption. While you should adapt these elements to your organization's specific context, the following framework provides a comprehensive foundation that works for nonprofits of various sizes and missions.

    1. Purpose, Scope, and Applicability

    Define who the policy covers and what it governs

    Begin by clearly stating why the policy exists and who must follow it. Specify that the policy applies to all staff, board members, volunteers, contractors, interns, and any other individuals with access to organizational systems or data. Define what constitutes "AI" for policy purposes—include both generative AI tools like ChatGPT and Claude, as well as AI-powered features embedded in other software like email platforms, CRM systems, or productivity tools.

    Clarify that the policy governs both official organizational AI tools and personal AI tools used for work purposes. Many staff use free consumer AI services for work-related tasks, and your policy must address this reality. Make clear that using personal AI accounts doesn't exempt someone from policy requirements—if they're handling organizational data or performing work tasks, the policy applies regardless of what tools they're using.

    State that the policy supplements rather than replaces existing organizational policies on data privacy, information security, and acceptable use of technology. This helps staff understand how AI policies fit within broader governance frameworks and ensures consistency across policy documents.

    2. Core Principles and Values

    Establish ethical foundations for AI use

    Articulate the values that guide your organization's approach to AI. These principles provide a decision-making framework when staff encounter situations not explicitly covered in policy details. Common principles for nonprofits include:

    • Human-centered AI: AI augments rather than replaces human judgment, particularly in decisions affecting beneficiaries
    • Privacy protection: Beneficiary and donor data receives the highest level of protection, with strict limitations on AI processing
    • Equity and fairness: AI systems are monitored for bias and adjusted to prevent discrimination
    • Transparency: Stakeholders know when and how AI is used in ways that affect them
    • Accountability: Clear responsibility for AI decisions and designated oversight for AI implementations
    • Mission alignment: AI use advances organizational mission rather than being adopted for technology's sake

    These principles should connect explicitly to your organization's mission and values, demonstrating that AI governance stems from—rather than exists separate from—your core commitments to the communities you serve.

    3. Data Protection and Privacy Requirements

    Safeguard sensitive information in AI workflows

    This section forms the heart of most nonprofit AI policies because data protection represents the highest risk area. Establish clear rules about what information can never be entered into AI systems, what requires special handling, and what data is safe for AI processing.

    Prohibited Data

    Explicitly prohibit uploading or entering these categories into any AI tool, including "secure" or approved platforms, unless specifically configured for this purpose with appropriate safeguards:

    • Personally identifiable information (PII) including names, addresses, phone numbers, email addresses, or ID numbers
    • Protected health information (PHI) governed by HIPAA regulations
    • Student education records protected by FERPA
    • Financial information including credit card numbers, bank accounts, or donor payment details
    • Confidential case management notes or beneficiary intake information
    • Proprietary organizational information, strategic plans, or confidential communications

    Data Anonymization Requirements

    When staff need AI assistance with tasks involving sensitive data, require thorough anonymization first. Provide clear examples of acceptable anonymization—removing names, replacing identifiers with codes, aggregating data to prevent individual identification. Make clear that simply removing names isn't sufficient if other details make individuals identifiable.
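    If your team wants a concrete starting point, the sketch below shows what a minimal pre-processing step might look like before any text is considered for an AI tool: known names are swapped for codes and a few obvious direct identifiers are redacted. It is purely illustrative; the patterns, the code list, and the scrub_for_ai helper are assumptions rather than a vetted de-identification tool, and simple pattern matching will miss many identifiers (names in free text, small-group details, unusual circumstances), so human review of anything sent to an AI tool is still required.

```python
import re

# Illustrative only: crude patterns for a few direct identifiers.
# Real de-identification needs far more than regexes, plus human review.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b")
GOV_ID = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def scrub_for_ai(text: str, name_codes: dict[str, str]) -> str:
    """Replace known names with codes and redact obvious direct identifiers
    before text is even considered for an AI tool."""
    for name, code in name_codes.items():
        text = text.replace(name, code)          # e.g. "Maria Lopez" -> "CLIENT-017"
    text = GOV_ID.sub("[ID REMOVED]", text)
    text = EMAIL.sub("[EMAIL REMOVED]", text)
    text = PHONE.sub("[PHONE REMOVED]", text)
    return text

note = "Maria Lopez (maria.lopez@example.org, 555-867-5309) missed her appointment."
print(scrub_for_ai(note, {"Maria Lopez": "CLIENT-017"}))
# -> CLIENT-017 ([EMAIL REMOVED], [PHONE REMOVED]) missed her appointment.
```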

    Tool Security Evaluation

    Require staff to evaluate AI tool security before use, considering: Does the vendor have clear privacy policies? Are data retention periods disclosed? Does the tool use data for model training? Is encryption used for data transmission and storage? Does the vendor comply with relevant regulations? Free, consumer-grade AI tools rarely meet nonprofit security requirements for handling sensitive data.
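    To make that evaluation consistent rather than ad hoc, some organizations turn these questions into a short written checklist. The sketch below is one hedged illustration of that idea; the question wording, the made-up tool name, and the rule that any "no" or unknown answer triggers escalation are all assumptions to adapt, not prescribed criteria.

```python
# Hypothetical vendor screen; the questions and the escalation rule are illustrative.
SECURITY_QUESTIONS = [
    "Does the vendor publish a clear privacy policy?",
    "Are data retention periods disclosed?",
    "Is customer data excluded from model training, or can training be opted out of?",
    "Is data encrypted in transit and at rest?",
    "Does the vendor attest compliance with the regulations that apply to us (e.g., HIPAA, FERPA, GDPR)?",
]

def screen_tool(tool_name: str, answers: dict[str, bool]) -> str:
    """Any missing or 'no' answer means the tool is held for a formal security review."""
    open_questions = [q for q in SECURITY_QUESTIONS if not answers.get(q)]
    if open_questions:
        return f"{tool_name}: HOLD for IT/security review ({len(open_questions)} unresolved questions)"
    return f"{tool_name}: eligible for low-risk, non-sensitive use pending formal approval"

# "FreeSummarizer" is a made-up tool name for illustration.
print(screen_tool("FreeSummarizer", {SECURITY_QUESTIONS[0]: True}))
```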

    4. Approved and Prohibited Uses

    Clarify acceptable and unacceptable AI applications

    Rather than attempting to list every possible AI use case, organize this section around categories of acceptable and prohibited activities. This approach creates flexible guidance that remains relevant as AI capabilities evolve.

    Generally Acceptable Uses

    • Drafting and editing communications (with human review before sending)
    • Summarizing documents and research materials
    • Brainstorming ideas and creative content (with human curation)
    • Analyzing anonymized, aggregated data for insights
    • Generating first drafts of internal documents and presentations
    • Learning about topics through conversational AI interfaces

    Prohibited or Restricted Uses

    • Making automated decisions about program eligibility, resource allocation, or service access without human review
    • Uploading beneficiary or donor personally identifiable information to AI tools
    • Sending external communications (emails, social media, donor appeals) without human review
    • Using AI-generated content in grant applications without disclosure and verification
    • Replacing human interaction in direct service delivery, counseling, or case management
    • Circumventing organizational approval processes by using personal AI accounts for restricted activities

    5. Human Oversight and Review Requirements

    Ensure human judgment in critical decisions

    Establish clear expectations for when and how humans must review AI outputs before acting on them. The level of oversight should scale with potential impact—higher-stakes decisions require more rigorous review.

    For donor communications, require that AI-generated content be reviewed for accuracy, tone, and appropriateness before sending. For program-related decisions, mandate that AI recommendations be evaluated by qualified staff who understand context and can identify inappropriate suggestions. For public-facing content, implement editorial review processes that assess factual accuracy, brand consistency, and mission alignment.

    Make explicit that AI tools are assistants, not decision-makers. Staff remain accountable for outcomes even when AI tools supported the work. This principle prevents the "AI made me do it" defense while encouraging thoughtful rather than blind acceptance of AI outputs. Organizations working on building AI champions should emphasize that champions model responsible oversight practices for others.

    6. Approval Processes for New AI Tools

    Balance innovation with appropriate vetting

    Create a clear process for staff to request approval for new AI tools or applications not currently in use. The process should be straightforward enough that staff actually use it rather than circumventing it, yet thorough enough to assess risks appropriately.

    Define who reviews requests—this might be IT staff, a designated AI committee, program leadership, or a combination depending on the use case. Specify what information requesters must provide: tool name and vendor, intended use case, what data will be processed, cost and licensing details, vendor privacy and security practices, and whether organizational IT can support the tool.

    Establish reasonable response timelines so requests don't languish indefinitely. If your approval process routinely takes weeks, staff will work around it. Aim for decisions within a few business days for simple requests, longer for complex enterprise implementations.

    Consider tiered approval processes based on risk level. Low-risk tools (productivity aids with no sensitive data) might need only manager approval. Medium-risk tools require IT security review. High-risk tools affecting beneficiaries or handling protected data need executive and possibly board approval. This prevents bottlenecks while ensuring appropriate scrutiny for consequential decisions.
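    As an illustration of how the request information and these risk tiers might fit together, here is a small sketch. The field names, dollar threshold, tier labels, and routing rules are assumptions chosen for readability; substitute the criteria your own reviewers actually use.

```python
from dataclasses import dataclass

@dataclass
class AIToolRequest:
    # Fields mirror the information requesters provide (names are illustrative).
    tool_name: str
    vendor: str
    intended_use: str
    data_processed: str           # e.g. "none", "anonymized program data", "donor records"
    handles_protected_data: bool  # PII, PHI, FERPA records, financial details
    affects_beneficiaries: bool   # eligibility, resource allocation, service access
    annual_cost: float

def route_request(req: AIToolRequest) -> str:
    """Map a request to an approval tier; thresholds and labels are illustrative."""
    if req.handles_protected_data or req.affects_beneficiaries:
        return "High risk: executive approval, board notification if beneficiary-facing"
    if req.annual_cost > 1000 or "donor" in req.data_processed.lower():
        return "Medium risk: IT security review"
    return "Low risk: manager approval"

request = AIToolRequest("MeetingNotesBot", "ExampleVendor", "summarize internal meetings",
                        "none", False, False, 240.0)
print(route_request(request))   # -> Low risk: manager approval
```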

    Implementation and Training: Making Policy Real

    Even the most comprehensive policy document fails if staff don't understand, remember, or apply it in daily work. Implementation requires deliberate change management, accessible training, and ongoing reinforcement. The goal isn't perfect compliance from day one—it's building a culture where responsible AI use becomes habitual rather than burdensome.

    Start implementation with a clear launch communication from leadership that explains why the policy matters, what's changing, and how it protects both the organization and the people you serve. Frame the policy as enabling rather than restricting—it provides clarity and confidence for using AI tools responsibly. Acknowledge that AI is powerful and useful, and the policy helps everyone harness those benefits while avoiding pitfalls. This positive framing increases buy-in compared to policies presented primarily as lists of prohibitions.

    Training Approaches for Different Audiences

    Tailor training to different roles and technical comfort levels within your organization. Not everyone needs the same depth of knowledge, and one-size-fits-all training tends to bore some participants while overwhelming others.

    All-Staff Training (60-90 minutes)

    Provide baseline training for everyone covering: why the policy exists, what data can never go into AI tools, how to recognize and anonymize sensitive information, approved AI tools and how to request new ones, when human review is required, who to ask when unsure, and consequences of policy violations. Use real scenarios relevant to your organization—not abstract examples but situations staff actually encounter.

    Role-Specific Training

    Provide additional training for roles with elevated AI-related responsibilities. Program staff handling beneficiary data need detailed guidance on anonymization and when to seek IT approval. Managers need to understand how to evaluate team members' AI use and model responsible practices. Communications staff need training on fact-checking AI-generated content and disclosure requirements.

    Ongoing Micro-Learning

    Supplement initial training with brief, regular reminders and updates. Share quick tips in team meetings, include policy reminders in newsletters, post guidance in Slack or Teams channels, and create decision trees or flowcharts staff can reference when unsure. Repetition across multiple touchpoints builds retention better than a single comprehensive training session. Consider how organizations building knowledge management systems can integrate policy guidance into everyday workflows.

    Making Policy Accessible and Memorable

    Policy documents should exist in multiple formats for different use cases. Create a comprehensive policy document for reference, but also develop practical tools that staff use in the moment:

    • One-page quick reference guide: Printable summary of key do's and don'ts that staff can keep at their desk or bookmark
    • Decision flowcharts: Visual guides answering "Can I use AI for this task?" that walk through key decision points (see the sketch after this list)
    • Scenario-based FAQ: Answers to common questions in relatable language, using real situations staff encounter
    • Before-AI checklist: Quick list of questions to ask before using AI for a task, easily referenced when starting new work
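    The decision flowchart mentioned in the list above reduces to a handful of questions, which can also be expressed as simple logic for anyone who prefers text to diagrams. The question wording and the three outcomes below are illustrative assumptions, not your actual policy.

```python
def can_i_use_ai(involves_identifiable_data: bool,
                 tool_is_approved: bool,
                 output_is_external_or_high_stakes: bool) -> str:
    """A simplified 'Can I use AI for this task?' walk-through (illustrative)."""
    if involves_identifiable_data:
        return "Stop: anonymize the data first, or check with your supervisor or IT."
    if not tool_is_approved:
        return "Pause: request approval for the tool before using it for this task."
    if output_is_external_or_high_stakes:
        return "Proceed, but plan for documented human review before anything goes out."
    return "Proceed; you remain responsible for checking the output."

# Example: drafting an internal memo with an approved tool and no sensitive data.
print(can_i_use_ai(False, True, False))
```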

    Building a Culture of Responsible Use

    Culture change requires consistent messaging and visible leadership commitment. When executives and board members model policy compliance—acknowledging when they're unsure about AI use and asking for guidance publicly—it signals that following the policy matters more than appearing knowledgeable. When managers praise staff for raising AI-related concerns or questions, it reinforces that caution is valued over expedience.

    Create safe channels for staff to report policy violations or concerns without fear of retaliation. Position your AI governance framework as learning-oriented rather than punitive. Most policy violations stem from confusion or ignorance rather than malicious intent. Responding with education rather than punishment encourages transparency and learning.

    Celebrate responsible AI champions who demonstrate both innovation and adherence to guidelines. Recognize staff who find creative AI applications within policy boundaries, who help colleagues navigate policy questions, or who identify potential issues before they become problems. This positive reinforcement shapes culture more effectively than focusing solely on compliance failures.

    Monitoring Compliance Without Creating Surveillance Culture

    Effective policy enforcement walks a delicate line—you need visibility into how AI tools are being used to identify risks and violations, yet overly intrusive monitoring erodes trust and makes staff feel surveilled rather than supported. The goal is accountability without surveillance, oversight without micromanagement.

    Start with the assumption that most staff want to follow policies and any violations result from confusion, competing priorities, or lack of awareness rather than malicious intent. This assumption should shape your monitoring approach—focus on systemic patterns that indicate policy gaps or training needs rather than gotcha moments to discipline individuals. When monitoring reveals policy violations, investigate whether the policy itself is unclear, whether approval processes created unreasonable friction, or whether staff lacked alternatives to accomplish legitimate work needs.

    Monitoring Approaches for Nonprofits

    Technology-Based Monitoring

    For approved AI tools, particularly enterprise platforms, review usage logs periodically to understand adoption patterns and potential misuse. Monitor what data is being processed, which features are most used, and whether usage aligns with intended purposes. Many platforms provide analytics dashboards showing aggregate usage without exposing individual activities.

    For broader technology use, work with IT to identify when staff access unapproved AI platforms from organizational devices or networks. This doesn't require examining every website visit—automated tools can flag access to known AI platforms for review. Focus monitoring on sensitive roles or departments handling protected data rather than blanket surveillance of all staff.
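    As a hedged example of what that flagging could look like, the sketch below scans a hypothetical proxy or firewall log export for visits to a hand-maintained list of known AI domains and reports aggregate counts by department rather than per person. The CSV columns, the domain list, and the approved-platform assumption are all illustrative; many organizations would rely on their existing web-filtering tools instead.

```python
import csv
from collections import Counter

# Hand-maintained, illustrative list; not exhaustive.
KNOWN_AI_DOMAINS = {"chatgpt.com", "claude.ai", "gemini.google.com", "copilot.microsoft.com"}
APPROVED_AI_DOMAINS = {"copilot.microsoft.com"}   # assumption: one approved platform

def summarize_ai_access(log_path: str) -> Counter:
    """Count visits to unapproved AI platforms by department (aggregate, not per person)."""
    counts: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):   # assumes timestamp, department, domain columns
            domain = row["domain"].strip().lower()
            if domain in KNOWN_AI_DOMAINS and domain not in APPROVED_AI_DOMAINS:
                counts[(row["department"], domain)] += 1
    return counts

# for (dept, domain), visits in summarize_ai_access("proxy_log.csv").most_common():
#     print(f"{dept}: {visits} visits to {domain}")
```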

    Process-Based Monitoring

    Require documentation when AI tools assist with deliverables sent to external stakeholders. For grant applications, external communications, or program materials, ask staff to note when AI was used and describe how outputs were reviewed. This creates transparency and accountability without constant surveillance.
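    One low-friction way to capture that documentation is a short, structured disclosure note attached to each deliverable. The fields below are an assumption about what such a note might contain; adapt them to whatever your grant or communications workflow already tracks.

```python
from dataclasses import dataclass, asdict
from datetime import date

@dataclass
class AIUseDisclosure:
    # Illustrative fields for documenting AI assistance on an external deliverable.
    deliverable: str
    ai_tool: str
    how_ai_was_used: str
    reviewed_by: str
    review_date: date
    review_notes: str

record = AIUseDisclosure(
    deliverable="Quarterly foundation grant report",
    ai_tool="Approved enterprise assistant",
    how_ai_was_used="Produced a first draft of the program summary section",
    reviewed_by="Program Director",
    review_date=date(2026, 2, 3),
    review_notes="Verified every statistic against internal records; rewrote two paragraphs.",
)
print(asdict(record))
```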

    Conduct periodic spot-checks of AI-assisted work products to ensure outputs meet quality standards and received appropriate human review. Frame these as quality assurance rather than compliance audits—the goal is continuous improvement, not catching mistakes.

    Cultural Monitoring

    Regularly survey or interview staff about their AI experiences—what tools they're using, what challenges they face, where policy guidance is unclear, and what support they need. This provides valuable information about policy effectiveness and implementation gaps while demonstrating that leadership cares about enabling responsible use, not just preventing violations. Anonymous feedback channels help surface concerns staff might not raise openly.

    Responding to Policy Violations

    When policy violations occur, respond proportionally based on intent, severity, and whether protected data was compromised. Most violations warrant education and coaching rather than discipline, particularly for first-time or low-stakes infractions.

    For minor violations—using unapproved tools for low-risk tasks, failing to document AI use when required, or exceeding data processing guidelines without data exposure—provide clear feedback, explain why the policy exists, and ensure understanding of correct procedures. Document the conversation but focus on learning rather than punishment.

    For moderate violations—uploading anonymized but potentially identifiable data, using AI for decisions requiring human review, or repeatedly violating policy after previous warnings—implement more formal consequences including documented warnings, additional training requirements, temporary restrictions on AI tool access, or escalation to supervisors.

    For serious violations—knowingly uploading protected health information, personally identifiable data, or confidential organizational information to unapproved tools; deliberately circumventing security controls; or causing actual harm through policy violations—apply progressive discipline consistent with your personnel policies, which may include suspension or termination depending on severity. Simultaneously assess whether the violation created reportable data breaches requiring notification to affected individuals or regulators.

    Keeping Your Policy Current as AI Evolves

    AI capabilities evolve rapidly—tools available today didn't exist six months ago, and capabilities emerging in the next year will make current tools seem primitive. Your policy must adapt to remain relevant without requiring constant rewrites that confuse staff with frequent changes. The key is building flexibility into policy design rather than attempting comprehensive coverage of every tool and use case.

    Structure policies around principles and categories rather than specific tools or technologies. Instead of listing approved AI platforms by name, define criteria for evaluating any AI tool and establish approval processes. Rather than enumerating prohibited use cases exhaustively, articulate principles about data protection, human oversight, and ethical use that apply regardless of which specific AI capabilities emerge.

    Establish a regular review cycle for policy updates—annually at minimum, more frequently if AI adoption is rapidly expanding in your organization. Schedule reviews independent of incidents or problems, making updates proactive rather than reactive. During reviews, consider: What new AI capabilities have emerged since the last update? What questions or confusion have staff expressed? What violations or close calls have occurred? What regulatory changes affect AI governance? What have peer organizations learned about AI policy effectiveness?

    Designate clear ownership for policy maintenance. Whether this responsibility falls to IT, compliance, legal, or a cross-functional AI committee, someone needs accountability for monitoring AI developments, fielding staff questions, and initiating policy revisions when needed. Without clear ownership, policies become stale as everyone assumes someone else is handling updates.

    Communicate policy changes clearly when updates occur. Don't just republish revised documents—actively inform staff about what changed, why it changed, and how it affects their work. Highlight new permissions granted or additional restrictions implemented. When expanding acceptable use, emphasize enabling language. When tightening restrictions, explain the rationale based on risks identified.

    Staying Informed About AI Developments

    • Follow nonprofit AI resources: Subscribe to updates from organizations like NTEN, TechSoup, and sector-specific associations tracking AI trends
    • Monitor regulatory developments: Track proposed AI regulations, data protection law updates, and guidance from sector regulators
    • Learn from peer organizations: Connect with other nonprofits navigating AI governance through professional networks and conferences
    • Engage with AI vendors: Maintain dialogues with your approved AI tool providers about security updates, policy changes, and new capabilities
    • Review policy effectiveness: Regularly assess whether your policy enables appropriate AI use while preventing genuine risks

    Conclusion: Policy as Foundation for Responsible Innovation

    An AI acceptable use policy represents far more than a compliance checkbox or risk mitigation document. It establishes the foundation for your organization to harness AI's transformative potential while staying true to the values and commitments that define your mission. Good policies don't prevent innovation—they channel it in directions aligned with organizational values, stakeholder trust, and legal requirements.

    The most effective AI policies share common characteristics: they're clear enough that busy staff understand expectations without legal interpretation, practical enough that following them doesn't require heroic effort, flexible enough to remain relevant as technology evolves, and values-based enough to guide decisions in ambiguous situations not explicitly covered by policy details. They enable rather than solely restrict, providing confidence to experiment within defined boundaries rather than creating paralyzing uncertainty.

    Remember that creating a policy document is the beginning, not the end, of AI governance. Policy effectiveness depends on consistent implementation through training, accessible guidance, cultural reinforcement, and ongoing adaptation. The goal isn't perfect compliance from day one—it's building organizational muscle memory around responsible AI use so that appropriate practices become habitual rather than requiring constant conscious thought.

    Start where you are rather than waiting for the perfect policy. If you're drafting your first AI governance document, begin with core principles and essential data protection requirements. You can refine and expand over time based on your organization's experience and evolving needs. Use existing templates and frameworks from organizations like Community IT, TechSoup, and NTEN as starting points, adapting them to your specific context rather than creating policies from scratch.

    Connect your AI policy to broader organizational governance. It should complement existing policies on data privacy, information security, and technology acceptable use rather than existing as an isolated document. Link AI governance to your strategic planning processes so that AI adoption aligns with organizational priorities. Ensure board members understand the policy and their oversight responsibilities for AI-related risks and opportunities.

    Ultimately, an AI acceptable use policy protects what matters most—the people you serve, the trust stakeholders place in your organization, and the mission that drives your work. By establishing clear guidelines for responsible AI use, you create space for innovation that advances rather than compromises those core commitments. In an era where AI adoption is accelerating across all sectors, having strong governance frameworks distinguishes thoughtful organizations from those racing ahead without adequate safeguards.

    Ready to Develop Your AI Governance Framework?

    Get expert guidance on creating AI policies that protect your organization while enabling responsible innovation tailored to your nonprofit's mission, size, and risk profile.