How to Create an AI Policy for Your Nonprofit in One Day
Your staff is already using AI, whether or not you have a policy in place. This step-by-step guide walks you through drafting a practical, enforceable AI policy in a single working day, covering acceptable use, data privacy, governance, and the staff guidelines your team actually needs.

Most nonprofit leaders know they need an AI policy. The challenge is not a lack of awareness but a lack of time. Between board meetings, grant deadlines, and day-to-day operations, drafting a comprehensive governance document keeps sliding to the bottom of the priority list. Meanwhile, staff members are experimenting with ChatGPT, Claude, Gemini, and other tools on their own, creating an invisible risk landscape that grows wider every week.
The good news is that you do not need a months-long committee process to create an effective AI policy. A focused, single-day effort can produce a document that protects your organization, empowers your staff, and satisfies funders who are increasingly asking about AI governance. The key is understanding what your policy must include, what can wait for later revisions, and how to structure the day so you walk out with a usable draft.
This guide breaks the process into a morning session and an afternoon session, with clear deliverables at each stage. Whether you are an executive director working solo or a small team collaborating in a conference room, you will have a board-ready AI policy by end of day. The approach draws on frameworks from NTEN, the National Council of Nonprofits, and organizations that have already navigated this process successfully.
If you are still building your organization's overall AI strategy, this policy document becomes a foundational piece of that larger effort. And if your team is already deep into AI adoption without formal guardrails, this guide will help you catch up without slowing down the progress your people have already made.
Why Your Nonprofit Needs an AI Policy Now
The urgency around AI policies is not hypothetical. A significant portion of nonprofits report that staff are using AI tools without any formal organizational guidance. This creates real risks, from accidentally sharing donor data with a public AI model to generating content that misrepresents the organization's position on sensitive issues. Without clear boundaries, well-intentioned staff can create compliance, privacy, and reputational problems that are expensive to fix after the fact.
Regulatory pressure is also mounting. Colorado's AI Act takes effect in 2026, and California has its own AI transparency requirements; both create legal obligations for nonprofits operating in those jurisdictions. The EU AI Act adds another layer for organizations with international programs. Having a documented AI policy is becoming a baseline expectation rather than a nice-to-have.
Funders are paying attention too. Many foundations now include questions about technology governance in their grant applications and due diligence processes. Organizations that can point to a clear, thoughtful AI policy demonstrate the kind of operational maturity that builds funder confidence. Conversely, the absence of any AI governance framework raises questions about organizational readiness, especially for grants that involve data collection or technology implementation.
Morning Session: Research and Framework (8:00 AM to 12:00 PM)
The morning is about gathering information and establishing the structural framework of your policy. You will audit current AI usage, define your organization's risk categories, and outline the major sections your policy needs to cover. This groundwork prevents the common mistake of writing a policy that sounds good on paper but does not reflect how your team actually works.
8:00 - 9:30 AM: Audit Current AI Usage
Understand what is already happening before you write rules about it
Start by surveying how your staff currently uses AI. Send a quick poll or, if your team is small enough, have brief conversations. You need to know which tools people are using, what tasks they are applying AI to, and what data they are sharing with these platforms. Many leaders are surprised to discover that AI use is far more widespread than they assumed, from program staff drafting client communications to finance teams using AI for spreadsheet analysis.
- List every AI tool staff members currently use (including free versions of ChatGPT, Gemini, and Copilot)
- Identify what types of data are being entered into these tools (donor info, client records, financial data, general content)
- Note which departments or roles use AI most frequently
- Document any AI tools embedded in software you already pay for (CRM, email platforms, design tools)
9:30 - 10:30 AM: Define Your Risk Categories
Classify data and use cases by sensitivity level
Not all AI use carries the same risk. A staff member using AI to brainstorm event names is fundamentally different from someone inputting client case notes. Create a simple three-tier classification system that your team can apply quickly when deciding whether a particular AI use is appropriate. This classification becomes the backbone of your acceptable use guidelines and saves you from writing overly restrictive rules that kill adoption or overly permissive ones that expose your organization. A short sketch of how the lookup can work in practice follows the list below.
- Green (Low Risk): General content drafting, brainstorming, research summaries, public information processing
- Yellow (Moderate Risk): Donor communications, aggregate program data, internal reports, financial summaries without individual identifiers
- Red (High Risk): Client personally identifiable information, protected health data, minor records, confidential donor details, legal documents
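To make the tiers usable day to day, some teams turn them into a quick lookup that staff (or an intranet form) can run before reaching for an AI tool. Here is a minimal sketch in Python; the keyword lists are hypothetical placeholders you would replace with terms from your own audit, and the output is a prompt for judgment, not a verdict.

```python
# Minimal three-tier lookup. The keyword sets are hypothetical placeholders;
# replace them with terms drawn from your own audit, and treat the result
# as a starting point for judgment, not a final ruling.
RED_KEYWORDS = {"case notes", "diagnosis", "date of birth", "ssn", "minor"}
YELLOW_KEYWORDS = {"donor", "gift", "budget", "internal report"}

def classify_use(description: str) -> str:
    """Map a plain-language description of a task to a risk tier."""
    text = description.lower()
    if any(term in text for term in RED_KEYWORDS):
        return "Red: get explicit approval before involving any AI tool"
    if any(term in text for term in YELLOW_KEYWORDS):
        return "Yellow: supervisor awareness plus human review of outputs"
    return "Green: approved tools may be used freely"

print(classify_use("Brainstorm names for the spring fundraising event"))
# Green: approved tools may be used freely
print(classify_use("Summarize client case notes for a program report"))
# Red: get explicit approval before involving any AI tool
```

Even if no one ever runs the script, writing the tiers down this concretely is a useful test: if a rule cannot be expressed as a keyword or example, staff probably cannot apply it quickly either.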
10:30 AM - 12:00 PM: Outline Your Policy Sections
Build the skeleton document with all required sections
With your audit data and risk categories in hand, outline the major sections your policy needs. Do not try to write polished prose yet. Focus on creating section headers, noting the key points each section must cover, and flagging any areas where you need additional input from specific team members. A solid outline is the difference between a productive afternoon writing session and hours of unfocused drafting.
Your outline should include at minimum: a purpose statement, scope and applicability, approved tools list, data classification guidelines, acceptable use rules for each risk tier, review and approval processes for new tools, staff training requirements, and an incident response plan for AI-related problems. We will flesh out each of these sections in the afternoon.
The Seven Essential Sections Every Nonprofit AI Policy Needs
While AI policies can grow complex over time, your initial version needs to cover seven core areas. These sections address the most common risks, give staff actionable guidance, and demonstrate governance maturity to funders and board members. You can always expand the policy in future iterations, but these seven areas are non-negotiable for a credible first draft.
1. Purpose and Scope
Start with a clear statement of why this policy exists and who it applies to. The purpose should connect AI governance to your mission, not frame it as a restriction on innovation. Explain that the policy exists to enable responsible AI use that advances your nonprofit's goals while protecting the people you serve, your donors, and your staff.
The scope section should specify that the policy covers all AI tools, including those accessed through personal devices and free-tier accounts. Many organizations make the mistake of only covering enterprise software, leaving a massive gap where most actual AI use happens. Be explicit that the policy applies to volunteers, contractors, and board members in addition to paid staff, especially if these groups handle sensitive organizational data.
2. Approved Tools and Platforms
Create a list of AI tools your organization has evaluated and approved for use. This does not mean you need to test every tool on the market. Start with the tools your audit revealed staff are already using, and apply a consistent evaluation to each one: at minimum, how the vendor handles your data, whether your inputs are used for model training, and what the terms of service allow. Tools that pass become approved for specific use cases. Tools that fail get listed as prohibited, with an explanation of why.
Include a process for requesting new tools. Staff should know exactly how to propose a new AI tool, who reviews the request, and how long the evaluation takes. Without this process, people will simply use unapproved tools without telling anyone. A reasonable turnaround time for tool evaluation, such as two weeks, keeps the process from becoming a bottleneck that drives shadow IT behavior.
3. Data Privacy and Protection
This is the most critical section for nonprofits that handle sensitive client data. Using the risk tiers you defined in the morning session, spell out exactly what data can and cannot be entered into AI tools. For most nonprofits, the rule is straightforward: never enter personally identifiable information (PII) for clients, beneficiaries, or donors into any AI tool unless the tool has a signed data processing agreement and meets your organization's security standards.
Address specific scenarios that come up frequently. Can staff paste a donor email into ChatGPT to draft a response? Only if they remove the donor's name, contact details, and gift amounts first. Can a program manager use AI to summarize case notes? Only with a tool that has enterprise-grade data protection and does not use inputs for model training. These concrete examples make the policy usable rather than abstract. Organizations working with health data should reference HIPAA compliance requirements, while those serving children need to account for COPPA and FERPA regulations.
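The "strip identifiers first" rule is easier to follow when part of it is mechanical. If someone on your team is comfortable with a short script, a pre-paste redaction helper can catch the obvious patterns. The sketch below is illustrative only: the regular expressions cover a few common formats (emails, US phone numbers, dollar amounts), they are not exhaustive, and names or free-form identifiers still require a human pass.

```python
import re

# Hypothetical pre-paste redaction helper. These patterns catch common
# formats only; names and free-form identifiers still need manual review.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}"),
    "amount": re.compile(r"\$[\d,]+(?:\.\d{2})?"),
}

def redact(text: str) -> str:
    """Replace likely PII with labeled placeholders before any AI use."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} removed]", text)
    return text

message = "Hi Maria, thanks for your $1,500 gift! Reach me at 555-867-5309."
print(redact(message))
# Hi Maria, thanks for your [amount removed] gift! Reach me at [phone removed].
```

Notice that "Maria" survives the pass. That is exactly why any tooling must be paired with the human review your policy requires.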
4. Acceptable Use Guidelines
Define what staff can and should use AI for, organized by your risk tiers. For green-tier activities, give broad permission and encourage experimentation. For yellow-tier activities, require supervisor awareness and basic quality review of AI outputs before they go external. For red-tier activities, require explicit approval from a designated person (usually the executive director or data privacy lead) before any AI tool is involved.
Include a mandatory human review requirement for any AI-generated content that will be published externally, sent to funders, or used in decision-making about clients and services. AI tools can produce content that sounds authoritative while being inaccurate, biased, or out of alignment with your organization's voice. The human-in-the-loop principle is not just good practice; it is a legal requirement under several emerging state AI regulations. Your AI champions can play a key role in modeling this review process for their teams.
5. Transparency and Disclosure
Decide when and how your organization will disclose AI use. This is particularly important for fundraising communications, grant applications, and published content. Some funders explicitly ask whether AI was used in proposal preparation. Some donors have strong feelings about AI-generated communications. Your policy should give staff clear guidance on when disclosure is required, when it is recommended, and the specific language to use.
A practical approach is to require disclosure in formal documents (grant applications, annual reports, board materials) and leave it optional for routine communications (internal emails, social media drafts, meeting preparation). This balances transparency with practicality. As research on donor attitudes toward AI shows, thoughtful transparency actually builds trust rather than eroding it.
6. Training and Competency Requirements
Specify what training staff need before using AI tools. At minimum, every staff member should understand your data classification system, know which tools are approved, and be able to recognize when an AI output needs additional review. You do not need to make everyone an AI expert, but you do need a baseline of literacy that prevents costly mistakes.
Include a timeline for training completion. New hires should complete AI policy orientation within their first two weeks. Existing staff should complete initial training within 30 days of the policy's adoption. Plan for annual refresher training to cover policy updates, new tools, and lessons learned from the previous year. Organizations that invest in ongoing AI education see significantly better outcomes from their AI initiatives than those that treat training as a one-time event.
7. Incident Response and Policy Violations
Define what happens when something goes wrong. If a staff member accidentally shares client data with an AI tool, who do they report it to? What steps does the organization take to mitigate the breach? How are policy violations handled? This section should emphasize learning over punishment, especially during the first year of implementation. If staff fear harsh consequences for honest mistakes, they will simply hide their AI use rather than following the policy.
Create a simple incident reporting process: who to contact, what information to provide, and what the response timeline looks like. Include escalation procedures for serious incidents, such as when protected health information or client records have been exposed to an AI platform. Document the remediation steps the organization will take, including notifying affected individuals if required by law and updating the policy to prevent recurrence.
Afternoon Session: Drafting and Review (1:00 PM to 5:00 PM)
With your morning research and outline complete, the afternoon is about turning your framework into a polished document. The most effective approach is to write the policy in plain language that every staff member can understand. Jargon-heavy policies collect dust. Clear, direct language gets followed.
1:00 - 3:00 PM: Write the Draft
Work through each of the seven sections, transforming your outline notes into clear policy statements. Use "do" and "do not" language wherever possible instead of vague guidance. "Staff must remove all personally identifiable information before entering data into AI tools" is far more useful than "Staff should be mindful of data privacy when using AI." Aim for two to three pages. A policy that is too long will not be read. A policy that is too short will not cover enough ground.
Here is a tip that saves hours of writing time: use AI to help draft the policy itself. Feed your outline, risk categories, and audit findings into a tool like Claude or ChatGPT and ask it to generate a first draft of each section. Then edit heavily to ensure the language matches your organization's culture, the specifics reflect your actual situation, and the tone is right. This meta-approach, using AI to create your AI policy, is perfectly appropriate and often produces a better starting point than writing from scratch.
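If you go this route, a prompt along these lines is a reasonable starting point (the bracketed pieces are placeholders for your own specifics): "You are helping a [size]-person nonprofit that serves [population] write its first AI use policy. Using the outline, three-tier risk categories, and audit notes pasted below, draft the Data Privacy and Protection section in plain language a new employee could understand in under ten minutes. Use direct 'do' and 'do not' statements." Drafting one section at a time tends to produce more specific, less generic output than asking for the whole policy at once.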
3:00 - 4:00 PM: Get Input from Key Stakeholders
Share your draft with two to three key people who can pressure-test it. Ideal reviewers include your most active AI user (they will spot impractical restrictions), someone from your programs team (they understand client data sensitivity), and someone in a leadership or compliance role (they can flag governance gaps). You do not need full consensus at this stage, just enough feedback to catch major blind spots.
Ask each reviewer three specific questions: Is there anything in this policy that would prevent you from doing your job effectively? Is there any AI use you are aware of that this policy does not address? Is there anything confusing or ambiguous that could be interpreted differently by different people? These targeted questions produce better feedback than an open-ended "what do you think?"
4:00 - 5:00 PM: Finalize and Prepare for Adoption
Incorporate stakeholder feedback, make final edits, and prepare the document for formal adoption. Add a version number (start with 1.0), the effective date, and a scheduled review date (six months from now for the first review, then annually). Include an acknowledgment form that staff will sign confirming they have read and understood the policy.
Prepare a brief summary (one page or less) that accompanies the full policy. This summary should highlight the five to seven key rules that matter most in daily practice. Many organizations find that the summary gets more use than the full document, so invest time in making it clear and memorable. Some teams laminate the summary and post it near shared workstations, or pin it in their Slack workspace for easy reference.
Five Common Mistakes That Undermine Nonprofit AI Policies
Even well-intentioned AI policies can fail if they fall into common traps. Knowing these pitfalls in advance helps you avoid them during your drafting day and produces a policy that actually gets followed.
Being Too Restrictive
Policies that ban all AI use or make every task require executive approval do not stop AI usage. They push it underground. Staff will use personal devices, personal accounts, and personal email to access AI tools if the official policy is too burdensome. A good policy says "yes, with these guardrails" rather than "no, unless you jump through these hoops." The goal is to channel AI use into safe patterns, not to eliminate it. Organizations that take a restrictive approach tend to fall behind peers that embrace responsible adoption.
Ignoring Embedded AI
Many nonprofits write AI policies focused on standalone tools like ChatGPT while ignoring AI features embedded in software they already use. Your CRM, email marketing platform, document editor, and design tools likely all have AI capabilities that were added in recent updates. Microsoft 365 Copilot, Google Workspace AI, Canva's AI features, and Salesforce Einstein are examples of AI that staff may be using without thinking of it as "AI." Your policy needs to address these embedded tools alongside standalone ones.
Writing for Lawyers Instead of Staff
An AI policy is a practical guide for daily decision-making, not a legal document. If your policy reads like a terms of service agreement, staff will not absorb it. Use plain language, include examples, and write in the second person ("you should" and "you must") to make the guidance feel direct and personal. Save the legal language for your data processing agreements and vendor contracts. The policy itself should be something a new employee can read and understand in under ten minutes.
Treating the Policy as Final
AI technology and regulations are evolving rapidly. A policy written today will need updates within six months. Build in a review cadence from the start, and make it clear to staff that the policy is a living document. Version numbering helps people track changes. A brief changelog at the end of the document (Version 1.1, updated July 2026: added Gemini to approved tools list) keeps everyone aware of what has changed without requiring a full re-read.
Skipping the Rollout Plan
A policy that sits in a shared drive folder without any communication plan is a policy that does not exist. Plan how you will introduce the policy to staff, when the training sessions will happen, and how you will handle the first few weeks of implementation. A dedicated all-staff meeting, a walkthrough of the key points, and an open Q&A session make the difference between adoption and neglect. If your organization struggles with change management around AI, build extra time into your rollout plan for addressing concerns.
Getting Board Approval Without a Three-Month Delay
Many nonprofit leaders hesitate to move forward with an AI policy because they assume it requires full board approval at a regular board meeting. While board awareness and eventual formal adoption are important, you do not need to wait for the next quarterly meeting to implement basic AI guardrails. Most nonprofit bylaws allow executive directors to establish operational policies and procedures without a board vote, bringing them for ratification at the next scheduled meeting.
The approach that works for most organizations is to implement the policy immediately as an operational guideline, inform the board chair and any technology-focused board members via email, add formal ratification to the next board meeting agenda, and prepare a brief presentation that covers the why, the key provisions, and the planned review schedule. This keeps your organization protected now while respecting board governance. Most boards appreciate the proactive approach and ratify the policy with minimal discussion, especially if you frame it as a risk mitigation measure. If you need help structuring your board presentation around AI, focus on the organizational risks the policy addresses rather than the technical details of the tools.
If your board includes members with technology or legal expertise, consider asking one or two of them to review the draft before the full board sees it. Their feedback will strengthen the document and create advocates who can help move the ratification conversation along efficiently.
After Day One: What Comes Next
Your one-day policy is a strong foundation, but it is the beginning of your AI governance journey, not the end. Here is what to prioritize in the weeks and months following your initial policy launch.
Week 1: Roll out the policy to all staff with a live walkthrough. Share the summary document and answer questions. Set a deadline for acknowledgment signatures.
Weeks 2-4: Conduct brief training sessions, either live or via recorded video, covering the data classification system and acceptable use guidelines. Track completion rates.
Month 2: Check in with team leads to identify pain points, unclear areas, or use cases the policy does not adequately address. Collect these for the first revision.
Month 3: Evaluate any new AI tools that staff have requested. Update the approved and prohibited tools lists based on your evaluations.
Month 6: Conduct the first formal policy review. Assess what is working, what staff are struggling with, and what new risks or regulations have emerged. Publish version 1.1 with any updates.
Conclusion
Creating an AI policy in one day is not about cutting corners. It is about recognizing that a good policy implemented today is far more valuable than a perfect policy that takes six months to draft. Your staff is making decisions about AI use right now, every day, without guidance. Even an imperfect policy gives them a framework for those decisions and protects your organization from the most common risks.
The process outlined here, a morning of research and framework building followed by an afternoon of drafting and review, has been used successfully by nonprofits ranging from five-person teams to organizations with hundreds of employees. The specifics of your policy will differ based on your mission, your data sensitivity, and your team's comfort with technology, but the structure and approach translate across contexts.
Block the day on your calendar. Gather your audit data. Write the draft. Get feedback. Publish. Your future self, your board, your funders, and your clients will all thank you for taking the time to put thoughtful guardrails around a technology that is not going away.
Need Help Building Your AI Governance Framework?
Our team helps nonprofits develop AI policies, training programs, and governance structures that balance innovation with responsibility. Let us help you get it right.
