
    The 47% Without AI Policies: How to Build Governance When Adoption Outpaces Strategy

    Your staff are almost certainly using AI tools right now. The question is whether your organization has thought through what that means, what it allows, and what it prohibits.

    Published: February 20, 2026 · 18 min read · AI Strategy & Governance

    The data from 2025 and 2026 presents a striking picture: 92% of nonprofits are now using AI in some form, yet according to research by Virtuous and Fundraising.AI covering 346 organizations, 47% have no AI governance policy at all. A separate TechSoup benchmark found that 76% of nonprofits have no formal AI strategy. Only 5% of organizations in one survey reported having a clear, satisfactory AI policy. These numbers represent a genuine organizational risk that most nonprofit leaders have not yet fully reckoned with.

    This is not a moral failure. It is a predictable consequence of how technology adoption and institutional governance have always worked. Staff find useful tools, begin using them quietly, and the organization catches up with policy and structure later, sometimes years later. What is different with AI is the pace of adoption, the breadth of use cases, and the significance of the risks when governance is absent. Donor data entered into free AI tools. Client case notes used as prompts. Grant applications drafted with vendor terms that allow training on your inputs. These are not hypothetical risks; they are happening in nonprofits today.

    The good news is that building baseline AI governance does not require months of work, a technology staff member, or significant budget. Free frameworks from NTEN, Fast Forward, and Candid make it possible to create a working policy in days and refine it over time. This article explains why the governance gap exists, what any nonprofit AI policy must address, how to build one effectively, and how to avoid the common mistakes that produce policies no one follows.

    Why Adoption Has Outrun Governance

    Understanding why so many nonprofits lack AI governance helps explain why the standard advice to "just write a policy" consistently underperforms. The gap is not primarily a knowledge problem, though awareness plays a role. It is a structural problem rooted in how organizations actually work.

    AI arrived as a personal productivity tool, not an organizational system

    When AI tools like ChatGPT first became widely available, they presented as individual utilities, similar to spell-checkers or search engines. Staff adopted them the same way, without IT approval, vendor review, or organizational discussion. By the time leadership became aware of AI use across the organization, it was already widespread and the habit was formed. There was no procurement trigger that would have prompted a policy conversation.

    Leadership knowledge gaps block governance development

    Research cited by Dataconomy found that 40% of nonprofits report no one in their organization is educated in AI. When leaders don't understand the technology, they are not equipped to govern it. They may sense risks without being able to articulate them precisely, which leads to either vague guidelines or paralysis. Developing a policy requires understanding what you're governing, and that knowledge base is genuinely absent in many organizations.

    Resource constraints hit small and mid-sized organizations hardest

    The TechSoup 2025 benchmark found that nearly 30% of nonprofits with budgets under $500,000 cite financial limitations as a primary barrier to any AI strategy. Developing governance frameworks requires staff time and sometimes outside expertise, both of which are scarce in under-resourced organizations. When the choice is between serving clients and writing policy, client service wins, as it should. But this creates compounding risk over time.

    No external compliance pressure has forced the issue

    Unlike HIPAA for healthcare organizations or PCI-DSS for payment processing, there is no equivalent regulatory mandate in the United States specifically requiring nonprofits to build AI governance frameworks. Without a compliance deadline or a funder requirement, policy development gets perpetually deprioritized. The EU AI Act, which entered into force in 2024, provides an external model, but US nonprofits operating domestically have faced limited external pressure.

    The consequence of this gap is captured clearly in the data. The Virtuous 2026 Nonprofit AI Adoption Report found that organizations without governance don't measure outcomes, don't build shared workflows, and don't see meaningful improvement. Only 7% of nonprofits report major improvements in organizational capability through AI, even though 92% use it. The gap between use and impact is largely a governance gap. Organizations using AI without strategy or structure cannot learn from it, scale it, or protect against its risks.

    What a Nonprofit AI Policy Must Include

    Multiple organizations have published nonprofit-specific AI policy frameworks, including NTEN, Candid, Fast Forward, TechSoup, GlobalGiving, and AICPA/CIMA. While they differ in emphasis and comprehensiveness, they converge on a consistent set of essential components. A policy that omits any of these is incomplete in ways that matter.

    Essential Policy Components

    Every nonprofit AI policy should address these areas

    Foundational

    Purpose, scope, and who the policy applies to
    Mission and values alignment statement
    Plain-language definitions of "AI" and related terms
    Approved tools list and criteria for adding new tools
    Data handling rules: what can and cannot go into AI prompts

    Operational & Governance

    Output review requirements before publication or use
    Transparency and attribution guidelines
    Roles and accountability (who owns AI governance)
    "When NOT to use AI" with specific prohibited uses
    Review schedule and amendment process

    Candid identifies the "when NOT to use AI" section as the most important part of any nonprofit AI policy, and it deserves special attention. This is not a small caveat at the end of a policy document. It is the section that prevents genuine harm. Specific categories that should appear here include eligibility decisions for services or benefits, crisis communications in situations involving trauma or suicide prevention, individual client case notes and personally identifiable information, HR disciplinary actions and employment decisions, and any automated decision that could deny someone access to services. In all of these contexts, AI errors are not just inconvenient; they can cause direct harm to the people your organization exists to serve.

    The data handling rules section requires particular care for nonprofits because the risks are concrete and immediate. BDO's analysis of nonprofit AI risks found that many open-source and free AI tools use input data to train their models. Nonprofits that allow staff to paste donor records, client case notes, or financial data into free AI tools may be inadvertently feeding sensitive information into training datasets that could expose it to others. Seventy percent of nonprofit professionals cite data privacy and security as their primary AI concern, but concern without clear rules produces inconsistent behavior. The policy must be specific: no personally identifiable client information, no donor account details, no confidential grant materials, no personnel records.
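
    For organizations with some technical capacity, the written rule can be reinforced with a lightweight pre-submission check run before text is pasted into an external tool. The sketch below is a hypothetical illustration, not part of any framework cited here: the category names and example patterns are placeholder assumptions, and no keyword check substitutes for the policy itself or for staff judgment.

    import re

    # Hypothetical sketch: encode the policy's "never paste into AI tools"
    # categories as simple patterns. Category names and patterns are
    # illustrative assumptions, not drawn from any cited framework.
    PROHIBITED_CATEGORIES = {
        "client PII": [r"\b\d{3}-\d{2}-\d{4}\b",        # SSN-like pattern
                       r"\bDOB[:\s]", r"\bcase\s*#?\d+"],
        "donor account details": [r"\baccount\s*#?\d+", r"\brouting number\b"],
        "personnel records": [r"\bperformance review\b", r"\bdisciplinary\b"],
    }

    def flag_prohibited(text: str) -> list[str]:
        """Return the prohibited categories whose patterns appear in the text."""
        return [
            category
            for category, patterns in PROHIBITED_CATEGORIES.items()
            if any(re.search(p, text, re.IGNORECASE) for p in patterns)
        ]

    draft_prompt = "Summarize case #10482 (DOB: 03/14/1988) for the weekly report."
    print(flag_prohibited(draft_prompt))  # ['client PII']

    A check like this catches only obvious patterns; the policy's categorical rules and staff training remain the actual safeguard.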

    A Risk-Tiered Approach to AI Governance

    Not all AI use carries the same risk, and a governance policy that treats brainstorming blog ideas the same as eligibility screening will fail in practice. A risk-tiered approach, drawn from the EU AI Act's classification system and the NIST AI Risk Management Framework, adapted for nonprofit contexts, allows organizations to apply appropriate oversight to each type of use without creating so much friction that staff route around the policy entirely.

    Low Risk: Minimal Oversight Required

    Drafting internal communications and newsletters
    Brainstorming campaign ideas and event themes
    Summarizing long documents and meeting notes
    Grammar checking and proofreading
    Social media content drafts and variations
    Research and literature summaries for grant prep

    Required: Human review before publication, no PII in prompts, internal transparency about AI assistance.

    Medium Risk: Review Process Required

    Grant application drafting and revision
    Donor communications personalization
    Financial forecasting assistance
    Job description writing frameworks
    Translation and multilingual communications
    Program outcome analysis using aggregate data

    Required: Supervisor or subject matter expert review, documented rationale for AI-assisted outputs, disclosure to relevant parties where appropriate.

    High Risk: Human Oversight Mandatory or Use Prohibited

    Eligibility decisions for services, housing, or benefits
    Crisis communications in trauma or crisis situations
    Individual client case notes and care plans
    HR disciplinary actions and employment decisions
    Medical or health recommendations for clients
    Any automated decision denying someone access to services

    Required: Mandatory human decision-making; AI may provide information but cannot determine outcomes. In many cases, AI use should simply be prohibited.

    A practical way to build this tiering for your specific organization is to have staff list every current or potential AI use in their work, then score each use on two axes: the likelihood of an incorrect or biased output causing harm, and the consequences if that harm occurs. High scores on both axes indicate a high-risk use that requires strict human oversight or explicit prohibition. This exercise often surfaces use cases leadership was unaware of, which is itself a valuable governance outcome.
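
    For teams that want to run this scoring exercise in a structured way, the sketch below shows one way to turn the two axis scores into governance tiers. It is a minimal illustration under assumed conventions: the 1-to-5 scale, the example use cases, and the tier cutoffs are placeholders to adapt, not thresholds taken from the EU AI Act or the NIST framework.

    from dataclasses import dataclass

    @dataclass
    class AIUseCase:
        description: str
        likelihood_of_harm: int   # 1 (very unlikely) to 5 (very likely)
        consequence_of_harm: int  # 1 (negligible) to 5 (severe)

    def risk_tier(use: AIUseCase) -> str:
        """Map the two axis scores to a governance tier (cutoffs are illustrative)."""
        score = use.likelihood_of_harm * use.consequence_of_harm
        if score >= 15:
            return "high: human decision required, or AI use prohibited"
        if score >= 6:
            return "medium: supervisor or subject matter expert review required"
        return "low: human review before publication, no PII in prompts"

    inventory = [
        AIUseCase("Brainstorming event themes", 1, 1),
        AIUseCase("Drafting a grant application", 2, 3),
        AIUseCase("Screening benefit eligibility", 4, 5),
    ]

    for use in inventory:
        print(f"{use.description} -> {risk_tier(use)}")

    Even done as a spreadsheet rather than code, the value is in the inventory itself: listing every use forces the conversation about which ones carry real risk.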

    Seven Mistakes That Produce Policies No One Follows

    Most nonprofit AI policy efforts fail not from lack of good intentions but from predictable structural mistakes. Each of these mistakes is common enough to be worth addressing explicitly before you begin.

    1

    Copying a template without adapting it

    Candid, GlobalGiving, and NTEN all warn explicitly against adopting a template verbatim. A policy must reflect your organization's actual data practices, jurisdiction, mission, and staff reality. Templates are starting points. The policy that actually protects your organization is the one that addresses your specific programs, data types, and staff workflows.

    2

    Making it so restrictive it creates shadow AI

    Fear-based policies that prohibit all AI use unless explicitly approved don't eliminate AI use. They push it underground. Staff use unapproved tools, don't disclose AI assistance, and avoid asking questions, which means risks go unreported and the policy cannot evolve. Shadow AI hides best in fear; it surfaces fastest in trust. The goal is responsible empowerment, not restriction.

    3

    Treating it as a one-time document

    AI capabilities and risks change faster than most annual policy review cycles. A policy written in 2023 may be dangerously incomplete in 2025. The policy must explicitly define how frequently it will be reviewed, who is responsible for monitoring the landscape, and what triggers an unscheduled review. Without this, the policy becomes a historical artifact while the technology moves on.

    4

    Ignoring data security specifics

    Concern about data security does not produce safeguards; specific rules do. The policy must name what categories of information cannot be entered into AI systems, identify which tools have been security-reviewed, and specify what staff should do when they are unsure. Vendor sprawl, where staff independently adopt multiple AI tools across departments, creates security blind spots that IT cannot monitor without organizational visibility.

    5

    Writing it without staff involvement

    Policies developed without staff input are policies staff will route around. Frontline workers and middle managers have the hands-on knowledge of where AI provides value and where risks are concentrated. They will surface use cases and concerns that leadership would not anticipate. Involving them in drafting also builds the buy-in that makes a policy actually function as a behavioral guide rather than a compliance document.

    6

    No accountability mechanism

    Policies with no named owner, no escalation path, and no enforcement mechanism are decorative documents. The policy must specify who can answer staff questions, who approves new tools, what happens when a staff member encounters a situation the policy doesn't address, and how compliance is monitored. Without these elements, the policy exists on paper but not in practice.

    7

    Skipping training alongside the policy

    Candid notes explicitly that a policy document alone is insufficient. Staff need training on both the policy and responsible AI practices. A policy distributed in an all-hands email and filed in the shared drive will not change behavior. Pair any policy with practical scenarios, role-specific guidance, and regular opportunities to ask questions. The policy creates the framework; training makes it real.

    The Board's Role in AI Governance

    A February 2026 analysis from Forvis Mazars explicitly frames AI governance as a core fiduciary responsibility, arguing that a board's traditional duties of Care, Loyalty, and Obedience should now include technological oversight. This is not merely advisory guidance. It reflects a genuine shift in how governance accountability is being understood across the sector.

    What Boards Should Own

    Setting the ethical framework and values commitments around AI use
    Approving the organization's AI policy at the high level
    Ensuring management has resources and expertise to implement governance
    Incorporating AI risks into enterprise risk management
    Monitoring for algorithmic bias affecting beneficiaries

    What Boards Should Not Own

    Day-to-day tool selection and vendor evaluation
    Implementation decisions about which workflows use AI
    Individual staff training and AI literacy programs
    Technical configuration and data security settings
    Specific prompt guidelines and workflow design

    Corporate governance data shows that board-level AI oversight has grown rapidly: approximately 40% of organizations now assign AI oversight to at least one board-level committee, up from 11% in 2024. Audit committees are the most common location, though technology or risk committees are also used. Nonprofit boards should explicitly define which AI governance topics warrant full-board discussion versus committee-level review, and assign clear ownership rather than leaving AI governance as an informal responsibility.

    Building board competency in AI is itself a governance imperative. Forvis Mazars recommends recruiting board members with technology expertise and investing in AI literacy across the full board. For many nonprofit boards, this may mean adding technology advisory committee members or ensuring the board's annual education agenda includes AI governance topics. Our resource on using AI for board communications and governance covers additional tools boards can use to stay informed.

    Building Your AI Policy in 90 Days

    Organizations need not choose between acting immediately with an imperfect policy and waiting for something comprehensive. A phased approach lets you address the most urgent risks quickly and build sophistication over time. A light policy that staff actually know about and follow is vastly more effective than a comprehensive policy living in a folder.

    Days 1-30: Get Something Written

    Use Fast Forward's free Nonprofit AI Policy Builder (ffwd.org/nonprofit-ai-policy-builder) to generate a starting-point policy in approximately 20 minutes. Or adapt NTEN's Generative AI Use Policy Template from nten.org. Focus on the four essentials per Candid: purpose and scope, values and commitments, prompting and data-handling norms, and when NOT to use AI.

    Conduct an immediate AI use audit: survey staff informally about what tools they are already using. Address the most urgent data risk by explicitly prohibiting entry of personally identifiable client data, donor information, and confidential materials into any AI tool until vendor vetting is complete.

    Days 30-60: Build Around Real Usage

    Share the draft policy with staff and explicitly invite input. Identify two or three AI champions among high-interest, moderate-proficiency staff and support them in running small, low-risk pilots. Document what works and what raises questions. Run the first round of training on both the policy and responsible AI use. Present to the board for awareness and feedback.

    This phase is where the policy becomes organizational culture rather than just a document. Staff who have input into the policy are far more likely to follow it. Champions who experience the practical implications of the guidelines help refine them for real workflows.

    Days 60-90: Build for the Long Term

    Refine the policy based on staff feedback and pilot learnings. Develop role-specific guidance for different teams: development staff, program staff, finance, communications, and case managers will encounter different AI use cases and risks. Establish vendor vetting criteria and review existing tool agreements. Embed the policy into onboarding for new staff. Set a formal annual review date with a named owner. Connect AI governance to the broader organizational strategic plan.

    By the end of this phase, your organization should have a living policy that people know about, use as a reference, and trust as a guide. It will not be perfect. The most important qualities are clarity, practical utility, and clear ownership for continued development. For more on connecting AI governance to organizational strategy, see our guide to building a nonprofit AI strategy.

    Getting Staff Buy-In for AI Governance

    Change management is the defining factor in whether AI governance actually succeeds or quietly fails. The most common resistance to AI governance comes from fear: fear that the policy is a prelude to surveillance, job cuts, or restrictions that will make work harder. Addressing this directly matters more than perfecting the policy language.

    Lead with Involvement, Not Imposition

    Staff have hands-on knowledge of where AI provides value and where risks are concentrated. Policies developed without staff input get routed around. Involve frontline workers and middle managers in drafting. They will surface use cases and risks leadership would not anticipate.

    Address Job Anxiety Directly

    Leadership must explicitly communicate that AI governance is not about replacing staff. Framing AI as a tool that helps staff work more creatively and strategically, not as a replacement for their judgment, reduces the defensiveness that blocks honest conversation about how AI is being used.

    Empower AI Champions

    Identify staff who are both enthusiastic about AI and thoughtful about its limitations. Give them structured opportunities to experiment, document learnings, and share with colleagues. Peer-to-peer learning is far more effective than top-down mandates. Champions become internal resources that reduce the burden on leadership to answer every AI question. See our full guide on building AI champions in your nonprofit.

    Make It a "Yes, and" Policy

    The framing of an AI policy sends a signal. A policy built around prohibition communicates distrust. A policy built around responsible empowerment communicates that the organization wants staff to use AI well. Explicit encouragement for appropriate uses, alongside clear boundaries on inappropriate ones, produces better behavior than restrictions alone.

    Free Resources for Getting Started

    Organizations starting from zero do not need to build AI governance from scratch. A robust set of free, nonprofit-specific resources has emerged from NTEN, Fast Forward, Candid, TechSoup, and others. The challenge is not finding resources but knowing which to use first.

    Start Here: Fast Forward Nonprofit AI Policy Builder

    A free, interactive tool that generates a customized AI policy through a step-by-step intake process. Organizations choose from light, standard, or advanced policy levels and spend approximately 20 minutes to produce a custom policy covering governance, privacy, risk management, and ethics. This is the most accessible starting point for small organizations with no prior policy work. Available at ffwd.org/nonprofit-ai-policy-builder.

    NTEN AI Governance Framework and Templates

    NTEN has built the most comprehensive nonprofit-specific AI governance infrastructure. Their six-module Governance Framework covers governance principles, deciding on AI, tool evaluation, data privacy, and IT governance. Their Generative AI Use Policy Template includes board talking points. The ANB Advisory template, developed by Afua Bruce and Rose Afriyie, offers a complementary equity-focused approach. NTEN's AI Framework for an Equitable World, developed through a community process, emphasizes harm reduction and equity. All are available free at nten.org/learn/resource-hubs/artificial-intelligence.

    Candid's Responsible AI Policy Guide

    Practical, plain-language guidance on the four essential policy components, with an emphasis on building policies that actually change behavior. Candid's guide is particularly valuable for its emphasis on what NOT to use AI for and its realistic assessment of what makes policies stick or fail. Available at candid.org.

    Sector Benchmarks: TechSoup and Virtuous Reports

    Before building your policy, it is worth understanding where your organization sits relative to peers. The TechSoup 2025 State of AI in Nonprofits Benchmark Report and the Virtuous/Fundraising.AI 2026 Nonprofit AI Adoption Report provide the best current picture of sector-wide adoption, governance gaps, and outcome data. These reports help you make the case internally for why governance investment matters and calibrate the urgency of different components.

    Real Nonprofit AI Policies: NC Center for Nonprofits and Whole Whale

    Reading actual policies from organizations similar to yours accelerates development. The North Carolina Center for Nonprofits maintains a collection of sample AI policies from real organizations. Whole Whale's analysis of sector-leading policies from United Way, Oxfam, Red Cross, and Save the Children identifies the patterns that distinguish strong policies from weak ones. Seeing what peers have built, and what works, is often more useful than reading abstract guidance.

    The Governance Gap Is Closeable

    The gap between widespread AI adoption and minimal governance is not a permanent feature of the nonprofit sector. It is a transitional condition produced by fast-moving technology, limited organizational capacity, and the absence of external compliance pressure. Each of those factors can be addressed.

    The organizations that close the gap soonest are not necessarily the best-resourced or most technically sophisticated. They are the organizations where a leader decided that the risks of ungoverned AI use outweigh the time required to create basic guardrails, used existing free resources to build something quickly, involved staff in making it practical, and committed to improving it over time. That is available to any nonprofit, regardless of size or budget.

    The 7% of nonprofits that report major organizational improvement through AI, compared to the 92% that use it, are the organizations with strategy and governance in place. The difference between using AI and benefiting from AI is increasingly a governance question. Your staff are using AI tools right now. The question is whether your organization has created the conditions under which that use is safe, consistent, and building toward something.

    For organizations ready to move from policy to strategy, our guide on overcoming AI resistance and building organizational buy-in covers the change management work that makes governance stick, and our overview of AI strategic planning for nonprofits connects governance to the broader organizational direction.

    Ready to Build Your AI Policy?

    Free resources from NTEN, Fast Forward, and Candid make it possible to create a working policy in days. Explore AI tools and strategies designed for nonprofit organizations.
