Building an AI Governance Framework Before Regulators Require One
Nearly half of all organizations have no AI policy in place, yet state-level AI legislation is accelerating. Nonprofits that build governance frameworks now will protect their missions, strengthen stakeholder trust, and avoid the scramble of reactive compliance when new laws take effect.

Recent industry surveys reveal a striking figure: 47% of organizations have no AI policy at all. For nonprofits, this gap is particularly concerning. These organizations handle sensitive beneficiary data, make decisions that directly affect vulnerable populations, and operate under heightened public trust obligations. Yet many are adopting AI tools, from donor segmentation models to grant-writing assistants, without any formal governance structure in place.
The regulatory environment is shifting fast. Colorado's AI Act takes effect in June 2026, New York has pending legislation targeting automated decision-making, and California is already enforcing AI-related consumer protections. For nonprofits operating across state lines, the patchwork of emerging regulations creates a compliance challenge that only grows more complex over time. Organizations that wait for mandates before establishing governance will find themselves rushing to build policies under pressure, with less room for thoughtful, mission-aligned design.
But framing AI governance purely as a compliance exercise misses the larger opportunity. A well-designed governance framework does more than satisfy regulators. It clarifies how your organization makes decisions about technology, strengthens accountability across departments, and ensures that AI tools serve your mission rather than quietly undermining it. Governance is the mechanism through which nonprofits translate their values into operational practices, and AI is simply the newest domain where that translation matters. If your organization has already explored the gap between AI policy and actual governance, you know that written policies alone are not enough.
This article provides a practical, phased approach to building an AI governance framework suited to nonprofit realities. You will find guidance on adapting established frameworks like the NIST AI Risk Management Framework, structuring oversight committees, developing policies that evolve with your organization, and managing vendor relationships responsibly. Whether your nonprofit is just beginning to use AI or has already deployed multiple tools, this guide will help you build governance that is proactive, proportionate, and rooted in your mission.
Why Proactive Governance Beats Reactive Compliance
Organizations that treat governance as a proactive discipline rather than a regulatory checkbox gain several tangible advantages. The most important is time. Building a governance framework before external deadlines allows your team to design policies that genuinely reflect your organization's values, operational realities, and risk tolerance. Reactive compliance, by contrast, forces organizations to adopt generic templates that may not fit their specific context, creating policies that exist on paper but fail to guide behavior in practice.
Proactive governance also builds institutional knowledge. When staff participate in developing governance structures, they develop a deeper understanding of how AI tools work, where risks emerge, and what safeguards matter most. This knowledge becomes embedded in the organization's culture, making governance a living practice rather than a binder on a shelf. The Forvis Mazars framework makes this point effectively: the board duties of care, loyalty, and obedience should naturally extend to technology oversight. Directors are already practicing governance when they fulfill their duty of care by staying informed about AI usage, their duty of loyalty by ensuring AI serves organizational interests rather than vendor interests, and their duty of obedience by confirming AI practices align with the nonprofit's charitable purpose. For a deeper look at how boards can take ownership of AI oversight, see our article on board-level AI oversight responsibilities.
The financial argument is straightforward as well. Organizations that scramble to comply with new regulations typically spend more on external consultants, staff overtime, and tool migrations than those that built governance incrementally. A phased approach, starting with foundational policies and expanding as AI usage grows, distributes costs over time and reduces the risk of expensive emergency pivots.
Proactive Governance Benefits
- Policies designed around your mission and risk profile, not generic templates
- Staff develop genuine understanding of AI risks and safeguards
- Incremental costs spread over time instead of emergency spending
- Stronger funder and stakeholder confidence in your technology practices
Reactive Compliance Risks
- Generic policies that do not reflect organizational realities
- Higher costs from rushed consultant engagements and tool changes
- Staff confusion when policies appear without context or training
- Reputational damage if compliance gaps become public
Core Components of an AI Governance Framework
An effective AI governance framework for nonprofits rests on six interconnected pillars. Each component addresses a different dimension of responsible AI use, and together they create a comprehensive structure that supports both day-to-day operations and long-term strategic planning. Vera Solutions' nine principles of responsible AI, developed specifically for the social sector, emphasize that governance should be holistic rather than narrowly technical. These principles, including fairness, transparency, human agency, and accountability, provide a useful ethical foundation on which to build your framework's operational components.
Mission Alignment
Every AI decision anchored to purpose
Mission alignment is the most important and most frequently overlooked component. Before evaluating any AI tool, your framework should require a clear articulation of how it advances your charitable purpose. This goes beyond asking whether a tool is useful. It asks whether the tool's design, data practices, and outputs are consistent with the values your organization exists to uphold. A food bank using AI to optimize distribution routes serves its mission directly. The same food bank using AI to score beneficiaries by "worthiness" contradicts it, even if the technology works as advertised.
Risk Management
Systematic identification and mitigation
Risk management within AI governance requires identifying, assessing, and mitigating risks specific to your AI usage. This includes data privacy risks, algorithmic bias, accuracy failures, and dependency on specific vendors. Your framework should define risk categories relevant to your work, establish thresholds for acceptable risk, and specify what happens when those thresholds are exceeded. Organizations that maintain formal AI risk registers find it significantly easier to make consistent, defensible decisions about AI adoption.
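To make the risk register concrete, here is a minimal sketch of one entry as structured data in Python. Every field name, category, and threshold is an illustrative assumption rather than a standard; substitute the risk categories and scoring scale your framework defines.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class RiskEntry:
    """One row in a simple AI risk register. All field names are illustrative."""
    tool: str            # which AI tool or use case the risk belongs to
    category: str        # e.g., "privacy", "bias", "accuracy", "vendor dependency"
    description: str     # what could go wrong, in plain language
    likelihood: int      # 1 (rare) to 5 (almost certain)
    impact: int          # 1 (negligible) to 5 (severe)
    mitigation: str      # safeguards currently in place
    residual_risk: int   # 1-5 score after mitigation
    owner: str           # person responsible for ongoing monitoring
    last_reviewed: date = field(default_factory=date.today)

    def exceeds_threshold(self, threshold: int = 4) -> bool:
        """Flag entries whose residual risk crosses the review threshold."""
        return self.residual_risk >= threshold

register = [
    RiskEntry(
        tool="Donor segmentation model",
        category="bias",
        description="Segments may systematically deprioritize some communities",
        likelihood=3,
        impact=3,
        mitigation="Quarterly demographic review of segment composition",
        residual_risk=2,
        owner="Development Director",
    ),
]

for entry in register:
    if entry.exceeds_threshold():
        print(f"Escalate for committee review: {entry.tool} ({entry.category})")
```

A spreadsheet with these columns delivers most of the same benefit; consistency matters more than tooling.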
Organizational Structure
Clear roles, responsibilities, and reporting
Organizational structure defines who is responsible for governance at each level. This includes designating an AI governance lead or committee, clarifying reporting lines, and establishing escalation procedures for issues that exceed individual authority. Small nonprofits may assign governance responsibilities to existing roles, while larger organizations may create dedicated positions or committees. The key is that every staff member knows who to consult when questions arise about AI use.
Oversight and Accountability
Monitoring, auditing, and transparency
Oversight and accountability mechanisms ensure that governance does not exist only at the moment of adoption but continues throughout the lifecycle of each AI tool. This includes regular audits of AI system performance, bias checks on outputs, review of vendor compliance with contractual terms, and transparent reporting to the board and stakeholders. Accountability also means defining consequences when policies are violated, whether through additional training, tool suspension, or other corrective measures.
Vendor Management
Due diligence, contracts, and ongoing review
Vendor management addresses the reality that most nonprofits do not build AI systems in-house. Your framework should include criteria for evaluating AI vendors, required contractual protections (data ownership, processing limitations, breach notification), and a schedule for reviewing vendor performance and compliance. This component is especially critical given how rapidly AI vendors change their terms of service, data practices, and underlying models.
Policy Documentation
Living documents that evolve with your organization
Policy documentation ties everything together in written form. Your AI governance policy should be a living document, revisited at minimum every six months, that captures your organization's current position on AI use, approved tools, prohibited practices, and governance procedures. NTEN's AI governance framework recommends organizing this documentation into six modules covering strategy, ethics, data, operations, people, and compliance. For guidance on keeping your policy current, see our article on updating your AI policy for 2026.
Adapting the NIST AI Risk Management Framework for Nonprofits
The NIST AI Risk Management Framework (AI RMF 1.0) provides one of the most rigorous and well-regarded structures for managing AI-related risks. While it was designed with a broad audience in mind, its four core functions translate directly to the nonprofit context with some adaptation. For a detailed exploration of how each function applies to your organization, see our comprehensive guide to the NIST AI RMF for nonprofits.
The framework's four functions, Govern, Map, Measure, and Manage, provide a repeatable cycle for identifying and addressing AI risks. Nonprofits do not need to implement the full NIST framework in its original form. Instead, the goal is to adopt its logic and adapt its practices to your organization's scale, resources, and mission context.
1. Govern: Establishing the Foundation
The Govern function is cross-cutting, meaning it informs and shapes all other functions. For nonprofits, this means establishing the organizational culture, policies, and structures that make AI governance possible. It includes defining your organization's AI principles, assigning governance responsibilities, allocating resources for oversight, and building accountability mechanisms. The Govern function asks: who decides, how do they decide, and how do we ensure those decisions reflect our values?
- Define AI principles grounded in your mission and values
- Assign governance roles with clear authority and accountability
- Allocate budget for training, auditing, and ongoing oversight
- Integrate AI governance into existing board reporting processes
2. Map: Understanding Your AI Landscape
The Map function focuses on understanding the context in which your AI systems operate. For nonprofits, this means inventorying every AI tool currently in use (including tools embedded in platforms staff may not think of as "AI"), identifying who uses each tool, what data it processes, what decisions it informs, and who is affected by its outputs. Mapping also includes understanding the broader context: your regulatory environment, your stakeholders' expectations, and the specific risks your beneficiary populations face.
- Create a comprehensive inventory of all AI tools and their use cases
- Document data flows, including what data enters AI systems and where outputs go
- Identify stakeholders affected by AI-informed decisions
- Assess the regulatory landscape for each state where you operate
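To get the inventory started, a flat list of structured records is usually enough. The sketch below uses hypothetical field names; what matters is capturing the tool, its users, its data, and the people its outputs affect in one place.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in an AI tool inventory. Field names are illustrative."""
    name: str                  # e.g., "grant-writing assistant"
    embedded_in: str | None    # host platform, if it is a built-in AI feature
    used_by: list[str]         # departments or roles that use it
    data_processed: list[str]  # categories of data it touches
    decisions_informed: str    # what choices its outputs feed into
    affected_parties: str      # who is impacted by those decisions

inventory = [
    AIToolRecord(
        name="Lead-scoring feature",
        embedded_in="CRM platform",
        used_by=["Development"],
        data_processed=["donor giving history", "contact details"],
        decisions_informed="Which donors receive major-gift outreach",
        affected_parties="Donors",
    ),
    AIToolRecord(
        name="Grant-writing assistant",
        embedded_in=None,
        used_by=["Programs", "Development"],
        data_processed=["program descriptions", "budget summaries"],
        decisions_informed="Content of grant applications",
        affected_parties="Funders and, indirectly, beneficiaries",
    ),
]

# Surface the tools staff may not think of as "AI": embedded platform features.
embedded = [t.name for t in inventory if t.embedded_in]
print("Embedded AI features to document:", embedded)
```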
3. Measure: Evaluating Risk and Performance
The Measure function involves analyzing and assessing the risks you identified during the Map phase. For nonprofits, this means evaluating each AI tool against criteria like accuracy, bias, privacy impact, and mission alignment. Measurement should be quantitative where possible (error rates, demographic disparity in outputs) and qualitative where necessary (staff confidence in outputs, beneficiary comfort with AI-assisted services). The goal is not perfection but a clear, honest understanding of where risks are acceptable and where they require intervention.
- Define metrics for accuracy, fairness, and mission alignment for each AI use
- Conduct periodic bias audits, particularly for tools affecting beneficiary services
- Gather feedback from staff and, where appropriate, beneficiaries
- Document findings in a format accessible to both technical and non-technical leaders
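On the quantitative side, even a simple demographic-disparity check is worth automating. The sketch below assumes you can export a tool's outputs as records pairing a group label with a yes/no outcome; it computes per-group selection rates and screens the ratio between the lowest and highest against the four-fifths threshold commonly used in employment contexts.

```python
from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, selected) pairs; selected is True/False."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        positives[group] += bool(selected)
    return {group: positives[group] / totals[group] for group in totals}

def disparity_ratio(rates):
    """Lowest selection rate divided by highest; 1.0 means perfect parity."""
    return min(rates.values()) / max(rates.values())

# Hypothetical export: which applicants an AI screen flagged for services.
sample = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

rates = selection_rates(sample)
ratio = disparity_ratio(rates)
print({g: round(r, 2) for g, r in rates.items()})  # {'group_a': 0.67, 'group_b': 0.33}
if ratio < 0.8:  # the "four-fifths" screen borrowed from employment law
    print(f"Disparity ratio {ratio:.2f} is below 0.8: flag for bias audit")
```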
4. Manage: Taking Action on Risks
The Manage function is where governance translates into action. Based on your measurements, you decide how to respond to identified risks: accept them (with documentation), mitigate them (with safeguards), transfer them (through insurance or contracts), or avoid them (by discontinuing a tool). For nonprofits, the Manage function should include clear escalation paths, defined thresholds that trigger review, and contingency plans for when AI systems fail or produce harmful outputs. This is also where your governance committee demonstrates its value by making and documenting decisions under real conditions.
- Establish response protocols for different risk levels (low, medium, high, critical)
- Create incident response plans for AI system failures or harmful outputs
- Document all risk decisions and their rationale for accountability
- Schedule regular reviews to reassess risks as AI tools and context evolve
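Response protocols are easier to apply consistently when they are written as a simple lookup rather than prose. A minimal sketch, with hypothetical risk levels and actions you would replace with your own escalation paths:

```python
# Hypothetical response protocol: each risk level maps to required actions.
RESPONSE_PROTOCOL = {
    "low": [
        "Document the decision in the risk register",
    ],
    "medium": [
        "Document the decision in the risk register",
        "Notify the governance lead",
        "Schedule a follow-up review within 30 days",
    ],
    "high": [
        "Pause any expansion of the tool's use",
        "Notify the governance committee",
        "Require human review of all consequential outputs",
    ],
    "critical": [
        "Suspend the tool immediately",
        "Notify the executive director",
        "Activate the incident response plan",
        "Brief the board",
    ],
}

def required_actions(risk_level: str) -> list[str]:
    """Look up the required response for an assessed risk level."""
    if risk_level not in RESPONSE_PROTOCOL:
        raise ValueError(f"Unknown risk level: {risk_level!r}")
    return RESPONSE_PROTOCOL[risk_level]

for step in required_actions("high"):
    print("-", step)
```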
Building Your Governance Committee and Structure
Governance frameworks are only as effective as the people who implement them. The organizational structure you create for AI oversight should match your nonprofit's size, complexity, and AI maturity. A small organization with three staff members and one AI tool does not need a formal committee, but it does need someone explicitly responsible for AI decisions. A large organization with dozens of AI-enabled tools across multiple departments needs a more structured approach with clear roles, regular meetings, and formal reporting lines.
Regardless of your organization's size, effective AI governance structures share several characteristics. They include diverse perspectives, not just technology staff but also program leaders, finance, legal counsel, and ideally someone who represents the communities you serve. They have clear authority to make binding decisions about AI adoption, modification, and discontinuation. They report regularly to the executive director or CEO and, at least quarterly, to the board. And they operate with transparency, documenting their deliberations and decisions so that the rationale behind governance choices is preserved even when committee members change. For strategic guidance on integrating AI oversight into board-level planning, see our article on building a strategic AI roadmap for boards.
Recommended Governance Committee Roles
Adapt to your organization's size and structure
Core Members
- Governance Chair: Senior leader who sets agenda, ensures accountability, and reports to the board
- Technology Lead: Staff member who understands the technical capabilities and limitations of AI tools
- Program Representative: Someone who understands how AI tools affect service delivery and beneficiaries
- Data/Privacy Lead: Person responsible for data protection policies and compliance
Advisory Members
- Finance: Reviews budget implications, vendor costs, and insurance considerations
- Legal Counsel: Reviews regulatory compliance, contracts, and liability (can be external)
- Board Liaison: Board member who ensures governance aligns with fiduciary duties
- Community Voice: Beneficiary representative or advocate to ensure affected perspectives are heard
For smaller organizations where a full committee is impractical, consider a lightweight model: designate one staff member as the AI governance lead, create a simple review checklist for new AI tools, and schedule quarterly reviews with your executive director. The structure matters less than the consistency. What makes governance work is not the size of your committee but the regularity and seriousness with which you address AI decisions.
Policy Development Essentials
Your AI governance policy is the written expression of your framework. It should be comprehensive enough to guide real decisions but concise enough that staff will actually read and refer to it. NTEN's framework suggests organizing policy documentation into six modules: strategy (why you use AI), ethics (your principles and boundaries), data (how information flows through AI systems), operations (day-to-day procedures), people (training and accountability), and compliance (regulatory requirements). This modular approach makes it easier to update individual sections as your organization evolves without rewriting the entire document.
A governance policy should be revisited at minimum every six months. The AI landscape changes too quickly for annual reviews to suffice. New tools emerge, vendor terms change, regulations take effect, and your organization's AI usage expands. Each review should assess whether current policies still reflect your actual practices, whether new risks have emerged, and whether staff are following the guidelines in practice. If you have not yet created an initial AI policy, our AI ethics checklist provides a solid starting point for identifying the principles and boundaries your policy should address.
Essential Policy Components
What your AI governance policy should cover
- Scope and applicability: Which tools, departments, and use cases the policy covers, including AI features embedded in existing platforms
- Approved and prohibited uses: Clear lists of sanctioned AI applications and explicit boundaries (e.g., no AI-only decisions about beneficiary eligibility)
- Data handling requirements: What data can be used with AI tools, anonymization requirements, and restrictions on sharing data with third-party AI services
- Human oversight requirements: Which decisions require human review of AI outputs, and the qualifications needed for that review
- Procurement and approval process: Steps required before adopting a new AI tool, including who approves and what evaluation criteria apply
- Incident response procedures: What to do when an AI system produces harmful, biased, or incorrect outputs
- Training requirements: What staff need to know before using AI tools, and how ongoing education is provided
- Review schedule: Commitment to revisiting the policy every six months, with triggers for ad-hoc reviews when significant changes occur
Risk Assessment and Management
Risk assessment is where governance moves from abstract principles to concrete analysis. Nonprofits face a particular set of AI risks that differ from those facing the corporate sector. Your beneficiaries may be more vulnerable to algorithmic bias. Your data may be more sensitive. Your margin for error may be smaller because mistakes in service delivery can have immediate, tangible consequences for people in need. A risk assessment process that accounts for these realities will serve your organization far better than a generic corporate template.
Start by categorizing your AI uses by risk level. Low-risk applications, like using AI to draft internal meeting summaries, require minimal oversight. Medium-risk applications, like AI-assisted donor segmentation, warrant regular review and documented approval. High-risk applications, like AI that influences decisions about who receives services or how resources are allocated, demand the most rigorous oversight, including bias audits, human review of every consequential output, and regular reassessment. This tiered approach ensures your governance effort is proportional to actual risk rather than applying the same heavy process to every AI use.
Low Risk
Minimal oversight required
- Internal content drafting and editing
- Meeting summarization
- General research assistance
- Administrative task automation
Medium Risk
Regular review and documentation
- Donor communication personalization
- Grant prospect identification
- Program outcome analysis
- Social media content generation
High Risk
Rigorous oversight and auditing
- Beneficiary eligibility screening
- Resource allocation decisions
- Hiring and HR screening
- Financial risk scoring
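One way to keep tier assignments consistent across reviewers is to encode the screening questions that drive them. The rules below are illustrative, not authoritative; the point is that a handful of yes/no questions about beneficiary impact and data sensitivity will sort most tools correctly.

```python
def classify_risk_tier(
    *,
    affects_eligibility: bool,
    affects_resource_allocation: bool,
    touches_personal_data: bool,
    outputs_are_external: bool,
) -> str:
    """Assign a governance tier from a few screening questions (illustrative rules)."""
    if affects_eligibility or affects_resource_allocation:
        return "high"    # decisions about who receives what: full oversight
    if touches_personal_data or outputs_are_external:
        return "medium"  # donor/beneficiary data or public-facing outputs
    return "low"         # internal drafting, summaries, research

# A social media content generator makes no eligibility decisions but is public-facing.
print(classify_risk_tier(
    affects_eligibility=False,
    affects_resource_allocation=False,
    touches_personal_data=False,
    outputs_are_external=True,
))  # medium
```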
For each high-risk AI application, document the specific risks, the mitigation measures in place, the residual risk after mitigation, and the person responsible for ongoing monitoring. This documentation serves multiple purposes: it guides your governance committee's decisions, provides evidence of responsible practices to regulators and funders, and creates institutional memory that persists through staff transitions. As regulatory requirements like the Colorado AI Act take effect, this documentation will also form the basis of your compliance evidence.
Vendor Due Diligence and Accountability
Most nonprofits do not build AI systems from scratch. They use commercial tools, often provided through nonprofit discount programs or donated by technology companies. This creates a significant governance challenge: your organization is accountable for how AI affects your beneficiaries and operations, but you may have limited visibility into how the AI actually works. Vendor due diligence bridges this gap by ensuring you understand enough about a tool's design, data practices, and limitations to govern its use responsibly.
Effective vendor due diligence starts before procurement and continues throughout the vendor relationship. Before adopting a new AI tool, your governance process should evaluate the vendor's data handling practices, model transparency, bias testing procedures, and contractual commitments. Key questions include: Does the vendor retain your data for model training? Can you export your data if you switch providers? Does the vendor provide documentation about how the model was trained and tested? What happens to your data if the vendor is acquired or goes out of business?
Vendor Evaluation Checklist
Essential questions for AI vendor due diligence
Data and Privacy
- How is your data stored, processed, and retained?
- Is your data used to train or improve the vendor's models?
- What data portability and deletion rights do you have?
- What security certifications does the vendor hold?
Transparency and Accountability
- Does the vendor provide model documentation or model cards?
- Has the vendor conducted and published bias or fairness testing?
- What is the vendor's breach notification timeline and process?
- What contractual remedies exist if the tool underperforms or causes harm?
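Recording the answers in a consistent structure makes vendor reviews comparable over time and across staff. A minimal sketch with hypothetical fields mirroring the checklist above; the 30-day breach notification threshold is an assumption to adjust to your own contractual minimums.

```python
from dataclasses import dataclass

@dataclass
class VendorReview:
    """Due-diligence answers for one AI vendor. Fields are illustrative."""
    vendor: str
    trains_on_our_data: bool         # is our data used to train or improve models?
    data_export_supported: bool      # can we take our data with us?
    deletes_data_on_termination: bool
    security_certifications: list[str]  # e.g., SOC 2, ISO 27001
    publishes_model_docs: bool       # model cards or equivalent documentation
    publishes_bias_testing: bool
    breach_notification_days: int | None  # contractual notification window

    def red_flags(self) -> list[str]:
        """Checklist items that should block approval until resolved."""
        flags = []
        if self.trains_on_our_data:
            flags.append("Vendor trains on our data")
        if not self.data_export_supported:
            flags.append("No data portability")
        if self.breach_notification_days is None or self.breach_notification_days > 30:
            flags.append("Weak or missing breach notification terms")
        return flags

review = VendorReview(
    vendor="ExampleAI (hypothetical)",
    trains_on_our_data=True,
    data_export_supported=True,
    deletes_data_on_termination=True,
    security_certifications=["SOC 2 Type II"],
    publishes_model_docs=False,
    publishes_bias_testing=False,
    breach_notification_days=None,
)
print(review.red_flags())
```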
Contract terms deserve particular attention. Many AI vendors' standard agreements include broad data usage rights, limited liability for AI errors, and unilateral rights to change the service. Your governance framework should include minimum contractual requirements that protect your organization's interests, particularly around data ownership, processing limitations, subprocessor restrictions, and termination rights. Negotiating these terms before signing is far easier than renegotiating after your organization depends on the tool.
Implementation Roadmap: A Phased Approach
Building a governance framework does not require months of dedicated effort or large budgets. The most effective approach is phased, starting with the essentials and expanding as your organization's AI maturity grows. The following roadmap breaks implementation into three phases, each building on the previous one. Most nonprofits can complete the first phase in four to six weeks, establishing a solid foundation that satisfies basic governance needs while more sophisticated components are developed.
Phase 1: Foundation (Weeks 1-6)
Establish core governance essentials
The foundation phase focuses on visibility and basic guardrails. You cannot govern what you do not know about, so this phase begins with a thorough inventory of all AI tools currently in use across your organization. Include not just obvious tools like ChatGPT but also AI features embedded in your CRM, email marketing platform, accounting software, and other operational systems. Once you have an inventory, draft a basic AI use policy that establishes your organization's principles, identifies prohibited uses, and designates who is responsible for AI governance decisions.
- Conduct a comprehensive AI tool inventory across all departments
- Draft and adopt a foundational AI use policy with clear principles and boundaries
- Designate an AI governance lead or small working group
- Communicate the policy to all staff and provide initial training
- Brief the board on current AI usage and governance plans
Phase 2: Maturation (Months 2-4)
Deepen governance structures and processes
The maturation phase adds depth and rigor to your initial foundation. This is where you formalize your governance committee structure, develop detailed risk assessment criteria, establish vendor due diligence procedures, and create processes for evaluating and approving new AI tools. You should also begin documenting your risk register, mapping each AI application to its risk level and the mitigation measures in place. During this phase, conduct your first formal review of the foundational policy you created in Phase 1, updating it based on what you have learned.
- Formalize the governance committee with defined roles and meeting cadence
- Develop a risk assessment framework with tiered categories
- Create vendor evaluation criteria and contractual requirements
- Build an AI tool approval workflow with clear decision criteria
- Conduct first policy review and update based on operational experience
Phase 3: Optimization (Months 5-12 and Ongoing)
Refine, measure, and continuously improve
The optimization phase shifts from building governance to operating and improving it. This means conducting regular audits of AI tool performance and compliance, tracking governance metrics (like the percentage of AI tools that have undergone formal review), and refining your processes based on what works and what does not. This phase also includes preparing for external accountability, whether through regulatory compliance, funder reporting, or public transparency about your AI practices. By this stage, your governance framework should be a natural part of how your organization makes technology decisions.
- Conduct semi-annual comprehensive policy reviews and updates
- Perform bias and performance audits on high-risk AI applications
- Track and report governance metrics to the board quarterly
- Monitor regulatory developments and assess compliance readiness
- Publish transparency reports on your organization's AI practices
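Many of these governance metrics fall straight out of the inventory and risk register you built earlier. A sketch of the kind of quarterly numbers worth reporting to the board, using a hypothetical inventory export:

```python
# Hypothetical inventory export: (tool name, risk tier, formally reviewed?).
tools = [
    ("Meeting summarizer", "low", True),
    ("Donor segmentation model", "medium", True),
    ("Eligibility screening assistant", "high", False),
    ("Grant-writing assistant", "low", True),
]

def governance_metrics(tools):
    """Compute simple, board-reportable oversight metrics."""
    total = len(tools)
    reviewed = sum(1 for _, _, was_reviewed in tools if was_reviewed)
    high_risk_unreviewed = [
        name for name, tier, was_reviewed in tools
        if tier == "high" and not was_reviewed
    ]
    return {
        "tools_inventoried": total,
        "pct_formally_reviewed": round(100 * reviewed / total, 1),
        "high_risk_without_review": high_risk_unreviewed,  # should always be empty
    }

print(governance_metrics(tools))
```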
Staying Ahead of the Regulatory Landscape
The regulatory environment for AI is accelerating in ways that directly affect nonprofits. Colorado's AI Act, which takes effect in June 2026, is among the most comprehensive state-level AI regulations in the country. It imposes specific obligations on "deployers" of high-risk AI systems, including impact assessments, transparency requirements, and risk management duties. Nonprofits that use AI in decisions affecting employment, services, or resource allocation may fall under its scope. New York has pending legislation targeting automated decision-making in employment and housing, and California is already enforcing AI-related protections under existing consumer privacy law. For a detailed analysis of Colorado's requirements and what they mean for nonprofits, see our article on Colorado AI Act compliance for nonprofits.
Organizations that have already built governance frameworks will find regulatory compliance far more manageable than those starting from scratch. The documentation, risk assessments, and oversight structures you develop through proactive governance are exactly what regulators will look for as evidence of responsible AI use. A governance framework does not eliminate compliance work, but it dramatically reduces the gap between your current practices and what regulations require.
Your governance framework should include a regulatory monitoring component. Assign someone, whether the governance lead, legal counsel, or an external advisor, to track relevant AI legislation in the states where you operate. When new regulations are proposed or enacted, assess their impact on your current AI practices and update your policies accordingly. This ongoing vigilance transforms regulatory changes from crises into routine governance activities.
Moving from Policy to Practice
Building an AI governance framework is not a one-time project with a defined end date. It is an ongoing organizational practice, similar to financial oversight or program evaluation, that evolves as your AI usage matures and the external landscape shifts. The nonprofits that will navigate this transition most successfully are those that start now, even imperfectly, rather than waiting for a regulatory mandate or a crisis to force their hand.
The frameworks and structures described in this article are designed to be practical and proportionate. A small nonprofit does not need a ten-person governance committee, and a large organization does not need to rebuild its governance from scratch every quarter. What every nonprofit needs is intentionality: a deliberate, documented approach to making decisions about AI that reflects both your mission and your responsibilities to the people you serve.
Start with what you can do this month. Inventory your AI tools. Designate someone responsible. Write down your principles and boundaries. Then build from there. The goal is not to create a perfect governance framework on the first attempt. It is to establish the habits, structures, and documentation that make responsible AI use a natural part of how your organization operates. When regulators, funders, or stakeholders ask about your AI practices, you want to be able to point to a living framework that demonstrates thoughtful, mission-aligned governance, not scramble to create one after the fact.
Ready to Build Your AI Governance Framework?
We help nonprofits design practical, mission-aligned AI governance frameworks that prepare you for regulatory requirements while strengthening organizational accountability. Start with a conversation about where you are today and where you need to be.
