How Boards Should Oversee AI: The Forvis Mazars Framework for Nonprofit Governance
More than 80% of nonprofits now use AI in some capacity, yet fewer than 10% have formal governance policies. The gap between AI adoption and board oversight is one of the most significant governance risks facing nonprofit organizations in 2026.

At board meetings across the nonprofit sector, a quiet conversation is happening: directors who have heard about AI, who may even use it personally, are unsure what it means for the organizations they govern. Staff are experimenting with ChatGPT for grant writing, using AI tools to schedule volunteers, and piloting automated donor communications, all without formal guidance, policy frameworks, or board awareness. The gap between what's happening operationally and what boards understand and govern is growing rapidly, and that gap carries real risk.
In February 2026, Forvis Mazars, one of the leading advisory firms serving the nonprofit sector, published a framework for AI governance that directly addresses this challenge. Their analysis argues that AI governance has crossed a threshold: it is no longer an IT matter or a staff-level decision, but a fiduciary imperative that belongs squarely in the boardroom. The organizations that treat AI oversight as a core governance responsibility, rather than a technical side concern, will be better positioned to realize AI's genuine benefits while avoiding the reputational, legal, and mission risks that come with unmanaged adoption.
This article provides a comprehensive overview of how nonprofit boards can approach AI oversight effectively in 2026. It draws on the Forvis Mazars framework as an organizing structure, alongside guidance from the National Association of Corporate Directors (NACD), WilmerHale, BoardEffect, and BDO, which have each published relevant governance guidance in the past year. The goal is practical: board members who read this should leave with a clear understanding of their oversight responsibilities, the specific risks they need to monitor, and the questions they should be asking management.
The AI Governance Gap: Why Boards Must Act Now
The data on nonprofit AI governance is stark. Analysis from Whole Whale found that while more than 80% of nonprofits use AI in some capacity, fewer than 10% have formal policies governing that use, and nearly half have no AI policy at all. A 2025 NACD survey found that while 62% of boards hold regular AI discussions, only 27% have formally incorporated AI governance into committee charters. McKinsey reports that only 17% of organizations assign board-level responsibility for AI governance.
These numbers reveal an organization-wide pattern sometimes called "shadow AI": staff adopting AI tools independently, without disclosure, without policies, and without understanding how those tools handle the sensitive data they encounter. When staff feel that asking permission will result in a blanket "no," they don't ask. The resulting informal adoption creates compliance exposure, data security vulnerabilities, and potential bias in program delivery, all without any board visibility.
The consequences of ungoverned AI adoption are not hypothetical. IBM's 2025 Cost of a Data Breach Report found that 97% of organizations experiencing AI-related breaches lacked proper access controls, and 63% of breached organizations had no formal AI governance policies. Half of organizations using AI have experienced at least one negative consequence from generative AI adoption, ranging from data leaks to biased outputs to reputational incidents.
For nonprofit boards, this matters in two ways. First, the organizations they govern serve vulnerable populations, and AI failures in programs serving those populations can cause real harm. Second, donor trust is the foundation of nonprofit financial sustainability, and reputational damage from AI incidents can be severe and difficult to recover from. The fiduciary duty that board members accept when they join a nonprofit board now implicitly includes a responsibility to understand and govern AI.
AI Governance as Fiduciary Duty
The Forvis Mazars framework grounds board AI oversight in the three traditional duties that define nonprofit board governance. This framing is important because it transforms AI oversight from a technical nice-to-have into a legal and ethical imperative, on the same level as financial oversight or executive accountability.
Duty of Care: Boards Must Be Reasonably Informed
The duty of care requires board members to be reasonably informed about the organization's operations and risks. In the context of AI, this means understanding how AI is being used across the organization, what risks it creates, and whether management has implemented appropriate safeguards. Boards fulfill this duty not by becoming AI experts themselves, but by asking good questions, requesting regular briefings from management, and investing in the AI literacy needed to evaluate the answers they receive.
- Request a comprehensive AI inventory from management showing every tool in use
- Schedule regular AI briefings, at minimum quarterly, more frequently during active adoption periods
- Invest in board AI literacy through workshops, expert presentations, or certification programs
Duty of Loyalty: Mission and Beneficiaries Come First
The duty of loyalty requires board members to ensure the organization's actions serve its mission and beneficiaries, not vendor interests, staff convenience, or external pressures. When applied to AI, this duty means ensuring that AI adoption serves the populations the nonprofit was created to help. This is particularly important for organizations whose clients or beneficiaries are from marginalized communities that AI systems may treat unfairly due to bias in training data or algorithm design.
- Evaluate each AI initiative against mission alignment, not just operational efficiency
- Require bias audits for AI systems used in program delivery or beneficiary services
- Guard against vendor relationships that compromise the organization's independence or values
Duty of Obedience: Compliance with Mission and Law
The duty of obedience requires boards to ensure the organization complies with its stated mission, values, and applicable law. For AI, this means ensuring that AI activities don't drift from the organization's values, that they comply with emerging AI regulations at state and federal levels, and that they honor commitments made to donors, funders, and the communities served. As AI regulation continues to develop rapidly, boards need to ensure management is tracking compliance requirements and adapting accordingly.
- Verify that AI use complies with applicable state AI laws and privacy regulations
- Ensure AI activities align with donor and funder expectations and any relevant grant conditions
- Monitor for mission drift as AI efficiency gains shift organizational focus
The Forvis Mazars SAFE AI Framework
At the heart of the Forvis Mazars approach is a practical four-part framework that boards can use to evaluate and guide AI governance. The SAFE framework stands for Secure, Adaptable, Factual, and Ethical, and it provides a concise lens for assessing both individual AI initiatives and the organization's overall AI posture.
S: Secure
AI implementations must protect sensitive data, including donor information, beneficiary records, and organizational finances. Boards should verify that data security practices extend to AI tools, that staff understand what data may never be entered into AI systems, and that cyber insurance covers AI-related incidents.
- Data classification policy exists for AI tool use
- Cyber insurance covers AI-related breaches
- AI tool inventory exists and is maintained
A: Adaptable
AI systems and governance frameworks must be able to evolve as technology changes, regulations develop, and organizational needs shift. Static policies become outdated quickly in a field moving this fast. Boards should ensure governance frameworks include built-in review cycles and that the organization isn't locked into AI tools or vendors that limit future flexibility.
- AI policy includes annual review requirement
- Staff training is ongoing, not a one-time event
- Vendor contracts allow for technology evolution
F: Factual
AI systems must produce reliable, accurate outputs, and organizations must have processes in place to catch errors before they cause harm. AI "hallucinations," where models generate confident but false information, are a particular concern for nonprofits producing grant reports, donor communications, and program documentation.
- Human review required for all AI-generated external content
- Staff trained to recognize and flag AI errors
- Critical decisions aren't based on AI outputs alone
E: Ethical
AI use must align with the organization's values and avoid harm to the communities served. This includes regular auditing for algorithmic bias, ensuring AI tools used for program delivery don't discriminate, and maintaining transparency with stakeholders about how AI is used in organizational decision-making.
- Bias audits conducted for AI used in programs
- AI use disclosed to stakeholders where appropriate
- AI initiatives evaluated for mission alignment
Alongside the SAFE framework, Forvis Mazars offers three core recommendations for boards. First, boards should resist the "efficiency trap," recognizing that adopting AI solely for speed or cost reduction, without evaluating mission alignment, can gradually shift organizational focus away from the people and communities the nonprofit serves. Second, boards should address algorithmic blind spots by requesting bias audits and fairness testing for AI tools used in program delivery. Third, boards should codify responsibility by adopting a recognized governance framework, such as the NIST AI Risk Management Framework, to provide structured guidance for privacy, consent, and fairness decisions.
Five AI Risks Every Nonprofit Board Must Monitor
Effective board AI oversight requires understanding the specific categories of risk that AI creates in nonprofit contexts. Different organizations will face different risk profiles depending on the nature of their programs, the populations they serve, and the AI tools they use, but five categories of risk appear consistently across sector research and warrant systematic board attention.
1. Data Security and Privacy
Donor data, beneficiary information, volunteer records, and financial data are all at risk when staff use public AI tools without clear guidelines about what may be shared. When sensitive information enters public AI platforms, it can become training data permanently, creating compliance exposure and potential harm to the individuals whose data was disclosed. Nonprofit organizations are particularly attractive targets for data breaches because they often hold large volumes of personal information without the cybersecurity infrastructure of larger institutions.
Boards should ask management to confirm that cyber insurance explicitly covers AI-related incidents and that staff have clear, written guidance about what categories of data may never be entered into AI tools.
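This kind of guidance is easiest to follow when the data categories are spelled out explicitly. As a minimal sketch, assuming hypothetical category names and an illustrative `is_submission_allowed` helper, the Python below shows how a written data classification policy can be made concrete enough to check against; it complements, rather than replaces, written policy and technical data-loss-prevention controls.

```python
from enum import Enum

class DataCategory(Enum):
    """Illustrative data categories; real classifications vary by organization."""
    PUBLIC = "public"                    # published reports, public web content
    INTERNAL = "internal"                # drafts, internal memos
    DONOR_PII = "donor_pii"              # names, addresses, giving history
    BENEFICIARY_PII = "beneficiary_pii"  # client records, case notes
    FINANCIAL = "financial"              # bank details, payroll

# Categories that may never be entered into external AI tools,
# per a hypothetical board-approved acceptable-use policy.
PROHIBITED_IN_AI_TOOLS = {
    DataCategory.DONOR_PII,
    DataCategory.BENEFICIARY_PII,
    DataCategory.FINANCIAL,
}

def is_submission_allowed(category: DataCategory) -> bool:
    """Return True if data in this category may go into an approved AI tool."""
    return category not in PROHIBITED_IN_AI_TOOLS

if __name__ == "__main__":
    for cat in DataCategory:
        verdict = "allowed" if is_submission_allowed(cat) else "PROHIBITED"
        print(f"{cat.value}: {verdict}")
```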
2. Algorithmic Bias in Program Delivery
AI systems trained on historical data can perpetuate or amplify existing inequities, which is particularly dangerous for nonprofits serving marginalized communities. If an AI system used for case management, grant screening, or beneficiary selection has been trained on data that reflects historical biases, it can systematically disadvantage the very populations the organization exists to help. This risk is not hypothetical: AI systems across many domains have demonstrated bias that reflects patterns in their training data.
Boards should require management to conduct bias audits for any AI tool that affects decisions about program eligibility, resource allocation, or beneficiary services, and to report regularly on bias testing results.
3. AI Hallucinations and Content Accuracy
Generative AI models sometimes produce confidently stated but factually incorrect information, a phenomenon known as "hallucination." For nonprofits, this risk is acute in high-stakes documents: grant proposals that cite incorrect statistics, annual reports with inaccurate program data, or donor communications that make false claims about impact. A single significant inaccuracy in a major grant report can damage funder relationships and trigger compliance reviews.
Boards should ask management to confirm that all AI-generated content undergoes human expert review before publication, submission, or distribution.
4. Mission Drift and Ethical Risk
One of the subtler risks of AI adoption is gradual mission drift. When organizations optimize for AI-measurable outcomes, they may inadvertently shift attention toward the people and programs that show up most clearly in data, and away from harder-to-quantify but equally important work. An AI tool that predicts donor giving potential might focus the development team's attention on wealthy donors and cause the organization to deprioritize smaller-gift donors from marginalized communities who are more closely connected to the mission.
Boards should periodically evaluate AI implementations against a simple question: are these tools making us better at our mission, or are they subtly reshaping our mission toward what's easy to measure?
5. Vendor Sprawl and Compliance Exposure
Without clear policies and approval processes, organizations can accumulate a large number of AI tools across departments, each with different data handling practices, terms of service, and security postures. This vendor sprawl creates compliance complexity and security risk. It also makes it difficult for boards and management to maintain meaningful oversight of how AI is being used across the organization.
Boards should ask management to maintain and regularly report a comprehensive AI tool inventory, and to describe the approval process for staff adoption of new AI tools.
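As a concrete illustration of what such an inventory might contain, here is a minimal Python sketch (3.10+). The record fields are assumptions drawn from the oversight questions in this article, not a standard schema; a real register would more likely live in a spreadsheet or governance tool than in code.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AIToolRecord:
    """One entry in a hypothetical organization-wide AI tool inventory."""
    name: str                             # e.g., a grant-writing assistant
    vendor: str
    owner_department: str                 # who is accountable for this tool
    use_cases: list[str]
    data_categories_accessed: list[str]   # per the data classification policy
    board_approved: bool
    bias_audit_required: bool = False     # True for program-facing tools
    last_security_review: date | None = None

def tools_needing_review(inventory: list[AIToolRecord],
                         as_of: date) -> list[AIToolRecord]:
    """Flag tools never reviewed, or not reviewed within the past year."""
    return [
        t for t in inventory
        if t.last_security_review is None
        or (as_of - t.last_security_review).days > 365
    ]
```

A quarterly management report to the board could be as simple as this list, with the output of `tools_needing_review` highlighted for discussion.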
Questions Every Board Member Should Be Asking
Board members don't need to be AI experts to provide effective oversight. What they need is a set of substantive questions that reveal whether management has thought carefully about AI governance and implemented appropriate structures. The following questions, drawn from NACD, Forvis Mazars, and WilmerHale guidance, cover the four areas of board AI oversight most critical for nonprofits.
Strategic Questions
- Why is AI strategically relevant to our mission? What specific outcomes are we pursuing?
- Do we have a comprehensive inventory of every AI tool currently in use?
- How are we measuring AI's impact on mission outcomes, not just operational efficiency?
- Are our AI initiatives mission-aligned with measurable outcomes, as Forvis Mazars recommends?
Risk and Compliance Questions
- Do we have a formal AI acceptable-use policy? Has it been communicated to all staff?
- Does our cyber insurance explicitly cover AI-related incidents and data breaches?
- Have we conducted bias audits on AI tools used in program delivery or beneficiary services?
- Are we tracking and complying with applicable state and federal AI regulations?
Operational Questions
- Has management translated board AI policies into clear, written staff procedures?
- Are staff roles and escalation paths for AI-related issues clearly defined?
- Is ongoing AI literacy training available? Is it required for relevant staff?
- Do our AI workflows align with board-approved governance frameworks?
Vendor Questions
- Which AI models do our software vendors use, and how are those models trained?
- What organizational data do our vendors have access to, and how is it protected?
- What is our approval process for staff adopting new AI tools?
- Do our vendor contracts include AI-specific terms about data use and security?
Building Your AI Oversight Structure
One practical question boards face is where AI oversight lives within existing committee structures. Creating entirely new committees is rarely the right answer for nonprofits with already-stretched board capacity. The more common and resource-efficient approach is to explicitly add AI responsibilities to existing committee charters, ensuring that AI governance is distributed across the appropriate oversight functions rather than siloed in one place.
Distributing AI Oversight Across Committees
How existing committees can incorporate AI governance responsibilities
Audit and Finance Committee
This committee's existing oversight of financial controls, cybersecurity, and risk management extends naturally to AI. Add explicit responsibilities for: AI-related cybersecurity review, vendor data protection assessment, cyber insurance coverage verification for AI incidents, and integration of AI risks into the organization's enterprise risk management framework.
Technology Committee (if one exists)
Where a technology or innovation committee exists, this is the natural home for detailed AI oversight: maintaining the AI tool inventory, reviewing new tool adoption requests, evaluating vendor AI capabilities, and conducting or commissioning bias audits for program-facing AI tools.
Nominating and Governance Committee
Update the board skills matrix to include AI fluency as a desirable expertise when recruiting new directors. Develop a board AI literacy education plan. Review and update committee charters to incorporate AI governance responsibilities.
Full Board
The full board should retain oversight of AI strategic alignment, major AI investment decisions, significant AI incidents, and the approval and annual review of the organization's AI acceptable-use policy.
A core governance document that every nonprofit board should approve is an AI acceptable-use policy. This policy, recommended by both Forvis Mazars and the NIST AI Risk Management Framework, should define which AI tools are approved for organizational use, what data categories may never be entered into AI systems, when human review is required before AI-generated content is used, and how staff should report AI-related concerns or incidents. Importantly, the policy should be enabling rather than restrictive. Fear-based policies that prohibit AI use broadly drive adoption underground, where it happens without any oversight at all. Effective policies channel AI use toward approved tools with clear guidelines, creating visibility and accountability while still allowing staff to benefit from AI's genuine capabilities.
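To show how the policy elements listed above can map onto something checkable, here is a minimal sketch that encodes a hypothetical acceptable-use policy as data. Every name in it, from the tool identifiers to the `needs_human_review` helper, is an illustrative assumption rather than a recommended implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AcceptableUsePolicy:
    """A hypothetical AI acceptable-use policy expressed as data."""
    approved_tools: frozenset[str]        # tools cleared for organizational use
    prohibited_data: frozenset[str]       # categories never entered into AI systems
    review_required_for: frozenset[str]   # content types needing human review
    incident_contact: str                 # where staff report AI concerns

POLICY = AcceptableUsePolicy(
    approved_tools=frozenset({"org-licensed-llm", "scheduling-assistant"}),
    prohibited_data=frozenset({"donor_pii", "beneficiary_pii", "financial"}),
    review_required_for=frozenset({"grant_proposal", "donor_email", "annual_report"}),
    incident_contact="ai-governance@example.org",
)

def needs_human_review(content_type: str,
                       policy: AcceptableUsePolicy = POLICY) -> bool:
    """External-facing content types always get expert review before release."""
    return content_type in policy.review_required_for
```

Expressing the policy as explicit data, however informally, also makes the annual review concrete: the board can see exactly which tools, data categories, and review rules changed from one version to the next.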
Boards that have already established AI champions within their organization are often better positioned for governance, because those champions can serve as a resource for both management and board members as they develop their governance approach. For organizations earlier in their AI journey, resources on getting started with AI can provide the foundational understanding board members need to ask effective oversight questions.
Building Board AI Literacy
Effective AI governance doesn't require board members to become AI experts, but it does require sufficient foundational understanding to evaluate management's decisions and ask substantive questions. The NACD has found that organizations with AI-literate boards outperform peers by nearly 11 percentage points in financial performance, suggesting that board AI literacy has measurable strategic value beyond governance risk reduction.
Building board AI literacy is an ongoing education process, not a one-time event. The technology is evolving rapidly, and governance norms are still developing. Boards that schedule regular management briefings on AI developments, bring in outside experts for annual sessions, and follow AI governance guidance from organizations like NACD, BoardSource, and BoardEffect will be better positioned to provide effective oversight.
When recruiting new board members, the nominating committee should explicitly consider AI fluency as a valuable skill in the board's overall expertise mix. This doesn't mean every board needs a data scientist. But having at least one director who has worked with AI implementation at an organizational level, and who can evaluate management's technical explanations critically, significantly improves governance quality. For organizations that have recognized the growing importance of AI in board meeting preparation and decision support, developing board AI literacy is particularly important.
Board AI Literacy Action Plan
- Schedule a dedicated board session on AI governance, with an outside expert presenter, within the next 60 days
- Add AI governance to the board's annual self-assessment and skills matrix
- Establish a regular management AI briefing cadence, at minimum quarterly
- Subscribe to NACD, BoardEffect, and Forvis Mazars governance publications for ongoing AI guidance
- Consider AI fluency explicitly when recruiting new board members
- Review and update AI governance responsibilities in committee charters at the next governance committee meeting
Governance Enables, Not Constrains, AI's Mission Impact
A common concern among board members approaching AI governance for the first time is that oversight will slow down innovation, create bureaucratic friction, or cause staff to feel micromanaged. The evidence suggests the opposite. Organizations with clear AI governance frameworks adopt AI more broadly and effectively than those without, because staff know what's permitted, what support is available, and how to escalate concerns without fear of judgment. Governance creates the psychological safety that enables responsible experimentation.
As WilmerHale's partners have articulated, responsible AI governance is "not an impediment to rapid innovation and growth, but a precondition." The same principle applies in the nonprofit context: boards that provide thoughtful AI oversight are not putting brakes on organizational capability; they're creating the conditions in which AI can be adopted confidently, deployed responsibly, and sustained over time.
The Forvis Mazars framework offers nonprofit boards a practical entry point. The SAFE framework provides a memorable lens for evaluating AI governance. The extension of fiduciary duties to AI oversight provides the ethical and legal grounding for board engagement. The specific questions and committee responsibilities outlined above provide immediate, actionable steps any board can take. The gap between AI adoption and AI governance in the nonprofit sector is significant, and it represents both a risk and an opportunity. Boards that close that gap will be better stewards of their organizations and better advocates for the communities they serve.
Strengthen Your Board's AI Governance
We help nonprofit boards develop practical AI governance frameworks, acceptable-use policies, and board education programs tailored to their organizations' specific contexts.
