
    The Board AI Literacy Imperative: Why Every Nonprofit Director Needs AI Education in 2026

    More than 80% of nonprofits now use AI in some capacity, yet two-thirds of board directors across sectors report limited or no knowledge of the technology. This gap is no longer just an educational inconvenience. It is a governance failure with real legal, financial, and mission consequences.

Published: March 10, 2026 · 16 min read · Leadership & Strategy

    Nonprofit boards are built for oversight. Directors are appointed or elected to provide governance that protects the organization's mission, safeguards its assets, and ensures accountability to the communities it serves. This oversight function has always required directors to understand the major forces affecting the organization, whether those forces are financial, legal, operational, or strategic.

    Artificial intelligence has become one of those forces. Not as a future possibility but as a present operational reality. Organizations are using AI to write grant proposals, analyze donor behavior, assess program outcomes, draft communications, and in some cases make decisions about service delivery. The staff and leadership implementing these tools may understand them reasonably well. The boards responsible for overseeing those same organizations, in most cases, do not.

    The consequences of this gap are becoming concrete. Directors and officers liability exposure from AI-related failures is growing rapidly. Courts and regulators increasingly expect boards to be informed about the technologies their organizations deploy. Insurance carriers are incorporating AI governance into underwriting criteria. Funders are beginning to ask about AI oversight structures in grant applications. The legal and financial landscape has shifted, and board AI literacy is no longer a forward-looking aspiration but a current governance obligation.

    This article makes the case for why board AI literacy matters, outlines what nonprofit directors specifically need to understand, describes the governance frameworks that leading organizations are adopting, and provides practical steps for boards at any stage of AI knowledge to begin closing the gap.

    The Scale of the Gap

    The numbers describing the board AI literacy gap are stark. Two-thirds of board directors across sectors report limited or no knowledge of AI. Fewer than one in four companies have board-approved AI governance policies. Among nonprofits specifically, estimates suggest only 10 to 24 percent have formal AI governance frameworks in place, even as the majority report using AI tools in some capacity. Approximately 40 percent of nonprofits have no staff formally trained in AI.

    What makes these numbers particularly significant is what they represent in combination: organizations deploying AI tools broadly, often without formal policies, with boards that lack the knowledge to ask the right oversight questions and staff that may not have received systematic training. This creates governance vacuums where consequential technology decisions are being made without appropriate institutional oversight.

    The one signal of change in the data is that board attention to AI oversight tripled between 2024 and 2025. Boards are beginning to recognize the issue and move toward engagement. But recognition is not yet equivalent to competence, and the pace of AI deployment in nonprofit organizations is outrunning the pace of governance development. Closing this gap requires deliberate, structured investment in board AI education, not occasional briefings or passive awareness.

• 2/3 of board directors report limited or no knowledge of AI
    • Fewer than 25% of organizations have board-approved AI governance policies
    • 3x increase in board AI oversight activity between 2024 and 2025

    AI Oversight as Fiduciary Duty

    The argument for board AI literacy is not just practical. It is legal. The board's duty of care, one of the foundational fiduciary obligations in nonprofit governance, requires directors to make informed decisions and exercise appropriate oversight over the organization's operations. In 2026, that obligation extends to how the organization uses AI.

    The legal foundation for this position comes from principles established in corporate law. The 1996 Caremark decision in Delaware established that directors breach their duty of care when they fail to make a good-faith effort to oversee operations and legal compliance. While this case involved corporate defendants, the principles have been applied broadly across organizational governance. Directors who remain willfully uninformed about AI risks cannot claim they were exercising reasonable oversight, because reasonable oversight now requires understanding the technologies at play.

For nonprofit corporations specifically, the board's fiduciary duties run to the charitable purposes of the organization, not to shareholders. This framing actually strengthens the AI governance obligation for nonprofits. AI misuse that harms the communities being served, discriminates against program participants, exposes beneficiary data, or diverts organizational focus from mission is not just a technical failure. It is a direct breach of the organization's fundamental obligations. Directors who lack the knowledge to prevent or detect such failures are accountable for that gap.

    Leading governance scholars and legal firms now explicitly frame AI oversight as an extension of the duty of care, describing it as "the new fiduciary duty." This framing has moved from academic discussion into the practical guidance being issued by major law firms, governance associations, and consulting firms. Directors who have not engaged with this framing are operating on an outdated understanding of their responsibilities.

    The D&O Liability Dimension

    Directors and officers liability exposure from AI-related failures is now described by governance analysts as one of the fastest-growing areas of D&O risk. AI-related litigation filings doubled in 2024, and the first half of 2025 alone produced twelve AI-related securities filings. Insurance carriers are responding by incorporating AI governance into underwriting criteria, asking applicants whether the organization has an AI governance policy, whether the board receives AI-related reporting, and whether AI risk assessments have been conducted.

Boards without formal AI governance structures not only face increased liability exposure if something goes wrong, but may also face coverage challenges if their insurance carrier's underwriting criteria include AI governance questions that the organization cannot answer affirmatively. The documentation of board AI education and training is increasingly what differentiates boards that can defend their oversight from those that cannot.

    What Nonprofit Directors Actually Need to Understand

    Board AI literacy does not require directors to become technical experts. It requires functional understanding of AI concepts sufficient to ask informed oversight questions and evaluate management's responses. The distinction between board AI literacy and staff AI literacy is important: boards need to understand AI at the governance level, not the implementation level.

    Based on guidance from the National Association of Corporate Directors, Forvis Mazars, WilmerHale, and Harvard Law School's corporate governance resources, the following areas represent the core of what nonprofit directors need to understand in 2026.

    Core AI Concepts for Directors

    Non-technical but functional understanding

    • What generative AI is and how it differs from traditional software decision systems
    • What AI hallucinations are and why they create organizational risk in grant writing, communications, and reporting
    • How AI models are trained and what training data means for bias in outputs
    • The distinction between internal AI tools built or configured by the organization and third-party AI vendors providing services
    • What large language models can and cannot reliably do, and where human oversight remains essential

    Governance-Level AI Knowledge

    The oversight concepts boards must own

    • AI risk categories: reputational, legal and regulatory, operational, financial, and ethical or bias risks
    • The distinction between AI strategy, which is leadership's domain, and AI oversight, which is the board's domain
    • What a board-approved AI governance policy should include: approved use cases, prohibited uses, vendor vetting, data handling, bias monitoring, and incident response
    • How AI intersects with existing obligations including data privacy, cybersecurity, employment law, and donor data protection
    • Explainability and auditability requirements for AI-assisted decisions, particularly those affecting service recipients

    Nonprofit-Specific AI Governance Concepts

    The mission-context dimensions unique to nonprofits

    • Mission drift risk: how AI tools can subtly skew resource allocation or service delivery in ways that conflict with the stated mission
    • Bias risk for the specific communities served: AI trained on general data may discriminate against exactly the populations the nonprofit exists to serve
    • Donor and beneficiary data protection: how AI tools may access, retain, or expose sensitive data from CRM systems and program databases
    • Regulatory landscape for nonprofits using AI, including emerging state-level AI laws, IRS implications, and grant compliance requirements
    • Funder expectations: how major foundations are beginning to evaluate AI governance in grant applications and due diligence processes

    The Forvis Mazars Framework for Nonprofit AI Governance

    Forvis Mazars published dedicated AI governance guidance for nonprofit boards in February 2026, providing one of the most practical and nonprofit-specific frameworks available. The framework organizes board AI governance responsibilities into four recommended actions, with specific implementation guidance for each.

    The Forvis Mazars framework reflects an emerging consensus in the governance field. WilmerHale's Key Governance Priorities for 2026 identifies the same four pillars independently, suggesting that the major governance advisory firms are converging on a consistent view of what board AI oversight requires. Boards that align their governance structures with this framework will be well-positioned relative to both regulatory expectations and D&O insurance criteria.

    Pillar 1: Comprehensive Assessment

    Conduct a comprehensive assessment of AI use and risks across the entire organization. Many boards are surprised to discover the extent of AI deployment already underway, often including tools adopted at the departmental level without formal organizational approval. This inventory is the foundation of any governance structure, because you cannot oversee what you don't know exists.

    The assessment should capture: which AI tools are in use, who authorized them, what data they access, what decisions they inform, and what risks they create. This is typically a management-led exercise with board oversight, but the board should review and approve the results and ensure the assessment is repeated at regular intervals as the AI landscape evolves.

    Pillar 2: Effective Oversight Structures

    Establish effective oversight structures for AI governance at the board level. This means designating committee responsibility for AI oversight, defining the reporting cadence from management to board on AI matters, and clarifying the escalation pathway for significant AI incidents or decisions.

    Most nonprofits will not need to create a new dedicated AI committee. The more common and practical approach is to assign AI oversight to an existing committee, typically Audit, Risk, or Finance, and ensure the committee has members with sufficient AI literacy to fulfill the oversight role. For organizations with significant AI use, recruiting at least one board member with technology or data governance expertise becomes a strategic priority.

    Pillar 3: Risk Management Protocols

    Implement protocols for identifying and managing AI-related risks aligned with recognized frameworks. Forvis Mazars specifically cites the NIST AI Risk Management Framework as the reference standard, which covers trustworthy AI characteristics: validity and reliability, safety, security and resilience, accountability, explainability, privacy, and fairness with managed bias.

    Practical implementation includes: integrating AI risks into enterprise risk management processes, establishing clear escalation protocols for ethical breaches or AI incidents, confirming that cyber insurance policies cover AI-related incidents, and developing vendor vetting questions that cover which AI models are used, how they are trained, and what data is shared with third-party vendors.
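The vendor-vetting questions in particular lend themselves to a reusable checklist that management completes before any data is shared. The sketch below is an illustrative assumption, not a prescribed instrument; the question wording paraphrases the protocol points above:

```python
# Vendor-vetting checklist; the question wording and the "no blank answers"
# pass rule are illustrative assumptions, not a cited standard.
VENDOR_QUESTIONS = [
    "Which AI models does the vendor use?",
    "How are those models trained?",
    "What organizational data is shared with the vendor?",
    "Does the contract cover AI-related incidents and liability?",
]

def vetting_gaps(answers: dict[str, str]) -> list[str]:
    """Return the questions the organization cannot yet answer."""
    return [q for q in VENDOR_QUESTIONS if not answers.get(q, "").strip()]

# Partially completed vetting for a hypothetical vendor
answers = {
    VENDOR_QUESTIONS[0]: "Vendor-hosted large language model",
    VENDOR_QUESTIONS[2]: "Anonymized program metrics only",
}
print(f"{len(vetting_gaps(answers))} unanswered questions before approval")
```

The design point is that "cannot answer affirmatively" becomes visible and auditable: an approval with open gaps is a documented risk acceptance rather than an oversight.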

    Pillar 4: Empowering Ethical AI Opportunity

    Empower teams to proactively leverage AI opportunities within ethical guardrails. Board AI governance is not only about preventing harm. It includes actively enabling the organization to capture AI's potential for mission impact.

    Boards that focus exclusively on risk without also championing responsible AI adoption create a chilling effect on innovation that can disadvantage the organization competitively and limit its capacity to serve. The governance posture that serves nonprofits best is one that enables ethical AI use while maintaining robust oversight, not one that treats all AI as suspect. This requires board members who understand enough about AI to distinguish genuinely risky applications from beneficial ones.

    Questions Every Board Member Should Be Able to Ask

    One of the most practical ways to assess board AI literacy is to evaluate whether directors can formulate informed oversight questions, understand the answers, and follow up appropriately. The following questions represent the minimum AI oversight competence nonprofit directors should develop. They are governance questions, not technical ones, requiring understanding of AI concepts at the organizational level.

    Current State Questions

    • What AI tools is our organization currently using, and who approved each one?
    • What decisions does each tool inform or make, and what is the human oversight process?
    • What data do our AI tools access, and how is that data protected?
    • Have we conducted any assessment of AI bias risk for the communities we serve?

    Governance and Risk Questions

    • Do we have a board-approved AI governance policy? When was it last reviewed?
    • What is our process for vetting AI vendors before granting data access?
    • Does our D&O insurance cover AI-related incidents? Have we confirmed this explicitly?
    • What is our incident response plan if an AI tool causes harm to a beneficiary or donor?

    Strategic AI Questions

    • How are we measuring AI's impact on our ability to deliver on our mission?
    • Are there AI opportunities we're not pursuing that could significantly advance our mission?
    • How are we ensuring that AI use aligns with our equity commitments and values?

    Regulatory Compliance Questions

    • Are we tracking state-level AI legislation that may affect our operations?
    • Do any of our funders have AI use requirements or restrictions in our grant agreements?
    • How are our AI practices disclosed to donors, clients, and the public?

    Building Board AI Literacy: A Practical Roadmap

    Most nonprofit boards will not arrive at AI literacy through passive exposure. The topic moves too quickly, the implications are too specific to governance contexts, and directors have too many competing demands to independently follow AI developments with sufficient depth. Building genuine board AI literacy requires deliberate structural investment.

    The following roadmap draws on recommendations from Forvis Mazars, the National Association of Corporate Directors, Deloitte's Board AI Governance Roadmap, and BoardEffect's nonprofit governance guidance. It is organized into immediate actions that any board can take regardless of current AI knowledge level, and longer-term investments that build sustained capability.

1. Conduct a board AI literacy assessment

    Before investing in education, establish a baseline. A simple survey asking directors to self-assess their AI knowledge across the key concept areas above takes thirty minutes and reveals where educational investment is most needed. This also helps governance and nominating committees identify AI expertise as a recruitment priority for future director searches.
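One way to make that baseline actionable: collect each director's 1-to-5 self-rating per concept area and rank areas by average score, so the weakest areas drive the education agenda. The snippet below is an illustrative sketch; the concept names and scoring scale are assumptions, not a validated instrument:

```python
from statistics import mean

# Hypothetical self-assessment responses: director -> {concept area: 1-5 rating}
responses = {
    "Director A": {"core AI concepts": 2, "AI risk categories": 1, "vendor oversight": 3},
    "Director B": {"core AI concepts": 4, "AI risk categories": 2, "vendor oversight": 2},
    "Director C": {"core AI concepts": 3, "AI risk categories": 2, "vendor oversight": 1},
}

def priority_areas(responses):
    """Rank concept areas from weakest to strongest average self-rating."""
    areas = {a for ratings in responses.values() for a in ratings}
    averages = {a: mean(r[a] for r in responses.values() if a in r) for a in areas}
    return sorted(averages.items(), key=lambda kv: kv[1])

for area, avg in priority_areas(responses):
    print(f"{area}: {avg:.2f}")  # lowest-scoring areas come first
```

Even a crude ranking like this gives the governance committee something defensible to hand an external facilitator in step 4.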

2. Commission an AI inventory from management

    Ask the executive director to provide a complete inventory of all AI tools currently in use across the organization, including who authorized each, what data it accesses, and what decisions it informs. Many boards discover uses they were unaware of. This inventory should be reviewed at least annually and is the foundation of effective AI oversight.

3. Designate committee responsibility for AI oversight

    Assign AI governance oversight to an existing committee with a clear mandate. Update the committee's charter to include AI oversight responsibilities. Establish a reporting cadence so management presents an AI risk update at least annually, with flexibility to escalate significant developments at any time.

4. Invest in board-specific AI education

    Schedule an AI education session for the full board with an external facilitator who can present AI concepts in governance context. This is distinct from the AI training staff receives. Board education should focus on oversight responsibilities, governance frameworks, the questions boards should ask, and the red flags they should recognize. Schedule annual refreshers given the pace of change.

5. Develop and approve an AI governance policy

    Work with management to develop a board-approved AI governance policy covering: approved use cases, prohibited uses, vendor vetting requirements, data handling standards, bias monitoring processes, and incident response procedures. The existence of a board-approved policy is increasingly a requirement for insurance coverage and funder due diligence. See our guide on updating your AI policy for 2026 for a framework to work from.

6. Incorporate AI literacy in director recruitment

    Update your governance matrix to include AI, technology, and data governance expertise. At a minimum, identify one current or prospective board member who can serve as the board's resident AI resource. This does not require recruiting a technologist, but it does require recruiting someone with enough understanding to interpret AI developments in governance context and support fellow directors' learning.

    Resources for Board AI Education

    Several authoritative resources are specifically designed for board-level AI governance education:

    • NACD Director Essentials: Implementing AI Governance, a dedicated guide from the National Association of Corporate Directors covering board committee roles, questions to ask management, and disclosure considerations
    • Forvis Mazars AI Governance for Nonprofit Boards (February 2026) and their SAFE AI Framework: structured guidance specifically for nonprofit governance contexts
    • Deloitte Board AI Governance Roadmap: framework design, committee roles, and reporting cadence guidance
    • BoardEffect Nonprofit AI Governance Guide: nonprofit-specific framing of governance priorities and practical implementation steps
    • Harvard Law School Corporate Governance Blog: ongoing legal and governance analysis of board AI responsibility as the regulatory landscape evolves

    The Strategic Opportunity, Not Just the Risk

    It is important to frame board AI literacy as something larger than a risk mitigation exercise. Boards that develop genuine AI understanding become more capable strategic partners to their executive teams, not just more effective watchdogs.

    AI-literate boards can engage meaningfully in strategic conversations about how AI might advance mission delivery in ways that less informed boards cannot. They can ask substantive questions about whether the organization is capturing available opportunities, not just whether it is avoiding harms. They can champion responsible AI investment at the board level, which often unlocks staff capacity and organizational momentum that cautious or ignorant boards inadvertently suppress.

    Forvis Mazars explicitly makes this point: "Boards that ask strategic questions and champion ethical frameworks can position AI as a driver of mission impact rather than a source of unchecked risk." The governance framing that serves nonprofits best in 2026 is one that holds risk oversight and opportunity recognition simultaneously, neither dismissing AI concerns nor treating every AI application as suspect.

    This balanced posture requires knowledge. It requires directors who can distinguish genuine risk from theoretical concern, meaningful safeguards from security theater, and legitimate AI opportunity from vendor overstatement. None of this is beyond the reach of engaged nonprofit directors. But it does require deliberate investment in the literacy that makes it possible.

The organizations that navigate the AI transition most successfully will be those whose boards understood the landscape, asked informed questions, established appropriate governance structures, and championed responsible innovation rather than settling for passive observation. The time to build that capacity is now, while governance choices can still be made proactively rather than in reaction to a crisis that has already occurred.

    Conclusion

    Board AI literacy is no longer a forward-looking governance enhancement. It is a present obligation with immediate legal, financial, and mission consequences. The gap between how widely nonprofits are deploying AI and how well their boards understand it represents one of the most significant governance vulnerabilities in the sector in 2026.

    Directors do not need to become technologists. They need to develop functional understanding of AI concepts, governance frameworks, the questions they should be asking, and the oversight structures that protect their organizations and the communities they serve. The resources to do this are available, the frameworks are well-developed, and the investment is manageable for any nonprofit board willing to treat AI governance with the same seriousness it applies to financial oversight or legal compliance.

    The alternative, continuing to oversee organizations deploying AI without the knowledge to do it effectively, creates exposure that grows with every new AI tool adopted. D&O liability risks, insurance coverage gaps, regulatory compliance failures, and harm to the communities nonprofits exist to serve are the concrete consequences of governance that does not keep pace with technology adoption.

Boards that invest in AI literacy now will be better positioned to support their organizations' AI strategies, protect against the risks those strategies create, and fulfill the governance obligations that have always defined good board service. That is the imperative. The steps to meet it are within reach.

    Ready to Build Your Board's AI Governance Capacity?

    One Hundred Nights provides board AI education sessions, governance framework development, and AI policy support for nonprofit organizations navigating the 2026 AI landscape.