
    D&O Liability and AI: How Nonprofit Board Members Can Protect Themselves from Personal Exposure

    AI governance has crossed from IT management into the boardroom, and the stakes are personal. Directors and officers of nonprofits face growing liability exposure when their organizations deploy AI without adequate oversight, and many are unaware that their D&O insurance may not protect them.

Published: March 21, 2026 · 14 min read · Leadership & Strategy

    For most of the past decade, artificial intelligence was an operational concern, something the technology team managed, the executive director approved, and the board occasionally heard about in a quarterly update. That era is over. Legal experts, insurers, and governance authorities now treat AI oversight as a core fiduciary responsibility, on par with financial controls and regulatory compliance. Board members who fail to exercise meaningful AI oversight are increasingly exposed to personal liability, and the legal frameworks to pursue them already exist.

    The situation is made more complicated by a parallel development in the insurance market. Directors and officers of nonprofits have historically relied on D&O insurance to protect against personal financial exposure from governance-related claims. But insurers are now adding sweeping AI exclusions to these policies, sometimes eliminating coverage for any claim that touches AI in any way. Nonprofit board members who believe they are protected may discover a significant gap only when they need coverage most.

    This article explains the legal frameworks creating personal liability exposure for nonprofit board members, what is and is not covered by D&O insurance for AI-related claims, and the specific governance actions that create a defensible posture. It is written for nonprofit trustees, board chairs, and executive directors who want to understand the risk and take concrete steps to protect their organizations and themselves. It builds on related guidance in our articles on AI insurance exclusions and building an AI governance framework.

    The goal is not to create fear around AI adoption. Most nonprofits are deploying AI responsibly and deriving real mission value from it. The goal is to ensure that responsible deployment is documented in ways that protect the individuals who govern these organizations and the communities they serve.

    Fiduciary Duty Now Extends to AI Oversight

    Nonprofit board members in the United States are bound by three foundational fiduciary duties: the duty of care, the duty of loyalty, and (for nonprofits specifically) the duty of obedience to the organization's mission. The duty of care, which requires directors to act with the diligence and prudence that a reasonably careful person would exercise in similar circumstances, has always encompassed major risks facing the organization. Legal commentary published in early 2026, including analysis from Forvis Mazars and WilmerHale, is now explicit that AI governance falls squarely within this duty.

    This represents a genuine shift. Three years ago, a board that delegated AI decisions entirely to its technology staff could reasonably argue it was exercising appropriate deference to operational expertise. Today, that argument is much harder to sustain. Courts and regulators are developing a body of precedent that treats AI as an enterprise-level risk requiring board-level visibility, not a technical matter that can be contained in the IT department.

The most significant legal framework creating personal liability exposure for board members is the "Caremark" duty, derived from a 1996 Delaware Court of Chancery decision and applied extensively since then to corporate directors. Under the Caremark standard, directors can be held personally liable in two situations: first, when they fail to implement any system for monitoring organizational risks; second, when, having implemented such a system, they ignore obvious warning signs of problems. Legal experts are now explicitly applying this framework to AI governance, and courts have begun to accept AI-related Caremark claims against boards of directors.

    In September 2025, the Delaware Court of Chancery declined to dismiss a Caremark claim against Regions Financial Corporation directors related to a $191 million settlement with the Consumer Financial Protection Bureau. While this is a for-profit case, the principle is directly applicable to nonprofits: directors who have no functioning oversight system for a major organizational risk area can be held personally liable when that risk materializes. AI now qualifies as that kind of risk area for most nonprofits.

    The Two Caremark Vulnerabilities for Nonprofit Boards

How the legal standard creating personal liability exposure applies to AI governance

    Caremark liability arises when a board can be shown to have committed one of two fundamental governance failures. Understanding these helps clarify exactly what a board needs to do to protect its members.

    • No oversight system: The board never established any mechanism for monitoring AI-related risks. No committee had AI in its charter, no policy existed, and management had no obligation to report AI incidents or failures to the board.
    • Ignoring red flags: The board had some system in place, but when staff raised concerns about AI bias, data misuse, or governance failures, the board took no meaningful action. Awareness of a problem followed by inaction is as damaging as no oversight at all.

The first vulnerability is the more common of the two and the easier to eliminate. Simply establishing a formal oversight structure, even a minimal one, addresses the "no system" problem. The second vulnerability requires that the oversight system actually function, meaning the board must genuinely review AI risk information and respond when problems emerge.

    Where Personal Liability Risk Is Highest for Nonprofit Boards

    Not all AI deployments create equal liability exposure for board members. The risk is concentrated in several specific contexts where AI decisions have direct consequences for people, involve regulated activities, or create reliance by third parties on the accuracy of the organization's representations.

    AI tools that affect which beneficiaries receive services, which job applicants are considered, or which volunteers are screened represent some of the highest-risk deployments for nonprofit boards. Court cases in 2025 involving AI hiring systems illustrate the risk: a federal court conditionally certified a massive age discrimination collective action against a major HR software company, establishing that using a vendor's AI system does not insulate an organization from liability for discriminatory outcomes. When a nonprofit uses an AI hiring tool and that tool systematically disadvantages protected-class applicants, the board faces exposure regardless of whether the nonprofit built the tool.

    "AI-washing," the practice of misrepresenting AI capabilities or the extent of AI use to donors, funders, or grant-making foundations, represents another significant liability vector. When organizations claim AI capabilities they do not have, or claim AI-driven impact figures that the technology cannot substantiate, they create fraud exposure. For nonprofits that receive grant funding based on representations about their technology capabilities, this risk extends directly to board members who approved those representations.

    High-Risk AI Deployments

    • AI tools affecting beneficiary eligibility or service allocation
    • AI in hiring, volunteer screening, or performance evaluation
    • AI handling sensitive data (mental health, immigration, financial)
    • AI tools subject to EU AI Act if any EU operations or data subjects
    • Donor or grant communications involving AI capability claims

    Regulatory Exposure

    • EU AI Act: penalties up to €35 million or 7% of global annual turnover
    • GDPR: up to 4% of global annual revenue for data violations
    • CCPA and state privacy laws: up to $7,500 per violation
    • EEOC guidance on AI hiring discrimination (Title VII, ADA, ADEA)
    • State-level AI laws proliferating rapidly in 2025-2026

The regulatory landscape compounds the litigation risk. The EU AI Act's key obligations began taking effect in August 2025, and its risk-based framework applies to organizations worldwide that operate in or serve beneficiaries in EU member states. State-level AI legislation in the United States is proliferating rapidly, with employment-related AI disclosure requirements in effect or pending in several jurisdictions. Nonprofit boards that have not conducted even a basic AI compliance review for their jurisdiction face a governance gap that creates direct personal exposure.

    The D&O Insurance Coverage Gap Nobody Is Talking About

    Nonprofit board members typically rely on directors and officers insurance as their primary protection against personal financial exposure from governance-related claims. This coverage is often taken for granted, renewed annually without detailed review, and assumed to provide broad protection for fiduciary decisions. The AI exclusion trend threatens to upend that assumption in ways that could leave board members personally exposed for costs they assumed would be covered.

The insurance market is moving faster than most organizations realize. Berkley Insurance has introduced a broad "absolute" AI exclusion for D&O and other professional liability policies that eliminates coverage for any claim "based upon, arising out of, or attributable to" the use, deployment, or development of artificial intelligence. The exclusion is sweeping: it expressly reaches AI-generated content, failure to detect AI-produced materials, inadequate AI governance, chatbot communications, and regulatory actions related to AI oversight. Hamilton Insurance Group has implemented a similar generative AI exclusion for professional liability policies.

The critical concern, highlighted by the Harvard Law School Forum on Corporate Governance in September 2025, is what it calls the "cascade risk." Because AI exclusion language is often written to apply when AI is involved "in whole or in part," even a minor role for AI in an underlying incident could void coverage. A nonprofit facing a discrimination claim related to an AI-assisted hiring process, or a beneficiary service dispute where an AI tool played any role in the decision, might find that its D&O coverage is unavailable because of the AI connection, even if the claim itself is fundamentally about board oversight, not the technology.

    Critical Insurance Review Questions

    Ask your insurance broker these questions at your next D&O policy renewal

    • Does our policy contain any AI exclusion language? Can you provide the exact exclusion text?
    • Is the exclusion absolute (applying to any AI connection) or more narrowly written?
    • Does our cyber policy contain exclusions that could eliminate coverage for AI-related data incidents?
    • Can we obtain an affirmative AI endorsement that provides coverage where the exclusion would otherwise apply?
    • What AI governance documentation would allow us to negotiate narrower exclusions or lower premiums?
    • Is standalone AI liability insurance available for our organization's risk profile?

There is a positive dynamic embedded in the insurance trend that nonprofits should understand. Underwriters are increasingly scrutinizing AI maturity during D&O policy renewals, asking specifically whether organizations have board-approved AI governance policies, designated oversight committees, regular AI risk reporting to the board, and documented vendor due diligence processes. Organizations with strong governance documentation may be able to negotiate narrower exclusions or obtain affirmative AI coverage. Organizations without governance documentation face broader exclusions and, increasingly, higher premiums. Building the governance infrastructure described below is therefore not merely a legal protection; it is also a practical strategy for maintaining insurance coverage.

    Building a Defensible AI Governance Posture

    The standard that courts and insurers apply when evaluating whether a board exercised adequate AI oversight is not perfection. It is defensibility. A defensible governance posture means that when a problem occurs, the board can demonstrate that it was informed, that it asked the right questions, that it established appropriate oversight structures, and that it acted reasonably on the information it had. The documentation that creates this demonstration is specific, and building it is entirely within reach for nonprofits at any budget level.

    The NIST AI Risk Management Framework has emerged in 2025-2026 as the consensus standard that legal experts, governance advisors, and insurance underwriters reference when evaluating organizational AI governance. Adopting the NIST AI RMF as the foundation of a nonprofit's AI governance structure provides a recognized external framework that demonstrates the board exercised care in choosing its approach. Multiple legal and insurance sources now specifically recommend the NIST AI RMF for nonprofits as the basis for governance documentation.

    Governance Structure Requirements

    • Assign AI oversight to a named committee (Audit, Risk, Technology, or dedicated)
    • Include AI oversight explicitly in the committee charter, in writing
    • Designate a staff AI lead or AI governance officer
    • Establish a clear escalation path from staff to committee to full board
    • Make AI a standing agenda item, reviewed at least quarterly

    Policy Foundation Requirements

    • Written, board-approved AI governance policy (not just delegated to management)
    • AI tool inventory documenting every tool in use, who approved it, and what data it touches
    • Vendor due diligence standards for AI tool procurement
    • Annual policy review cycle with documented board approval
    • Integration with enterprise risk management framework

    The distinction between a board-approved policy and a management-delegated one is significant from a liability perspective. When management alone owns the AI governance policy, a claim can be made that the board never exercised oversight at all, which is precisely the "no system" Caremark vulnerability. When the full board has reviewed and approved the policy, that action creates a documented record of board-level engagement with AI governance. The policy does not need to be lengthy or technically detailed. It needs to reflect genuine board deliberation and approval.

An AI tool inventory is the operational foundation of defensible governance. When the board can demonstrate that it knew which AI tools were in use, what data they handled, who had approved their deployment, and what oversight mechanisms were in place, it becomes substantially harder for a plaintiff or regulator to argue that the board was ignorant of AI risks. Starting an inventory is simple: identify every AI-enabled tool currently in use across the organization, including tools embedded in platforms the organization may not think of as "AI," such as donor management systems with predictive features, HR platforms with automated screening, or scheduling tools with optimization algorithms.
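To make the inventory concrete, the sketch below shows one way to structure an inventory record. The field names, risk-tier scale, and example entry are illustrative assumptions, not a prescribed standard; a spreadsheet with the same columns serves the purpose equally well.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

# Hypothetical sketch of an AI tool inventory record. Field names and
# the risk-tier scale are illustrative assumptions, not a standard.
@dataclass
class AIToolRecord:
    tool_name: str                 # e.g., "Donor CRM predictive scoring"
    vendor: str                    # who supplies and maintains the tool
    purpose: str                   # the decision or task the tool supports
    data_categories: List[str]     # e.g., ["donor PII", "giving history"]
    affects_people_directly: bool  # hiring, eligibility, screening, etc.
    risk_tier: str                 # "high", "medium", or "low" (assumed scale)
    approved_by: str               # named approver (staff AI lead or committee)
    approval_date: date
    last_board_review: date        # supports the oversight paper trail
    bias_audit_completed: bool     # expected for high-risk tools

# Example entry for a hypothetical applicant-screening module.
inventory = [
    AIToolRecord(
        tool_name="Resume screening module",
        vendor="ExampleHR Inc.",   # hypothetical vendor
        purpose="Initial applicant ranking",
        data_categories=["applicant PII", "employment history"],
        affects_people_directly=True,
        risk_tier="high",
        approved_by="Audit Committee",
        approval_date=date(2025, 11, 4),
        last_board_review=date(2026, 2, 10),
        bias_audit_completed=True,
    )
]
```

Whatever form the inventory takes, the fields that matter to the board are the same: what the tool does, what data it touches, who approved it, and when it was last reviewed.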

    The Documentation That Protects Individual Board Members

    The ultimate protection for individual board members in an AI-related claim is the paper trail demonstrating that they personally exercised the duty of care. This trail is built through meeting minutes, committee charters, policy approvals, and documented education activities. Each of these elements addresses a specific vector of potential liability.

Board meeting minutes carry more legal weight than many board members realize. When board members ask specific questions about AI risk, bias, or compliance during board meetings, those questions and management's responses should be recorded in the minutes. When the board reviews and approves the AI governance policy, that vote should be recorded with the date. When a board member or the full board receives an AI literacy briefing, that education activity should be noted. Courts evaluating Caremark claims look specifically at board minutes for evidence of informed, active oversight.

    The absence of AI-related content from board minutes is itself evidence of a governance gap. If a nonprofit has been deploying AI tools for two years and the board minutes from that period contain no AI-related discussion, no policy approvals, and no risk review, a plaintiff or regulator can point to that absence as proof that the board established no oversight system. Conversely, minutes that document regular, substantive engagement with AI risk over the same period create powerful evidence of appropriate governance.

    Documentation Checklist for Board Members

    The paper trail that demonstrates individual duty of care

    Board Meeting Minutes Should Document:

    • Specific questions board members asked about AI risks, bias, and compliance
    • Management's responses and any follow-up action items
    • Board review and approval of AI governance policies, with dates
    • Votes on significant AI deployments or major AI vendor contracts
    • Board AI education activities, briefings, or training sessions

    Supporting Documentation:

    • Written AI governance policy with board approval date and signatures
    • Committee charter with AI oversight in the written scope
    • AI tool inventory with approval dates and data classifications
    • Vendor due diligence records for AI tools
    • Bias audit records and remediation actions for high-risk AI tools

    Vendor Due Diligence: Why Using Someone Else's AI Is Not a Defense

One of the most important legal developments in 2025 for nonprofits using AI vendors is the court ruling in the Workday case, which conditionally certified the age discrimination collective action against the HR software company referenced earlier. The significance for nonprofits is not the claim against Workday itself, but the legal principle it reinforces: organizations that use a vendor's AI system can face liability for discriminatory outcomes produced by that system, even if they did not build or configure the underlying model.

    The Massachusetts Attorney General's July 2025 settlement with Earnest Operations LLC for $2.5 million over AI-driven lending discrimination established a complementary principle on the regulatory side: regulators expect organizations to proactively test AI systems for bias and disparate impact, not simply deploy and observe. A nonprofit that deploys an AI tool affecting beneficiary eligibility without conducting any bias assessment, and that later discovers the tool systematically disadvantaged certain populations, is exposed to both the substantive discrimination claim and the governance failure claim.

Vendor due diligence for AI tools should therefore go beyond the standard security and data privacy review that nonprofits conduct for technology vendors. For any AI tool that affects people directly, boards should ensure management can answer specific questions about how the model works, what data it was trained on, whether bias testing has been conducted, and who is responsible for ongoing monitoring of the model's outputs. Our nonprofit AI vendor evaluation checklist provides a framework for this assessment that can be adapted to different tool types and risk levels.

    For the highest-risk AI deployments, including any tool that affects beneficiary services, hiring decisions, or sensitive personal data, boards should require that management conduct an independent bias audit before deployment and on a regular basis thereafter. Many vendors will resist this request, citing intellectual property protections for their model details. That resistance is itself a governance signal. Organizations should insist on contractual language that provides audit rights, or should evaluate whether the tool's risk profile justifies using a vendor that will not provide them.
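To make "bias assessment" concrete, one widely used first-pass screen is the EEOC's four-fifths rule: a group whose selection rate falls below 80% of the highest group's rate is conventionally flagged for review. The sketch below shows the arithmetic on made-up counts; a genuine bias audit examines far more than this single ratio.

```python
# Minimal illustration of the EEOC "four-fifths rule" screen for adverse
# impact in an AI-assisted selection process. All counts are made up.
selections = {
    # group: (selected by the tool, total applicants)
    "group_a": (48, 120),
    "group_b": (22, 110),
}

rates = {group: sel / total for group, (sel, total) in selections.items()}
top_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / top_rate
    flag = "REVIEW" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} [{flag}]")
```

A low ratio does not prove discrimination by itself, but it is exactly the kind of red flag that, once surfaced to the board, triggers the Caremark duty to respond.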

    Eight Actions Boards Should Take This Quarter

    The steps that protect board members from personal AI liability are not technically complex. They are primarily organizational and procedural. The following eight actions represent the minimum defensible posture for a nonprofit board in 2026, based on the legal frameworks, insurance market dynamics, and regulatory standards described in this article.

    01

    Review your D&O policy for AI exclusion language

    Request the complete policy exclusions section from your broker and ask specifically about AI exclusions. Do this before the next renewal, not at the renewal. If absolute exclusions exist, explore affirmative AI endorsements or alternative carriers.

    02

    Create a board-approved AI governance policy

The policy does not need to be lengthy. It needs to be written, board-approved, and dated. This single action eliminates the Caremark "no system" vulnerability. A one-page policy that establishes mission alignment, oversight structure, and basic ethical principles is substantially better than no policy.

    03

    Assign AI oversight to a named committee

    Update the committee charter in writing to include AI oversight within its defined responsibilities. This can be the Audit Committee, Risk Committee, or a dedicated Technology Committee. The key is that AI oversight lives in a committee with board-level reporting.

    04

    Commission an AI tool inventory

    Ask management to produce a list of every AI-enabled tool currently in use across the organization, who approved each tool, what data it accesses, and what oversight mechanisms are in place. Review this inventory at the next board or committee meeting and ensure it is updated regularly.

    05

    Add AI as a standing agenda item

    Board minutes documenting regular AI risk discussion are a primary defense in Caremark claims. Even brief quarterly updates on AI usage, incidents, and governance changes create the paper trail of active oversight.

    06

    Conduct board AI literacy education

    Document this education in board minutes. Courts look favorably on boards that actively sought to become informed about major organizational risks. A single briefing from an AI governance expert provides both the knowledge board members need and the documentation that demonstrates they sought it.

    07

    Require vendor due diligence documentation for AI tools

    For any AI tool affecting beneficiaries, hiring, or sensitive data, require management to complete a formal due diligence process before deployment. Document the results and board or committee review of the findings.

    08

    Integrate AI risk into enterprise risk management

    Ask management to add AI-specific risk categories (ethical, operational, legal, reputational) to the organization's existing risk register and to report on them in regular risk updates to the board.
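As a sketch of what action 08 might produce, the entries below show AI-specific risks added to a register, using the four categories named above. The field names, 1-5 scoring scale, and example risks are illustrative assumptions about how a register might be kept, not a prescribed format.

```python
# Hypothetical AI-specific entries for an existing risk register, using
# the categories named in action 08. Scales and fields are assumptions.
ai_risk_entries = [
    {
        "category": "ethical",
        "risk": "AI screening tool disadvantages protected-class applicants",
        "likelihood": 3,  # 1 (rare) to 5 (almost certain)
        "impact": 5,      # 1 (minor) to 5 (severe)
        "owner": "HR Director",
        "mitigation": "Independent bias audit before deployment and annually",
    },
    {
        "category": "legal",
        "risk": "AI exclusion in D&O policy voids coverage for an AI-linked claim",
        "likelihood": 3,
        "impact": 4,
        "owner": "Executive Director",
        "mitigation": "Review exclusion language at renewal; seek affirmative endorsement",
    },
    {
        "category": "reputational",
        "risk": "AI capability claims in grant applications cannot be substantiated",
        "likelihood": 2,
        "impact": 4,
        "owner": "Development Director",
        "mitigation": "Require documentation review before AI-related funder claims",
    },
]

# Sort by combined score so the quarterly board report surfaces the
# largest risks first.
for entry in sorted(ai_risk_entries, key=lambda e: e["likelihood"] * e["impact"], reverse=True):
    score = entry["likelihood"] * entry["impact"]
    print(f'{entry["category"]:<12} score {score:>2}  {entry["risk"]}')
```

Folding AI risks into the existing register, rather than tracking them separately, keeps them visible in the same quarterly reporting the board already reviews.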

    Conclusion

    The liability landscape for nonprofit board members has shifted in ways that most governance advisors did not anticipate even three years ago. AI has moved from operational concern to fiduciary responsibility, and the legal frameworks to pursue board members who fail to exercise adequate AI oversight already exist and are being applied. The Caremark standard does not require that boards prevent all AI-related problems. It requires that boards establish functioning oversight systems and respond to warning signs. Meeting that standard is achievable for any organization, regardless of size or technical sophistication.

    The insurance market development adds urgency to the governance conversation. D&O policies that board members have relied on for decades may contain AI exclusions that fundamentally alter the coverage. Reviewing those exclusions, engaging with your broker about affirmative AI coverage, and building the governance documentation that allows you to negotiate from a position of strength are all concrete actions available right now. Organizations with strong AI governance records are finding that insurers respond positively, both in policy terms and in pricing.

    The good news embedded in this landscape is that the actions required for defensible AI governance are largely the same actions that make AI adoption more effective, ethical, and mission-aligned. An AI tool inventory helps organizations make better decisions about which tools to keep and which to eliminate. A vendor due diligence process catches problems before they become claims. Board education on AI risk improves the quality of strategic oversight across the organization. Building the governance structures that protect individual board members also builds the organizational capacity to deploy AI responsibly for the communities nonprofits serve.

    For organizations ready to go deeper on governance frameworks, our articles on building an AI governance framework and AI vendor evaluation provide implementation detail. For organizations that need to start the conversation at the board level, the eight actions outlined in this article represent a practical first quarter agenda that addresses the most significant personal liability exposures.

    Strengthen Your Board's AI Governance

    We help nonprofit boards build AI governance frameworks that protect individual directors and advance mission-aligned technology adoption. Contact us to discuss your organization's current governance posture and where to start.