
    AI Insurance Exclusions Are Coming: What Every Nonprofit Board Needs to Know in 2026

    Your nonprofit's insurance policies may no longer cover AI-related claims, and most boards don't know it yet. New exclusions are appearing in general liability, cyber, and D&O policies, creating gaps that documented AI governance can help close.

    Published: March 21, 2026 · 14 min read · Risk & Governance
    AI insurance exclusions and nonprofit board governance

    Most nonprofit boards have been focused on adopting AI, training staff to use it, and exploring what it might do for their mission. Far fewer have been paying attention to a quieter but more urgent development: the insurance market is changing the rules. New AI-related exclusions are appearing in standard policies, and they are not being announced with fanfare. They are showing up as endorsements at renewal, buried in the fine print, and many organizations won't notice them until they need to file a claim.

    The shift began accelerating in early 2026, when Verisk, the insurance industry's primary forms standardization body, released new general liability exclusion forms (CG 40 47 and CG 40 48) specifically targeting generative AI. These optional endorsements, effective January 1, 2026, allow carriers to exclude bodily injury, property damage, and personal and advertising injury arising from AI. Because carriers across the industry commonly adopt Verisk templates, these exclusions are likely to spread quickly. Policies renewing in 2026 are the first wave to be affected.

    For nonprofit boards, the implications are significant. Boards are the ultimate stewards of organizational risk, and the AI coverage gap is a material risk that most boards have not yet assessed. Organizations that deploy AI tools without understanding what their insurance actually covers may discover they have significant uninsured exposure at the worst possible moment. This article explains what is changing, where the gaps are, and what boards can do to protect their organizations before a claim arises.

    Importantly, this is not simply a story about insurance. It is a story about governance. The insurance market is sending a clear signal: documented AI governance directly reduces risk, and carriers are beginning to tie coverage terms and premiums to demonstrated governance maturity. The organizations that build robust AI oversight structures today will be better positioned on multiple dimensions, including insurability, board liability protection, and the ability to negotiate narrower exclusions at renewal.

    What Is Changing in the Insurance Market

    The insurance industry's response to AI is unfolding across multiple policy lines simultaneously, and the coverage implications vary by policy type. Understanding which policies are affected, and how, is the starting point for any board-level risk assessment.

    General Liability

    The most immediate and widespread change

    The Verisk CG 40 47 and CG 40 48 endorsements, effective January 2026, allow carriers to exclude bodily injury, property damage, and personal and advertising injury arising from generative AI. The triggering language is broad: "based upon, attributable to, arising out of, or related to, in whole or in part." This means AI exclusions can be invoked even when AI played only a minor role in the harm.

    • Bodily injury or property damage from AI-generated decisions
    • Defamation or misappropriation from AI-generated content
    • Privacy violations from AI processing of personal data

    Cyber Insurance

    Emerging exclusions targeting AI-specific threats

    Cyber policies have not yet broadly excluded AI risks outright, but targeted exclusions are appearing, particularly around AI-driven fraud and algorithmic misconduct. Some carriers are proposing sweeping endorsements that exclude any claim arising from AI use, output, training, advice, or decision-making.

    • AI-driven fraud, including deepfake and voice-clone social engineering
    • Algorithmic misconduct and discriminatory outputs
    • Unauthorized use of donor or beneficiary data in AI training

    Directors & Officers (D&O)

    The most serious personal liability exposure for board members

    AI-related D&O liability is the fastest-growing exposure for board members. AI-related securities class actions have doubled in recent years, with 12 such lawsuits filed in the first half of 2025 alone, and the trend is accelerating. Average D&O claim settlements have risen significantly.

    • "AI-washing": inflated claims about AI capabilities to donors or funders
    • Failure to disclose material AI risks (bias, data breaches)
    • Deploying AI without adequate governance and human oversight

    Professional Liability / E&O

    Coverage gaps for AI-assisted service delivery

    Professional liability policies are beginning to exclude errors stemming from AI tools used in service delivery. For nonprofits that provide direct services, including social services, healthcare-adjacent programs, legal aid, or counseling, this creates a meaningful gap in professional error coverage.

    • AI-assisted professional advice errors
    • AI-generated work product used in service delivery
    • Errors from unreviewed AI outputs used with beneficiaries

    The Governance Gap That Makes Boards Vulnerable

    The core problem is a profound mismatch between AI adoption and AI governance across the nonprofit sector. While the vast majority of nonprofits now use AI in some capacity, only a small fraction have the governance structures that protect them from liability when something goes wrong. This gap is not just an abstract risk; it is precisely the kind of discrepancy that plaintiffs' attorneys and insurance adjusters are trained to look for.

    Consider what "deploying AI without adequate governance" looks like in practice for a nonprofit. Staff are using AI tools, often free consumer products like ChatGPT, to draft donor communications, summarize case notes, analyze data, and write grant proposals. These tools are integrated into daily work informally, without written policies, without data handling guidelines, without documentation of which tools are approved or prohibited, and without board awareness. From an insurance and liability standpoint, this is the scenario that creates the greatest exposure.

    D&O underwriters are now explicitly asking about AI governance during renewals. The presence or absence of a documented AI policy is influencing coverage terms. Two-thirds of board directors report limited or no knowledge of AI, and only a small fraction of nonprofits have formal board-approved AI policies. This governance gap is not just a strategic weakness; it is a documented liability factor that is entering the insurance underwriting conversation.

    The emerging legal standard for board members in the AI era requires what some governance scholars describe as "AI due care," meaning the board must exercise informed, technologically literate oversight of AI deployment. Boards cannot fully delegate AI risk to staff and then claim ignorance if something goes wrong. The duty of care requires that directors engage with AI risk at a meaningful level, not as technical experts, but as thoughtful fiduciaries asking informed questions and ensuring governance structures are in place.

    What "Adequate Governance" Looks Like in Practice

    Insurance underwriters and governance experts have begun to converge on a set of practices that constitute demonstrable AI governance maturity. These practices are not primarily about technical controls; they are about documentation, oversight, and accountability structures that boards can verify and that carriers can assess. The NIST AI Risk Management Framework (AI RMF) has emerged as the structural baseline most aligned with underwriter expectations in the U.S., but implementation does not require deep technical expertise.

    Board-Level Governance Actions

    Governance begins at the board level, and the record of board engagement with AI risk matters enormously. When a claim arises, board minutes that reflect informed AI risk discussions are a meaningful legal protection. Silence in the record is dangerous.

    • Adopt a formal board-approved AI policy (even a basic one signals governance maturity to underwriters)
    • Assign AI oversight to a specific committee or designate a board liaison for AI risk
    • Add AI risk to the annual audit and enterprise risk management review cycle
    • Build minimum AI literacy across the board sufficient to ask informed oversight questions
    • Ensure board minutes document AI risk discussions, not just operational updates

    Organizational Governance Actions

    Staff and operational AI governance is where most of the day-to-day risk lives. Unmanaged staff use of AI tools, particularly consumer products that may process donor or beneficiary data, is among the highest-risk scenarios for a nonprofit. Organizational governance turns informal behavior into documented, defensible practice.

    • Maintain a written inventory of all AI tools in use, including free tools staff use informally
    • Document each tool's purpose, data inputs, vendor terms, and approved use cases
    • Establish human oversight and review protocols for AI-assisted decisions affecting beneficiaries or donors
    • Create escalation protocols for AI-related incidents or ethical concerns
    • Prohibit entering personally identifiable beneficiary or donor data into public AI tools without explicit authorization
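The inventory and documentation steps above amount to a simple structured record per tool. A minimal sketch in Python of what such a record and a basic risk check might look like (the field names and flagging rules are illustrative assumptions, not a standard schema):

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One entry in an organization's AI tool inventory (illustrative fields)."""
    name: str                      # e.g. "ChatGPT (free tier)"
    purpose: str                   # what staff actually use it for
    data_inputs: list              # categories of data entered into the tool
    vendor_terms_reviewed: bool    # has anyone read the vendor's data terms?
    approved_use_cases: list       # documented, approved uses (empty = none)
    handles_pii: bool              # do inputs include donor/beneficiary PII?

def flag_risks(inventory):
    """Return tools needing review: PII exposure or missing documentation."""
    flags = []
    for tool in inventory:
        if tool.handles_pii and not tool.vendor_terms_reviewed:
            flags.append(f"{tool.name}: PII entered but vendor terms not reviewed")
        if not tool.approved_use_cases:
            flags.append(f"{tool.name}: no approved use cases documented")
    return flags

inventory = [
    AIToolRecord("ChatGPT (free tier)", "drafting donor letters",
                 ["donor names"], vendor_terms_reviewed=False,
                 approved_use_cases=[], handles_pii=True),
]
for warning in flag_risks(inventory):
    print(warning)
```

Even a spreadsheet with these columns serves the same purpose; the point is that each tool's entry is written down, reviewable by the board, and presentable to an underwriter.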

    Insurance-Specific Actions

    Many nonprofits have not reviewed their insurance policies for AI-related language since these exclusions began appearing. The insurance-specific governance actions are largely about awareness, communication with your broker, and deliberate negotiation rather than passive renewal.

    • Request a side-by-side comparison of any new AI exclusion language at next renewal across GL, cyber, D&O, and E&O
    • Ask your broker explicitly: what AI-related claims are now excluded from each policy?
    • Confirm that cyber policies address AI-related incidents, including AI-enabled phishing and hallucination-driven data exposures
    • Explore whether affirmative AI endorsements are available, and what governance requirements unlock them
    • Document governance controls in writing before renewal negotiations, as carriers tie premiums and exclusions to demonstrable governance

    The AI-Washing Risk Nonprofits Are Overlooking

    One of the more surprising liability risks emerging in the AI era is what governance experts call "AI-washing": making claims about AI capabilities that are inflated, misleading, or simply unverified. In the commercial sector, this has led to enforcement actions and securities litigation. In the nonprofit sector, the same dynamics can play out in grant reports, donor communications, impact statements, and annual reports.

    Consider a common scenario: a nonprofit receives a grant to implement an AI system for beneficiary intake, writes a grant report describing AI-powered outcomes, and the funder later questions whether the described AI capabilities were actually deployed as represented. Or a major donor gives based on a presentation that described the nonprofit's AI-enabled programs in glowing terms, only to find the AI component was minimal or aspirational rather than operational. These scenarios carry real liability risk, and the D&O coverage that would normally protect board members may include exclusions for AI misrepresentation.

    The practical guidance here is straightforward but requires organizational discipline: any public claim about AI use must be accurate and verifiable. Board members who review and approve annual reports, grant applications, donor communications, and impact statements should be specifically attentive to AI-related claims. The threshold for AI-related assertions should be the same as for any material factual claim: you must be able to substantiate it. When your organization is discussing AI capabilities with donors or funders, lean toward accuracy over enthusiasm. The reputational and legal cost of overstatement far exceeds the cost of measured, honest communication.

    This also applies to how staff describe AI tools to beneficiaries. If AI is used in any decision that affects a beneficiary's access to services, the organization should have a clear policy on disclosure and explanation. Some states are beginning to require disclosure when AI influences service-delivery decisions, and the absence of such a policy creates both legal and ethical exposure. See our article on building an AI governance framework for nonprofits for guidance on how to structure these policies.

    The "Swiss Cheese" Coverage Problem

    Insurance professionals are using the term "Swiss cheese" to describe the current state of AI coverage: no single policy covers all AI perils, and specialty AI policies often carry high premiums and extensive governance requirements. A single AI-related incident could trigger disputes across multiple policies, each pointing to exclusions in the others. This is not a hypothetical concern; it is the current structure of the market.

    Imagine a scenario where a nonprofit's AI tool for case management produces biased recommendations that harm a beneficiary. The resulting claim might simultaneously involve general liability (harm from AI-generated advice), cyber liability (if the incident involved a data exposure), professional liability (if the harm arose from professional service delivery), and D&O (if board oversight is challenged). In the current market, each of these policies may contain exclusions that route the claim to another policy, creating a coverage dispute that leaves the organization bearing costs it believed were insured.

    The practical implication for nonprofits is not to avoid AI, but to ensure that risk management thinking keeps pace with AI adoption. Organizations should work with a broker who understands both nonprofit risk and the evolving AI exclusion landscape. The broker conversation at renewal should not be a passive review; it should be a deliberate negotiation where your organization's governance maturity is presented as a substantive factor in coverage terms.

    Smaller nonprofits with modest D&O coverage should be particularly attentive to rising settlement values. Median nonprofit D&O coverage has historically been around $1 million, an amount that was once adequate for most nonprofit liability scenarios but may be inadequate against AI-related claims where settlements have risen substantially. This is worth a specific conversation with your broker about whether current limits are appropriate given the organization's AI use profile.

    A Practical Roadmap for Boards

    The goal is not to create an expensive compliance exercise, but to take targeted, high-value governance actions that reduce liability exposure and improve the organization's insurance position. The following sequence is designed to be achievable for boards and organizations of any size.

    Immediate Actions (Before Your Next Renewal)

    • Contact your broker and request a review of all policies for new AI-related exclusion language. Do this before your next renewal, not during it.
    • Ask management to provide a written inventory of every AI tool currently in use, including free tools staff use informally.
    • Determine whether the organization has a written AI policy. If not, task leadership with drafting one before the next policy renewal.
    • Place AI risk on the next board meeting agenda for a dedicated discussion and ensure it is documented in the minutes.

    Short-Term Actions (Next 90 Days)

    • Adopt a formal board-approved AI policy that covers approved use cases, data handling rules, and oversight responsibilities.
    • Assign AI oversight responsibility to a specific board committee or designated director.
    • Review all public-facing communications, grant reports, and donor materials for AI-related claims that may constitute AI-washing.
    • Begin documenting vendor due diligence for third-party AI tools, including data processing agreements and security terms.

    Ongoing Governance Integration

    • Add AI risk to the annual enterprise risk management review alongside financial, reputational, and operational risks.
    • Implement the NIST AI Risk Management Framework as a structural governance baseline, working through the Govern, Map, Measure, and Manage functions over time.
    • Build board-level AI literacy through brief, focused education at board meetings, not deep technical training, but enough to ask informed questions about model limitations and oversight.
    • Bring documented governance evidence to insurance renewal conversations as a substantive negotiating asset.
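One way to work through the NIST AI RMF functions over time, as the list above suggests, is to track each governance task against the function it serves and watch where the gaps sit. A minimal sketch (the task-to-function mapping here is an illustrative assumption, not prescribed by NIST):

```python
# NIST AI RMF's four functions, used as buckets for tracking governance
# tasks; the example tasks below are illustrative, not exhaustive.
RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

tasks = [
    ("Govern",  "Board-approved AI policy adopted",                 True),
    ("Govern",  "AI oversight assigned to a committee",             True),
    ("Map",     "Written inventory of all AI tools in use",         True),
    ("Map",     "Vendor terms documented for each tool",            False),
    ("Measure", "Human review protocol for AI-assisted decisions",  False),
    ("Manage",  "Escalation protocol for AI-related incidents",     False),
]

def maturity_by_function(task_list):
    """Completed/total counts per RMF function; gaps show where to focus next."""
    summary = {fn: [0, 0] for fn in RMF_FUNCTIONS}
    for fn, _desc, done in task_list:
        summary[fn][1] += 1
        if done:
            summary[fn][0] += 1
    return {fn: f"{done}/{total}" for fn, (done, total) in summary.items()}

print(maturity_by_function(tasks))
```

A simple completed/total tally per function is also exactly the kind of documented governance evidence that can be brought to a renewal conversation.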

    Governance as a Strategic Asset, Not Just a Compliance Burden

    It is easy to frame AI governance primarily as a risk mitigation exercise, a set of policies and procedures designed to prevent bad outcomes. That framing is accurate but incomplete. The organizations building robust AI governance structures today are also building a strategic asset that will differentiate them in multiple competitive dimensions: insurability, donor and funder trust, regulatory positioning, and the ability to attract talent who want to work in organizations that take ethics seriously.

    Funders are beginning to ask about AI governance in due diligence. Some are actively rewarding organizations with documented governance maturity through preferred treatment in grant decisions. Donors who care about responsible technology use, a growing constituency in the philanthropic sector, view AI governance as a signal of organizational integrity. And as AI regulations continue to develop at the state and federal level, organizations that have built governance structures proactively will adapt more easily than those scrambling to comply reactively.

    For boards specifically, the governance-as-asset framing changes the conversation from "how do we avoid liability" to "what kind of organization do we want to be in the AI era." The answer to that question, expressed through documented policies, board engagement, staff training, and vendor oversight, is increasingly visible to the constituencies your organization depends on. See our resources on AI fundamentals for nonprofit leaders and integrating AI into your strategic plan for additional context on building organizational AI capacity alongside governance.

    The insurance market is not waiting for nonprofits to catch up. Exclusions are appearing now, in policies that are renewing now, and the coverage gaps they create are real. But the tools to address them are accessible: a written AI policy, a board conversation documented in minutes, a vendor inventory, and a renewal conversation with your broker that reflects your organization's governance maturity. None of these require technical expertise. They require the same disciplined stewardship boards apply to every other material organizational risk.

    Conclusion

    AI insurance exclusions represent a new category of organizational risk that most nonprofit boards have not yet assessed. The changes underway in the insurance market are not speculative; they are documented in Verisk filing forms, D&O underwriting questionnaires, and carrier renewal endorsements. The patchwork of coverage that exists today means that no single policy fully addresses AI risk, and organizations that assume they are covered may discover they are not when it matters most.

    The good news is that the same governance practices that protect boards from personal liability also improve the organization's insurance position, build trust with donors and funders, and prepare the organization for the regulatory environment that is developing around AI. These are not separate workstreams; they are the same work. The board that prioritizes AI governance is simultaneously protecting individual directors, strengthening organizational coverage, and positioning the nonprofit as a responsible leader in an era when that credibility will matter more, not less.

    Start with what is actionable: a broker conversation about current policy language, an inventory of AI tools in use, and a board agenda item to document the conversation. These steps are available to every nonprofit, at every budget level, and they represent the minimum due diligence that boards owe their organizations in 2026.

    Build AI Governance That Protects Your Organization

    One Hundred Nights works with nonprofit boards and leadership teams to develop AI governance frameworks that reduce risk, satisfy underwriter requirements, and build stakeholder trust.