The Silent Coverage Gap: Why Your Nonprofit's Insurance May Not Cover AI-Related Claims
Major insurers are quietly adding AI exclusions to the policies nonprofits rely on most. As policies renew in 2026, organizations deploying AI tools without reviewing their coverage face a serious and largely invisible risk: a costly claim denied on the grounds that AI was involved.

For years, nonprofits deploying new technologies operated under what the insurance industry calls "silent coverage." AI incidents were never explicitly addressed in policy language, so claims involving AI generally fell under existing general liability, directors and officers, or professional liability coverage by default. That era is ending, and it's ending quietly, at policy renewal, with exclusion endorsements that few nonprofit leaders know to look for.
Starting with policies renewing in 2026, major carriers are incorporating new AI exclusion language developed through Verisk (ISO), the standard-setting body whose policy templates are adopted by insurers nationwide. These new exclusion forms define "generative artificial intelligence" broadly as any "machine-based learning system or model that is trained on data with the ability to create content or responses, including but not limited to text, images, audio, video or code," and explicitly exclude claims arising from its use from bodily injury and property damage coverage as well as personal and advertising injury coverage.
This is not theoretical. W.R. Berkley has introduced what legal analysts describe as an "absolute" AI exclusion for D&O, E&O, and fiduciary liability policies, eliminating coverage for any claim "based upon, arising out of, or attributable to" the use, deployment, or development of artificial intelligence. AIG has told regulators that generative AI is a "wide-ranging technology" where claims "will likely increase over time" and has filed to limit its liability. The trend is systemic, not isolated, and it is moving faster than most nonprofit leaders realize.
The risk is especially acute for nonprofits because of where they deploy AI. Unlike commercial businesses that primarily use AI for internal operations, many nonprofits deploy AI in direct service delivery: intake assessments, crisis support chatbots, eligibility determinations, youth programs, and healthcare coordination. When AI tools make errors in these contexts, the people harmed are often among the most vulnerable populations in society, and the potential for litigation is significant. The question is whether your insurance will respond when it matters most.
This article explains how the "silent coverage" era is ending, which policy lines are most affected, what the exclusion language actually says, and the practical steps your organization can take before your next renewal to avoid discovering your coverage gap the hard way.
The End of "Silent AI" Coverage
"Silent AI" was the insurance equivalent of a gray area. Because AI tools were not mentioned anywhere in legacy policies, there was genuine ambiguity about whether AI-related incidents were covered. In practice, claims often proceeded under existing coverage frameworks, and insurers had limited ability to deny them solely on the grounds that AI was involved. The silence worked, imperfectly but reliably, in policyholders' favor.
The industry has been methodically closing this gray area. Carriers that initially tolerated silent AI coverage have been filing for explicit exclusions as they gain better insight into the frequency and severity of potential claims. The Verisk filing represents the industry's first coordinated, standardized approach to excluding generative AI from general liability policies at scale. When ISO files a new exclusion form, it is made available to all member carriers simultaneously, and most will incorporate it into their policy language at the next available renewal cycle.
The significance for nonprofits is timing. Organizations that renewed their general liability policies in early 2025 may still have the older, pre-exclusion language in place. But as policies come up for renewal throughout 2026, carriers are adding these endorsements. A nonprofit that deploys a beneficiary-facing AI chatbot today under a policy that was written before the Verisk forms may find that when that policy renews, the coverage it was counting on is simply no longer there.
What the Verisk 2026 Forms Actually Say
Key exclusion language now appearing in new general liability policy forms
The new ISO general liability exclusion forms define generative AI broadly and exclude it from two major coverage areas:
- Coverage A (bodily injury and property damage): Excluded for losses "arising out of generative artificial intelligence"
- Coverage B (personal and advertising injury): Excluded for losses arising from AI-generated content and AI-assisted communications
- Scope: Applies to any "machine-based learning system or model that is trained on data with the ability to create content," a definition broad enough to include most modern AI tools
- Optional endorsement: Carriers can add these forms; they are not yet universally mandatory, meaning coverage varies by carrier and policy
The Swiss Cheese Problem Across Policy Lines
One of the most dangerous misconceptions nonprofits have about insurance is the assumption that multiple policies provide redundant protection. In practice, when an AI incident occurs, each policy may have exclusions that apply, and the "holes" in coverage can align in a way that leaves the organization with nothing. Insurance and legal analysts describe this as the "Swiss cheese" effect: each individual policy has gaps, and when those gaps overlap, claims fall through entirely.
Consider a realistic scenario. A nonprofit operates a mental health resource center and deploys an AI chatbot to provide after-hours support and crisis resource referrals. The chatbot, responding to a user in crisis, provides incorrect information about a local resource. The user cannot get help when they need it, and the organization faces a negligence claim. Which policy responds? The general liability policy covers bodily injury from business operations, but the new Verisk exclusions may apply if the carrier has adopted them. The cyber policy covers data breaches but was not designed for AI performance failures or harmful outputs. The E&O policy covers professional service failures, but AI exclusion language from Berkley and others may bar coverage if AI was involved in the professional advice.
This scenario is not hypothetical. The A.F. v. Character Technologies litigation (ongoing through 2025 and 2026), involving multiple wrongful death and self-harm lawsuits related to an AI chatbot's interactions with minors, has exposed precisely this ambiguity at the highest-stakes level. Legal analysts reviewing the cases have noted that while general liability "should" traditionally cover these claims, new exclusion language and coverage ambiguity mean that response is no longer guaranteed.
General Liability
Traditionally covers bodily injury and property damage from operations.
Gap: Verisk 2026 exclusion forms now allow carriers to exclude AI-related claims from both Coverage A and Coverage B. Applies to beneficiary harm from AI tools.
Directors & Officers (D&O)
Covers board and executive decisions, including strategic technology choices.
Gap: W.R. Berkley's "absolute" AI exclusion eliminates D&O coverage for any claim arising from AI use, deployment, or development, including regulatory actions for inadequate AI governance.
Errors & Omissions (E&O)
Covers professional service failures, errors in advice, and negligent acts.
Gap: AI exclusion endorsements increasingly bar coverage when AI tools contributed to the professional error. Claims for AI hallucinations that cause harm may not be covered.
Cyber Insurance
Covers data breaches, ransomware, and network security failures.
Gap: Cyber policies cover breach mechanics but typically not AI performance failures, hallucinated outputs, model manipulation, or reputational harm from AI errors. AI-specific perils remain largely uncovered.
Where AI Harms Beneficiaries: Nonprofit-Specific Scenarios
Nonprofits are not merely at risk because they use AI. They are at heightened risk because of how they use AI. The populations nonprofits serve, including people experiencing homelessness, individuals in mental health crises, children, people with disabilities, and those navigating complex benefits systems, are among those most harmed when AI systems fail. The magnitude of potential claims is higher, and the moral stakes are more visible.
Healthcare insurers have become the most prominent example of AI-related litigation in analogous contexts. UnitedHealth Group faced a federal class action lawsuit alleging it used the nH Predict AI model to deny extended care claims for elderly Medicare Advantage patients. Cigna faced litigation for using its PXDX algorithm to automatically deny hundreds of thousands of claims with minimal human review. Both cases illustrate a principle that extends directly to nonprofits: when AI systems make decisions that harm people who rely on the organization's services, litigation follows, and the question of whether insurance responds becomes urgent.
For nonprofits, the scenarios where this risk materializes include intake AI that systematically deprioritizes certain populations based on biased training data; crisis chatbots that provide harmful advice to users in acute distress; eligibility determination systems that wrongly deny services; youth-serving programs where AI exposes minors to inappropriate content; and mental health organizations where AI-assisted assessments contribute to inadequate care recommendations. In each case, the potential for harm to vulnerable individuals is real, the potential for litigation is growing, and the question of insurance coverage is now deeply uncertain.
High-Risk AI Use Cases for Nonprofits
Scenarios where AI failures are most likely to produce claims that may fall in coverage gaps
- Crisis support chatbots: AI providing mental health or crisis resources to individuals in acute distress. Errors can cause direct harm and generate significant litigation exposure.
- Eligibility and intake assessment AI: Automated tools that screen or prioritize beneficiaries for services, housing, or benefits. Biased outputs can constitute discrimination.
- Youth-facing AI tools: Any AI interacting with minors in educational, enrichment, or social service contexts. Heightened duty of care and regulatory exposure.
- Healthcare coordination AI: AI used to schedule care, triage needs, or recommend services for health-related programs. HIPAA compliance and patient safety intersect with AI failure risk.
- AI-assisted professional advice: Legal aid, financial counseling, or social work services augmented by AI tools. If AI contributes to harmful professional advice, E&O coverage may not respond.
Understanding What the Exclusion Language Actually Says
The practical challenge for nonprofits is that AI exclusion language is rarely highlighted in renewal materials. It arrives as an endorsement, often buried in the policy packet, and most nonprofit leaders lack the legal background to recognize its significance. Understanding what to look for, and what it means when you find it, is the first step to managing this risk.
The broadest exclusion currently in circulation is W.R. Berkley's Form PC 51380, which legal analysts have called an "absolute" AI exclusion. The language eliminates coverage for any claim "based upon, arising out of, or attributable to" the use, deployment, or development of artificial intelligence. The form explicitly enumerates the following as excluded: AI-generated content, failure to detect AI-produced materials, inadequate AI governance, chatbot communications, and regulatory actions related to AI oversight. The breadth of this language is significant because it means a regulatory investigation of your organization's AI practices, not just a third-party lawsuit, could be excluded from coverage.
Not all exclusion language is this absolute. Some carriers use narrower language that excludes specific AI categories or limits exclusions to particular coverage lines. The critical task is reading the actual endorsement language on your policy, not relying on broker summaries or renewal checklists that may not flag new exclusion forms. This requires a deliberate policy review process, not a passive one.
Red Flag Exclusion Language
Language patterns indicating broad or absolute AI exclusions
- "arising out of, based upon, or attributable to... the use, deployment, or development of artificial intelligence"
- "any machine-based learning system or model"
- "AI-generated content" or "chatbot communications"
- "failure to detect AI-produced materials"
- "inadequate AI governance" or "regulatory actions related to AI oversight"
Affirmative Coverage Language to Seek
Policy language that explicitly confirms AI coverage
- Explicit definitions of "artificial intelligence," "machine learning," and "generative AI"
- Coverage for "errors, omissions, or inaccuracies in AI-generated outputs" (hallucinations)
- Explicit coverage for deepfake-enabled fraud and social engineering
- Supply chain and vendor AI liability coverage
- Coverage for "AI model degradation" or "AI performance failure"
New Coverage Options Emerging for AI Risks
The good news is that the insurance market is beginning to respond to AI-specific risks with purpose-built coverage products. While these products are newer and in some cases more expensive than traditional insurance lines, they represent a genuine path to closing gaps that traditional policies are leaving open.
Armilla Insurance, underwritten by Chaucer Group through Lloyd's and launched in April 2025, offers an AI liability policy built to cover AI-specific perils including hallucinations, degrading model performance, and algorithmic failures. Relm Insurance has introduced PONTAAI, an excess difference-in-conditions wrap policy for organizations with third-party liability exposure from AI use, designed to respond when existing liability programs exclude AI claims. These products represent the leading edge of a market that will grow significantly as AI incidents generate more claims data.
For most nonprofits, the immediate practical priority is not necessarily purchasing standalone AI insurance (though organizations with significant beneficiary-facing AI deployments should explore this). The priority is conducting an honest assessment of current coverage, identifying specific gaps, and having an informed conversation with your broker about options including affirmative AI endorsements on existing policies, increased limits on cyber coverage that does respond to some AI risks, and whether standalone coverage is warranted given your organization's AI use profile.
The connection between AI governance documentation and insurance is also important. Carriers writing AI liability coverage are beginning to use governance documentation as an underwriting factor: organizations that can demonstrate a documented AI governance framework, regular risk assessments, staff training on appropriate AI use, and defined oversight processes are more attractive risks and may qualify for better coverage terms. The NIST AI Risk Management Framework is a recognized standard that both insurers and regulators reference, and implementing it serves both risk management and insurability goals. This is explored further in our article on AI governance as risk mitigation.
Emerging AI Insurance Products (2025-2026)
New coverage options specifically designed for AI-related risks
- Armilla Insurance (Lloyd's/Chaucer): AI liability policy launched April 2025. Covers hallucinations, model degradation, and algorithmic failures. Designed for organizations using AI in operations or service delivery.
- Relm PONTAAI: Excess difference-in-conditions wrap policy for third-party AI liability exposure. Specifically designed to respond when primary programs exclude AI claims.
- Testudo (Lloyd's): Claims-made policy for generative AI errors, launched late 2025. Covers errors in AI-generated outputs that cause third-party harm.
- Cyber AI affirmations: Several major cyber carriers (including Google/Beazley/Munich Re joint products) have introduced explicit AI threat affirmations covering deepfakes, automated phishing, and social engineering fraud.
- AI endorsements on existing policies: Some carriers will add affirmative AI coverage endorsements to existing D&O, E&O, and GL policies on request. Ask explicitly; these are not typically offered proactively.
A Seven-Step AI Coverage Review Process for Nonprofits
The organizations most exposed to the silent coverage gap problem are those that have not yet had a deliberate, AI-specific conversation with their insurance broker. Most nonprofit insurance reviews focus on limits, deductibles, and premium comparisons. Very few include a systematic review of exclusion endorsements, and almost none specifically address how current AI tool use intersects with policy language. Changing this requires a new kind of policy review conversation.
The following process is designed for nonprofit executive directors, finance officers, and board members who want to understand their current exposure and take practical steps to address it before their next renewal. This is not a substitute for qualified legal advice or a conversation with a licensed insurance broker who understands nonprofit AI risks, but it provides a framework for approaching that conversation productively.
1. Conduct a Complete AI Tool Inventory
Before you can assess coverage, you need to know what you are covering. Document every AI tool your organization uses, including embedded AI in existing platforms (CRMs, case management systems, email tools), generative AI tools used by staff, beneficiary-facing AI chatbots or intake tools, AI used in program delivery or service recommendations, and AI used by vendors who process your data. The inventory should include the tool name, vendor, what data it accesses, who uses it, and how it affects beneficiaries or staff.
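As a concrete starting point, the inventory can live in a simple spreadsheet or CSV. The sketch below is illustrative only: the column names and the sample chatbot entry are assumptions, not a prescribed schema, and you should adapt the fields to your own programs.

```python
import csv

# Illustrative columns for an AI tool inventory; adapt to your organization.
FIELDS = [
    "tool_name", "vendor", "embedded_in",  # what the tool is and where it lives
    "data_accessed", "users",              # who uses it, with what data
    "beneficiary_facing",                  # yes/no: does it reach the people you serve?
    "decision_role",                       # e.g. "informs intake priority"
]

# One hypothetical entry to show the shape of a row.
inventory = [
    {
        "tool_name": "After-hours support chatbot",
        "vendor": "ExampleVendor (hypothetical)",
        "embedded_in": "Website",
        "data_accessed": "Chat transcripts",
        "users": "Beneficiaries",
        "beneficiary_facing": "yes",
        "decision_role": "Provides crisis resource referrals",
    },
]

with open("ai_tool_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(inventory)
```

A plain CSV like this is easy to hand to a broker, attach to a board packet, and diff at each renewal.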
2. Pull and Read All Current Policy Documents
Request complete policy documents, not just declarations pages, for your general liability, D&O, E&O/professional liability, cyber, and crime policies. Look specifically for endorsements and exclusion forms added at or after the most recent renewal. New AI exclusion forms are added as endorsements and may not appear in the primary policy language. If you received renewal documents without reading the endorsements section, you may not know what was added.
3. Look for AI-Specific Language in Every Policy
Search each policy document for the terms "artificial intelligence," "machine learning," "generative AI," "chatbot," and "automated decision." When you find these terms, read the surrounding language carefully to determine whether it is an exclusion, a definition, an affirmation of coverage, or a conditions clause. If you cannot determine the effect of the language, ask your broker to explain it explicitly.
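If your policies are available as text (most PDF viewers can export to plain text), this search can be automated. The sketch below assumes a `policies/` folder of exported `.txt` files; the folder name and term list are illustrative, and the sentence splitting is deliberately crude since the goal is flagging passages for human review, not legal interpretation.

```python
import re
from pathlib import Path

# Terms worth flagging in policy and endorsement documents.
AI_TERMS = [
    "artificial intelligence", "machine learning", "generative ai",
    "chatbot", "automated decision",
]

def flag_ai_language(text: str) -> list[tuple[str, str]]:
    """Return (term, surrounding sentence) pairs for each match in the text."""
    hits = []
    # Split crudely on sentence-ending punctuation; good enough for flagging.
    sentences = re.split(r"(?<=[.;])\s+", text)
    for sentence in sentences:
        lowered = sentence.lower()
        for term in AI_TERMS:
            if term in lowered:
                hits.append((term, sentence.strip()))
    return hits

# Scan every exported policy document in the (assumed) policies/ folder.
for path in Path("policies").glob("*.txt"):
    for term, context in flag_ai_language(path.read_text()):
        print(f"{path.name}: '{term}' -> {context[:120]}")
```

Every hit still needs a human reader; the script only tells you where to look.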
4. Map Coverage Gaps Against Your AI Use Cases
Using your AI tool inventory and your coverage review, identify specific combinations of tool use and coverage exclusion that create exposure. Focus especially on AI tools that interact directly with beneficiaries, AI tools that make or inform service delivery decisions, and AI tools used in program areas where harm to participants could generate litigation. These are the highest-risk coverage gaps.
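Mechanically, this step is a cross-reference between the tool inventory and the policies that carry AI exclusions. A minimal sketch, with entirely illustrative tool and policy data and a simple "high risk" rule that you would tune to your own programs:

```python
# Cross-reference AI tools against policies that carry AI exclusions.
# Both lists are illustrative; populate them from steps 1-3.
tools = [
    {"name": "Intake triage assistant", "beneficiary_facing": True,
     "informs_decisions": True},
    {"name": "Grant-writing drafting tool", "beneficiary_facing": False,
     "informs_decisions": False},
]
policies = [
    {"line": "General liability", "ai_exclusion": True},
    {"line": "Cyber", "ai_exclusion": False},
    {"line": "E&O", "ai_exclusion": True},
]

def gap_report(tools, policies):
    """List (tool, policy line) pairs where an AI exclusion meets real exposure."""
    gaps = []
    for tool in tools:
        # Highest risk: beneficiary-facing or decision-informing tools.
        high_risk = tool["beneficiary_facing"] or tool["informs_decisions"]
        for policy in policies:
            if high_risk and policy["ai_exclusion"]:
                gaps.append((tool["name"], policy["line"]))
    return gaps

for tool_name, line in gap_report(tools, policies):
    print(f"GAP: {tool_name} vs {line} (AI exclusion present)")
```

The output is a short list of tool-and-policy pairs to bring to the broker conversation in the next step.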
5. Have an Explicit AI Conversation with Your Broker
Most nonprofit insurance brokers have not proactively raised AI coverage issues with their clients. You may need to initiate this conversation explicitly. Ask whether your current policies have any AI exclusion endorsements, whether affirmative AI endorsements are available from your current carriers, whether your current coverage would respond to specific scenarios (use examples from your AI tool inventory), and whether your broker recommends any coverage changes given current market developments.
6. Explore Affirmative Coverage and Standalone Products
If your broker confirms coverage gaps, explore options including affirmative AI endorsements on existing policies, increased limits on cyber policies that do respond to some AI risks, and standalone AI liability products. For organizations operating beneficiary-facing AI tools in high-stakes contexts, a standalone AI liability product may be warranted. Get quotes and compare coverage terms carefully, particularly around what constitutes an AI "event" and what the claims process requires.
7. Implement Governance Documentation
Whether or not you purchase additional coverage, documenting your AI governance practices serves two purposes: it reduces the likelihood of a covered incident, and it positions you as a better risk to underwriters. Document your AI policies, approval processes for new AI tool adoption, staff training requirements, oversight mechanisms, and incident response procedures. Review and update this documentation at least annually. For a practical framework, review our article on building an AI governance framework.
Board Responsibility: AI Oversight as a Fiduciary Duty
The intersection of AI, insurance, and governance is most acute at the board level. Directors and officers have a fiduciary duty to ensure that organizational risk is understood and appropriately managed. When AI tools create coverage gaps that have not been identified, reviewed, or addressed by organizational leadership, that failure can itself become a source of liability, and one that the D&O policy may not cover if AI exclusions apply.
Harvard Law School's Forum on Corporate Governance identified this as a "hidden C-suite risk" in 2025, noting that AI failures can expose executives to derivative suits, regulatory enforcement, and reputational harm that their D&O policies may no longer cover. For nonprofit boards, the analogy applies with additional force: board members who authorized AI deployments without reviewing the coverage implications, and who did not require management to report on AI-related insurance gaps, may face questions about the adequacy of their oversight in the event of a significant incident.
The practical implication is that AI insurance coverage should be a standing agenda item at board risk committee meetings, not a topic that surfaces only when a claim is filed. Boards should require management to report annually on AI tools in use, the coverage position for each, and any changes in coverage terms at renewal. This is not an onerous requirement; it is the same level of oversight boards routinely apply to other organizational risks, and it is now equally important to apply to AI.
For more on building effective board oversight of AI risk, see our articles on D&O liability and AI for nonprofit boards and what boards need to know about AI insurance exclusions. For a broader view of AI risk management frameworks, our article on building an AI governance framework provides a practical starting point.
Board-Level AI Insurance Oversight Checklist
Questions boards should ask at least annually
- Does management maintain a current inventory of all AI tools in use, including vendor-embedded AI?
- Has the insurance broker been explicitly asked about AI exclusions in all current policies?
- Were any AI exclusion endorsements added at the most recent policy renewal?
- Are there identified coverage gaps that have not yet been addressed?
- Does the organization have documented AI governance policies that would support a coverage claim if needed?
- Is there a defined process for reviewing coverage implications before deploying new AI tools?
The Window to Act Is Narrow
The "silent coverage" era for AI is ending. The question is whether your organization will discover this at renewal, when you can still negotiate, or at the moment a claim is denied, when the coverage gap has already materialized. The Verisk 2026 exclusion forms, the W.R. Berkley absolute exclusion, and similar filings from AIG and other carriers represent a coordinated, systemic shift in how the insurance industry treats AI risk. This shift is happening now, and it is working its way through the market one renewal cycle at a time.
Nonprofits that act before their next renewal have real options: they can negotiate affirmative endorsements, increase limits on policies that respond to AI risks, explore standalone AI liability products, and document governance practices that both reduce risk and improve insurability. Nonprofits that wait until after a claim is filed have far fewer options, and the populations they serve pay the price.
The organizations most at risk are not those using AI aggressively. They are those using AI without realizing what their insurance does and does not cover. The silent coverage gap is not inevitable. It is addressable, but only if addressed before it matters.
Know Your Coverage Before You Need It
Our team helps nonprofits assess AI risk, review governance practices, and prepare for the operational realities of responsible AI adoption.
