AI Insurance Liability for Nonprofits: Understanding Your Coverage Gaps
As nonprofits embrace AI tools for fundraising, program delivery, and operations, a critical blind spot is emerging: traditional insurance policies may not cover AI-related incidents. With insurers introducing sweeping AI exclusions across D&O, E&O, and cyber liability policies, your organization could face significant uninsured exposure. This guide examines the evolving insurance landscape and provides actionable steps to protect your mission.

Your nonprofit recently implemented an AI tool to help screen program applicants, and everything seemed to be working smoothly—until a complaint arrived alleging that the system discriminated against applicants based on their zip code, which correlated strongly with race. As you reach for your insurance policy, you discover a clause you've never noticed before: an exclusion for claims "arising out of or attributable to artificial intelligence." Suddenly, your organization faces the prospect of defending a discrimination claim without insurance coverage.
This scenario isn't hypothetical. As AI adoption accelerates across the nonprofit sector, the insurance industry is racing to limit its exposure to AI-related risks—often faster than nonprofit leaders realize. Major insurers have begun introducing broad AI exclusions that could leave organizations unprotected precisely when they're most vulnerable. Understanding these coverage gaps isn't just a technical insurance matter; it's a fiduciary responsibility for nonprofit boards and executives who must protect their organizations' ability to fulfill their missions.
The insurance landscape for AI is evolving rapidly. In January 2026, the Insurance Services Office (ISO) released new general liability endorsements allowing insurers to exclude generative AI exposures entirely. Some insurers have introduced what they call "absolute" AI exclusions that eliminate coverage for any claim connected to AI use—including AI-generated content, chatbot communications, algorithmic failures, and regulatory actions related to AI oversight. These exclusions can appear in Directors and Officers (D&O) liability, Errors and Omissions (E&O), cyber liability, and general liability policies.
For nonprofits, the stakes are particularly high. Organizations serving vulnerable populations face heightened risks from AI bias and discrimination claims. Those handling sensitive beneficiary data could be liable for privacy breaches caused by AI systems. And unlike large corporations with deep pockets and risk management departments, most nonprofits lack the resources to self-insure significant claims. A single uncovered incident could threaten an organization's very existence.
This guide helps nonprofit leaders navigate the complex intersection of AI adoption and insurance protection. You'll learn which types of coverage are most affected by AI exclusions, how to audit your current policies for gaps, what questions to ask your insurance broker, and how to structure AI governance to minimize liability exposure. Whether your organization is just beginning to explore AI or has already integrated multiple AI tools into operations, understanding your insurance position is essential for responsible AI adoption.
The Growing AI Liability Landscape: Why Insurers Are Worried
To understand why insurers are adding AI exclusions so aggressively, it helps to understand what's keeping their actuaries up at night. AI presents a fundamentally different risk profile than most technologies insurance was designed to cover, and the insurance industry is scrambling to adapt.
Emerging AI Claims: The Cases Shaping Liability
Real lawsuits demonstrating the scope of AI-related liability
The wave of AI-related litigation in 2025 has given insurers concrete reasons for concern. In May 2025, a federal court took the precedent-setting step of certifying a collective action in an AI bias case against Workday, Inc. Plaintiffs alleged that Workday's AI-powered applicant screening system discriminated against job seekers based on race, age, and disability. The court ruled that employers can be held liable for discrimination resulting from AI tools—even when they didn't develop those tools themselves.
Similar claims are emerging across sectors. Civil rights organizations filed complaints against companies using AI hiring tools that allegedly disadvantaged deaf applicants and those who speak non-standard English dialects. Amazon faced allegations that its AI systems systematically denied disability accommodation requests. In each case, organizations are discovering that AI adoption creates liability exposure they may not have anticipated—and that their existing insurance policies may not cover.
For nonprofits, these cases illustrate that AI liability isn't limited to tech companies or large corporations. Any organization using AI for screening, eligibility determination, or decision support faces similar exposure. A social services nonprofit using AI to prioritize cases could face claims if the algorithm disadvantages certain communities. A foundation using AI to evaluate grant applications could be liable if the tool exhibits bias. The technology that promised efficiency may also bring legal risk.
Why Traditional Insurance Wasn't Built for AI Risks
Fundamental mismatches between AI and existing coverage frameworks
Insurance products were developed over decades to address known, quantifiable risks. Fire, theft, professional negligence, and even cyber attacks follow patterns that actuaries can model. AI introduces uncertainty that breaks these models. How do you price the risk of an algorithm that might gradually drift from its intended behavior? How do you assess liability when an AI's decision-making process is opaque even to its developers?
Experts note that "you have got some of these gaps that really don't fit nicely into either program." Cyber insurance, a relatively young product at roughly 20 years old, wasn't designed for AI-specific failures like algorithmic bias, hallucinations, or model degradation. Professional liability policies assume services are delivered by humans who can be supervised and trained, not by software that operates at scale beyond human review. Several characteristics make AI particularly difficult to underwrite:
- Unpredictable behavior: AI systems can produce unexpected outputs, especially as underlying models update or encounter novel inputs
- Distributed responsibility: When AI causes harm, liability may span the organization using it, the vendor who built it, and the data providers who trained it
- Delayed manifestation: AI-related harms may not become apparent until long after the AI was deployed, complicating claims-made coverage
- Regulatory uncertainty: Rapidly evolving AI regulations create compliance risks that insurers struggle to anticipate
The Hidden C-Suite Risk
According to research from Harvard Law School, AI failures present hidden risks specifically for executives and board members. High-profile AI incidents—from biased hiring algorithms to chatbots providing harmful advice—have triggered costly lawsuits and regulatory investigations. Directors and officers may face personal liability for AI-related failures, particularly if governance was inadequate. Yet many nonprofit boards remain unaware of their AI exposure or how their D&O coverage might respond.
This makes AI governance not just an operational concern but a personal fiduciary matter for board members. Understanding your organization's AI use and ensuring adequate oversight isn't optional—it's a core responsibility of nonprofit leadership. Learn more about how boards without tech expertise can govern AI effectively.
Types of Insurance Coverage Affected by AI Exclusions
AI exclusions aren't limited to one type of policy—they're appearing across multiple coverage lines that nonprofits rely on for protection. Understanding which policies in your portfolio might contain AI limitations is the first step toward addressing potential gaps.
Directors and Officers (D&O) Liability Insurance
Personal protection for nonprofit leaders facing AI-related claims
D&O insurance protects nonprofit board members and executives from personal liability arising from decisions made in their governance roles. This coverage is essential for attracting qualified board members who might otherwise decline to serve. However, AI exclusions in D&O policies could leave directors personally exposed when AI-related claims arise.
Some insurers have introduced "absolute" AI exclusions intended specifically for D&O policies. One major insurer's endorsement eliminates coverage for any claim "based upon, arising out of, or attributable to" AI use, including AI-generated content, inadequate AI governance, chatbot communications, and regulatory actions related to AI oversight. This breadth means a board member could face personal liability for approving an AI implementation that later causes harm—even if the decision seemed reasonable at the time.
Nonprofit D&O policies often include blended coverage with Employment Practices Liability (EPL) insurance. Given that many AI lawsuits involve employment-related discrimination claims, this intersection is particularly concerning. If your D&O policy contains an AI exclusion, it may also limit your EPL coverage for AI-related employment claims.
Errors and Omissions (E&O) / Professional Liability
Coverage for harm caused by professional services or advice
E&O insurance covers claims arising from mistakes or negligence in professional services. For nonprofits that provide counseling, legal aid, healthcare, or other professional services, E&O coverage is critical. But E&O policies face unique challenges with AI because many define covered services as those provided by "natural persons"—not software systems.
If a nonprofit uses AI to assist with service delivery—for example, an AI tool that helps case managers assess client needs—claims arising from AI errors might fall outside traditional E&O coverage. The policy may only cover mistakes made by human professionals, leaving AI-related errors uninsured. Even without explicit AI exclusions, this "natural persons" limitation can create significant gaps.
Additionally, E&O policies may restrict coverage to failures of software "developed or created by the insured organization." If your nonprofit uses a third-party AI tool that malfunctions, the policy might not respond. This is especially relevant for nonprofits relying on commercial AI platforms rather than custom-built solutions.
Cyber Liability Insurance
Protection for data breaches, privacy violations, and digital incidents
As AI tools increasingly handle sensitive data, cyber liability coverage becomes essential. Yet many cyber policies were designed before AI became prevalent and may not adequately address AI-specific risks. Standard policies typically cover data breaches and network intrusions, but AI introduces new exposures: algorithmic bias, AI-generated misinformation, model failures, and privacy violations unique to machine learning systems.
Some cyber policies only cover incidents occurring on servers "owned or operated by the policyholder." If your nonprofit uses cloud-based AI services—which most do—incidents occurring on the vendor's infrastructure might not be covered. Similarly, policies may exclude incidents connected to third-party vendors, leaving you exposed when an AI service provider's systems fail.
There's positive movement in this space: some insurers have begun offering coverage for AI-specific incidents like deepfakes and AI-generated reputational harm. Coalition announced it would cover deepfake-related incidents including forensic analysis, legal support for takedowns, and crisis communications. But such coverage remains the exception rather than the rule, and nonprofits must actively seek it out.
Commercial General Liability (CGL)
Broad coverage that may or may not extend to AI-related claims
CGL policies provide broad protection against claims of bodily injury, property damage, and personal injury. While many AI risks fall outside traditional CGL triggers, there are scenarios where CGL could be implicated—for example, if an AI-controlled building system, such as automated access or climate controls, malfunctioned and caused physical injury.
As noted above, ISO's January 2026 general liability endorsements allow carriers to exclude generative AI exposures entirely. Under these exclusions, claims for bodily injury, property damage, and personal/advertising injury arising from generative AI would not be covered. While not all carriers have adopted these endorsements, their availability signals the industry's direction.
For nonprofits, the advertising injury component is particularly relevant. If your organization uses AI to generate marketing content, social media posts, or donor communications, claims of defamation, copyright infringement, or privacy violation arising from that content might be excluded under the new generative AI endorsements.
How to Audit Your Current Coverage for AI Gaps
Identifying AI coverage gaps requires a systematic review of your existing policies in light of your organization's AI use. This isn't a one-time exercise—it should become part of your annual insurance review process as both AI adoption and insurance market conditions evolve.
Step 1: Create an AI Inventory
Understanding what AI your organization actually uses
Before you can assess insurance coverage, you need to know what AI tools your organization uses. Many nonprofits underestimate their AI footprint because AI is increasingly embedded in software they don't think of as "AI." Create a comprehensive inventory that includes:
- Explicit AI tools: ChatGPT, Claude, Jasper, or other tools staff knowingly use for AI capabilities
- AI features in existing software: CRM systems with predictive analytics, email platforms with AI-generated subject lines, accounting software with AI categorization
- Third-party AI integrations: Vendors who use AI to process your data, even if you don't interact with the AI directly
- Planned AI implementations: Tools you're considering that should inform coverage decisions at your next renewal
For each tool, document: what data it accesses, what decisions it influences, who uses it, and what could go wrong. This inventory becomes the basis for discussing coverage needs with your insurance broker.
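To make the inventory concrete, here is a minimal sketch of one way to structure an entry in code; every field name and example value is illustrative, and a spreadsheet with the same columns works just as well.

```python
# A minimal sketch of one way to record an AI inventory entry.
# All field names and example values are illustrative, not prescriptive.
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    tool: str                        # product or feature name
    category: str                    # "explicit", "embedded", "third-party", or "planned"
    data_accessed: list[str]         # what data the tool can see
    decisions_influenced: list[str]  # decisions it informs or automates
    users: list[str]                 # roles or teams that use it
    failure_modes: list[str]         # what could plausibly go wrong

inventory = [
    AIInventoryEntry(
        tool="CRM predictive donor scoring",  # hypothetical embedded AI feature
        category="embedded",
        data_accessed=["donor contact records", "giving history"],
        decisions_influenced=["solicitation prioritization"],
        users=["development team"],
        failure_modes=["biased prioritization", "model drift after vendor updates"],
    ),
]

# A one-line summary per tool is often enough to start the broker conversation.
for entry in inventory:
    print(f"{entry.tool} ({entry.category}): influences {', '.join(entry.decisions_influenced)}")
```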
Step 2: Review Policy Language Carefully
What to look for in your current insurance documents
Request copies of all endorsements and exclusions that have been added to your policies—not just the base policy documents. AI exclusions are typically added as endorsements that may not be prominently featured in policy summaries. Look for language including:
- Explicit exclusions for "artificial intelligence," "machine learning," "algorithms," or "automated decision-making"
- Limitations on coverage for "AI-generated content" or "content produced by automated systems"
- Definitions of covered services that specify "natural persons" or exclude technology-assisted delivery
- Exclusions for claims arising from "regulatory actions related to AI" or "AI governance failures"
- Cyber policy limitations that only cover incidents on servers you own (excluding cloud AI services)
Insurance experts warn that AI is now so embedded in business processes that broad AI exclusions could "unintentionally cut off coverage for many routine operations." Policyholders may unknowingly use AI technologies without realizing it, creating hidden coverage gaps. This is why careful policy review is essential.
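If your policy documents are numerous, a quick keyword scan can surface candidate clauses for human review. The sketch below is a rough first pass that assumes your policies and endorsements have been exported to plain text in a `policies/` folder; the term list mirrors the checklist above, and nothing here replaces a broker's or attorney's reading.

```python
# A rough first-pass scan that flags policy paragraphs mentioning
# AI-related exclusion language. The search terms and the "policies/"
# directory of plain-text exports are assumptions about your setup.
import re
from pathlib import Path

AI_EXCLUSION_TERMS = [
    r"artificial intelligence",
    r"machine learning",
    r"algorithm",
    r"automated decision[- ]making",
    r"generated content",
    r"natural persons?",
]
pattern = re.compile("|".join(AI_EXCLUSION_TERMS), re.IGNORECASE)

def flag_clauses(policy_text: str) -> list[str]:
    """Return paragraphs containing any AI-related exclusion term."""
    paragraphs = [p.strip() for p in policy_text.split("\n\n") if p.strip()]
    return [p for p in paragraphs if pattern.search(p)]

for path in Path("policies").glob("*.txt"):
    for clause in flag_clauses(path.read_text(encoding="utf-8")):
        print(f"{path.name}: {clause[:120]}...")
```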
Step 3: Have the Right Conversation with Your Broker
Questions that reveal coverage gaps and opportunities
Many insurance brokers serving nonprofits aren't yet fully versed in AI-related coverage issues. You may need to drive the conversation and ask specific questions:
- "Do any of our current policies contain AI-related exclusions or limitations?" Ask for specific endorsement numbers and language.
- "Here's our AI inventory—which of these uses are covered under our current policies, and which aren't?" Go through your inventory tool by tool.
- "If a claim arose from [specific AI use], would it be covered? What would be the process?" Use concrete scenarios relevant to your organization.
- "Are there carriers offering AI-inclusive coverage or AI-specific products we should consider?" The market is evolving, and some insurers are moving toward coverage rather than exclusion.
- "What AI governance measures might improve our coverage terms or premiums?" Insurers increasingly consider AI practices in underwriting decisions.
Documentation Is Your Friend
Keep detailed records of your AI audit process, including the inventory you created, policy documents reviewed, and conversations with your broker. If a claim arises, this documentation demonstrates your organization's good-faith efforts to understand and manage AI risks—which could be relevant in coverage disputes or regulatory proceedings.
Share your AI audit findings with your board, even if the news is concerning. Boards can't fulfill their governance responsibilities if they're unaware of coverage gaps. Consider adding AI insurance status to your board's regular risk management reports.
Strategies for Addressing AI Coverage Gaps
Once you've identified gaps between your AI use and insurance coverage, you have several strategic options. The right approach depends on your organization's risk tolerance, AI reliance, and budget.
Negotiate Removal of AI Exclusions
Push back on blanket exclusions, especially with long-term carriers
AI exclusions aren't necessarily set in stone. Experts note that insurers hesitate to implement sweeping exclusions because of competitive pressure—policyholders might switch to competitors offering broader coverage. If you've been a long-term customer with a clean claims history, you have leverage to negotiate.
Request that your carrier remove AI exclusions, or at minimum narrow them to cover only specific high-risk AI uses while maintaining coverage for routine AI applications. Some carriers are willing to provide coverage with higher premiums or deductibles rather than excluding AI entirely. Be prepared to demonstrate your AI governance practices as evidence that you're managing AI responsibly.
If your current carrier won't budge, shop the market. Different insurers are taking different approaches to AI risk, and you may find more favorable terms elsewhere. Work with a broker who has relationships with multiple carriers and can identify those most open to AI coverage.
Explore Emerging AI-Specific Insurance Products
New products designed specifically for AI exposures
The insurance industry is developing products specifically designed for AI risks. In April 2025, Armilla AI Insurance Services introduced an AI liability insurance product underwritten by Lloyd's of London that explicitly addresses AI-specific perils including hallucinations, model degradation, and algorithmic failures. Other specialized insurers are emerging to serve this market.
These AI-specific products offer coverage that traditional policies exclude, including:
- Model governance protection for failures in AI oversight
- Technology assurance riders for AI-driven operational losses
- Coverage for data drift, model bias, and AI-driven errors
- Protection for both AI users and AI developers
While these products are still maturing and can carry higher premiums, they may be worth considering if your organization relies heavily on AI or faces elevated AI-related risks. Ask your broker about AI-specific coverage options becoming available in the market.
Strengthen AI Governance to Improve Coverage Terms
Good AI practices can translate to better insurance outcomes
Insurers are increasingly evaluating organizations' AI practices when making underwriting decisions. Organizations that can demonstrate robust AI governance may receive more favorable coverage terms—or at least be more attractive to carriers willing to write AI coverage. Governance measures that matter to insurers include:
- Written AI policies: Documented guidelines for AI adoption, use, and oversight. Learn more about creating AI policy templates for nonprofits.
- Approval and oversight processes: Clear procedures for vetting AI tools before adoption and monitoring them during use
- Human oversight requirements: Protocols ensuring human review of consequential AI-assisted decisions
- Vendor due diligence: Documentation of how you evaluate third-party AI tools and their liability provisions
- Training records: Evidence that staff using AI have received appropriate training
- Incident response plans: Procedures for responding to AI-related incidents or failures
Even if improved governance doesn't immediately translate to better insurance terms, it reduces the likelihood of claims arising in the first place—which is ultimately the best risk management strategy. Consider developing a comprehensive AI acceptable use policy that demonstrates your commitment to responsible AI use.
Review Vendor Contracts for Liability Provisions
Ensure AI vendors share appropriate liability burden
When your organization faces liability for AI-related harm, your insurance isn't the only potential source of protection. The vendors who provide AI tools may bear liability as well, depending on contractual provisions. Review your AI vendor agreements to understand:
- Indemnification clauses: Does the vendor agree to defend and indemnify you for claims arising from their AI tool's failures or defects?
- Limitation of liability: Are there caps on vendor liability that leave you exposed for significant claims?
- Insurance requirements: Does the vendor maintain insurance that would respond to claims involving their AI?
- Warranty provisions: What representations does the vendor make about their AI's accuracy, bias, or compliance?
Many standard AI vendor agreements heavily favor the vendor, limiting their liability while making you responsible for consequences of using their tool. When negotiating new AI vendor contracts, push for stronger protections—or at least understand what you're agreeing to accept. Your legal counsel should review AI vendor agreements with these liability questions in mind.
Building Organizational Resilience Beyond Insurance
Insurance is just one layer of protection against AI-related liability. Comprehensive risk management requires building organizational resilience that prevents claims from arising in the first place and positions your nonprofit to respond effectively when problems occur. This is especially important during this transitional period when insurance coverage remains uncertain.
Invest in AI Literacy and Training
Staff who understand AI's capabilities and limitations are less likely to misuse it in ways that create liability. Training should cover not just how to use AI tools, but when human oversight is required and how to recognize potential problems. Organizations with strong training cultures may also be viewed more favorably by insurers.
Building AI literacy from scratch is an investment that pays dividends in both reduced risk and improved outcomes.
Implement Robust Human Oversight
Many AI-related harms could have been prevented with appropriate human review. Establish clear protocols specifying which AI outputs require human approval before action. Particularly for high-stakes decisions affecting beneficiaries, donors, or employees, ensure AI recommendations are reviewed by qualified staff before implementation.
Document your human oversight procedures—they may be valuable evidence if your organization's AI governance is ever questioned.
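As a rough illustration of what such protocols can look like in practice, the sketch below gates AI recommendations behind human review based on decision type and model confidence. The stakes categories and threshold are hypothetical; your written procedures should define them for your mission.

```python
# A minimal human-in-the-loop gate for AI-assisted decisions.
# The stakes categories and confidence floor are hypothetical examples.
HIGH_STAKES = {"eligibility", "benefits", "employment", "grant_award"}
CONFIDENCE_FLOOR = 0.85

def requires_human_review(decision_type: str, confidence: float) -> bool:
    """High-stakes decisions always get review; low-confidence outputs do too."""
    return decision_type in HIGH_STAKES or confidence < CONFIDENCE_FLOOR

def route(decision_type: str, confidence: float, recommendation: str) -> dict:
    if requires_human_review(decision_type, confidence):
        return {"status": "queued_for_review", "recommendation": recommendation}
    return {"status": "auto_applied", "recommendation": recommendation}

# Even at high confidence, an eligibility decision is queued for a human.
print(route("eligibility", 0.97, "approve applicant"))
```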
Create AI Audit Trails
If a claim arises, you'll want to understand exactly what the AI did and why. Maintain logs of AI decisions, inputs, outputs, and human reviews. This documentation supports both your defense against claims and your ability to identify and correct problems before they escalate.
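As a minimal sketch, assuming an append-only JSON Lines file and illustrative field names, one such record might be written like this; many organizations will instead rely on logging built into their AI platform or case management system.

```python
# Append one JSON record per AI-assisted decision, capturing inputs,
# outputs, and the human review step. Field names are illustrative.
import json
from datetime import datetime, timezone

def log_ai_decision(log_path, tool, inputs, output, reviewer=None, review_outcome=None):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,                      # which AI system acted
        "inputs": inputs,                  # what it was given
        "output": output,                  # what it recommended
        "reviewer": reviewer,              # who reviewed it, if anyone
        "review_outcome": review_outcome,  # e.g. approved / overridden / escalated
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_ai_decision(
    "ai_audit.jsonl",
    tool="case-priority model",
    inputs={"case_id": "C-1042"},
    output="priority: high",
    reviewer="case manager",
    review_outcome="approved",
)
```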
Learn more about creating audit trails for AI decisions that support compliance and transparency.
Establish Incident Response Procedures
When AI-related problems occur, rapid and effective response can limit damage and demonstrate organizational responsibility. Develop procedures for identifying AI incidents, investigating root causes, notifying affected parties, and implementing corrections. Include triggers for escalating to leadership and legal counsel.
Your incident response plan should integrate with your existing crisis communications and AI crisis response protocols.
The Regulatory Dimension
Several states have enacted or are considering AI-related regulations that could affect nonprofit liability. California's regulations on automated decision systems, effective October 2025, hold employers responsible for discriminatory decisions made by AI tools. Illinois expanded civil rights protections to prohibit AI-based discrimination in employment.
Staying current on AI regulations in your operating jurisdictions is now part of responsible governance. Regulatory violations can trigger both liability and insurance coverage questions, making compliance a risk management priority. Consider consulting with legal counsel experienced in AI regulation to understand your obligations.
Conclusion: Proactive Protection in an Evolving Landscape
The intersection of AI adoption and insurance coverage represents one of the most significant emerging risks for nonprofit organizations. As AI becomes increasingly integral to fundraising, program delivery, and operations, the insurance industry's response to AI exposures directly affects your organization's risk profile. Broad AI exclusions, coverage gaps for AI-embedded software, and the absence of AI-specific policies create potential vulnerabilities that nonprofit leaders must address proactively.
The good news is that this is a solvable problem. By creating a comprehensive AI inventory, auditing current policies, having informed conversations with your insurance broker, and strengthening AI governance, you can significantly reduce your organization's exposure. Some of these steps—like implementing written AI policies and human oversight requirements—are beneficial regardless of insurance implications because they reduce the likelihood of harm in the first place.
The insurance market for AI is evolving rapidly. Carriers that have built internal frameworks for assessing AI risk are entering 2026 better equipped to evaluate and price it, which should eventually lead to more coverage options. Some insurers are already offering AI-specific products or AI-inclusive coverage at reasonable terms. By positioning your organization as a sophisticated, governance-focused AI user, you'll be better prepared to access these emerging coverage options.
Finally, remember that insurance is just one component of comprehensive risk management. Building organizational resilience through training, oversight, documentation, and incident response creates layers of protection that don't depend on insurance policy language. In an uncertain coverage environment, these internal practices become even more valuable.
Don't wait for a claim to discover your coverage gaps. Schedule a conversation with your insurance broker, share your AI inventory, and ask the hard questions about what is and isn't covered. Your board and stakeholders deserve to understand your organization's AI insurance position—and you have the power to improve it before problems arise.
Protect Your Nonprofit's AI Future
We help nonprofits develop comprehensive AI governance frameworks that reduce liability exposure and position organizations favorably with insurers. From policy development to risk assessment, we'll help you build the foundation for responsible AI adoption.
