AI Liability for Nonprofits: What You Need to Know About Insurance and Risk
As nonprofits increasingly adopt artificial intelligence tools, a complex question emerges: what happens when AI makes a mistake? The insurance industry is rapidly evolving to address AI-related risks, with major carriers introducing new exclusions and specialized coverage options. For nonprofit leaders, understanding AI liability and ensuring adequate protection is no longer optional—it's essential. This comprehensive guide explores the emerging landscape of AI liability, explains what your current insurance may or may not cover, and provides practical strategies for managing risk as you integrate AI into your operations.

The rapid adoption of AI across the nonprofit sector has created an insurance puzzle that neither organizations nor carriers have fully solved. According to recent data, 50-75% of companies have incorporated AI into their operations, with nonprofits actually outpacing their private-sector counterparts at 58% versus 47%. Yet the insurance products designed to protect organizations from AI-related risks are still evolving, creating potential coverage gaps that many nonprofits don't realize exist.
Beginning in January 2026, major insurers including AIG, Great American, and WR Berkley are introducing AI exclusion clauses into standard policies, fundamentally changing the insurance landscape. What was once covered by traditional liability, cyber, or errors and omissions policies may now require specialized AI-specific coverage. For nonprofits operating on tight budgets and relying on insurance to protect their assets and reputation, these changes demand immediate attention.
The liability risks from AI are both diverse and significant. An AI system that provides incorrect information to a beneficiary could lead to harm and subsequent lawsuits. A chatbot that inadvertently discriminates in service delivery could violate civil rights laws. An AI tool that mishandles donor data could trigger privacy violations and regulatory penalties. Directors and officers could face personal liability if they fail to adequately oversee AI implementation. These scenarios aren't hypothetical—they're already happening, and insurance carriers are responding by limiting their exposure.
This article provides nonprofit leaders with a practical understanding of AI liability risks, explains how the insurance landscape is changing, identifies coverage gaps in existing policies, and offers concrete strategies for protecting your organization. Whether you're just beginning to explore AI or have already implemented multiple AI tools, understanding these liability issues is crucial for responsible stewardship and organizational sustainability.
Understanding AI Liability Risks for Nonprofits
Before exploring insurance solutions, it's essential to understand the specific liability risks that AI creates for nonprofits. These risks differ from traditional technology liabilities and can arise from both obvious and unexpected sources.
Harm from AI Errors or Failures
When AI systems provide incorrect information, make flawed decisions, or fail to function properly, the consequences can cause real harm to the people your nonprofit serves.
- AI-powered eligibility screening that incorrectly denies services to qualified beneficiaries
- Chatbots providing harmful or dangerous advice about health, safety, or legal issues
- Automated systems prioritizing cases incorrectly, delaying critical services
- AI translation tools that miscommunicate essential information to non-English speakers
Discrimination and Bias
AI systems can perpetuate or amplify existing biases, leading to discriminatory outcomes that violate civil rights laws and harm vulnerable populations.
- Hiring algorithms that screen out qualified candidates based on protected characteristics
- Service allocation systems that disadvantage certain demographic groups
- Predictive models that reinforce systemic inequalities in resource distribution
- Automated communication systems that fail to accommodate disabilities
Privacy and Data Protection Violations
AI systems often process vast amounts of personal data, creating risks of privacy violations, data breaches, and regulatory penalties.
- Unauthorized access to sensitive beneficiary information in AI training data
- AI tools inadvertently exposing donor financial information or health records
- Violations of GDPR, CCPA, or HIPAA through improper AI data handling
- AI systems retaining personal data longer than legally permitted
Intellectual Property Infringement
Generative AI tools can create content that infringes on copyrights, trademarks, or other intellectual property rights, exposing your nonprofit to legal claims.
- AI-generated content that incorporates copyrighted material without permission
- Automated image creation that replicates proprietary designs or artwork
- AI-written grant proposals that inadvertently plagiarize existing documents
- Marketing materials generated by AI that violate trademark protections
Directors and Officers Liability
Personal liability risks for nonprofit leadership
Directors and officers are increasingly exposed to unrecognized liabilities under the mistaken assumption that AI-related risks are fully covered by traditional D&O liability policies. Board members and executives can face personal liability for:
- Failure to implement adequate AI oversight: Not establishing governance frameworks or policies for AI use
- Inadequate due diligence: Approving AI implementations without properly understanding risks and limitations
- Breach of fiduciary duty: Failing to protect organizational assets from AI-related harms
- Negligent oversight: Not monitoring AI systems for bias, errors, or compliance violations
These risks aren't theoretical. Legal cases involving AI failures are already emerging, setting precedents that will shape liability standards for years to come. The key difference from traditional technology risks is that AI systems can make autonomous decisions, learn and change over time, and produce outcomes that even their creators didn't anticipate. This unpredictability makes traditional liability frameworks inadequate and insurance coverage uncertain.
The Changing Insurance Landscape for AI
The insurance industry is undergoing a significant transformation in how it approaches AI-related risks. What began as uncertainty about whether existing policies covered AI incidents has evolved into a proactive effort by insurers to limit exposure through new exclusions and specialized products.
AI Exclusion Clauses: The New Reality
In a pivotal shift, Verisk's ISO Core Lines developed new general liability endorsements, effective January 2026, that allow carriers to exclude generative AI exposures from standard policies. These exclusionary forms are nearly absolute in scope, precluding coverage for virtually any claim related to AI usage. Major insurers are rapidly adopting these exclusions, fundamentally changing what traditional liability policies cover.
This means that claims your organization might have assumed were covered—such as injuries resulting from AI-generated advice, discrimination from AI decision-making, or damages from AI content creation—may now be explicitly excluded. Nonprofits renewing policies in 2026 and beyond must carefully review new exclusion language to understand what protection, if any, remains.
Coverage Confusion Across Policy Types
Even before explicit AI exclusions, there was significant confusion about where AI-related claims fit within existing insurance frameworks. For example, if an AI program causes bodily injury, does it fall under commercial general liability (which typically covers physical harm) or cyber coverage (which addresses technology failures)? Different insurers interpret this differently, creating uncertainty for policyholders.
Technology errors and omissions (Tech E&O) policies provide coverage for third-party claims alleging wrongful acts, errors, or omissions in technology services. However, many cyber policies exclude or narrowly define losses involving AI systems, and may not cover failures in AI-generated content, unauthorized access to machine learning models, or AI decision-making tools. Directors and officers (D&O) insurance typically covers leadership decisions, but may not extend to AI-specific governance failures, especially as carriers add AI exclusions.
Emerging AI-Specific Insurance Products
New insurance options designed specifically for AI risks
As traditional coverage shrinks, specialized AI insurance products are emerging to fill the gaps. These products are still evolving, but several models are becoming available:
- AI Performance Guarantee Insurance: Munich Re's aiSure and similar products provide performance warranties that indemnify clients for financial losses or legal liabilities directly related to AI errors
- AI Liability Riders: Endorsements that can be added to existing policies to cover specific AI applications or use cases
- Model Governance Protection: Coverage for claims arising from inadequate AI oversight, testing, or governance practices
- Technology Assurance Products: Insurance specifically designed for technology providers offering AI solutions, which may cover downstream users
However, these specialized products typically come with higher premiums and more stringent risk management requirements. Insurers are learning from the cyber insurance market, where carriers initially offered broad coverage only to experience significant losses, prompting them to demand strict security practices and implement substantial exclusions. The AI insurance market is following a similar trajectory, with insurers requiring documented AI governance, employee training, and risk mitigation practices as prerequisites for coverage.
For nonprofits, this evolving landscape means that insurance protection can no longer be assumed. Organizations must proactively assess their coverage, understand new exclusions, and potentially negotiate specialized protection—all while balancing budget constraints with the need for adequate risk transfer.
Identifying Coverage Gaps in Your Current Policies
Most nonprofits carry several types of insurance that, in theory, could cover various AI-related incidents. However, recent changes in policy language and the introduction of AI exclusions have created significant gaps that many organizations don't realize exist until they file a claim.
Critical Questions to Ask About Your Coverage
Essential inquiries for your insurance broker or carrier
- Do any of our policies include AI-specific exclusions? Request copies of all exclusionary language related to AI, algorithms, machine learning, or automated decision-making
- How does our general liability policy treat AI-related bodily injury or property damage? Confirm whether physical harm caused by AI decisions or recommendations would be covered
- Does our cyber insurance cover AI-related data breaches or privacy violations? Many cyber policies now explicitly exclude or limit AI-related claims
- Are discrimination claims arising from AI decision-making covered? Employment practices liability and general liability policies may exclude algorithmic discrimination
- Does our D&O policy cover failures in AI governance or oversight? Directors and officers may have personal exposure if AI-specific governance gaps aren't addressed
- Is intellectual property infringement by generative AI tools covered? Most traditional policies weren't designed to address AI-generated copyright violations
- What are our notification requirements if we experience an AI-related incident? Timely notification is crucial for coverage; understand what triggers the requirement
Common Coverage Gaps Nonprofits Face
Based on how insurance policies are evolving, nonprofits commonly face these specific coverage gaps:
Gap 1: Generative AI Content Liability
Traditional media liability or errors and omissions policies typically don't cover content created by AI systems, including copyright infringement, defamation, or privacy violations from AI-generated materials.
Nonprofit exposure: Grant proposals, donor communications, social media content, annual reports, or educational materials created using generative AI.
Gap 2: Algorithmic Discrimination
Employment practices liability insurance (EPLI) may exclude discrimination claims resulting from AI hiring tools, and general liability often won't cover discriminatory service delivery driven by algorithms.
Nonprofit exposure: AI-powered applicant screening, automated eligibility determinations, predictive models for resource allocation, or AI-assisted case prioritization.
Gap 3: AI Training Data Issues
Privacy policies and cyber insurance typically don't address liability from using personal data to train AI systems, even with permission, if the AI later produces harmful outputs.
Nonprofit exposure: Training custom AI models on beneficiary data, donor information, or organizational records that could be exposed or misused.
Gap 4: Third-Party AI Vendor Failures
Your policies may not cover damages when a third-party AI vendor's system fails, especially if the vendor's indemnification provisions are limited or inadequate.
Nonprofit exposure: Relying on AI-powered CRM systems, chatbots, fundraising platforms, or case management tools where the vendor's mistakes could harm your constituents.
Understanding these gaps is the first step toward addressing them. Organizations taking a proactive approach to managing AI risk and documenting their risk management practices may be better positioned to negotiate coverage or demonstrate to insurers that they represent lower-risk clients. For guidance on building comprehensive AI governance, see our article on building AI champions in your organization.
Practical Risk Management Strategies
While insurance is an important risk transfer mechanism, it shouldn't be your only—or even primary—strategy for managing AI liability. The most effective approach combines risk mitigation through good governance, contractual risk allocation, and appropriate insurance coverage. Here's how nonprofits can build a comprehensive AI risk management framework.
Implement Strong AI Governance Practices
The foundation of liability protection is preventing problems before they occur
Insurers are increasingly evaluating organizations' AI governance when determining coverage and premiums. Strong governance not only reduces your risk exposure but may also make you more insurable.
- Develop clear AI use policies: Document what AI tools can be used for, what approvals are required, and what restrictions apply, especially for high-risk applications
- Establish AI oversight committees: Create cross-functional teams that review AI implementations, monitor performance, and address ethical concerns
- Conduct AI impact assessments: Before deploying AI tools, systematically evaluate potential risks to beneficiaries, privacy, bias, and organizational reputation (a simple inventory sketch follows this list)
- Implement human oversight requirements: Ensure that high-stakes decisions always involve human review, particularly for eligibility, service allocation, or employment matters
- Train staff on AI risks and limitations: Help team members understand when and how to use AI appropriately, and recognize when AI outputs require verification
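To make these governance steps easier to operationalize, the sketch below shows one way a small team might track its AI tools for impact assessments and human-review requirements. It is a minimal illustration in Python; the field names, risk tiers, and review window are assumptions to adapt to your own policies, not a prescribed standard.

```python
# Minimal sketch of an AI use-case inventory supporting impact assessments
# and human-oversight policies. Field names and risk tiers are illustrative.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AIUseCase:
    name: str                       # e.g., "eligibility pre-screening"
    vendor: str                     # tool or vendor supplying the AI
    risk_tier: str                  # "low", "medium", or "high"
    affects_beneficiaries: bool     # does it touch service decisions?
    human_review_required: bool     # policy flag for high-stakes outputs
    last_impact_assessment: Optional[date] = None

def needs_attention(uc: AIUseCase, max_age_days: int = 365) -> bool:
    """Flag high-risk uses lacking human review or a recent impact assessment."""
    if uc.risk_tier != "high":
        return False
    if not uc.human_review_required or uc.last_impact_assessment is None:
        return True
    return (date.today() - uc.last_impact_assessment).days > max_age_days

inventory = [
    AIUseCase("donor thank-you drafting", "generative AI writing tool", "low",
              affects_beneficiaries=False, human_review_required=False),
    AIUseCase("eligibility pre-screening", "case management add-on", "high",
              affects_beneficiaries=True, human_review_required=True),
]

for uc in inventory:
    if needs_attention(uc):
        print(f"Review needed: {uc.name}")
```

Even a spreadsheet with the same columns serves the purpose; what matters is keeping one current record of every AI use, its risk tier, and when it was last reviewed.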
Leverage Contractual Risk Allocation
Use contracts to shift liability to vendors and protect your organization
Indemnification agreements can temporarily fill risk management gaps while the insurance industry develops more comprehensive AI coverage. When contracting with AI vendors or service providers, rigorous contractual risk management is essential.
- Negotiate strong indemnification clauses: Require vendors to indemnify your organization for claims arising from their AI tools' errors, bias, or failures
- Verify vendor insurance coverage: Request certificates of insurance showing the vendor carries adequate E&O and AI-specific liability coverage
- Include AI-specific warranties: Get vendors to warrant that their AI systems comply with applicable laws, don't discriminate, and meet documented performance standards
- Define liability limits carefully: Don't accept blanket limitations on vendor liability for AI failures; negotiate appropriate caps and exceptions
- Establish clear data responsibilities: Specify who owns data, how it can be used, what happens if it's mishandled, and who bears liability for data incidents
Conduct Regular AI Audits and Testing
Proactive monitoring helps identify and fix problems before they cause harm
Organizations that can demonstrate ongoing AI monitoring and testing are better positioned to defend against liability claims and may receive more favorable insurance terms.
- Test for bias and discrimination: Regularly analyze AI outputs across demographic groups to identify disparate impacts or discriminatory patterns (a minimal example follows this list)
- Monitor AI accuracy and performance: Track error rates, false positives/negatives, and user complaints to catch degrading performance early
- Document governance activities: Maintain records of oversight committee meetings, impact assessments, testing results, and corrective actions taken
- Establish incident response protocols: Have clear procedures for what to do when AI systems fail, produce harmful outputs, or violate policies
- Conduct periodic external reviews: Consider bringing in third-party experts to audit high-risk AI systems and validate your governance approach
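As a concrete starting point for the bias-testing item above, here is a minimal Python sketch of a disparate-impact check based on the widely used four-fifths rule. It assumes you can export your AI tool's decisions to a CSV with a demographic group label; the file name, column names, and threshold are illustrative assumptions, and the output is a screening signal for your oversight committee, not a legal determination.

```python
# Minimal sketch of a disparate-impact check on exported AI decision logs.
# Assumes illustrative columns: "group" (demographic category) and
# "approved" (1 if the AI recommended approval, 0 otherwise).
import pandas as pd

def adverse_impact_report(path: str, threshold: float = 0.8) -> pd.DataFrame:
    decisions = pd.read_csv(path)
    # Approval (selection) rate for each demographic group
    rates = decisions.groupby("group")["approved"].mean()
    # Ratio of each group's rate to the best-treated group (four-fifths rule)
    report = pd.DataFrame({
        "selection_rate": rates,
        "impact_ratio": rates / rates.max(),
    })
    report["flagged"] = report["impact_ratio"] < threshold
    return report.sort_values("impact_ratio")

if __name__ == "__main__":
    print(adverse_impact_report("ai_screening_decisions.csv"))
```

Any group flagged below the threshold warrants a closer look, documented follow-up, and, if the pattern holds, corrective action in the underlying tool or process.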
Building an Insurance Strategy for AI
Once you've implemented strong governance and contractual protections, work with your insurance broker or risk manager to develop an appropriate insurance strategy. This should include:
Review all existing policies at renewal: Don't automatically renew without carefully examining new exclusions or limitations related to AI. Ask your broker to identify all AI-related language changes and explain their implications for your coverage.
Assess your AI risk profile: Work with your broker to categorize your AI uses by risk level. Low-risk applications like content generation might not require specialized coverage, while high-risk uses like automated eligibility screening definitely do.
Explore specialized AI coverage: For high-risk AI applications, investigate AI-specific insurance products or endorsements. While these may be expensive, they could be worthwhile for mission-critical systems. Ask about performance guarantee insurance, AI liability riders, or model governance protection products.
Document your risk management practices: Organizations that have established AI governance frameworks are better positioned to obtain coverage. Prepare documentation of your AI policies, oversight processes, training programs, and testing protocols to share with insurers. This evidence of responsible AI management can improve your insurability and potentially reduce premiums.
Consider captive or group insurance options: For nonprofits struggling to obtain affordable AI coverage, explore whether professional associations or nonprofit groups are developing captive insurance programs or group purchasing options that might provide better terms. For more on strategic collaboration, see our article on building strategic partnerships for AI.
What Boards and Leadership Need to Know
Nonprofit boards and executive leadership have specific responsibilities regarding AI liability and risk management. Understanding these obligations is crucial for fulfilling fiduciary duties and protecting against personal liability.
Board Oversight Responsibilities
Key areas where board governance is essential
- Policy approval: The board should review and approve organizational AI use policies, ensuring they address liability risks and ethical concerns
- Risk assessment oversight: Require regular reports on AI implementations, including risk assessments for high-stakes applications
- Insurance adequacy: Review insurance coverage annually, specifically addressing AI liability gaps and determining acceptable risk retention levels
- Incident response: Establish protocols for board notification when AI systems cause harm or create significant liability exposure
- Compliance monitoring: Ensure the organization complies with emerging AI regulations and industry standards relevant to your sector
Questions Board Members Should Ask
To fulfill oversight responsibilities, board members should regularly ask these questions:
Strategic Questions
- What AI tools are we currently using, and what are their potential liability risks?
- Do we have adequate insurance coverage for AI-related claims?
- What is our organization's appetite for AI-related risk?
- How are we ensuring AI systems align with our mission and values?
Operational Questions
- Who is responsible for AI governance and oversight in our organization?
- How are we testing AI systems for bias, accuracy, and safety?
- What contractual protections do we have with AI vendors?
- What would we do if an AI system we use causes harm to someone?
Board members who can demonstrate they asked these questions, received adequate information, and made informed decisions about AI risks are better protected against personal liability claims. Documentation of board deliberations on AI governance is essential evidence of proper oversight. For guidance on board communication and reporting, see our article on using AI to prepare board meeting packets.
Conclusion: Building a Comprehensive AI Risk Strategy
The emergence of AI liability risks and the insurance industry's response through new exclusions and specialized products represents a fundamental shift in how nonprofits must think about risk management. The days of assuming traditional insurance policies will cover AI-related incidents are over. Organizations that fail to proactively address these risks face potential financial exposure that could threaten their sustainability and mission.
However, this challenge also presents an opportunity for nonprofits to develop more sophisticated, comprehensive risk management practices. By implementing strong AI governance, establishing robust contractual protections, conducting regular monitoring and testing, and working strategically with insurance brokers to secure appropriate coverage, nonprofits can both reduce their liability exposure and position themselves as responsible AI adopters.
The key is understanding that managing AI liability requires a multilayered approach. Insurance is one important component, but it cannot be your only strategy. The most effective protection comes from preventing problems through good governance, allocating risk contractually where possible, maintaining thorough documentation of your risk management efforts, and then using insurance to address residual exposures.
For nonprofit boards and leadership, this means elevating AI governance to a strategic priority. The risks are too significant and the insurance landscape too uncertain to treat AI as simply another technology implementation. Board oversight, clear policies, documented decision-making, and regular risk assessments are essential not just for liability protection but for responsible stewardship of organizational resources and mission.
As the insurance market continues to evolve and AI capabilities expand, nonprofits that establish strong risk management foundations now will be better positioned to adapt to future changes. Start by assessing your current AI uses and liability exposures, review your insurance coverage thoroughly at renewal, and implement the governance practices that will both protect your organization and demonstrate to insurers that you're a responsible risk. The investment in proper AI risk management is an investment in your nonprofit's long-term sustainability and ability to serve your mission effectively.
Need Help Managing AI Liability Risks?
Let us help you develop a comprehensive AI risk management strategy, implement governance frameworks, and navigate the complex insurance landscape to protect your organization.
