
    Cyber Insurance and AI: How Nonprofits Should Update Their Coverage for 2026

    AI is fundamentally changing both the cyber threat landscape and the insurance policies designed to protect against it. Many nonprofits now face a dangerous gap: their cyber coverage was written for a world that no longer exists, while the AI tools they use and the AI-powered attacks they face are creating risks that standard policies may not cover.

    Published: March 22, 2026 · 14 min read · Risk & Insurance

    Picture this scenario: your nonprofit's finance director receives an urgent video call from what appears to be your executive director, requesting an immediate wire transfer to cover an emergency expense. The video looks authentic, the voice sounds right, and the request has enough detail to seem plausible. The wire goes out. You later discover the call was generated entirely by AI, a deepfake sophisticated enough to fool experienced staff. When you file a claim with your cyber insurer, you discover that your policy's social engineering coverage doesn't explicitly cover AI-generated impersonation. You're on your own for the loss.

    This scenario is no longer hypothetical. Deepfake-enabled fraud is escalating rapidly, and nonprofits, which typically operate with lean IT teams, limited cybersecurity budgets, and high staff turnover, are among the most exposed organizations. At the same time, the AI tools nonprofits are adopting for fundraising, communications, and operations are creating an entirely new category of liability risk that most existing cyber policies don't address clearly.

    The cyber insurance market is responding, but it's doing so unevenly. Some carriers are adding explicit AI endorsements and deepfake coverage. Others are quietly introducing exclusions that could leave nonprofits without coverage for AI-related losses. Understanding where your policy stands and how to update it for the AI era is no longer an optional task for nonprofit risk managers. It's urgent work that belongs on the board's agenda.

    This article walks nonprofit leaders through what's changing in the cyber insurance landscape, where coverage gaps are most likely to appear, how insurers are evaluating AI risk in organizations like yours, and the concrete steps you can take before your next renewal to make sure your coverage actually matches the world you're operating in. It builds on previous guidance about coverage gaps in nonprofit AI insurance and how documented AI governance can reduce your premiums.

    The Dual AI Risk Problem Nonprofits Face

    To understand why cyber insurance is becoming more complicated for nonprofits, it helps to recognize that AI creates risk on two distinct fronts simultaneously. Most people think primarily about AI as a weapon attackers use against organizations. That's real and growing. But nonprofits adopting AI tools are also creating a second category of risk: liability arising from their own use of AI. Insurance markets treat these two categories very differently, and the gap between them is where many nonprofits will find themselves underinsured.

    AI as a Weapon Against You

    Threats where your organization is the victim

    • AI-generated phishing emails that are nearly indistinguishable from legitimate communications
    • Deepfake voice and video calls impersonating executives, board members, or donors
    • AI-accelerated ransomware that deploys faster and spreads more aggressively
    • Business email compromise where AI clones the writing style of trusted contacts
    • Automated vulnerability scanning that identifies and exploits weaknesses faster than humans can respond

    AI as a Tool You Use

    Liability arising from your own AI adoption

    • Staff pasting donor or beneficiary data into AI tools, creating unintended data exposure
    • AI-generated content that creates copyright, defamation, or privacy liability
    • Donor-facing chatbots that capture or process personal information without proper consent
    • Shadow AI: staff using unapproved tools outside organizational policy
    • Third-party AI vendors experiencing breaches that expose your organization's data

    Standard cyber policies were primarily designed to cover the first category: your organization as a victim of external attacks. As AI tools proliferate inside nonprofits, insurers are actively debating how to handle the second category. Many are choosing to exclude it, push it to specialty products, or leave it in ambiguous territory where claims may or may not succeed depending on how the loss is characterized. This is the coverage gap that nonprofit leaders need to understand and close.

    What's Actually Changing in Cyber Insurance Policies

    The cyber insurance market is in active flux. After several years of premium increases following major ransomware events, rates have stabilized in many segments. But the policy language itself is being rewritten rapidly as carriers try to get ahead of AI-related exposures they don't yet fully understand. The changes fall into two categories: new exclusions being added and new endorsements being offered.

    Exclusions That Are Appearing in 2026 Policies

    The most significant shift is the introduction of AI-related exclusions, primarily in professional liability, directors and officers, and errors and omissions policies, though they are beginning to appear in cyber policies as well. Some carriers have introduced broad endorsements that exclude "any claim arising out of AI use, output, training, advice, or decision-making." These sweeping exclusions could affect a wide range of claims if your organization uses AI in any operational capacity.

    More targeted exclusions are also emerging. Losses arising from employees using AI tools outside approved organizational policies (often labeled unauthorized AI use) are increasingly excluded. Claims related to AI-generated intellectual property disputes, including training data liability and generated content claims, are being carved out. And some policies are beginning to exclude losses from algorithmic errors or biased AI outputs.

    The critical distinction to understand is that most cyber policies are still covering AI as a threat vector: if an attacker uses AI to compromise your systems, you're generally still covered. The exclusions are primarily targeting scenarios where your organization's own AI use creates liability. But the language is not always clear, and ambiguous policy wording tends to favor insurers at claim time.

    Endorsements That Are Expanding Coverage

    On the positive side, some forward-thinking insurers are explicitly adding AI-related coverage. Coalition has added an affirmative AI endorsement that expands the definition of a security failure to include AI security events, explicitly covering AI-originated attacks against the insured. Coalition also added deepfake-specific coverage in late 2025, covering reputational harm from deepfake incidents, forensic analysis costs, legal support for takedowns, and crisis communications.

    Social engineering endorsements, which protect against impersonation scams and fraudulent payment requests, have become far more important in the AI era. Nonprofit leaders should treat these as essential rather than optional additions. AI makes social engineering dramatically more convincing, and a social engineering endorsement with an appropriate sublimit is basic protection for any organization with check-writing authority or wire transfer capabilities. Given the growing threat of deepfake fraud, having explicit coverage language here is increasingly critical.

    Understanding the AI Threat Landscape for Nonprofits

    To make informed decisions about coverage, nonprofit leaders need to understand how AI is changing the specific threats their organizations face. The risk profile for nonprofits in 2026 looks meaningfully different from even two years ago, and standard security training and controls haven't kept pace.

    AI-Generated Phishing

    The vast majority of phishing emails are now AI-generated, according to security researchers. AI allows attackers to craft highly personalized messages at scale, using publicly available information from LinkedIn, websites, and social media to tailor each attack to the specific recipient. The grammar errors and awkward phrasing that used to signal phishing attempts are largely gone.

    For nonprofits, this means staff who work with donors, grant makers, and partner organizations are receiving extremely convincing requests that appear to come from people they know and trust. Training staff to recognize phishing is harder when the phishing looks identical to legitimate communications.

    Deepfake Executive Fraud

    AI voice cloning and video deepfakes have moved from science fiction to accessible tools. High-profile incidents have demonstrated that even experienced executives can be fooled by AI-generated video calls that appear to show trusted colleagues. For nonprofits, the most common scenario involves fake executive calls or video meetings requesting urgent wire transfers or changes to vendor payment details.

    Standard verification procedures, such as calling back on a known phone number, can defeat these attacks. But many nonprofits haven't yet updated their financial controls to require callback verification as standard practice, and staff may not realize the organization is exposed to deepfakes.

    Shadow AI and Data Leakage

    Shadow AI refers to staff using AI tools without organizational knowledge or approval. Many nonprofits are discovering that staff routinely paste sensitive information (donor records, beneficiary data, grant proposals, and financial details) into free-tier AI chatbots without understanding the data handling implications.

    Free AI tiers often have different data retention and usage policies than enterprise versions. Information pasted into a free chatbot may be retained, reviewed by humans, or potentially used for model training, depending on the service's terms. This creates genuine data exposure that could trigger regulatory obligations and, unless the policy carries a shadow AI exclusion, claims under your cyber policy.

    Third-Party AI Vendor Risk

    Many nonprofit tools now include AI features: CRM platforms with predictive analytics, email platforms with AI-generated content suggestions, and donor management systems with built-in AI scoring. If any of these vendors experiences a breach that exposes your data, whether and how much your cyber policy responds depends on policy language and sublimits that vary significantly across carriers.

    Third-party vendor liability coverage is often sublimited in cyber policies, meaning the cap on what your insurer will pay for vendor-caused losses may be significantly lower than your overall policy limit. With AI embedded in so many vendor platforms, this sublimit deserves attention at renewal.

    What makes these threats particularly acute for nonprofits is the combination of factors that increase exposure: high reliance on email for donor communications, limited IT security resources, significant staff and volunteer turnover that limits institutional security awareness, and a culture of trust that attackers can exploit. These aren't reasons to avoid AI adoption, but they are reasons to think carefully about risk management as your organization's AI footprint grows.

    How Insurers Are Evaluating AI Risk in Your Organization

    Cyber insurance underwriting is changing. The checkbox questionnaires that characterized early cyber applications are giving way to more detailed assessments of an organization's actual security posture. For AI specifically, underwriters are developing new evaluation frameworks that nonprofits will increasingly encounter at renewal.

    The trajectory is clear: just as multi-factor authentication and backup procedures became prerequisites for cyber coverage a few years ago, AI governance is becoming a factor in how carriers assess and price AI-related risk. Organizations that can demonstrate intentional, documented governance around AI tools are likely to receive better terms and face fewer coverage disputes than those that can't. This connects directly to the broader point made in our coverage of how AI governance reduces insurance premiums.

    AI Governance Documentation

    The most common new underwriting questions center on whether your organization has formal governance around AI use.

    Underwriters are asking: Does the organization have a written AI acceptable-use policy? Are employees trained on AI misuse and social engineering? Does leadership have documented oversight of which AI tools are in use? Organizations that can answer yes to these questions, and produce documentation to support it, are demonstrating the kind of intentional governance that reduces insurer uncertainty.

    The acceptable-use policy is the most foundational document. It should specify which AI tools are approved, what categories of data employees may or may not input into AI systems, who is responsible for evaluating new AI tools before adoption, and what disciplinary process exists for violations. If your organization doesn't have this document, creating it is the single highest-leverage action you can take before your next renewal.

    Shadow AI Controls

    Carriers are increasingly asking about unauthorized AI tool use.

    Does the organization maintain an inventory of AI tools in use? Are there mechanisms for detecting or blocking unsanctioned AI use? Is there a process for evaluating new AI tools before employees adopt them? These questions reflect insurer concern that shadow AI creates unquantifiable risk: losses arising from tools the organization didn't know staff were using are harder to claim and harder to prevent.

    For most nonprofits, a perfect shadow AI detection system isn't realistic. But demonstrating that you've conducted an AI audit, surveyed staff about tool use, and have a policy framework for approvals goes a long way. Insurers are looking for evidence of intentional management, not perfection.

    Data Handling with AI Tools

    How your organization manages data within AI systems matters to underwriters.

    Are there clear policies about what data categories employees can input into AI systems? Is donor personally identifiable information excluded from AI tool inputs? Have third-party AI vendors been assessed for security compliance? Do vendor agreements include appropriate data processing addenda? These questions help underwriters understand whether your AI adoption creates additional data breach exposure.

    For nonprofits handling beneficiary data, donor records, or health-related information, the data handling question is particularly important. A clear policy prohibiting the use of sensitive personal data in AI tools, even with good intentions like asking an AI to help draft a personal follow-up letter using donor information, reduces both the risk of actual harm and the coverage complications that could follow.
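
    A policy like this is easier to follow when there's a lightweight guardrail behind it. The Python sketch below is a minimal, hypothetical illustration of a pre-submission screen that flags obvious personal-data patterns before a draft is pasted into an external AI tool. The patterns and the screen_for_pii helper are invented for this example; a real deployment would rely on a dedicated data loss prevention tool.

```python
import re

# Illustrative patterns only; a production deployment would use a
# dedicated data loss prevention (DLP) tool with more robust detection.
PII_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone number": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def screen_for_pii(text: str) -> list[str]:
    """Return the categories of likely personal data found in the text."""
    return [label for label, pattern in PII_PATTERNS.items() if pattern.search(text)]

draft = "Follow up with Jane Smith (jsmith@example.org, 555-867-5309) about her pledge."
findings = screen_for_pii(draft)
if findings:
    print("Stop: do not paste into an AI tool. Found:", ", ".join(findings))
else:
    print("No obvious personal data detected; proceed per the acceptable-use policy.")
```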

    How to Update Your Coverage Before Your Next Renewal

    Cyber policy renewal is the moment of maximum leverage. Before your next renewal, there are concrete steps you can take to close coverage gaps, improve your terms, and ensure that the coverage you're buying actually matches the risks your organization faces. These steps don't require legal expertise, but they do require intentional preparation.

    1. Inventory Your AI Tools

    Before speaking with your broker, document every AI tool your staff uses, both officially approved and informally adopted. Include ChatGPT, Microsoft Copilot, Grammarly, AI-assisted fundraising platforms, any AI features embedded in your CRM or email tools, and any donor-facing chatbots. This inventory is the foundation for both your governance documentation and your insurance conversation.
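
    The inventory doesn't need specialized software. As a starting point, a sketch like the following, with invented records and illustrative column names, captures the fields that matter for both governance and underwriting conversations:

```python
import csv
from dataclasses import asdict, dataclass

# Column names are illustrative; adapt them to what your broker
# and governance committee actually ask for.
@dataclass
class AIToolRecord:
    tool: str
    vendor: str
    approved: bool        # officially sanctioned, or shadow use?
    data_touched: str     # e.g. "donor PII", "public content only"
    owner: str            # staff member responsible for the tool

inventory = [
    AIToolRecord("ChatGPT", "OpenAI", False, "unknown - survey staff", "unassigned"),
    AIToolRecord("Microsoft Copilot", "Microsoft", True, "internal documents", "IT manager"),
    AIToolRecord("Grammarly", "Grammarly", True, "draft communications", "Communications"),
]

with open("ai_tool_inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(inventory[0]).keys()))
    writer.writeheader()
    for record in inventory:
        writer.writerow(asdict(record))
```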

    2. Read Every Exclusion in Your Current Policy

    Review your current cyber policy, along with your D&O and E&O policies, specifically for AI-related exclusions. Look for language about "AI use," "AI output," "algorithmic decisions," "unauthorized software," or "emerging technology." If any exclusion refers broadly to AI activities, flag it for discussion with your broker. Broad exclusions should be narrowed or removed if possible.
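
    If your policies are available as plain text (most brokers can supply PDFs, which convert readily), even a crude keyword scan helps you build the list of clauses to raise. The sketch below assumes policies exported to text files in a hypothetical policies/ folder; its term list is a starting point drawn from the language above, not an exhaustive catalogue:

```python
from pathlib import Path

# Terms drawn from the exclusion language discussed above; extend the
# list with anything your broker flags.
AI_EXCLUSION_TERMS = [
    "artificial intelligence", "ai use", "ai output", "algorithmic",
    "unauthorized software", "emerging technology", "machine learning",
]

def flag_exclusion_language(policy_path: Path) -> None:
    """Print every line of a plain-text policy that mentions an AI-related term."""
    for lineno, line in enumerate(policy_path.read_text(errors="ignore").splitlines(), 1):
        if any(term in line.lower() for term in AI_EXCLUSION_TERMS):
            print(f"{policy_path.name}:{lineno}: {line.strip()}")

for path in Path("policies").glob("*.txt"):   # assumes policies exported to text
    flag_exclusion_language(path)
```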

    3. Request Affirmative AI Coverage Language

    Rather than relying on policy silence, ask your carrier to explicitly confirm that AI-initiated attacks against your organization are covered, and that your organization's use of approved AI tools does not automatically void coverage. Affirmative language is stronger than ambiguity, which tends to resolve in the insurer's favor at claim time. Coalition's affirmative AI endorsement is a model to ask your broker about.

    4. Add or Upgrade Social Engineering Coverage

    Social engineering endorsements, which cover fraudulent payment requests and impersonation scams, should be treated as mandatory for nonprofits with wire transfer authority. Given that AI makes these attacks dramatically more convincing, evaluate whether your current sublimit is adequate for your organization's financial profile. Ask about deepfake-specific coverage if your carrier offers it.

    5. Review Third-Party AI Vendor Sublimits

    Ask your broker specifically about coverage for losses caused by third-party AI vendor failures. Many policies sublimit this category significantly below your overall policy limit. With AI embedded in so many platforms your organization depends on, this sublimit may be unrealistically low. Understand where the cap sits, and consider whether it warrants adjustment.
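
    The arithmetic behind this concern is simple but easy to overlook. The sketch below, using entirely hypothetical figures, shows how the sublimit rather than the headline policy limit determines what the insurer actually pays in a vendor-caused breach:

```python
# All figures are hypothetical, for illustration only.
policy_limit = 1_000_000        # headline cyber policy limit
vendor_sublimit = 100_000       # cap on claims caused by third-party vendors
vendor_breach_cost = 350_000    # notification, forensics, credit monitoring

# The headline limit doesn't govern here; the sublimit does.
insurer_pays = min(vendor_breach_cost, vendor_sublimit, policy_limit)
organization_absorbs = vendor_breach_cost - insurer_pays
print(f"Insurer pays ${insurer_pays:,}; organization absorbs ${organization_absorbs:,}.")
# -> Insurer pays $100,000; organization absorbs $250,000.
```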

    6. Request a Cross-Policy AI Gap Analysis

    AI-related risks are scattered across cyber, D&O, E&O, and general liability policies. Ask your broker to review all your policies specifically for AI-related gaps and identify which policy would respond to various AI loss scenarios. This gap analysis often reveals overlaps in some areas and complete gaps in others, which is the information you need to make informed coverage decisions.

    7. Disclose AI Use Proactively on Applications

    When completing your renewal application, proactively disclose which AI tools your organization uses and what governance you have in place. Hiding AI use to avoid underwriting scrutiny is a dangerous strategy: if a claim arises related to AI tools that weren't disclosed, the carrier may argue misrepresentation and deny coverage entirely. Demonstrating governance earns better terms; hiding usage creates claim risk.

    15 Questions to Ask Your Insurance Broker About AI Coverage

    Most nonprofit leaders don't know what questions to ask their broker about AI coverage, and most brokers don't proactively raise these issues unless prompted. The following questions will help you have a more productive coverage conversation and identify gaps before they become claim disputes. Consider sharing this list with your board's finance or risk committee as part of your AI governance framework.

    Coverage Scope

    • Does our cyber policy explicitly cover losses from AI-generated phishing, AI-powered business email compromise, and deepfake-enabled fraud against our organization?
    • Does our policy include or exclude losses arising from AI tools our staff uses? Where is the exact boundary?
    • If an employee uses an unsanctioned AI tool that causes a data breach, are we covered?
    • Does our policy cover deepfake-related reputational harm or impersonation, and is a specific endorsement available?
    • Are there any AI-related exclusions in our cyber, D&O, or E&O policies we should be aware of?

    Limits and Sublimits

    • Are there sublimits for AI-related claims specifically, or for social engineering and funds transfer fraud?
    • Has the carrier imposed lower limits for claims involving third-party AI vendor failures?
    • If a third-party AI vendor we rely on suffers a breach and exposes donor data, which policy responds and up to what limit?

    Underwriting Requirements

    • What AI governance documentation do you need from us at renewal to maintain current terms?
    • Will our premium or terms change if we adopt new AI tools, and should we notify you when we do?
    • What security controls related to AI use are you now requiring or expecting for coverage?

    Gap Analysis

    • Which of our current policies would respond to an AI-related loss, and where are the coverage gaps?
    • Should we consider a standalone AI liability policy, and what exposures would it cover that our existing policies do not?
    • Does our cyber policy cover regulatory defense costs under new state AI laws in Texas, California, Illinois, and Colorado?
    • Is our incident response coverage sufficient given that AI-accelerated attacks can escalate faster than traditional attacks?

    Operational Controls That Reduce AI Risk and Improve Coverage Terms

    Cyber insurance is not a substitute for risk management. It's a financial backstop for losses that security controls and operational procedures couldn't prevent. The most cost-effective approach is reducing the likelihood of loss in the first place, which also tends to earn better coverage terms. The following operational controls are specifically relevant to AI-related risks and are increasingly factored into underwriting assessments.

    Update Financial Verification Procedures

    Given the rise of AI-generated deepfake fraud, any verification procedure that relies solely on email or video confirmation is inadequate for financial transactions. Implement callback verification as standard procedure: before executing any wire transfer, change to vendor payment details, or significant financial commitment, staff should call back using a known, independently verified phone number, not a number provided in the request being verified.

    This simple procedural change defeats the vast majority of deepfake-enabled business email compromise attacks. It also demonstrates to your insurer that you have controls in place for the specific risk that social engineering endorsements are designed to cover. Document the procedure in writing and train all relevant staff, not just finance personnel.
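
    The rule can even be encoded so the verified number is the only one staff retrieve. The sketch below is a hypothetical illustration, not a product: VERIFIED_DIRECTORY stands in for whatever independently maintained contact list your finance team keeps, and the key property is that a number supplied inside the request itself is never trusted:

```python
# Known-good contact numbers, maintained independently of any incoming request.
VERIFIED_DIRECTORY = {
    "executive.director@example.org": "+1-555-0100",
    "cfo@example.org": "+1-555-0101",
}

def callback_number(requester: str, number_in_request: str | None = None) -> str:
    """Return the number staff must call back - never the one supplied in the request."""
    verified = VERIFIED_DIRECTORY.get(requester)
    if verified is None:
        raise ValueError("Requester not in the verified directory: escalate before paying.")
    if number_in_request and number_in_request != verified:
        print("Warning: the request supplied a different number; ignore it.")
    return verified

# A deepfake caller supplying their own 'callback' number still gets routed
# to the independently verified one.
print(callback_number("cfo@example.org", "+1-555-9999"))
```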

    Conduct an AI Shadow Audit

    Before your next renewal, survey your staff about which AI tools they use for work-related tasks. Include questions about tools used on personal devices for work purposes. The results often reveal both the scale of shadow AI adoption and the categories of work where AI use is most prevalent. You may discover that tools you weren't aware of are handling sensitive organizational data.

    The goal isn't to prohibit AI use, which would be both impractical and counterproductive, but to understand your actual exposure and make informed decisions about which tools to officially approve, which to restrict, and what data handling policies need to be in place. Document the audit process and findings. This documentation demonstrates intentional governance to your insurer and creates a baseline for future assessments. If you're interested in a more comprehensive approach to AI governance, our article on building AI governance frameworks covers this in detail.
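
    Once survey responses are in, a few lines of analysis make the exposure visible. This is a minimal sketch with invented responses and an assumed approved-tools list; the useful output is one line per tool with its approval status, which feeds directly back into the inventory from step 1:

```python
from collections import Counter

# Invented survey responses: (staff member, tool they reported using).
survey_responses = [
    ("alice", "ChatGPT"), ("bob", "ChatGPT"), ("bob", "Claude"),
    ("carol", "Gemini"), ("dave", "ChatGPT"),
]

APPROVED_TOOLS = {"Microsoft Copilot", "Grammarly"}   # from the acceptable-use policy

usage = Counter(tool for _, tool in survey_responses)
for tool, count in usage.most_common():
    status = "approved" if tool in APPROVED_TOOLS else "SHADOW - needs review"
    print(f"{tool}: {count} staff ({status})")
```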

    Update Staff Training for AI-Enhanced Threats

    Standard phishing awareness training is increasingly inadequate because it teaches staff to spot the markers of unsophisticated phishing: poor grammar, suspicious links, unfamiliar senders. AI-generated phishing eliminates most of those markers. Training needs to shift from teaching staff to identify bad phishing to teaching them to apply procedural verification for high-risk actions regardless of how legitimate a communication appears.

    Include specific scenarios involving AI threats in your training program: what to do if an executive call requests an urgent transfer, how to verify that a video call is genuine, how to recognize that a request is designed to create urgency and bypass verification. Most cyber insurers include access to security training resources; check whether your carrier's training content has been updated for AI threats. Insurance discounts are sometimes available for documented training completion.

    Review Vendor Data Processing Agreements

    Every vendor that uses AI and handles your organization's data should have a current data processing agreement or addendum that specifies what data they collect, how it's used, whether it's used for AI training, what security standards apply, and how breaches are handled and communicated. Many legacy vendor contracts were signed before AI was a consideration and don't address these questions.

    This review is both a risk management exercise and a coverage protection measure. If a vendor breach triggers a claim under your cyber policy, documentation of due diligence, including appropriate vendor agreements, demonstrates that your organization took reasonable precautions. Absence of documentation can complicate claim handling and, in some cases, provide grounds for coverage disputes.

    Special Considerations for Health-Related and Multi-State Nonprofits

    Not all nonprofits face the same regulatory environment, and the intersection of AI with specific regulatory frameworks creates coverage considerations that go beyond standard cyber policy analysis. Two categories of nonprofits deserve particular attention.

    Health-Related Organizations and HIPAA

    Nonprofits in healthcare, social services, mental health, and community health contexts handle electronic protected health information (ePHI) subject to HIPAA. The HHS Office for Civil Rights finalized major updates to the HIPAA Security Rule in 2025, removing the distinction between required and addressable safeguards and introducing mandatory multi-factor authentication for all access to ePHI.

    For these organizations, any AI tool that handles, processes, or has access to ePHI must meet HIPAA's security requirements. Staff using a general-purpose AI chatbot to assist with documentation or care coordination, even with good intentions, creates HIPAA compliance risk if the tool isn't covered by a business associate agreement. A breach caused by an unapproved AI tool handling ePHI could result in both regulatory fines and a challenged insurance claim if the insurer argues that HIPAA non-compliance contributed to the loss.

    Multi-State Operations and New AI Laws

    Texas, California, Illinois, and Colorado are enforcing AI-related statutes in 2026, with more states likely to follow. These laws have varying requirements around disclosure of AI use, algorithmic accountability, and training data transparency. Nonprofits operating across multiple states need to understand whether their AI tool use triggers disclosure obligations in any of these jurisdictions.

    The insurance implication is coverage for regulatory defense costs: if a state agency investigates your organization's AI practices under one of these statutes, you'll want your cyber or D&O policy to cover defense costs. Not all policies include this coverage for all regulatory proceedings. Ask your broker specifically whether your policy responds to state-level AI regulatory investigations, not just HIPAA or GDPR proceedings.

    The Window to Act Is Now

    The cyber insurance market is in transition, and the window to update your coverage on favorable terms is open now. Carriers are still building their AI underwriting frameworks, and organizations that demonstrate good AI governance can influence their coverage terms. Once exclusions become standard and rates reprice for AI risk, organizations without established governance may face coverage gaps that are expensive to close.

    The practical steps are manageable: create an AI acceptable-use policy, conduct a shadow AI audit, update your financial verification procedures for the deepfake era, and have a direct conversation with your broker about AI coverage using the questions in this article. None of these steps require specialized expertise or significant resources. They require intentional leadership attention, which is exactly what your board and executive team can provide.

    Cyber insurance exists to ensure that a significant security incident doesn't threaten your organization's ability to continue its mission. The AI era is creating new and evolving threats that your current coverage may not adequately address. Treating coverage review as ongoing work, not a once-a-year checkbox, positions your organization to adapt as the risk landscape continues to change.

    Your donors, beneficiaries, and staff are counting on you to manage these risks thoughtfully. Updating your cyber coverage for the AI era is an act of organizational stewardship as important as any other risk management decision you make. Start the conversation at your next board meeting, and follow it with a dedicated discussion with your insurance broker before your next renewal.

    Get Expert Guidance on AI Risk for Your Nonprofit

    Navigating the intersection of AI adoption and insurance coverage is complex. One Hundred Nights helps nonprofits build AI governance frameworks that reduce risk and improve coverage terms. Let's talk about your organization's specific situation.