
    Crisis Communication Plans for the Deepfake Era: A Template for Nonprofits

    Deepfake attacks on nonprofit leaders and organizations are no longer theoretical. Build a structured response plan that protects your reputation, guides your team, and helps your organization recover quickly when fabricated content targets your mission.

    Published: February 28, 2026 · 14 min read · Security & Trust

    A video surfaces on social media showing your executive director announcing a controversial policy change. The problem is, your ED never said any such thing. Within hours, donors are calling to withdraw support, board members are demanding explanations, and journalists are requesting comment. This is the new reality of deepfake attacks, and nonprofits are increasingly in the crosshairs.

    Nonprofits are particularly vulnerable to deepfake-driven reputation damage because their credibility rests so heavily on trust. Unlike corporations that can sometimes weather controversy through financial strength, mission-driven organizations depend on public confidence for every dollar raised and every program delivered. A convincing fake video or audio clip can undo in hours what took years of work to build. The question is not whether your organization might face a deepfake incident, but whether you have a plan for when it happens.

    A deepfake crisis communication plan is different from a standard crisis plan in important ways. Traditional crises typically involve real events where the facts are disputed. Deepfake crises involve fabricated events where the core reality itself is under attack. This requires different verification steps, different messaging strategies, and different stakeholder management approaches. This article provides a practical template that nonprofits of all sizes can adapt to their specific context.

    We will walk through every stage of a deepfake crisis: the warning signs that an attack may be coming, the immediate steps to take in the first 60 minutes, the stakeholder communication sequence, the technical verification process, and the longer-term reputation recovery work. By the end, you will have a clear blueprint that your team can customize and keep ready so that a crisis does not find you unprepared.

    Understanding the Deepfake Threat to Nonprofits

    Before building a response plan, it helps to understand what kinds of deepfake attacks nonprofits are most likely to encounter. The threat landscape has expanded significantly in 2026, moving well beyond crudely edited videos into sophisticated audio clones, AI-generated images, and convincing text-based impersonation through fake social accounts.

    Executive voice cloning is among the most immediately dangerous threats. Attackers can use as little as a few seconds of audio from a public speech or podcast appearance to generate realistic voice imitations. These are then used to make phone calls to staff requesting wire transfers, to leave fake voicemails for donors, or to generate audio that appears in fabricated video content. Organizations whose leaders are frequent public speakers or who publish audio content are at particular risk.

    Common Attack Vectors

    How deepfakes typically target nonprofit organizations

    • Fabricated video of executive making controversial statements
    • AI-cloned voice calls impersonating leadership to staff
    • Fake fundraising appeals using authentic-looking branding
    • Fabricated scandals involving leadership or program staff
    • Misinformation campaigns using fake news stories about the organization

    Why Nonprofits Are Targeted

    Factors that make mission-driven organizations vulnerable

    • High public trust makes reputation damage more impactful
    • Advocacy positions create political adversaries with motivation to attack
    • Limited IT security budgets compared to corporate sector
    • Abundant public audio and video content of leaders online
    • Donor bases that respond quickly to perceived scandal

    Understanding these vectors helps organizations build the right defenses. Advocacy organizations that take public positions on controversial topics, service organizations that work with vulnerable populations, and any organization whose leaders are media-visible should treat deepfake preparedness as a strategic priority rather than a hypothetical concern. You can read more about the broader threat landscape in our related article on deepfake protection for nonprofits.

    Pre-Crisis Preparation: Building Your Defense Before You Need It

    The most effective crisis communication happens before the crisis begins. Organizations that have built response infrastructure in advance can act in minutes instead of hours, and the difference in reputation outcomes is substantial. The time to build your deepfake response capacity is during periods of calm, not when fabricated content is already circulating.

    Your first pre-crisis investment is establishing a Crisis Response Team with clearly defined roles. This team should include your executive director or CEO, your communications director or head of external relations, your technology lead or IT manager, legal counsel (even if on retainer rather than on staff), and at least one board representative who can be reached quickly. Each person should know their role before a crisis occurs, and the team should be small enough to convene rapidly but broad enough to cover all the bases.

    Crisis Response Team Structure

    Roles and responsibilities for deepfake incident response

    Incident Commander (Executive Director or Designee)

    Authorizes all public statements, makes final decisions on response strategy, communicates with the board, and serves as the organizational face of the response.

    Communications Lead

    Drafts all public messaging, manages social media channels, coordinates media inquiries, and monitors narrative spread across platforms.

    Technology Analyst

    Conducts technical verification of suspect content, documents digital evidence, coordinates with platform trust and safety teams, and implements security countermeasures.

    Legal Liaison

    Advises on defamation and legal remedies, guides evidence preservation, reviews public statements for liability, and pursues formal removal requests where warranted.

    Stakeholder Coordinator

    Manages communication with donors, board members, partner organizations, and program participants, ensuring key relationships receive personal outreach.

    Beyond team structure, organizations need two specific pre-crisis tools: a verification code system and pre-approved message templates. Verification codes are simple shared secrets, such as a daily code phrase or a personal question only the real executive would know, used to confirm identity in high-stakes situations. When a staff member receives an unexpected call from someone claiming to be the executive director requesting an urgent action, they use the code to verify authenticity before proceeding.
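    The shared-secret idea above can even be made self-rotating. The sketch below is one hypothetical way to derive a short daily code phrase from a secret distributed offline; every name, word list, and parameter here is illustrative, not a vetted security design, and a simple memorized phrase works just as well for many teams.

```python
import datetime
import hashlib
import hmac

# Placeholder secret; in practice this would be generated randomly and
# shared offline, never hard-coded or sent over a possibly compromised channel.
SHARED_SECRET = b"rotate-me-offline"

# Small illustrative word list; a real deployment would use a larger one.
WORDS = ["harbor", "maple", "lantern", "river", "granite", "sparrow",
         "cedar", "compass"]

def daily_code(date: datetime.date, secret: bytes = SHARED_SECRET) -> str:
    """Return a two-word code phrase that changes every day."""
    digest = hmac.new(secret, date.isoformat().encode(), hashlib.sha256).digest()
    first, second = digest[0] % len(WORDS), digest[1] % len(WORDS)
    return f"{WORDS[first]}-{WORDS[second]}"

# Both parties compute today's code independently, so the secret itself
# never travels over the channel being verified.
print(daily_code(datetime.date(2026, 2, 28)))
```

    The design choice worth noting is that verification happens out-of-band: the caller is challenged for something only a holder of the offline secret could produce, which a voice clone alone cannot supply.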

    Pre-approved message templates save critical time during a crisis. Work with legal counsel to draft holding statements, denial templates, and stakeholder communication scripts in advance. These do not need to be specific to any particular scenario; they are frameworks that can be quickly customized when an incident occurs. The time spent drafting these templates before a crisis will easily pay for itself when you need to respond within the hour rather than within the day.

    Pre-Crisis Checklist: Do These Now

    • Identify and brief your Crisis Response Team on their roles
    • Create a verification code system for executive communications
    • Draft holding statements and denial templates with legal review
    • Set up monitoring alerts for organization name, executive names, and key terms
    • Audit and reduce publicly available audio and video content of leaders
    • Establish relationships with social platform trust and safety contacts
    • Train all staff on deepfake verification and reporting procedures
    • Identify external forensics or PR firm you would engage during an incident

    The First 60 Minutes: Your Critical Response Window

    When fabricated content about your organization surfaces, the first hour is your most critical window. Deepfakes spread fastest in the early period before debunking can keep pace, and the narrative that forms during that initial window is often what lingers in public memory. This does not mean you should rush out unverified denials; it means you need a clear protocol for moving quickly through the assessment and initial response phases.

    The first step is confirming the incident and its scope. Someone on your team has flagged content that appears to show your organization or leadership in a false light. Before activating your full crisis response, verify that what you are dealing with is indeed a deepfake or fabricated content, not a misunderstanding about real content. This distinction matters enormously for how you respond.

    The First 60 Minutes: Minute-by-Minute Protocol

    A structured timeline for immediate crisis response

    Minutes 0-10: Assess and Activate

    Receive the alert and do a preliminary review. Is this content spreading? On which platforms? How many views or shares? Notify the Incident Commander immediately. Do not attempt to debunk or respond publicly yet.

    Minutes 10-20: Convene the Team

    Call or text all Crisis Response Team members. Use your emergency communication channel (not email, which may be slow). Brief the team on what is known. Assign initial tasks: Communications Lead monitors spread, Technology Analyst begins forensic review, Legal Liaison is notified and begins reviewing response options.

    Minutes 20-35: Verify and Preserve

    Technology Analyst downloads and preserves copies of the fabricated content for evidence and forensic analysis. Confirm the content is not authentic footage taken out of context. Document URLs, timestamps, and spread metrics as part of the incident record.

    Minutes 35-50: Issue the Holding Statement

    Post a brief, clear holding statement on your primary social channels and website. This does not need to be detailed. Its purpose is to establish that you are aware of the content and that it is under review. Use your pre-drafted template. Get Incident Commander approval before posting.

    Minutes 50-60: Notify Key Stakeholders

    Board chair and key board members should receive a personal call or text from the Incident Commander. Major donors who may have seen the content should receive outreach from the Stakeholder Coordinator. This personal notification prevents key supporters from learning about the incident only from media coverage.
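    The "Verify and Preserve" step above benefits from a tamper-evident record. This is a minimal sketch, assuming a local JSON file and hypothetical field names, of how a technology analyst might hash each preserved copy and log its URL, capture time, and spread metrics in one place.

```python
import datetime
import hashlib
import json
from pathlib import Path

def log_evidence(record_path: Path, content: bytes, url: str,
                 views: int, shares: int) -> dict:
    """Append one evidence entry to the incident record and return it."""
    entry = {
        "captured_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "source_url": url,
        # SHA-256 fingerprint of the saved copy, so later tampering is detectable
        "sha256": hashlib.sha256(content).hexdigest(),
        "views": views,
        "shares": shares,
    }
    entries = json.loads(record_path.read_text()) if record_path.exists() else []
    entries.append(entry)
    record_path.write_text(json.dumps(entries, indent=2))
    return entry

# Illustrative usage with placeholder bytes standing in for the downloaded file.
entry = log_evidence(Path("incident_record.json"),
                     b"<downloaded video bytes>",
                     "https://example.com/fake-video", views=1200, shares=85)
print(entry["sha256"])
```

    Even a simple log like this, started in minutes 20-35, gives legal counsel a dated chain of evidence to work from later.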

    One principle governs the first 60 minutes above all others: speed with accuracy. Do not sacrifice accuracy for speed, and do not sacrifice speed for the perfect message. A clear holding statement that acknowledges the situation without over-committing is far better than silence while your team crafts the ideal response. Silence in a crisis is interpreted as confirmation or guilt; a holding statement buys you time while signaling that you are engaged and in control.

    If the Incident Commander is the target of the deepfake, have them be visible immediately. The most powerful counter to a fabricated video of your executive director saying something false is your real executive director, on camera, clearly and calmly addressing the situation. Platform algorithms and human psychology both respond well to authentic counter-messaging from the actual person being impersonated. This real-time visibility is something deepfakes cannot replicate when the real person is present and engaged.

    Sample Holding Statement Templates

    A holding statement is your first public word on the situation. It needs to accomplish three things: acknowledge that you are aware of the content, clearly state that it is fabricated, and indicate that a fuller response is forthcoming. It should be brief, factual, and authoritative in tone.

    Template A: Video or Audio Deepfake

    For incidents involving fabricated video or audio of a leader

    "We are aware of a video [or audio recording] circulating on social media that falsely depicts [Name/Title]. This content is fabricated. [Name] did not make these statements, and this does not represent the position of [Organization Name]. We are actively working to have this content removed and will share a full statement shortly. If you have questions, contact us directly at [email/phone]."

    Template B: Fake Fundraising Appeal

    For incidents involving fraudulent solicitations using your brand

    "Attention: We have been made aware of fraudulent donation requests circulating that use [Organization Name]'s name and branding. These are not from us. [Organization Name] only accepts donations at [official URL] or by contacting [official contact info]. Please do not respond to any other solicitation. We have reported this to the appropriate authorities and are working to stop this fraud. Your trust is everything to us."

    Template C: Misinformation Campaign

    For AI-generated false news stories or fabricated documents

    "False information about [Organization Name] is being shared online. [Specific claim being addressed] is entirely untrue. [Organization Name] [state what is actually true in one sentence]. We are committed to transparency and will address this fully in the coming hours. For verified information about our work, visit [official URL] or contact us at [contact info]."

    Technical Verification and Platform Response

    While your communications team manages the public response, your technology analyst needs to pursue two parallel tracks: gathering technical evidence that the content is fabricated, and working with platforms to remove it. These processes can be slow, particularly with major social platforms, but starting them immediately gives you the best chance of a timely removal.

    Technical verification involves analyzing the suspect content for signs of AI generation. Common indicators in deepfake video include inconsistent lighting between the face and background, unnatural blinking patterns or eye movement, artifacts around the hairline and edges of the face, and audio that does not quite sync with lip movement. For audio deepfakes, listen for unusual cadence, missing environmental sounds that should be present, and subtle tonal inconsistencies. While a thorough forensic analysis may require professional tools, even preliminary observations can support your public statement that the content is fabricated.

    Deepfake Detection Resources

    Tools and approaches for verifying suspect content

    Free Verification Tools

    • Microsoft's Video Authenticator (analyzes images and video frame by frame)
    • Deepware Scanner (free deepfake detection for video)
    • Hive Moderation API (has a free tier for content authenticity checking)
    • InVID/WeVerify browser extension (video verification and reverse image search)

    Professional Forensics Services

    • Reality Defender (enterprise deepfake detection platform)
    • Sensity AI (media forensics and deepfake detection)
    • Digital forensics firms that can provide expert-witness reports for legal proceedings

    Platform Reporting Paths

    • Most platforms now have specific "synthetic media" or "deepfake" reporting categories
    • Organizations with verified accounts typically receive priority review
    • Legal counsel can file Digital Millennium Copyright Act (DMCA) takedown notices when appropriate

    Platform removals can take anywhere from hours to days, and you cannot rely on them as your primary response. Your communications strategy must work in parallel with, not depend on, platform action. Continue your public messaging and stakeholder outreach even while waiting for platform responses. Document every removal request with timestamps, request numbers, and the names of any platform contacts you speak with. This documentation is valuable if you pursue legal remedies.
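    The documentation habit described above can be kept as structured data rather than scattered notes. This is a hedged sketch of a removal-request tracker; the field names are assumptions for illustration, not any platform's actual schema.

```python
import datetime
from dataclasses import dataclass, field

@dataclass
class RemovalRequest:
    platform: str
    content_url: str
    ticket_id: str          # the platform's own request/reference number
    contact: str = ""       # name of any platform contact spoken with
    submitted_at: str = field(
        default_factory=lambda: datetime.datetime.now(
            datetime.timezone.utc).isoformat())
    resolved: bool = False

def outstanding(requests: list[RemovalRequest]) -> list[RemovalRequest]:
    """Requests still awaiting platform action, oldest first."""
    return sorted((r for r in requests if not r.resolved),
                  key=lambda r: r.submitted_at)

# Illustrative entries with placeholder platforms and ticket numbers.
reqs = [
    RemovalRequest("VideoPlatform", "https://example.com/fake", "TK-1042"),
    RemovalRequest("SocialSite", "https://example.org/clip", "RPT-88",
                   resolved=True),
]
for r in outstanding(reqs):
    print(r.platform, r.ticket_id)
```

    Keeping timestamps and ticket IDs in one list makes it easy to chase stale requests and hands counsel a ready-made exhibit if legal remedies follow.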

    Stakeholder Communication: Who Hears What and When

    Different stakeholders need different information delivered through different channels. Getting this sequence right prevents confusion, prevents key supporters from learning about the crisis from outside sources first, and ensures your narrative gets to the people who matter most before misinformation does.

    Board members come first among stakeholders, specifically the board chair who should receive a direct call from the Incident Commander within the first hour. Board members are often targeted for independent comment by journalists, so they need accurate information quickly and a clear message to convey when reached. Prepare a one-page briefing document that can be shared with the full board within a few hours.

    Stakeholder Communication Sequence

    Priority order and channel guidance for different audiences

    1

    Board Chair (within 30 minutes)

    Direct phone call from Incident Commander. Cover: what happened, what we are doing, what they should say if contacted.

    2

    Full Board (within 2 hours)

    Email or group message with the board briefing document. Include the public statement. Let them know they do not need to respond to media inquiries; direct all inquiries to the Communications Lead.

    3

    Major Donors (within 2-4 hours)

    Personal call or text from a senior relationship manager for your top donors. These supporters have the deepest emotional investment and the most resources. Proactive outreach demonstrates your confidence and care for the relationship.

    4

    Partner Organizations (within 4 hours)

    Email from a relevant program or leadership contact. Briefly explain the situation, provide the public statement, and emphasize continued commitment to the partnership.

    5

    All Staff (within 4 hours)

    Internal message from the Incident Commander explaining the situation, what staff should do if contacted, and how to escalate any questions. Staff who are confused or uninformed can inadvertently worsen a crisis.

    6

    General Donor Base and Subscribers (within 24 hours)

    Email newsletter or broadcast message with the full public statement once it has been finalized and vetted. Include reassurance about your mission and impact continuing without disruption.
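    The sequence above can be encoded as data so that concrete deadlines are computed the moment the incident clock starts. A minimal sketch, using the upper bound of each window from the list above; the hour values and labels come straight from this article, not from any standard.

```python
import datetime

# (audience, hours from incident start, channel) — upper bound of each window
SEQUENCE = [
    ("Board Chair", 0.5, "phone call"),
    ("Full Board", 2, "email / group message"),
    ("Major Donors", 4, "personal call or text"),
    ("Partner Organizations", 4, "email"),
    ("All Staff", 4, "internal message"),
    ("General Donor Base", 24, "newsletter"),
]

def deadlines(incident_start: datetime.datetime):
    """Turn the relative sequence into absolute due times."""
    return [(who, incident_start + datetime.timedelta(hours=h), channel)
            for who, h, channel in SEQUENCE]

# Example: incident confirmed at 9:00 on the morning of February 28.
start = datetime.datetime(2026, 2, 28, 9, 0)
for who, due, channel in deadlines(start):
    print(f"{who}: by {due:%b %d %H:%M} via {channel}")
```

    Printed out and taped inside the crisis binder, a generated schedule like this spares the Stakeholder Coordinator from doing clock arithmetic under pressure.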

    Media inquiries should be handled consistently by the Communications Lead only. All other staff should be instructed to say "I'm not the right person to speak to this, but I can connect you with our Communications Director" and immediately provide that contact information. This prevents inconsistent statements from reaching journalists and ensures that all media messaging is coordinated. If you receive significant media attention, consider whether to schedule a brief press availability with the Incident Commander to address questions directly and visibly.

    Reputation Recovery: The Days and Weeks After

    Effective crisis response does not end when the immediate incident is contained. The reputation recovery phase, which begins once the false content is removed or debunked and the immediate crisis has passed, is where organizations either rebuild trust fully or allow residual doubt to linger. This phase requires deliberate, sustained communication over the weeks following the incident.

    The most powerful reputation recovery tool is authentic proof of your ongoing work. Increased transparency in the weeks following an incident, through more frequent mission updates, impact stories, program highlights, and direct leader communications, demonstrates that the real organization is vibrant, trustworthy, and mission-focused. This flood of authentic content also helps search results and social feeds surface real information about your organization rather than lingering discussion of the incident.

    Short-Term Recovery (Days 1-14)

    • Issue a detailed public statement with the full facts and any technical verification obtained
    • Increase authentic social media posting with mission-focused content
    • Follow up personally with board members and major donors
    • Continue monitoring for continued spread or new iterations of the fake content
    • Assess whether legal action is warranted and viable

    Long-Term Recovery (Weeks 2-8)

    • Conduct stakeholder surveys to assess residual trust impact
    • Publish a transparent account of what happened and how you responded
    • Strengthen deepfake defenses based on lessons from the incident
    • Share your experience (appropriately) with peer organizations to build sector resilience
    • Update your crisis communication plan with the lessons learned

    Organizations that handle deepfake crises well can sometimes emerge with stronger relationships than before. Donors and supporters who see an organization respond to adversity with transparency, competence, and dignity often feel an increased sense of loyalty. The way your organization responds to a false attack reveals your character and capabilities in ways that normal operations never do. Make that revelation a positive one. For more on building trust with stakeholders around AI and technology, see our article on building public trust in your nonprofit AI implementation.

    Integrating Deepfake Planning with Your Broader AI Governance

    A deepfake crisis communication plan should not exist as an isolated document. It belongs within a broader framework of AI governance and organizational resilience. Nonprofits that are building AI policies and oversight structures in 2026 should explicitly address deepfake threats as part of that work, integrating response planning with data security, staff training, and board oversight.

    Your board's role in deepfake preparedness deserves particular attention. Board members are both potential targets of impersonation attacks and the first line of stakeholder communication during a crisis. Ensuring that your board understands the threat, has reviewed the crisis communication plan, and knows their specific role in responding is as important as any technical defense. Consider including a deepfake scenario in your annual board training or risk review.

    Staff training is equally critical. Every person in your organization is a potential vector through which a deepfake attack succeeds, whether through an employee who acts on a fake voice call from an executive, a front-desk worker who confirms false information to a journalist, or a social media manager who engages with fabricated content in a way that amplifies it. Annual training on deepfake recognition and response protocols is now a reasonable expectation for any organization operating in a digital environment.

    Finally, connect your deepfake preparedness to your existing crisis communication infrastructure. Many nonprofits already have communication plans for financial crises, program failures, or leadership transitions. Adding deepfake response as a scenario within that existing structure is more effective than creating a completely separate document. The fundamentals of crisis communication (speed, accuracy, transparency, stakeholder prioritization) are the same regardless of whether the crisis is real or fabricated. The deepfake-specific elements, such as verification procedures, technical evidence gathering, and counter-messaging strategies, layer on top of those fundamentals. We also recommend reading our article on AI misinformation and organizational resilience for a broader perspective on protecting your organization from information threats.

    Conclusion: Preparedness Is Your Best Defense

    Deepfake attacks are a new kind of threat, but they are not an unstoppable one. Organizations that invest in preparedness (building their Crisis Response Team before they need it, drafting their templates before they use them, and training their staff before an incident occurs) are in a fundamentally different position from those caught without a plan. The gap between these two groups in recovery speed and reputation outcomes is substantial.

    The deepfake era does require new skills and new vigilance. It requires thinking carefully about how much audio and video content of your leaders exists online and how it might be used. It requires establishing verification systems that would have seemed paranoid just a few years ago but are now simply prudent. And it requires helping your team understand that a surprising call, an unexpected request, or content that feels off might be exactly that: an artificial attempt to exploit your trust.

    The mission of your organization is too important to leave vulnerable to these attacks. Building a deepfake crisis communication plan is a concrete, achievable step that any nonprofit can take, regardless of size or technical capacity. Start with the pre-crisis checklist, convene a conversation with your leadership team, and begin drafting your templates today. When the time comes, you will be ready.

    Strengthen Your Organization's AI Resilience

    From deepfake defenses to AI governance frameworks, One Hundred Nights helps nonprofits navigate the opportunities and risks of emerging technology with confidence.