When AI Fakes Your Executive Director: Preparing for Deepfake Attacks on Nonprofit Leaders
A finance staff member receives a video call from what appears to be your executive director requesting an urgent wire transfer. The voice, the face, and the mannerisms all match. But it is not your ED. It is an AI-generated deepfake. Attacks like this have already cost organizations tens of millions of dollars. Nonprofits are uniquely vulnerable, and most have no defenses in place.

In February 2024, a finance worker at global engineering firm Arup joined a video call with colleagues who appeared to include the company's CFO and several other senior executives. The meeting felt routine. The faces matched. The voices matched. Over the course of the call, the worker was persuaded to authorize wire transfers totaling $25 million. Every person on that call except the finance worker was an AI-generated deepfake.
That attack represents a landmark case, but it is not an isolated one. Fraudsters used a cloned voice of the WPP CEO alongside YouTube footage in a Teams meeting to attempt to extract money and personal details from a senior executive. A Singapore finance director authorized a half-million-dollar transfer after a fake Zoom call in which every participant's face had been replaced in real time. A UK subsidiary executive wired 220,000 euros on instructions from what sounded exactly like the CEO of the parent company, an early application of voice cloning technology. These attacks are accelerating in frequency, sophistication, and scale.
Nonprofits face this threat with a set of structural vulnerabilities that for-profit companies do not share. Executive directors appear frequently in public videos, donor webinars, conference presentations, and fundraising appeals, providing attackers with abundant training data for voice and video cloning. IRS Form 990 filings are public documents that list board member names, organizational structure, and financial details. Lean operating models often mean a single administrator has authority to approve significant transactions. High staff and volunteer turnover means authentication norms are less established. And the trust relationships that nonprofits depend on for fundraising create the exact emotional leverage attackers exploit.
Most nonprofit organizations are not prepared for this threat. This article explains how deepfake attacks work, why nonprofits are particularly attractive targets, and what practical steps organizations can take immediately to protect themselves without requiring significant technical investment.
The Scale of the Deepfake Threat in 2026
Deepfake technology has moved from a novelty to a criminal tool with remarkable speed. Deepfake files surged from around 500,000 in 2023 to a projected 8 million in 2025, a sixteen-fold increase in two years. Voice phishing attacks using deepfaked audio surged dramatically in early 2025 compared to the previous year. These are not niche attacks targeting large financial institutions. They are increasingly automated, affordable, and directed at organizations of all sizes.
The financial consequences are severe. Deepfake fraud losses in the United States reached over a billion dollars in 2025 according to Surfshark research, tripling from the year before. The average per-incident business loss for deepfake-enabled fraud in 2024 was nearly $500,000. Some of this loss is recoverable through banks and insurance, but much is not, and the reputational damage that follows a successful attack on a nonprofit, including donor trust erosion and funder concerns, extends well beyond the immediate financial impact.
What makes the current moment particularly dangerous is the accessibility of deepfake creation tools. Attackers need as little as three seconds of audio to create a voice clone with high match accuracy. Commercial voice synthesis tools that were initially designed for legitimate uses can generate convincing speech in another person's voice from a short audio sample. The cost of mounting a deepfake attack has dropped dramatically, and the dark web marketplace for deepfake-as-a-service has grown substantially. Attacks that once required significant technical expertise and resources can now be launched by anyone willing to pay a modest fee.
How Deepfake Attacks Work: What Nonprofits Need to Understand
Understanding the mechanics of deepfake attacks helps organizations recognize their vulnerabilities and understand why certain protective measures are effective. The attacks do not require extraordinary technical sophistication. They require publicly available tools, patience, and knowledge of how nonprofit organizations work.
For voice cloning attacks, attackers begin by collecting audio samples of the target executive from public sources. Nonprofit executive directors are often prolific public speakers: conference presentations, donor webinars, podcast appearances, organizational videos, and fundraising appeals all generate usable audio. The collected audio is fed into a voice synthesis model that learns the speaker's distinctive characteristics, including pitch, cadence, rhythm, breathing patterns, and emotional tone. The resulting model can then generate new speech in that voice saying anything the attacker types. Modern tools can produce convincing results from a few seconds of audio.
For video deepfake attacks, the process requires more source material but is equally accessible. Attackers collect video from conference talks, organizational YouTube channels, LinkedIn profiles, and news coverage. A face-swap model maps the target's facial geometry and expressions onto an actor's face during a live video call, replacing the attacker's appearance with the executive's likeness in real time. The Arup case used this technique: an entire video meeting populated with deepfaked participants, all generated simultaneously to create an overwhelming sense of authentic organizational context.
Simpler attacks use pre-recorded deepfake video delivered via email, messaging platforms, or social media. The executive appears to deliver a message, announce a policy change, or authorize a transfer. These do not require real-time capability and can be produced for a fraction of the cost of live deepfake attacks.
Voice Cloning Attacks
Most common attack type for financial fraud
Attacker scrapes audio from public sources, feeds it into voice synthesis software, then calls a finance staff member posing as the executive director, board treasurer, or a major donor. Creates urgency, requests secrecy, and instructs a wire transfer to a new account.
- Works via phone calls, voicemails, and WhatsApp voice messages
- Can be launched with minimal audio from public sources
- Does not require technical expertise to deploy
Real-Time Video Deepfakes
More sophisticated but increasingly accessible
During a Zoom or Teams call, the attacker's face is replaced in real time with the target executive's likeness using face-swap software. The target can "ask questions" of the fake executive and receive responses, making detection extremely difficult.
- Can include multiple fake participants simultaneously
- Attackers often claim "bad connection" to explain artifacts
- Used in the Arup $25M case and Singapore $499K case
Synthetic Fundraising Appeals
Reputational threat targeting your donors
Attackers create fake videos featuring your executive director or a celebrity board member launching an urgent fundraising campaign, directing donors to a fraudulent website that captures donations. Even after identification as fake, these attacks erode donor trust.
- Particularly effective on social media and email
- Damages brand reputation even after debunking
- May target donor database contacts directly
Grant Funder Impersonation
Targeting nonprofit fund management
Attackers impersonate a foundation program officer to extract banking information, issue fake "grant approval" notifications, or redirect existing grant disbursements to fraudulent accounts. High-dollar, trust-based transactions make this particularly effective.
- Exploits the authority and trust of funder relationships
- Foundation contacts are often semi-public information
- May target banking detail updates for existing grants
Why Nonprofits Are Particularly Vulnerable
The structural characteristics that make nonprofits effective at their missions also create specific vulnerabilities to deepfake attacks. Understanding these vulnerabilities is the first step toward addressing them, because the goal is not to eliminate the openness and accessibility that define nonprofit culture, but to add targeted protections around the specific processes that attackers exploit.
Abundant Public Audio and Video of Leaders
Nonprofit executive directors appear in conference talks, donor webinars, podcast interviews, organizational YouTube channels, fundraising appeals, and media interviews far more frequently than comparable executives at private companies. Every public appearance creates audio and video that attackers can use as training data for voice cloning and face-swap models. The more visibility your executive has cultivated, the larger their attack surface becomes. This does not mean leaders should stop communicating publicly, but it does mean the organization needs compensating controls.
Public Organizational Information (Form 990)
IRS Form 990 filings are public documents. They list the names and compensation of executives and board members, identify the organization's largest contractors and service providers, disclose financial information including revenue, expenses, and assets, and describe the organization's programs and governance structure. Attackers use this information to construct convincing impersonations: they know the board treasurer's name, the executive director's compensation, and the organization's largest grants. A well-researched attack sounds like it comes from someone who genuinely knows the organization from the inside.
Lean Operations and Limited Oversight Controls
Resource constraints mean many nonprofits operate with a single administrator who has authority to approve significant financial transactions. When that person receives what appears to be an urgent, legitimate request from executive leadership, there may be no second approval requirement, no out-of-band verification protocol, and no policy clearly stating what to do. The attacker only needs to deceive one person who has the authority to act unilaterally.
Culture of Trust and Responsiveness
Nonprofits cultivate organizational cultures built on trust, responsiveness, and deference to leadership. When an executive director appears to call with an urgent request, the organizational culture often discourages questioning the request too aggressively. Attackers exploit this by creating urgency and invoking authority. "I need this done before the board meeting tomorrow and it needs to stay confidential" is a social engineering script that lands differently in a trust-based nonprofit culture than it would in a compliance-heavy financial institution.
Underfunded Cybersecurity
The nonprofit sector systematically underinvests in cybersecurity training, technology, and infrastructure compared to for-profit organizations of similar size. Staff often have not received training on deepfake awareness, social engineering, or financial fraud prevention. There are typically no technical controls that would flag unusual video call metadata or detect synthetic audio in real time. And when a deepfake attack succeeds, the organization may lack the forensic expertise to understand what happened and prevent recurrence.
How to Detect Deepfakes: Visual and Behavioral Signs
Detection of deepfakes has become significantly harder as the technology has improved. Studies show that detection system accuracy has dropped from around 98 percent in 2023 to approximately 65 percent in 2025 as attackers use techniques specifically designed to evade detection tools. More importantly, humans are poor at spotting deepfakes by eye: research suggests that very few people can reliably distinguish high-quality deepfakes from real video under normal viewing conditions.
This is why procedural controls, the organizational policies and processes that make verification mandatory regardless of how convincing a communication appears, are more reliable than visual detection alone. That said, training staff on visual detection signs provides a useful layer of awareness, particularly for identifying lower-quality deepfakes and flagging anomalies worth investigating.
Visual Warning Signs in Video
Technical artifacts to watch for during video calls
- Blinking abnormalities: Deepfakes may stare unnaturally or blink in rapid, irregular patterns. Natural blinking is consistent and smooth.
- Facial boundary blur: Blurring or flickering where the face meets hair, ears, or neck. Watch the edges of the face carefully.
- Lighting inconsistencies: The reflection in both eyes should match room light sources. Mismatched reflections indicate manipulation.
- Lip-sync errors: Sounds like "m," "f," and "th" require specific mouth shapes. Deepfakes frequently approximate these incorrectly.
- Missing micro-expressions: Genuine emotions trigger subtle muscle movements. Deepfakes often show emotionally flat or inconsistent expressions.
- Accessory anomalies: Hair, earrings, and glasses tend to flicker or appear inconsistent at the edges in deepfake video.
Behavioral Red Flags (Most Reliable)
Social engineering patterns that often signal fraud
- Unusual urgency: "This has to happen before end of day" or "I need you to do this right now" applied to financial transactions.
- Requests for secrecy: "Don't tell anyone about this yet" or "keep this between us" is a classic social engineering tactic. Legitimate executives rarely request secrecy for financial transactions.
- Unusual communication channels: A request from the executive director via WhatsApp, personal email, or an unfamiliar platform should raise questions.
- New account instructions: Any request to add a new bank account, change payment details, or send to an unfamiliar account is a major warning sign.
- Quality excuses: Claiming a "bad connection" to explain visual or audio artifacts is a documented attacker tactic for covering deepfake imperfections.
The behavioral red flags are often more reliable than visual detection for preventing successful attacks. A sophisticated real-time deepfake may show no obvious visual artifacts. But a legitimate executive director will never object to being verified through an independent channel, will never request secrecy around a wire transfer, and will never demand immediate financial action without following established organizational procedures. Training staff to recognize and act on behavioral signals is more immediately protective than training them to detect visual artifacts.
Practical Protections Every Nonprofit Should Implement
Effective protection against deepfake attacks does not require sophisticated technology. The most reliable defenses are procedural: organizational policies and habits that make verification mandatory no matter how convincing a communication appears. These can be implemented quickly and without significant investment.
Essential Protection #1: Pre-Shared Verification Codes
Establish challenge phrases or code words known only to your executive director and authorized finance staff. These rotate on a regular schedule, perhaps monthly or quarterly. Any financial request from an executive that does not include the correct current code word triggers automatic independent verification before any action is taken.
This is simple, costs nothing, and is remarkably effective. A deepfake attacker cannot know your internal code words unless your systems have already been compromised. The existence of a verification code system also signals to staff that they have permission to ask for it, removing the social pressure to comply with urgent requests without questioning them.
Implementation: Set this up in a 15-minute leadership conversation. Communicate the policy to relevant staff. Rotate codes regularly.
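The rotation-and-check logic above can be sketched in a few lines. Everything here is illustrative: the dictionary, the code words, and the monthly keying are assumptions for the sketch, and real code words should never live in source code or be stored anywhere staff outside the protocol can read them.

```python
import hmac
from datetime import date

# Hypothetical illustration only: code words keyed by monthly rotation
# period. In practice these would live somewhere access-controlled,
# never in source code.
CODES_BY_MONTH = {
    "2026-01": "blue heron",
    "2026-02": "granite river",
}

def current_period(today: date) -> str:
    """Key for the current monthly rotation period, e.g. '2026-02'."""
    return f"{today.year:04d}-{today.month:02d}"

def code_is_valid(spoken_code: str, today: date) -> bool:
    """True only if the caller gave this month's code word.

    compare_digest avoids leaking information through timing, and a
    missing period simply fails closed (no code on file means no match).
    """
    expected = CODES_BY_MONTH.get(current_period(today))
    if expected is None:
        return False
    return hmac.compare_digest(spoken_code.strip().lower(), expected)
```

The useful property is the fail-closed default: an expired or unknown period never validates, so a lapsed rotation forces independent verification rather than silently accepting an old code.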
Essential Protection #2: Multi-Channel Verification for Financial Requests
Establish a clear policy that no wire transfer, change of bank account details, or addition of new vendors can be authorized based solely on a phone call, video call, or voice message, regardless of how the caller sounds or appears. All such requests require confirmation through at least two independent channels.
If a request comes via phone call, confirm by calling the requester back at a number pulled from your internal directory, not the number that placed the incoming call. If a request comes via video call, confirm via email from the organization's official email system and a second call to a known number. Email plus video alone is insufficient, since an attacker who has compromised one account may control both channels.
Implementation: Draft a one-page financial authorization policy. Communicate it in an all-staff meeting. Post it near workstations that handle financial transactions.
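The two-channel rule can be sketched as a small state machine. The channel names and the `TransferRequest` helper are hypothetical, not a real API; the point is that the inbound channel never counts as verification and a directory callback is always one of the two confirmations.

```python
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    """Hypothetical sketch of the two-independent-channels rule."""
    requester_role: str
    inbound_channel: str                  # e.g. "video_call", "phone", "email"
    confirmations: set = field(default_factory=set)

    def record_confirmation(self, channel: str) -> None:
        # The channel the request arrived on can never verify itself.
        if channel != self.inbound_channel:
            self.confirmations.add(channel)

    def may_proceed(self) -> bool:
        """Authorized only after two independent confirmations, one of
        which is a callback to the number in the internal directory."""
        return (
            "callback_to_directory_number" in self.confirmations
            and len(self.confirmations) >= 2
        )
```

For example, a request arriving as a video call stays blocked until both a directory callback and a confirmation from the official email system are recorded; recording "video_call" again changes nothing.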
Essential Protection #3: Dual Approval for High-Value Transactions
Require a second authorized signatory for all wire transfers above a defined dollar threshold. This structural control stops a single deepfake attack from succeeding no matter how convincing it is, because no single individual, however well-intentioned, can unilaterally authorize the transfer.
Determine the appropriate threshold for your organization based on your typical transaction sizes and your risk tolerance. A threshold of $5,000 is common for small nonprofits; larger organizations may set it higher. The threshold does not need to be zero, but it should cover any transaction that would represent a significant financial loss if fraudulent.
Implementation: Update your financial policies and bank authorization settings. This change may require board approval. Make it a priority agenda item.
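The dual-approval rule is simple enough to express directly. The $5,000 figure below is the example threshold from the text, not a recommendation, and the function is an illustrative sketch rather than how any banking platform actually implements it.

```python
# Illustrative only: the $5,000 threshold is the example figure from the
# text; each organization sets its own based on transaction sizes and
# risk tolerance.
DUAL_APPROVAL_THRESHOLD = 5_000.00

def transfer_authorized(amount: float, approvers: set) -> bool:
    """A wire at or above the threshold needs two distinct authorized
    signers; below it, one suffices. Using a set of approver names means
    the same person approving twice still counts as one approval."""
    required = 2 if amount >= DUAL_APPROVAL_THRESHOLD else 1
    return len(approvers) >= required
```

The set-of-names detail is the part that matters: the control is two distinct people, not two clicks, which is exactly what defeats a deepfake that has fooled only one staff member.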
Essential Protection #4: Reduce the Executive's Attack Surface
Review what audio and video of your executive director is publicly available and in what formats. Consider whether all of it needs to be downloadable, or whether some can be offered as stream-only with downloads disabled. Review privacy settings on leadership social media profiles. Audit which conference recordings and webinar archives contain extended, high-quality audio of your executive.
This is not about making your executive director invisible. It is about making the attacker's job harder. Restricting the availability of high-quality, easily downloadable audio and video reduces the training data available for cloning. Going forward, consider adding digital watermarks to published videos of leaders. Provenance efforts such as Adobe's Content Authenticity Initiative and the C2PA standard let organizations attach cryptographic provenance data to published media, providing a way to verify authentic organizational communications.
Implementation: Conduct a digital footprint audit. Review and adjust settings on LinkedIn, YouTube, conference recording platforms. No technical expertise required.
Essential Protection #5: Staff Training and Permission to Question
Train staff on what deepfake attacks are, how they work, and what behavioral signals indicate potential fraud. This training is not primarily about visual detection, which is difficult and unreliable, but about recognizing social engineering patterns: urgency, secrecy, unusual channels, new account instructions.
Equally important is explicitly giving staff permission to question and verify urgent financial requests, even from senior leadership. Establish clearly in policy and in culture that any staff member who asks to verify a request through an independent channel is doing exactly what the organization wants, and that no legitimate executive will ever object to verification. This shifts the burden: instead of staff feeling uncomfortable asking for verification, they feel uncomfortable not asking.
Implementation: Include deepfake awareness in your next all-staff meeting. Update your social engineering policy. Consider adding a simulated attack exercise annually.
If You Are Attacked: A Response Playbook
Even well-prepared organizations can be successfully attacked. Having a response playbook in place before an attack happens dramatically improves your ability to limit damage, recover funds, and manage communications effectively. The first hours after discovering a deepfake attack are critical: banks have narrow windows to recall wire transfers, evidence needs to be preserved, and stakeholder communications should be coordinated rather than ad hoc.
Many deepfake attacks are not discovered immediately. A convincing attack may not be identified until the transfer fails to arrive at an expected destination or until an executive becomes aware of instructions they never gave. Regular reconciliation of financial accounts and immediate investigation of unexplained transactions reduces the window between attack and discovery.
Deepfake Attack Response Playbook
Immediate (First 2 Hours)
- Stop any financial transactions that may have been initiated based on the suspected deepfake
- Call your bank immediately to halt or reverse wire transfers. Banks have a narrow window, often hours, to recall unauthorized transfers
- Escalate internally to executive director, board chair, and legal counsel
- Preserve all evidence: call recordings, voicemails, video files, screenshots, chat logs with timestamps
- Do not delete anything, even if it seems unimportant
Investigation (First 48 Hours)
- Assess what money was transferred, what information was shared, and what access may have been granted
- Identify how attackers obtained the audio or video used to create the deepfake
- Determine whether other staff were targeted or other communications were compromised
- Engage cybersecurity support for forensic analysis if needed
Reporting (Within 72 Hours)
- File a complaint with the FBI Internet Crime Complaint Center (IC3.gov): the primary reporting channel for wire fraud
- Report to the FTC at ReportFraud.ftc.gov
- Contact local law enforcement
- Notify your organization's insurance carrier (cyber liability policies may cover deepfake fraud)
- If fake content was posted publicly, report it to platforms for removal
Communications
- If donors or partners may have seen fake content, issue a clear public statement promptly through official channels
- Be transparent: explain what happened, confirm what was real and what was fake, and describe what you are doing to prevent recurrence
- Do not attempt to downplay or hide the incident. Transparency protects donor trust more than silence does
- Direct all communications to your verified official channels with clear indicators of authenticity
Post-Incident
- Conduct a full review of the protocols that failed or were absent
- Update verification procedures based on what the attack exploited
- Brief staff on what happened and how future attempts can be identified
- Review and reduce the executive's digital footprint to reduce future attack surface
Deepfake Detection Tools for Nonprofits
Automated deepfake detection tools provide a useful additional layer of protection, particularly for verifying suspicious media files after the fact. However, these tools should be understood as supplements to procedural controls, not replacements. Detection accuracy has declined as deepfake quality has improved, and tools designed to identify today's deepfakes may not keep pace with attacks generated six months from now.
Free Options
- TrueMedia.org: A nonprofit initiative providing free deepfake detection, particularly for political media. Achieved 90% accuracy during 2024 elections and is relaunching with enhanced capabilities in 2025-2026.
- Reality Defender Free Tier: Offers 50 detections per month at no cost. Multi-modal platform analyzing video, images, audio, and text. Suitable for occasional verification needs.
- MIT Media Lab Detect DeepFakes: Educational tools from research that provide some detection capability along with learning resources.
Paid / Enterprise Options
- Reality Defender (paid plans): Enterprise deepfake detection across modalities with API integration. Around 91% accuracy.
- Sensity AI: Specializes in visual deepfake detection. Has detected tens of thousands of malicious deepfakes. Requires custom implementation.
- KnowBe4 Security Awareness Training: Security training with deepfake simulation exercises. Appropriate for staff training rather than content analysis.
The most important thing nonprofits should understand about detection tools is that they are reactive by nature. They help you analyze content after you have already received it and are suspicious. The goal of your protective protocols should be to prevent the attack from reaching the point where detection tools become necessary. Procedural controls that require verification before any financial action is taken are far more reliable than trying to detect deepfakes in real time during a video call. For organizations building comprehensive AI security, our article on deepfake protection for nonprofits covers additional technical countermeasures and board-level governance considerations.
Preparation Is Protection
The trajectory of deepfake technology is clear. Attacks will become more common, more convincing, and more accessible to criminals who previously lacked the technical capability to mount them. The organizations that are protected will not primarily be those with the most sophisticated detection technology, but those that built strong procedural controls before the attack arrived.
For most nonprofits, the most important steps are entirely non-technical and can be implemented within the next two weeks: establish verification codes, require multi-channel confirmation for financial requests, implement dual approval for significant transfers, audit your executive's digital footprint, and train staff on the behavioral signals that indicate fraud. None of these require technology purchases or external expertise.
The nonprofit sector's dependence on trust is its greatest strength and its greatest vulnerability in the deepfake era. The same relational culture that makes donors give, volunteers serve, and communities engage is what attackers exploit when they impersonate the faces and voices that nonprofit communities trust most. Protecting that trust, by ensuring that the people and voices that speak for your organization are genuinely who they appear to be, is not a technical challenge. It is an organizational commitment.
As you build these protections, connect them to your broader approach to responsible AI use. The verification culture you build to defend against deepfakes also helps you address AI hallucinations and supports the kind of thoughtful, human-in-the-loop AI practices that characterize organizations using AI well. Security and responsible AI adoption are not separate concerns. They are different expressions of the same organizational commitment to using technology in ways that protect the people and mission you serve.
Strengthen Your Nonprofit's AI Security
We help nonprofits build the policies, training, and technical controls that protect against AI-enabled fraud and support responsible AI adoption. Talk to our team about an AI security assessment for your organization.
