The 900% Rise in Deepfakes: What Every Nonprofit Communicator Should Know
AI-generated fraud has moved from science fiction to operational reality. Deepfake volume grew from roughly 500,000 instances in 2023 to an estimated 8 million by 2025, and nonprofit organizations face a distinct threat profile that most security guides overlook entirely.

In February 2024, a finance worker at Arup, a global engineering firm, participated in a video conference call with colleagues and authorized 15 wire transfers totaling $25.5 million. Every other person on that call was a deepfake. Reconstructed from publicly available video of real employees, the AI-generated participants looked authentic, spoke convincingly, and gave no indication anything was wrong until the bank transfers had already cleared.
That attack targeted a large corporation with substantial IT security resources. Nonprofits, with leaner operations, smaller finance teams, and executive leaders who regularly appear in publicly accessible videos, podcasts, and webinars, are arguably more exposed to the same risk. And the financial and reputational stakes are just as severe.
Deepfake fraud losses reached $1.56 billion globally in 2025, according to Surfshark research, more than a fourfold increase from $360 million in 2024. Deloitte projects that AI-generated fraud in the United States alone will reach $40 billion annually by 2027. These are not abstract statistics. They reflect a technology curve that has moved faster than most organizational security policies, training programs, or crisis communications plans.
This article is written specifically for nonprofit communicators, development staff, executive directors, and board members who need to understand the threat, assess their organization's exposure, and take practical steps to protect staff, donors, and institutional reputation. The goal is not to create alarm but to provide a clear-eyed framework for making your organization measurably harder to target.
Understanding the Deepfake Threat Landscape in 2026
The term "deepfake" covers a wide range of AI-generated or AI-manipulated content. For nonprofit communicators, the most relevant categories are face-swap video deepfakes, voice cloning audio deepfakes, and synthetic identity creation (entirely AI-generated people who do not exist). Each poses distinct risks, and each requires different protective measures.
Voice cloning has become particularly accessible and dangerous. Modern voice cloning tools can synthesize a convincing replica of someone's voice using as little as three seconds of recorded audio, according to McAfee research. Given that most nonprofit executive directors have recorded webinars, podcast interviews, or YouTube videos freely available online, the barrier to cloning their voice is effectively zero for a motivated attacker. Voice cloning fraud rose an estimated 680% in the past year, driven largely by this accessibility.
Face-swap video deepfakes require more processing power but are still achievable with consumer-grade hardware and freely available tools. The Arup attack demonstrated that real-time deepfake video on a standard business video call is no longer theoretical. For nonprofits, this means that a video call from your "executive director" requesting an urgent transfer, or from a "board member" asking for sensitive information, cannot be assumed to be authentic based on visual appearance alone.
Perhaps most concerning for nonprofit communicators is the fraudulent fundraising category. The FBI's Internet Crime Complaint Center documented approximately $96 million in losses from fraudulent charity campaigns in 2024. These scams often use AI-generated imagery and video to create convincing fake social media accounts mimicking real, known organizations, particularly in the aftermath of natural disasters or humanitarian crises when donor urgency is high.
Financial Deepfake Threats
Attacks targeting internal financial processes
- Video call impersonation of executive director requesting wire transfers
- Voice cloning in phone calls requesting gift cards or urgent payment
- Fake board member impersonation to extract financial credentials
- Spoofed donor calls asking to redirect existing pledges
Reputational Deepfake Threats
Attacks targeting public trust and donor confidence
- Fabricated video of executive director making controversial statements
- Fake fundraising campaigns using your organization's name and imagery
- Synthetic accounts impersonating your organization on social platforms
- AI-generated "donor testimonials" endorsing fraudulent campaigns
Why Nonprofits Face a Distinct Risk Profile
Most deepfake security guidance is written for corporate environments with dedicated cybersecurity teams, multi-layered financial controls, and significant technology budgets. Nonprofits operate under a fundamentally different set of constraints, which creates a more complex exposure profile than standard security frameworks typically acknowledge.
The first and most significant factor is the abundance of publicly available training material. Nonprofit leaders are, by necessity and design, public figures. Executive directors appear in news coverage, recorded webinars, YouTube fundraising appeals, conference presentations, and podcast interviews. Every minute of publicly accessible video or audio featuring your organization's leaders is raw material that voice cloning and face-swap systems can use. Unlike corporate executives who might appear in limited contexts, nonprofit leaders are often highly visible precisely because public storytelling is central to their mission.
The second factor is institutional trust. The entire value proposition of a nonprofit, particularly in fundraising, is trust. Donors give because they believe in the organization's leaders, its mission, and its integrity. A convincing deepfake attack need not succeed financially to cause serious damage. A fabricated video of an executive director making offensive or politically divisive statements, even one that is ultimately exposed as fake, can trigger donor withdrawal, board resignations, and media coverage that takes months or years to recover from. Trust research suggests that roughly 63% of donors base their giving decisions primarily on trust, meaning that even a temporary erosion of credibility has long-term financial consequences.
The third factor is organizational structure. Many nonprofits rely on a single executive director as the primary decision-maker for financial and strategic matters, with limited redundancy in authorization processes. This creates a single point of failure for business email compromise and social engineering attacks, including deepfake-enhanced ones. Small organizations may lack even basic multi-person financial authorization requirements, the single most effective defense against wire transfer fraud.
The Arms Race Reality
Detection tools for deepfakes operate at 94-96% accuracy under controlled lab conditions, but drop below 50% accuracy in real-world deployment when encountering deepfakes from tools they were not trained on. Human detection ability hovers at 55-60%, barely better than guessing.
This means that no single technological solution will reliably identify deepfakes before harm occurs. The most effective defenses are process-based, not technology-based: verification workflows, dual authorization requirements, and a culture of skepticism around urgent requests.
Detection Tools: What's Available and What Actually Works
Despite the limitations of detection technology, having access to the right tools adds a meaningful layer of protection, particularly for verifying suspicious content before acting on it or sharing it publicly. Several tools are free or accessible at nonprofit-friendly price points.
TrueMedia.org, a nonprofit initiative providing free deepfake detection, achieved approximately 90% accuracy during the 2024 U.S. election cycle. After going offline for a relaunch, it returned to service in late 2025 and remains one of the strongest free options available, designed for organizations rather than enterprise security teams. Reality Defender offers 50 free detections per month and is likewise designed with journalists and smaller organizations in mind.
Adobe's Content Authenticity web app, launched in early 2025, is a free tool that lets creators attach Content Credentials to their own published material, creating a cryptographic record of provenance. The companion verification site at contentcredentials.org lets anyone upload media to check for this provenance metadata. This is authentication rather than detection: instead of trying to spot fakes after the fact, your organization proactively marks genuine content so that recipients can verify its origin.
For enterprise-level detection, Sensity AI and Reality Defender's paid tiers provide comprehensive video, audio, and image analysis. These are appropriate for larger organizations with significant public communications activity or those operating in particularly high-risk environments. Microsoft's free Content Integrity Check browser extension can also verify provenance information on media encountered while browsing.
Free Detection Resources
- TrueMedia.org - Nonprofit initiative, ~90% detection accuracy
- Reality Defender - 50 free detections per month
- Adobe Content Authenticity app - Free provenance marking for your content
- contentcredentials.org - Free verification of content provenance
- Microsoft Content Integrity Check - Free browser extension
C2PA and Content Credentials
The emerging standard for content authentication
The Coalition for Content Provenance and Authenticity (C2PA) functions like a nutrition label for digital media. Supported by Adobe, Microsoft, BBC, and over 3,700 organizations, it attaches a cryptographic record to media documenting who created it and how.
C2PA 2.1 added digital watermarking that persists even when metadata is stripped. Adopting Content Credentials for your organization's published video and audio content lets recipients verify authenticity, and missing credentials on a piece claiming to be from your organization is a significant warning signal.
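To make the idea concrete, the sketch below shows what a rough, first-pass Content Credentials check could look like: it scans a file's raw bytes for the "c2pa" manifest-store label that C2PA tooling embeds in media. This is a hypothetical presence heuristic only, written for illustration; it proves nothing cryptographically, and real verification should go through contentcredentials.org or the open-source c2patool.

```python
import tempfile


def has_c2pa_marker(path: str) -> bool:
    """Crude presence heuristic: scan raw bytes for the 'c2pa' manifest-store
    label that C2PA-aware tools embed in media files.

    This only hints that a manifest MAY exist. It does not validate the
    cryptographic manifest; use contentcredentials.org or c2patool for that.
    """
    with open(path, "rb") as f:
        return b"c2pa" in f.read()


# Demo with two throwaway files standing in for media assets.
with tempfile.NamedTemporaryFile(delete=False) as signed:
    signed.write(b"\xff\xd8...jumb...c2pa...manifest bytes...")
with tempfile.NamedTemporaryFile(delete=False) as unsigned:
    unsigned.write(b"\xff\xd8...plain jpeg bytes with no manifest...")

print(has_c2pa_marker(signed.name))    # True: marker bytes present
print(has_c2pa_marker(unsigned.name))  # False: no marker found
```

A missing marker on content claiming to come from your organization is exactly the warning signal described above, but a present marker still needs full verification before it is trusted.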
Process-Based Defenses: The Most Reliable Protection
Because technological detection tools cannot reliably catch deepfakes at the point of contact, the most effective defenses are organizational policies and workflows that create friction against manipulation regardless of how convincing the content appears. These process-based controls require no technical expertise and can be implemented with existing staff.
The single most important financial control is dual authorization for wire transfers and unusual payments above a defined threshold. No one person should be able to authorize a significant transfer based solely on a phone call, video message, or video conference request, regardless of who appears to be making the request. This policy, already standard in many larger organizations, is the direct countermeasure to the pattern used in the Arup attack.
Out-of-band verification is the second critical control. If a staff member receives a voice or video call requesting sensitive action, they should confirm the request through a separate, pre-established channel before acting. This might mean hanging up and calling back on a known, verified number, sending a text to a confirmed phone number, or physically walking to the requestor's office if they are on-site. The key is that verification must happen through a different communication channel from the one in which the request arrived.
Executive teams should also establish passphrase or "safe word" verification protocols. A simple predetermined code word that can be requested in any sensitive communication gives staff a fast, low-friction way to authenticate urgent requests. This is particularly valuable for smaller organizations where interpersonal trust between staff and leadership is high but where that trust is exactly what fraudsters will attempt to exploit.
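The financial controls described above can be expressed as simple, checkable rules. The sketch below models dual authorization and a hold period for voice- or video-originated requests; the 24-hour hold, the $10,000 threshold, and the role names are illustrative assumptions, not figures from this article.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

HOLD_PERIOD = timedelta(hours=24)   # assumption: 24-hour hold on voice/video requests
DUAL_AUTH_THRESHOLD = 10_000        # assumption: dollar amount requiring two approvers


@dataclass
class TransferRequest:
    amount: float
    channel: str                    # "email", "voice", "video", "in_person"
    requested_at: datetime
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # A set of approver names enforces "two distinct people" automatically.
        self.approvals.add(approver)

    def may_execute(self, now: datetime) -> bool:
        # Dual authorization: large transfers need two distinct approvers.
        if self.amount >= DUAL_AUTH_THRESHOLD and len(self.approvals) < 2:
            return False
        # Hold period: requests arriving by voice or video wait out the window,
        # no matter how convincing the caller appeared.
        if self.channel in {"voice", "video"} and now - self.requested_at < HOLD_PERIOD:
            return False
        return True


req = TransferRequest(amount=50_000, channel="video",
                      requested_at=datetime(2026, 3, 1, 9, 0))
req.approve("finance_director")
print(req.may_execute(datetime(2026, 3, 1, 10, 0)))  # False: one approver, inside hold
req.approve("executive_director")
print(req.may_execute(datetime(2026, 3, 2, 10, 0)))  # True: two approvers, hold elapsed
```

The point of the sketch is that both rules key off the request itself, not off anyone's judgment of whether the caller "seemed real."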
Essential Process Controls for Nonprofits
Implement these policies regardless of your technology budget
Financial Controls
- Dual authorization for all wire transfers above threshold
- Hold periods for any transfer requested via voice or video
- Out-of-band verification before acting on urgent requests
- Executive team safe word or passphrase system
Communications Controls
- Adopt C2PA Content Credentials for all published video and audio
- Publish official channel list prominently on your website
- Monitor monthly for fake accounts impersonating your organization
- Reserve your brand name on all major platforms even if unused
Training Staff to Respond, Not Just Detect
Staff training for deepfake threats needs to be framed carefully. If you tell people they need to detect deepfakes, you set an impossible standard: human detection accuracy sits at 55-60%, barely better than guessing. The goal of training is not to turn staff into reliable deepfake detectors but to build behavioral habits that reduce vulnerability even when content appears completely convincing.
The most effective training reframes the challenge as a verification process rather than a detection task. Instead of "can you tell if this is fake?", the question becomes "what is the verification process for this type of request?" This framing removes the burden of detection from the individual and places it on the organizational process, which is where the control actually lives.
Scenario-based simulations are the most effective training format. These are the equivalent of phishing simulations that have become standard in cybersecurity: run a simulated deepfake attack against your own staff, then debrief on how the experience felt and what the correct response would have been. A simulated voice call from a fake "executive director" requesting gift cards, followed by a staff debrief, builds muscle memory for the appropriate verification response in a way that a policy document cannot.
Training content also needs to cover the specific warning signs that remain relevant even as deepfake quality improves. Urgency is the most reliable indicator: legitimate requests for wire transfers, sensitive information, or unusual actions are almost never genuinely urgent in a way that precludes a 24-hour verification delay. The pressure to act immediately is itself the primary manipulation technique, and staff who recognize urgency as a manipulation signal are far better protected than those who are trying to evaluate the visual authenticity of a video call.
Scenario Simulations
Run simulated deepfake attacks with your own staff: fake voice calls requesting gift cards, fake executive video messages requesting transfers. Debrief on correct verification responses.
Verification Drills
Practice out-of-band verification workflows with staff regularly. The goal is automatic: "Any urgent financial request via voice or video triggers a verification call to a known number, every time."
Cross-Departmental Drills
Simulate coordinated attacks involving multiple departments, a fake CFO video call paired with a fake IT email, to build interdepartmental response habits and expose coordination gaps.
The Legal Landscape: What Nonprofits Need to Know in 2026
The regulatory environment around deepfakes shifted significantly in 2025. The TAKE IT DOWN Act, signed into law on May 19, 2025, prohibits knowingly publishing intimate visual depictions of minors or non-consenting adults, including AI-generated deepfakes of real individuals. Covered platforms, including public websites and mobile apps, must establish notice-and-takedown processes by May 19, 2026.
A provision that received less attention but directly affects nonprofits is the Act's explicit extension of FTC jurisdiction to nonprofit organizations for these purposes. Nonprofits are normally exempt from FTC Act coverage, but the TAKE IT DOWN Act removes that exemption for covered platforms. If your organization operates a public website or community platform where such content could appear, compliance obligations apply to you directly.
At the state level, the pace of legislation has accelerated substantially. Since 2022, 169 state laws have been enacted targeting deepfake use. California's AI Transparency Act (AB853) requires watermarking and transparency standards. Colorado's AI Act took enforcement effect on February 1, 2026, requiring risk and impact assessments for high-risk AI systems. If your organization operates across multiple states, as many national nonprofits do, the patchwork of state requirements creates a complex compliance landscape that benefits from legal review.
No federal law yet specifically prohibits deepfake content designed to defraud donors or damage nonprofit reputation outside the intimate imagery context. Fraud prosecutions currently rely on existing wire fraud, identity theft, and impersonation statutes. The COPIED Act of 2025 would address deepfake fraud more directly but had not been signed as of early 2026. Nonprofits should monitor its progress, as its passage would create clearer legal recourse for organizations targeted by fraudulent deepfake campaigns. For more on the broader regulatory environment your organization must navigate, see our coverage of new state AI laws taking effect in 2026.
Reporting a Deepfake Attack
If your organization is targeted by a deepfake-enabled fraud attempt or a fraudulent campaign using your brand, report it through the following channels:
- FBI IC3 (ic3.gov) - Primary filing point for cybercrime and deepfake fraud
- FTC ReportFraud.ftc.gov - Consumer and organizational fraud reporting
- Platform abuse reporting - Each major social media platform has trust-and-safety contacts for expedited removal
- Local FBI field office - For significant financial losses, direct FBI contact accelerates response
Crisis Communication: Responding When a Deepfake Targets Your Organization
Approximately half of nonprofits lack any formal crisis communication plan, and an even smaller fraction have plans that specifically address AI-generated misinformation or deepfake attacks. This means that most organizations are improvising their response during the highest-stress, highest-visibility moment of a crisis, when improvisation is most likely to compound the original damage.
The most important pre-crisis action is drafting template response communications now, before an incident occurs: one template for a financial fraud attempt (where your organization's brand was used to steal from donors), one for a reputational deepfake (a fake video of your leadership), and one for a fake fundraising campaign. These templates should be written, approved by leadership and legal counsel, and stored in an accessible location so that response time is measured in minutes rather than hours.
When a deepfake incident occurs, the guiding principle from crisis communications research is: respond quickly, with transparency, and across all owned channels simultaneously. The gap between a deepfake going viral and your public response is the window during which the most reputational damage accumulates. In the absence of information, silence reads as confirmation.
For donor-specific response, direct personal communication is more effective than broadcast channels. Major donors should receive personal calls from the executive director, not emails or social media posts, for significant incidents. This personal outreach demonstrates the organizational integrity that deepfakes are designed to undermine, and it gives donors a direct opportunity to ask questions and receive reassurance before they make decisions about their giving relationship with your organization.
First 15 Minutes of an Incident
- Activate predefined crisis team: ED, legal, communications, IT
- Preserve the deepfake content as evidence before requesting removal
- Contact relevant platforms immediately for expedited takedown
- File report with FBI IC3 to create official documentation
Public Response Strategy
- Publish authenticated response video using C2PA Content Credentials
- Distribute simultaneously across all owned channels
- Contact major donors personally before they see media coverage
- Update stakeholders regularly; gaps in communication read as confirmation
Proactive Authentication: Building Verification Into Your Communications
The most sustainable defense against deepfake reputation attacks is establishing your authentic communications so clearly and consistently that impersonations become easier to identify. This is not primarily a technology task; it is a communications strategy.
Start by publishing a clear and permanent statement on your website identifying your official communication channels, the platforms where your organization has authentic presence, and how donors and stakeholders can verify they are interacting with your real accounts. This statement should be linked prominently from your homepage and referenced in your communications.
Adopting Content Credentials for your published video and audio content creates a verifiable record that distinguishes your authentic communications. Adobe's free Content Authenticity app makes this accessible to organizations without technical staff. When supporters can check that a video has verified Content Credentials linked to your organization, a deepfake that lacks these credentials becomes immediately suspect.
Consistent visual and production identity across your communications also helps. If your genuine executive director videos always appear with consistent lighting, branded backgrounds, and a recognizable presentation format, deviations from that format are easier for informed supporters to notice. This is not foolproof, but it raises the cost and complexity for attackers who need to replicate your specific presentation style, not just your leader's face and voice.
Finally, proactive monitoring for impersonation attempts is essential. Set up Google Alerts for your organization's name, your executive director's name, and your key programs or campaigns. Perform monthly searches on major social media platforms for accounts using your name or logo. Catching impersonation attempts early, before they have time to build audiences or collect donations, dramatically limits the damage they can cause. This connects to the broader challenge of protecting your organization's reputation discussed in our article on preparing for deepfake attacks on nonprofit leaders.
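Part of that monthly sweep can be automated. The sketch below flags social media handles that closely resemble, but do not exactly match, your official account names, using simple string similarity from the standard library; the handle names and the 0.8 similarity threshold are illustrative assumptions, and a real sweep would feed in handles gathered from platform searches.

```python
from difflib import SequenceMatcher

OFFICIAL_HANDLES = {"helpinghandsorg"}  # assumption: your real account names
SIMILARITY_FLAG = 0.8                   # assumption: flag handles this similar


def is_suspicious(handle: str) -> bool:
    """Flag handles that closely resemble, but do not match, an official handle.

    Catches common impersonation tricks such as swapped characters
    ('o' -> '0') or appended words, while ignoring unrelated names.
    """
    h = handle.lower().lstrip("@")
    if h in OFFICIAL_HANDLES:
        return False  # exact match: this is one of our own accounts
    return any(
        SequenceMatcher(None, h, official).ratio() >= SIMILARITY_FLAG
        for official in OFFICIAL_HANDLES
    )


print(is_suspicious("@helpinghandsorg"))  # False: the genuine account
print(is_suspicious("@helpinghands0rg"))  # True: one-character lookalike
print(is_suspicious("@gardenclub"))       # False: unrelated name
```

Flagged handles still need a human review, but a script like this turns the monthly check from an open-ended search into a short triage list.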
The Bottom Line for Nonprofit Communicators
Deepfakes are not a theoretical future risk. They are a present operational challenge with documented financial and reputational consequences for organizations across sectors. Nonprofits face particular exposure because of their public-facing leadership, their mission-critical reliance on donor trust, and the lean operational structures that create gaps in financial authorization and crisis response capacity.
The good news is that the most effective defenses do not require significant technology investment. Dual authorization for financial transfers, out-of-band verification habits, staff training on verification processes rather than detection, and a clear crisis communication plan are all achievable with existing resources. The organizations most protected from deepfake attacks are not those with the best detection tools; they are those with the most disciplined verification processes and the most prepared crisis response teams.
Start with a frank internal assessment: does your organization have dual authorization for wire transfers? Do staff have a clear verification protocol for urgent requests from leadership? Do you have a crisis communication template drafted and approved? If the answer to any of these questions is no, those gaps represent your most immediate and actionable risk.
For organizations navigating the broader AI policy and risk landscape, the deepfake challenge connects to the larger question of how to build trustworthy AI practices into your organization. Our article on AI and nonprofit knowledge management and our guide to updating your AI policy for 2026 provide complementary frameworks for building organizational resilience across the full spectrum of AI risks.
Strengthen Your Organization's Defenses
One Hundred Nights helps nonprofits assess their AI risk exposure and build practical defenses that match their operational reality and budget.
