AI-Generated Content Ethics: When Should Nonprofits Disclose AI-Created Media?
As AI tools become central to how nonprofits communicate, fundraise, and tell their stories, the question of disclosure has shifted from theoretical to urgent. Understanding when, how, and why to reveal AI involvement in your content is now a core organizational responsibility.

A hunger relief organization posts a compelling image on social media: a child's face, wide-eyed and expressive, with text urging followers to donate. The scene never happened. The child does not exist. The image was generated by AI in seconds, chosen because it outperformed authentic photography in early engagement testing. The organization meant no harm, but the implications are profound. The image misrepresents reality. Donors responding to it are responding to a fiction. And if they ever find out, the damage to trust may be irreversible.
This scenario is not hypothetical. It is a near-daily occurrence across the nonprofit sector as AI image generation, synthetic video, AI-drafted fundraising appeals, and automated donor communications become standard tools. The capability to produce convincing, emotionally resonant content with minimal effort has arrived faster than the ethical frameworks to govern it. Most nonprofits are navigating this terrain without maps.
The stakes are unusually high for mission-driven organizations. Nonprofits operate on trust. Donors give because they believe in the cause and in the organization's honesty about how their money is used. Beneficiaries engage because they expect authentic representation of their experiences and needs. Funders provide grants based on credible reporting of real outcomes. When AI-generated content obscures the line between reality and representation, it threatens the foundational relationship that makes nonprofit work possible.
This article provides a practical framework for thinking through AI disclosure decisions: what the current regulatory landscape requires, what donors actually expect, which types of content create the highest obligation to disclose, and how to build disclosure practices that strengthen rather than undermine organizational credibility. It also addresses the harder philosophical questions about when AI assistance crosses into misrepresentation, regardless of legal requirements.
The Regulatory Landscape Has Already Changed
Many nonprofit leaders approach AI disclosure as a voluntary ethical choice. It increasingly is not. Legal requirements at the federal, state, and international levels are creating binding obligations that organizations ignore at the risk of real financial and reputational harm.
FTC Rules (U.S.)
Active enforcement with significant financial penalties
The Federal Trade Commission's rule banning fake and AI-generated reviews carries penalties of up to $51,744 per violation. For nonprofits, the critical principle is that AI-generated synthetic testimonials (donor stories that never happened, fabricated beneficiary accounts) are deceptive if presented as real. Although the FTC Act's jurisdiction over bona fide nonprofits is narrower than over commercial entities, the agency has pursued sham charities and nonprofits engaged in commercial activity, so charitable organizations should not assume they are beyond its reach.
The FTC requires "double disclosure" for sponsored or AI-generated content: disclose both the relationship and the AI origin. Nonprofits running AI-generated donor appeals or using synthetic voices should review their practices against this standard immediately.
EU AI Act (Article 50)
Full effect August 2026 with global implications
The EU AI Act's transparency requirements take full effect in August 2026. Article 50 requires that providers of AI systems generating synthetic audio, images, video, or text mark outputs in a machine-readable format. A December 2025 draft Code of Practice added that deepfakes must be labeled even when the content is otherwise lawful.
U.S. nonprofits with any European audience exposure, including international programs, EU-based donors, or content reaching European platforms, need to understand these requirements. The EU has a track record of applying its digital regulations to organizations outside its borders, as it did with the GDPR.
State-Level Laws
Rapid proliferation across the U.S. with advocacy implications
State legislatures are advancing AI disclosure bills at a rapid pace. Many states already require disclosure of AI in political advertising, and several have broader synthetic media laws addressing deepfakes. Nonprofits engaged in advocacy, voter registration, or any political communication face the highest compliance risk at the state level.
The National Conference of State Legislatures tracks dozens of active 2025-2026 AI bills. Multi-state nonprofit operations should conduct a jurisdiction-by-jurisdiction review, particularly for any AI-assisted communications related to advocacy campaigns.
Platform Policies
Active enforcement on major social channels
Meta, YouTube, and TikTok all require labels on AI-generated or substantially AI-altered realistic content. TikTok issues immediate account strikes for unlabeled synthetic content and removed over 51,000 synthetic media videos in the latter half of 2025 alone. YouTube enforces through demonetization and mandatory labeling.
Nonprofit social accounts are not exempt from these policies. Organizations that post AI-generated content without platform-required labels risk account suspension, content removal, and the reputational damage that public enforcement actions carry.
Beyond these specific requirements, the PRSA updated its ethics guidance in 2025 to state that organizations should "clearly disclose when content, decisions, or interactions are significantly influenced or generated by AI, especially when this information could impact how messages are perceived, how relationships are built, and how trust is maintained." For nonprofits, this professional standard aligns precisely with the mission-integrity considerations that should already be driving disclosure decisions.
What Donors Actually Expect: The Research Picture
Legal compliance establishes the floor. Donor expectations may require a higher standard. The research on how donors perceive AI use in nonprofit communications is now substantive enough to inform genuine strategic decisions.
The Fundraising.AI Donor Perceptions of AI survey, representing over a thousand recent donors, provides the most comprehensive sector-specific data available. The findings are striking: 92% of donors say it is important that nonprofits plainly disclose where and why AI is used. This is not a fringe position; it is near-universal. Organizations operating without disclosure policies are out of step with the overwhelming majority of their donors' expectations.
The same research found that 34% of donors named "AI bots portrayed as humans representing a charity" as their single greatest AI-related concern, with half placing it in their top three. The core anxiety is not AI itself but deception: the fear of being emotionally manipulated by content designed to appear human when it is not. This maps directly to the most common nonprofit AI applications: AI-generated beneficiary stories, synthetic voices in video, and AI-written appeals framed as personal communications from real staff.
Give.org's research on charity appeals found that 55% of potential donors would be discouraged from giving if they discovered that AI-generated images in an appeal had not been verified for accuracy by staff. Among high-income donors giving $200,000 or more annually, 70% said they would avoid donating when encountering unverified AI-generated solicitation materials. For organizations cultivating major donors, this is a significant financial risk hidden in seemingly minor content decisions.
The Generational Split
Donor attitudes toward AI are not uniform across generations, and this matters for how nonprofits approach disclosure. Gen Z donors are significantly more likely to view AI use positively, with research showing they are 24 to 30 percent more likely to increase giving to AI-enabled organizations. They often see AI adoption as a sign of organizational innovation and efficiency.
Baby Boomers, who represent a disproportionate share of major gift donors, show far less enthusiasm: only about 9% indicate they would increase giving based on AI use. For organizations with a major donor program anchored in older generations, the risk calculus around undisclosed AI use is especially significant. Tailoring disclosure approaches to donor segments, while maintaining a consistent ethical baseline, is worth considering.
Academic research adds nuance. A study published in the Journal of Advertising found that disclosure of AI-generated content leads to unfavorable initial attitudes, with perceived credibility as the key mediating factor. In other words, disclosure damages donor trust only insofar as it undermines perceived credibility; the AI itself is not what does the harm. This finding has practical implications: how disclosure is framed matters as much as whether it is made. A disclosure that contextualizes AI use within a broader commitment to accuracy and mission integrity can mitigate the credibility penalty.
There is also a viral risk worth acknowledging. When one UK-based charity disclosed its use of AI images through social media, the response shifted away from the humanitarian cause and toward debates about AI ethics and authenticity, with extensive public commentary focused on AI concerns rather than food insecurity. Undisclosed AI use discovered after the fact tends to generate far more damaging public discourse than proactive, well-framed disclosure.
A Framework for Deciding When to Disclose
Industry consensus has coalesced around a "disclose when it matters" standard: if AI use could affect how your audience perceives, trusts, or interprets your content, disclose it. Applying this principle in practice requires thinking carefully about what kinds of AI assistance carry the highest stakes.
Requires Disclosure: High Stakes
Content where AI involvement could constitute deception or seriously affect donor trust
- Fundraising images depicting specific beneficiaries, crisis scenes, or humanitarian situations that did not actually occur
- Donor testimonials or beneficiary stories generated or substantially drafted by AI without real-person sourcing
- Synthetic voices or AI avatars presented as staff, volunteers, or program participants
- AI-generated deepfake video of real people associated with your organization
- Advocacy or policy content substantially generated by AI, particularly for political communications subject to state disclosure laws
- Annual report impact statistics or data visualizations where AI was used to generate or extrapolate (not just visualize) outcome numbers
Disclosure Recommended: Medium Stakes
Content where AI played a substantial creative or editorial role
- Blog posts, articles, or social media content primarily drafted by AI with limited human editing
- Grant proposals where narrative sections were substantially AI-generated before human review
- Email campaigns with AI-generated personalization beyond simple name insertion
- AI-generated images not depicting real people but used in fundraising contexts
- Chatbot interactions with donors or clients not clearly labeled as automated
Generally Acceptable Without Disclosure: Low Stakes
Routine AI assistance that does not substantially shape content or perception
- Grammar correction, spell-check, and basic copyediting (Grammarly, Word, etc.)
- AI-assisted translation of human-authored content
- Internal scheduling, data sorting, or administrative tools not visible to donors or beneficiaries
- AI-suggested email subject lines tested against and selected by human staff
- Basic image resizing, cropping, or color adjustment
- AI research assistance for background information subsequently verified and written by humans
The core distinction running through these categories is whether AI substantially shaped the content that reaches your audience. Tools that assist human judgment without replacing it generally do not require disclosure. Tools that generate content, create representations of people or situations, or produce outputs audiences might reasonably assume came from human effort create stronger disclosure obligations. When in doubt, disclose.
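For teams that want to make this framework operational, the decision logic can be captured in a short screening script that content creators run before publishing. The sketch below is illustrative only: the attribute names and tier boundaries are assumptions to adapt to your own policy, not a sector standard.

```python
from dataclasses import dataclass
from enum import Enum


class DisclosureTier(Enum):
    REQUIRED = "disclosure required (high stakes)"
    RECOMMENDED = "disclosure recommended (medium stakes)"
    NOT_REQUIRED = "generally acceptable without disclosure (low stakes)"


@dataclass
class ContentItem:
    """Attributes relevant to the disclosure decision; adapt to your policy."""
    depicts_people_or_events: bool    # beneficiary imagery, crisis scenes, testimonials
    presented_as_human: bool          # synthetic voice or bot framed as a real person
    ai_substantially_generated: bool  # AI created or substantially drafted the content
    donor_or_public_facing: bool      # reaches donors, funders, or beneficiaries
    advocacy_content: bool            # political/advocacy material under state laws


def classify(item: ContentItem) -> DisclosureTier:
    """Map a content item to a disclosure tier following the framework above."""
    if item.ai_substantially_generated and (
        item.depicts_people_or_events or item.advocacy_content
    ):
        return DisclosureTier.REQUIRED
    if item.presented_as_human:
        # AI posing as a person is the top donor concern regardless of channel.
        return DisclosureTier.REQUIRED
    if item.ai_substantially_generated and item.donor_or_public_facing:
        return DisclosureTier.RECOMMENDED
    # Tools that assist human judgment without replacing it.
    return DisclosureTier.NOT_REQUIRED


if __name__ == "__main__":
    appeal_image = ContentItem(
        depicts_people_or_events=True,
        presented_as_human=False,
        ai_substantially_generated=True,
        donor_or_public_facing=True,
        advocacy_content=False,
    )
    print(classify(appeal_image).value)  # disclosure required (high stakes)
```

No script settles the genuinely ambiguous cases, and the framework's final rule still applies to those: when in doubt, disclose.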
The Mission Integrity Dimension Beyond Compliance
Legal compliance and donor expectations establish important boundaries, but they do not fully capture what is at stake for mission-driven organizations. There is a deeper question about the relationship between AI-generated content and the authentic representation that nonprofit work requires.
When a hunger charity uses a synthetic image of a malnourished child who does not exist, the question is not only legal but moral. Does the content faithfully represent the reality the organization exists to address? The argument that AI images are simply more efficient than photography elides a meaningful distinction: authentic documentation of real suffering communicates something that cannot be reproduced by simulation, however technically convincing. The person who appears in a real photograph consented to being seen. Their specific experience, not a statistically averaged representation, is what connects donors to the mission.
Similar considerations apply to beneficiary stories. An AI-generated account of a program participant's journey, even if accurate in its general contours, represents something categorically different from a real person's testimony. Nonprofits exist, in part, to amplify human voices that would not otherwise be heard. Using AI to simulate those voices, even with good intentions, inverts that purpose.
This does not mean AI has no role in nonprofit communications. It means the role should be clearly defined and the limits clearly understood. AI can help staff writers develop richer, more compelling narratives around real stories. AI can generate illustrative imagery for conceptual content where no specific individual is represented. AI can assist with translation, accessibility features, and content localization. The distinction is between AI that amplifies authentic human content and AI that substitutes for it.
Framing Disclosure as a Value Statement
The most effective disclosure approaches connect AI transparency to organizational values rather than presenting it as a reluctant compliance activity. Consider the difference between these framings:
- Compliance framing: "This image was generated by AI." (Minimal, defensive, provides no context)
- Value framing: "This image was created with AI assistance to protect the privacy of program participants while accurately representing the experiences of the families we serve." (Connects disclosure to beneficiary dignity and organizational commitment)
- Policy framing: "Our AI use policy, available at [link], describes how we use AI tools in our communications and the human review processes that ensure accuracy." (Demonstrates systematic approach rather than ad hoc compliance)
Emerging Technical Standards for Content Authentication
Beyond organizational disclosure policies, a technical infrastructure for content authentication is developing rapidly. Nonprofits should understand these systems because they will increasingly define how AI-generated content is identified and labeled across the internet.
C2PA Content Credentials
The Coalition for Content Provenance and Authenticity brings together over 300 organizations including Adobe, Microsoft, BBC, and Google. Its technical specification creates a standardized "Content Credentials" system: a cryptographic record attached to digital media that logs who created it, what tools were used, and any AI involvement.
In 2025, C2PA rolled out visible icons in supported browsers and media players, giving end users immediate authenticity feedback. The specification is being examined by ISO for standardization. Organizations using Adobe Creative Suite already have access to C2PA credential embedding tools.
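For organizations that want to check whether a media file already carries Content Credentials, one approach is to shell out to the open-source c2patool CLI from the Content Authenticity Initiative, which prints a file's manifest store as JSON when invoked with a file path. The sketch below assumes c2patool is installed and on PATH; exact output shape and exit behavior vary by tool version, so treat it as a starting point rather than a finished integration.

```python
import json
import subprocess


def read_content_credentials(path: str) -> dict | None:
    """Return a file's C2PA manifest store as a dict, or None if absent.

    Assumes the `c2patool` CLI is installed; invoked with just a file path,
    it prints the manifest store as JSON. Output shape and exit codes may
    differ across tool versions.
    """
    result = subprocess.run(["c2patool", path], capture_output=True, text=True)
    if result.returncode != 0 or not result.stdout.strip():
        return None  # no manifest found, or unsupported file format
    return json.loads(result.stdout)


if __name__ == "__main__":
    # "appeal_image.jpg" is a hypothetical example file.
    manifest = read_content_credentials("appeal_image.jpg")
    if manifest is None:
        print("No Content Credentials found; apply your own disclosure labels.")
    else:
        # The manifest records the generating tool and edit history, which is
        # what supported browsers and platforms surface to end users.
        print(json.dumps(manifest, indent=2))
```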
SynthID and Watermarking
Google's SynthID and Meta's Video Seal represent competing AI watermarking technologies that embed identification signals directly in AI-generated content, unlike C2PA metadata, which can be stripped. These watermarks are designed to survive format conversion and common editing operations.
The EU AI Act's draft Code of Practice recommends a multilayered approach combining both metadata-based (C2PA) and watermark-based methods, acknowledging that no single technique is currently sufficient for comprehensive content authentication.
For nonprofits, the practical implication is that AI-generated content will become progressively easier to identify regardless of organizational disclosure policies. Organizations that establish transparent practices now will be positioned as leaders rather than caught in reactive explanations later. Proactive adoption of C2PA credentials for AI-generated content, where supported by your tools, signals commitment to authenticity and may actually differentiate your organization positively.
Building Disclosure Practices That Work in Practice
Ethical intentions only produce results when they are embedded in operational processes. Here is how to translate disclosure principles into day-to-day organizational practice.
Create a Written AI Use Policy
A written policy does two things: it makes disclosure decisions explicit rather than leaving them to individual staff judgment, and it demonstrates to donors and funders that your organization has a systematic approach. The policy does not need to be long, but it should cover which AI tools staff may use, which types of content require disclosure, and the approval process for AI-generated donor-facing materials.
Publish a brief version of your policy on your website, analogous to a privacy policy. Only about 10 to 15% of nonprofits currently do this, making it an immediate differentiator for organizations that act. Consider linking to it from your About page and including it in grant applications where funders increasingly ask about AI governance. A machine-readable sketch of what such a policy can capture follows the checklist below.
- List approved AI tools and their permitted uses
- Define which content categories require disclosure and which do not
- Establish a review and approval process for AI-generated fundraising content
- Prohibit synthetic beneficiary representations that did not involve real individuals
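Here is the promised sketch of that policy in machine-readable form, the kind of structure a communications team could keep in version control and reference from review checklists. Every tool name, category, and role below is a placeholder to adapt, not a recommendation or a standard.

```python
# A minimal, machine-readable sketch of an AI use policy. All tool names,
# content categories, and roles are illustrative placeholders.
AI_USE_POLICY = {
    "approved_tools": {
        "ChatGPT": ["drafting internal documents", "research assistance"],
        "Grammarly": ["copyediting"],
        "Midjourney": ["conceptual illustrations only; no beneficiary depictions"],
    },
    "disclosure_required": [
        "fundraising images depicting people or events",
        "beneficiary stories or testimonials",
        "synthetic voices or avatars",
        "advocacy or political communications",
    ],
    "disclosure_not_required": [
        "grammar and spelling correction",
        "translation of human-authored content",
        "internal administrative tooling",
    ],
    "prohibited": [
        "synthetic beneficiary representations not based on real individuals",
        "AI-generated or extrapolated outcome statistics",
    ],
    "review": {
        "approver_role": "Communications Director",  # placeholder role
        "applies_to": "all AI-generated donor-facing content before publication",
    },
}
```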
Establish Content-Specific Protocols
Different content types need different disclosure approaches. For social media, use platform-provided labeling tools and add a brief text note in captions for clarity. For email campaigns, include a brief disclosure in the footer when AI substantially drafted the content. For fundraising images, consider proactive framing when authentic photography is not available.
Annual reports and grant reports warrant particular attention. AI-generated data visualizations should include disclosure notes. Impact statistics and outcome data should always reflect actual program data; AI should not be used to generate or extrapolate outcome numbers, and any AI involvement in data analysis should be documented. A sketch turning these channel protocols into reusable disclosure templates follows the list below.
- Social media: use platform AI labels plus a brief caption note
- Email campaigns: footer disclosure when AI substantially drafted content
- Annual reports: disclosure note on AI-generated visualizations
- Video: credits or description disclosure for AI-generated elements
- Chatbots: clear labeling that the interaction is automated
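The channel protocols above can be reduced to a small set of reusable templates so staff never improvise disclosure language under deadline pressure. The wording and the policy URL below are placeholders to adapt to your organization's voice and policy.

```python
# Illustrative mapping from channel to disclosure treatment, mirroring the
# protocols above. All wording and URLs are placeholders.
CHANNEL_DISCLOSURES = {
    "social": "Created with AI assistance. See our AI use policy: {policy_url}",
    "email_footer": (
        "Portions of this message were drafted with AI tools and reviewed "
        "by our staff for accuracy. Learn more: {policy_url}"
    ),
    "annual_report": "Visualization generated with AI from verified program data.",
    "video_credits": "Some visual elements were generated with AI assistance.",
    "chatbot_banner": "You are chatting with an automated assistant.",
}


def disclosure_for(channel: str, policy_url: str = "https://example.org/ai-policy") -> str:
    """Return the standard disclosure line for a given channel."""
    template = CHANNEL_DISCLOSURES.get(channel)
    if template is None:
        raise KeyError(f"No disclosure template defined for channel: {channel}")
    return template.format(policy_url=policy_url)


if __name__ == "__main__":
    print(disclosure_for("email_footer"))
```

Keeping the templates in one place also makes updates painless: when your policy URL or standard wording changes, it changes everywhere at once.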
Designate Accountability
Policies only work when someone is responsible for implementing them. Designate a staff member, even if AI governance is only part of their role, who reviews AI-generated donor-facing content before it is published. This review should verify accuracy, check for inadvertent privacy violations, confirm compliance with your disclosure policy, and apply platform-required labels.
Consider including AI disclosure compliance as a standing agenda item in communications team meetings. Periodic reviews of what AI tools are being used, how outputs are being disclosed, and whether any new content types have emerged that your policy does not address will help keep your practices current as the technology evolves. For related considerations on how to build your team's AI governance capacity, see our article on building AI champions across your nonprofit.
The Adoption Gap Your Organization Needs to Close
The gap between AI use and AI governance in the nonprofit sector is wide. Research indicates that the vast majority of nonprofits are using AI tools, but only a small fraction have formal policies governing that use. Fewer still actively disclose AI involvement to donors, funders, or the communities they serve.
This gap is not just an ethical problem; it is a strategic vulnerability. As AI content authentication technology matures, as regulatory requirements expand, and as donors become more sophisticated about identifying AI-generated content, organizations without established disclosure practices will face reactive explanations rather than proactive trust-building.
The organizations that will be best positioned are those that establish clear policies now, when the expectations are still forming and proactive disclosure can differentiate them, rather than those that wait until disclosure is mandated and carry the reputational weight of having been non-compliant. For broader context on how your organization's AI governance fits within larger strategic questions, see our articles on AI knowledge management and managing organizational AI resistance.
Transparency as a Competitive Advantage
There is a version of AI disclosure that organizations approach as a burden: one more compliance requirement, one more potential reputational risk, one more thing to manage with already-stretched staff. This framing misses the strategic opportunity embedded in the current moment.
Donor trust is the scarcest resource in the nonprofit sector, and it is becoming harder to earn. In a landscape where the vast majority of donors expect AI transparency and only a small fraction of organizations provide it, transparent AI practices are a genuine differentiator. The organization that publishes a clear AI use policy, labels its AI-generated content proactively, and can articulate its human review processes in donor communications is signaling something important: that it takes honesty seriously as an organizational value, not just as a legal requirement.
The technical tools for content authentication are maturing rapidly. The regulatory requirements are expanding. The donor expectations are already high and clearly articulated. The organizations that build disclosure practices now will find that they have created a foundation of trust that serves them across every dimension of their donor relationships. Those that wait will find the decision increasingly made for them, and not always in terms they would have chosen.
Start with a written policy. Designate accountability. Train your team to recognize which AI uses require disclosure. Frame that disclosure not as a reluctant acknowledgment but as evidence of an organizational commitment to honesty. In the current moment, these practices are not just ethically right; they are strategically sound. For additional guidance on establishing comprehensive AI governance at your organization, explore our resources on AI strategic planning and getting started with AI as a nonprofit leader.
Build AI Practices Your Donors Can Trust
Our team helps nonprofits develop AI governance frameworks, disclosure policies, and communication strategies that strengthen donor trust while enabling the efficiency benefits AI can offer.
