AI Transparency in Fundraising: How to Disclose AI Use Without Losing Donor Trust
The vast majority of nonprofits now use AI in some part of their fundraising operation. But most have said nothing to their donors about it. That silence carries its own risks. Understanding what donors actually think about AI use, what they need to hear from you, and how to communicate it without undermining the relationships you've built is quickly becoming a core competency for development professionals.

There is a gap at the center of nonprofit AI adoption that most organizations haven't fully reckoned with. Research from 2025 found that 83% of nonprofit staff believe they are transparent about their AI use, but only 38% of constituents agree. A nearly identical gap exists across all sectors: 94% of businesses claim AI transparency, while just 37% of customers agree. Organizations and the people they serve are telling very different stories about the same reality.
For nonprofits, this gap carries particular weight. The relationship between a nonprofit and its donors is built on trust, and trust depends on honesty about how the organization operates. When donors discover that AI has been used in ways they weren't told about, the response isn't usually indifference. It's a recalibration of how much they trust the organization's communications going forward.
The challenge is that disclosure isn't simple. Academic research has documented what practitioners call the "transparency penalty": disclosing AI use can reduce trust even when the disclosure is handled carefully. Donors who learn that a thank-you letter was AI-assisted sometimes feel less valued than if they hadn't known. Understanding this tension, and navigating it intelligently, is what separates organizations that handle AI transparency well from those that create new problems while trying to solve old ones.
This article walks through what the research actually shows about donor attitudes toward AI, what types of AI use donors care most about, how to frame disclosures in ways that preserve rather than erode trust, and what legal landscape is emerging that nonprofits should be tracking. The goal isn't to convince you to disclose everything or nothing. It's to help you develop a thoughtful, durable approach to AI transparency that reflects your organization's values and protects the donor relationships you depend on.
What Donors Actually Think About AI
Donor attitudes toward AI in fundraising are more nuanced than most coverage suggests. The headline numbers are often alarming, but the full picture reveals something more tractable. Donors aren't uniformly opposed to AI use, and many actively support it when it's used responsibly and transparently. The challenge is that "responsibly and transparently" means different things to different donors.
Conditional Support
According to the Fundraising.AI Donor Perceptions of AI 2025 report, which surveyed over 1,000 US donors, 67% of online donors agree that nonprofits should use AI to assist in marketing, fundraising, and administrative tasks. However, that support comes with significant conditions around transparency and human oversight.
The more generous a donor, the more likely they are to support AI use. Among high-dollar donors, support reaches 30%. Among small donors, it drops to 13%. This pattern suggests that the donors most critical to your long-term sustainability are also the most likely to be open to AI, as long as they're kept informed.
The Generational Divide
Generational differences in AI attitudes are substantial and should inform how you communicate with different donor segments. Gen Z donors (18-29) are significantly more likely to give more to AI-enabled organizations and place higher value on AI-driven personalization. Boomer donors (60-75) show much lower enthusiasm for AI and are more likely to view it with suspicion.
Donors who are already familiar with AI tools are also far more comfortable with nonprofit AI use. As AI literacy spreads across the broader population, the current resistance is likely to soften, but organizations that build strong transparency practices now will be better positioned for every stage of that transition.
The Transparency Demand
Donors are not anti-AI. They are anti-opacity.
A Fidelity Charitable study found that 93% of donors rated transparency in AI usage as "very important" or "somewhat important." That near-unanimous figure is striking. Donors aren't saying they don't want you to use AI; they're saying they want to know when you do. The organizations that will retain the strongest donor trust aren't those that avoid AI, but those that use it with clear policies and honest communication.
The same research found that 52% of donors want the ability to opt out of AI-driven interactions, and 48% support third-party audits of AI systems. These numbers reflect a donor community that takes AI governance seriously and expects the nonprofits they fund to take it seriously too. The organizations that respond to this expectation proactively will differentiate themselves from the majority that are still operating informally.
What Types of AI Use Donors Care Most About
Donors do not view all AI use as equally sensitive. Understanding where they draw lines, and where they're comfortable, helps you prioritize your disclosure efforts and focus your communication on the areas that actually affect trust.
Generally Accepted by Donors
- Fraud detection and prevention (48% of donors approve)
- Operational efficiency improvements (44% approve)
- Impact measurement and reporting (34% approve)
- Administrative tasks like scheduling and data entry
- Back-office financial analysis and reporting
- Personalized appeals when transparency safeguards are in place
Highest Donor Concern Areas
- AI bots portrayed as human staff (top concern for 34% of donors, top-three concern for 50%)
- Data privacy and security (cited by two-thirds of donors)
- Algorithmic bias in targeting or segmentation
- Loss of human touch in relationship communications
- Personalized fundraising using data without donor knowledge (40% express discomfort)
The pattern in this data is consistent: donors are comfortable with AI when it improves how the organization operates, but uncomfortable when it substitutes for human relationship-building or operates invisibly on their personal data. The single biggest red line is AI systems that impersonate human staff. If a donor believes they are communicating with a person and later discovers they were talking to an AI, the breach of trust is often severe and difficult to repair. This is true regardless of how good the AI's responses were.
The sector your organization serves also affects donor expectations. Human-services nonprofits and faith-based organizations tend to have donors who are especially sensitive to the perceived loss of empathy. Education and justice-focused donors tend to emphasize fairness and bias mitigation. Knowing your donor base well enough to understand which concerns are most salient for them should inform how you prioritize your transparency communications.
The Transparency Dilemma: Why Simple Disclosure Isn't Enough
Here is the uncomfortable finding that every fundraiser using AI needs to understand: simply disclosing AI use can reduce donor trust, even when the disclosure is handled carefully. A 2025 study published in the journal Organizational Behavior and Human Decision Processes ran thirteen experiments with more than 3,000 participants across multiple contexts and found a consistent "transparency penalty." When actors disclosed AI use, they were trusted less than those who said nothing, regardless of how the disclosure was framed.
The researchers tested six different disclosure framings and found that all of them reduced trust compared to silence. The mechanism appears to be perceived social legitimacy: AI-assisted work is seen as less appropriate than fully human work, and the disclosure itself is what activates that judgment. For a nonprofit sending a major gift ask or a deeply personal impact story, this research is not encouraging.
However, the same research found that "collective validity" framing, positioning AI use as a normal and widely accepted practice, reduced the trust penalty from an effect size of d = 1.12 to d = 0.72. The gap doesn't disappear, but it narrows meaningfully. When donors understand that AI is standard practice across the nonprofit sector, rather than something your organization is doing unusually, their negative reaction is softer.
The practical implication is that transparency can't stand alone. Disclosure needs to be accompanied by context (this is a standard industry practice), specificity (here's exactly what AI did and what humans did), and evidence of human oversight (staff reviewed and personalized this communication before it reached you). Bare disclosure without these elements risks activating the transparency penalty without providing the trust-building benefits that honest communication should deliver.
None of this is an argument for concealing AI use. The risks of non-disclosure, which we address below, are more significant than the risks of a carefully handled disclosure. But it is a strong argument for investing in how you communicate about AI, not just whether you communicate about it.
How to Disclose AI Use Effectively
Effective AI disclosure is less about the act of telling donors and more about building the organizational systems that make transparency genuine rather than performative. The leading voices in nonprofit AI governance describe this shift as moving from transparency as messaging to transparency as infrastructure. Here's what that looks like in practice.
Publish a Public-Facing AI Use Policy
The most foundational disclosure step is publishing a clear, accessible AI use policy on your website. This doesn't need to be lengthy or technical. A well-written one-page document that explains what AI tools you use, what you use them for, what data they can and cannot access, and who in your organization is responsible for AI oversight provides donors with the information they need and demonstrates that your AI use is deliberate and governed.
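One way to keep that one-page policy current is to treat its contents as structured data rather than freeform text, so the published page is regenerated whenever the underlying facts change. The sketch below shows one possible shape for the four elements above; every name in it (AiToolDisclosure, AiUsePolicy, renderPolicySummary) is hypothetical, not drawn from any platform or standard.

```typescript
// A sketch of the four elements a one-page AI use policy can cover.
// All names here are hypothetical, not from any platform or standard.

interface AiToolDisclosure {
  tool: string;                // e.g. "generative writing assistant"
  usedFor: string[];           // the tasks it supports
  donorDataAccess: "none" | "aggregate-only" | "individual-records";
}

interface AiUsePolicy {
  tools: AiToolDisclosure[];
  oversightOwner: string;      // the named role responsible for AI oversight
  lastReviewed: string;        // ISO date of the most recent policy review
}

// Render the policy as plain text for a website page or donor FAQ.
function renderPolicySummary(policy: AiUsePolicy): string {
  const lines = policy.tools.map(
    (t) =>
      `We use a ${t.tool} for: ${t.usedFor.join(", ")}. ` +
      `Donor data access: ${t.donorDataAccess}.`
  );
  lines.push(`AI oversight: ${policy.oversightOwner}.`);
  lines.push(`Last reviewed: ${policy.lastReviewed}.`);
  return lines.join("\n");
}
```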
Organizations like Oxfam International, the Red Cross, and United Way Worldwide have published comprehensive AI governance frameworks that can serve as reference points. You don't need to match their scale; you need to match their commitment to transparency. Resources like the FreeWill fundraising-specific AI use policy template and NTEN's AI policy template can accelerate this process significantly.
Only about 15% of nonprofits currently disclose their use of generative AI tools publicly, according to research from Nonprofit Tech for Good. Organizations that do have this documentation in place are already differentiated from the field.
Use Tiered Communication Approaches
Different disclosure contexts warrant different levels of detail. A general announcement in your email newsletter ("we use AI tools to improve our communications and operations, and you can read about how at [link]") establishes awareness without overwhelming donors with information they didn't ask for. More specific disclosures, such as AI-disclosure checkboxes on online donation forms or opt-out options for AI-driven personalization, serve donors who want more control over their experience.
For high-value donor relationships, major gift officers should be prepared to speak directly about how AI is and isn't used in your organization if donors ask. A relationship manager who can answer this question confidently and specifically is far more trust-building than an organization that deflects or seems caught off guard by the inquiry.
Lead with Human Oversight in All AI Communications
When disclosing AI use in specific communications, always emphasize the human role. Language like "We developed this communication with AI assistance, which was then reviewed and edited by our team" performs significantly better with skeptical donors than bare disclosure. The phrase "reviewed and personalized by our team" directly addresses the concern about losing the human connection.
NationBuilder's responsible AI guide for nonprofits offers a useful framing: "AI can handle up to 75% of the heavy lifting, then let humans step in to ensure outputs are accurate, on tone, and aligned with your mission." This kind of candid description of the human-AI collaboration, where AI does the drafting and humans do the relationship work, resonates with donors who understand that both elements have a role.
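In practice, this kind of disclosure language can be standardized rather than rewritten each time. Below is a minimal sketch of how a human-oversight footer might be appended to AI-assisted drafts before they enter the review queue; the wording is illustrative only, and your own policy (and counsel, where appropriate) should shape the final language.

```typescript
// Hypothetical helper: append a human-oversight disclosure to AI-assisted
// drafts before they enter the review queue. The wording is illustrative;
// your own policy should shape the final language.

const DISCLOSURE_FOOTER =
  "This message was drafted with AI assistance and reviewed and " +
  "personalized by our team.";

function withDisclosure(draft: string, aiAssisted: boolean): string {
  // Fully human-written communications go out unchanged.
  return aiAssisted ? `${draft}\n\n${DISCLOSURE_FOOTER}` : draft;
}
```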
Frame AI as a Mission-Enabling Tool
The most effective framing connects AI use directly to mission delivery. "AI helps us reduce administrative costs, directing more of your donation to programs" is a statement that most donors will find compelling rather than troubling. Similarly, "We use AI to help identify supporters who care deeply about [cause], so we can connect you with the work that matters most to you" positions personalization as a service rather than a surveillance exercise.
Separate your AI use cases when communicating with donors. Disclosing that you use AI for fraud detection and financial reporting lands very differently than disclosing that you use AI to write personal thank-you letters. Donors are broadly comfortable with the former and significantly more sensitive about the latter. Lead with the operational uses when communicating broadly, and be more specific and careful with communications that touch the donor relationship directly.
Provide Opt-Out Mechanisms
Offering donors the ability to opt out of AI-driven interactions, particularly for personalized communications, is both a best practice and an expectation that a majority of donors hold. This doesn't mean building complex technical infrastructure; a simple preference in your email platform that routes certain donors to fully human-written communications is sufficient for most organizations.
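What that routing might look like in practice: the sketch below assumes a hypothetical donor record with an aiPersonalizationOptOut preference stored in your CRM or email platform, and splits a send list into human-written and AI-assisted tracks.

```typescript
// A sketch of opt-out routing, assuming a hypothetical donor record with
// an aiPersonalizationOptOut preference stored in your CRM or email platform.

interface Donor {
  id: string;
  email: string;
  aiPersonalizationOptOut: boolean;
}

type Track = "human-written" | "ai-assisted";

// Split a send list into tracks; donors who opted out always receive
// the fully human-written version.
function routeDonors(donors: Donor[]): Map<Track, Donor[]> {
  const tracks = new Map<Track, Donor[]>([
    ["human-written", []],
    ["ai-assisted", []],
  ]);
  for (const donor of donors) {
    const track: Track = donor.aiPersonalizationOptOut
      ? "human-written"
      : "ai-assisted";
    tracks.get(track)!.push(donor);
  }
  return tracks;
}
```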
The act of offering an opt-out is itself trust-building, even for donors who don't use it. It signals that you're treating donors as individuals with preferences rather than as data points in an automated system. That signal matters, particularly for major donors and long-term supporters whose relationship with your organization is the foundation of their giving.
The Real Risks of Staying Silent
Given the transparency penalty research, some organizations might conclude that the safest approach is simply not to disclose AI use. This calculus misunderstands the risks on both sides. The hazards of non-disclosure are significant and growing.
Discovery Is Worse Than Disclosure
Donors who discover undisclosed AI use through external sources (a news article, a conversation with a peer, or their own investigation) are far more likely to churn than donors who were proactively informed. The act of concealment transforms a policy question into a trust question. Trust that is broken through discovered deception is substantially harder to rebuild than trust that was never fully formed.
Foundation and Grant Funding Risk
According to data from Nonprofit Tech for Good, 23% of foundations will not accept grant applications containing AI-generated content, and 67% are still undecided about their policies. Organizations that have used AI in grant writing without disclosure, and without verifying each funder's guidelines, are carrying risk that may not surface until a major funding relationship is damaged. See our article on AI in peer-to-peer fundraising for related considerations.
Growing Legal Exposure
The legal landscape around AI disclosure is moving quickly. The FTC Act's deceptive practices provisions already apply to AI use, and the FTC has demonstrated willingness to enforce them. New York's synthetic performer disclosure law takes effect in June 2026, requiring disclosure of AI-generated content in certain advertising and fundraising contexts. State-level AI regulation is proliferating, and organizations without disclosure infrastructure will face compliance scrambles as laws take effect.
The "AI Bot as Human" Scenario
If your organization uses AI chatbots or automated response tools in any donor-facing capacity, and any donor believes they are communicating with a human staff member, you face the single highest-risk disclosure scenario in fundraising. This is donors' top AI concern by a significant margin. Ensuring that any AI-powered interaction is clearly labeled as such is not just a best practice; it's a non-negotiable foundation of ethical fundraising.
The Legal Landscape Nonprofits Need to Track
AI disclosure requirements are no longer hypothetical. Several legal frameworks are already in effect or taking effect in 2026 that have direct implications for how nonprofits communicate about AI use in their fundraising operations.
FTC Act: Deceptive AI Claims
The FTC has been active in pursuing deceptive AI practices under existing law. Any AI use that deceives donors about the nature of communications, including AI systems presented as human staff members, falls within FTC enforcement scope. Nonprofits are not exempt. Establishing honest AI disclosure practices is also a baseline legal compliance measure.
New York Synthetic Performer Law (June 2026)
Effective June 2026, New York requires advertisers distributing content in the state to conspicuously disclose the use of AI-generated synthetic performers in advertisements. This encompasses AI-generated video, images, and audio in fundraising appeals. Organizations that produce this type of content for campaigns reaching New York audiences need compliance plans in place now.
State-Level AI Regulation
Multiple states have enacted AI laws effective January 2026. A federal executive order has signaled potential preemption of some state laws, but the regulatory picture remains fragmented. Nonprofits operating across state lines, which includes any organization with a national donor base, should consult legal counsel about which state AI laws apply to their operations and what compliance looks like.
AFP Code of Ethical Standards
The Association of Fundraising Professionals' Code of Ethical Standards, updated in December 2023 and now enforceable, requires fundraisers to operate with honesty and transparency in all activities. While not explicitly naming AI, it clearly covers the use of AI in ways that could be considered deceptive or that fail to reflect the organization's actual practices. AFP members using AI in fundraising without disclosure face potential professional accountability.
The practical guidance here is to treat AI disclosure policy development as a compliance function, not just a communications function. Assign someone with legal awareness to track the evolving regulatory landscape, build disclosure language that is reviewed by counsel where appropriate, and establish a regular cadence for updating your AI use policy as both your practices and the legal landscape change.
Building a Practical Transparency Framework
Turning the principles in this article into operational practice requires a structured approach. The following framework gives you a practical starting point that can be implemented progressively, beginning with the most critical elements and building toward comprehensive AI governance over time.
Audit Your Current AI Use
Before you can disclose, you need to know what you're disclosing. Conduct a brief internal survey to understand where AI tools are currently being used across your organization, including informal use by individual staff members. The most common disclosure gap isn't intentional concealment; it's simply that leadership doesn't know what tools staff are using independently. A structured AI readiness assessment can help surface this picture.
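A lightweight inventory is usually enough to run this audit. The sketch below uses hypothetical field names to capture each tool in a single record and flag the highest-priority gap: donor-facing uses not yet covered by your public policy.

```typescript
// A sketch of an internal AI-use inventory, with hypothetical field names.
// The goal is one list covering formal tools and informal individual use
// alike, so leadership can see the full picture.

interface AiUseRecord {
  tool: string;
  team: string;
  purpose: string;
  donorFacing: boolean;        // does output reach donors directly?
  publiclyDisclosed: boolean;  // is it covered by the public policy?
}

// Flag the highest-priority disclosure gaps: donor-facing uses that are
// not yet covered by the published AI use policy.
function disclosureGaps(inventory: AiUseRecord[]): AiUseRecord[] {
  return inventory.filter((r) => r.donorFacing && !r.publiclyDisclosed);
}
```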
Write a One-Page AI Use Policy
Document what AI tools you use, what you use them for, what data protections are in place, and who is responsible for oversight. Publish this on your website. This single action closes a significant portion of the transparency gap and provides the foundation for all your disclosure communications.
Communicate to Existing Donors
Include a brief mention in your next newsletter or donor update. Keep it simple and mission-focused: you use AI tools to improve your work, here's how, and here's how to learn more. Provide an opt-out option for donors who prefer fully human communications.
Build Disclosure Into Standard Workflows
For donor-facing communications produced with AI assistance, add a brief note about human review as standard practice. For any AI-powered chatbots or automated response tools, ensure they are clearly labeled. Build opt-out checkboxes into online forms for new donors.
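For the form checkbox specifically, the key design point is that the preference must persist to the same donor record your communication workflows read from. Here is a minimal sketch, assuming a hypothetical form payload and persistence call; substitute your CRM or email platform's actual API.

```typescript
// A sketch of capturing the opt-out checkbox from an online donation form.
// The payload shape and saveDonorPreference are hypothetical; substitute
// your CRM or email platform's actual API.

interface DonationFormPayload {
  donorEmail: string;
  amount: number;
  aiCommunicationsOptOut: boolean; // the checkbox value
}

// Hypothetical persistence call into your donor database.
declare function saveDonorPreference(
  email: string,
  prefs: { aiPersonalizationOptOut: boolean }
): void;

function handleDonationSubmit(payload: DonationFormPayload): void {
  // ... process the gift with your payment provider ...
  // Persist the preference so routing logic can honor it on every send.
  saveDonorPreference(payload.donorEmail, {
    aiPersonalizationOptOut: payload.aiCommunicationsOptOut,
  });
}
```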
Review and Update Regularly
AI use evolves quickly. Set a quarterly reminder to review your AI use policy against what tools are actually being used and update accordingly. Build a relationship with legal counsel who tracks AI regulation, and schedule an annual review of your disclosure practices against the current legal and ethical landscape.
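Part of that quarterly check can be automated. As a sketch, assuming tool names are recorded in both the audit inventory and the published policy, a simple set comparison flags any audited tool the policy doesn't yet mention.

```typescript
// A sketch of a quarterly staleness check: compare the tools named in the
// published policy against the tools found in the latest internal audit.
// Any tool returned here means the policy is out of date.

function undisclosedTools(
  auditedTools: string[],
  policyTools: string[]
): string[] {
  const disclosed = new Set(policyTools.map((t) => t.toLowerCase()));
  return auditedTools.filter((t) => !disclosed.has(t.toLowerCase()));
}

// Example:
// undisclosedTools(["chatbot", "grant drafting assistant"], ["chatbot"])
//   -> ["grant drafting assistant"]
```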
Transparency as a Competitive Advantage
The organizations that will navigate AI adoption most successfully aren't the ones that use it most aggressively or avoid it most cautiously. They're the ones that use it thoughtfully and talk about it honestly. In a sector where trust is the foundational currency, the ability to tell donors clearly what you do and why you do it is a form of institutional strength.
The transparency gap that currently separates what nonprofits believe they're communicating from what donors actually hear is also an opportunity. Organizations that close that gap proactively, with published policies, honest disclosures, and opt-out mechanisms, will stand out from the majority who are still operating without governance frameworks. That differentiation matters increasingly as donors become more AI-literate and more attentive to how their preferred organizations handle the technology.
The complexity of the disclosure question, with its documented transparency penalties, varied donor expectations, and evolving legal requirements, is real. But complexity isn't a reason to delay. It's a reason to develop a thoughtful approach now, before external pressures force a reactive one. For organizations thinking about how AI governance fits into their broader strategic direction, our article on incorporating AI into your strategic plan offers a framework for making these decisions intentionally.
Build an AI Transparency Strategy That Works
One Hundred Nights helps nonprofits develop AI governance frameworks, donor communication strategies, and disclosure policies that protect trust while enabling the benefits of AI. We work with organizations at every stage of AI adoption.
