Transparency in AI Fundraising: What Donors Actually Want to Know
As AI becomes embedded in nonprofit operations, donors are watching closely. Research reveals that 93% of donors consider transparency about AI usage "very important" or "somewhat important" — yet only 15% of nonprofits currently disclose their AI use. This gap represents both a risk and an opportunity for organizations willing to lead with openness.

Artificial intelligence has quietly become part of the nonprofit fundraising toolkit. Nearly three-quarters of nonprofit leaders now use AI in some capacity — drafting donor communications, researching prospects, personalizing appeals, and automating routine tasks. For many organizations, these tools have become indispensable, helping stretched development teams accomplish more with limited resources.
But as AI adoption accelerates, a troubling pattern has emerged: the vast majority of nonprofits are using these tools without telling their donors. According to recent research, 82% of nonprofits use AI informally or on an ad hoc basis, yet only 15% have disclosed this usage to their supporters. This silence creates a transparency gap that, left unaddressed, threatens to undermine the very relationships these tools are meant to strengthen.
The donors themselves have noticed. Surveys consistently show that supporters want to know when AI is involved in their interactions with charitable organizations. They have specific concerns about privacy, authenticity, and the preservation of human connection. And importantly, their giving decisions are increasingly influenced by how organizations handle these questions. Understanding what donors actually want to know — not what we assume they want — has become essential knowledge for any nonprofit investing in AI-powered fundraising.
This article draws on the latest research into donor attitudes toward AI, including comprehensive surveys from Fidelity Charitable, Give.org, and the Fundraising.ai Collaborative. We'll explore what donors genuinely care about, how transparency affects giving behavior, and practical frameworks for building trust through thoughtful disclosure. If your organization uses AI in any aspect of fundraising — or plans to — this research-backed guidance will help you navigate the evolving expectations of your supporters.
The State of Donor Awareness: A Rapidly Shifting Landscape
Donor understanding of AI has matured dramatically in recent years. The 2025 Donor Perceptions of AI study, which surveyed over 1,000 charitable donors, found that familiarity with artificial intelligence has surged — with the percentage of donors reporting they are "very familiar" with AI jumping by 10 percentage points year-over-year. This isn't abstract knowledge; donors increasingly encounter AI in their daily lives through virtual assistants, recommendation algorithms, and generative tools like ChatGPT.
This growing familiarity shapes expectations. When donors understand that AI can draft personalized emails, analyze giving patterns, or optimize ask amounts, they naturally begin to wonder whether their favorite charities are using these capabilities. And overwhelmingly, they want to know the answer. The research is unambiguous: 93% of donors rate transparency about AI usage as "very important" or "somewhat important" when deciding which organizations to support.
What's particularly striking is how donor attitudes have evolved from viewing AI primarily as a risk-management tool to seeing it as a potential driver of organizational effectiveness. In 2024, donors' top perceived benefit of AI in nonprofits was fraud detection. By 2025, enhancing fundraising efforts had claimed the top spot at 61%, narrowly exceeding operational efficiency at 58%. Donors are beginning to connect AI with revenue growth and mission impact — but only when they trust how it's being used.
- 93% of donors rate AI transparency as important
- 15% of nonprofits disclose their AI use
- 82% use AI informally without policies
This transparency gap — between the 93% who want disclosure and the 15% who provide it — represents significant organizational risk. As communicating AI use to donors becomes standard practice, nonprofits that remain silent may find themselves explaining why they didn't disclose rather than being praised for leading with openness. The window for proactive transparency is still open, but it won't remain so indefinitely.
What Donors Actually Care About: Beyond the Headlines
Understanding donor concerns requires moving beyond generalized anxiety about "AI" to the specific issues that shape giving decisions. Research reveals a clear hierarchy of concerns, and it may not match what many nonprofit leaders assume.
The Number One Concern: Authenticity
[Chart: What worries donors most about AI in charitable giving]
The single greatest worry among donors, ranked number one by approximately one-third (34%), is "AI bots portrayed as humans representing a charity." This isn't an abstract fear about technology taking over — it's a specific concern about deception. Donors want to know when they're interacting with a human and when they're not.
This finding has profound implications for how nonprofits should approach AI disclosure. Donors aren't necessarily opposed to AI-generated content or AI-assisted interactions. What they object to is being misled about the nature of those interactions. A fundraising appeal drafted with AI assistance isn't problematic in itself; pretending a chatbot is a staff member is. The authenticity of the relationship matters more than the tools used to support it.
Privacy and Data Security
Two-thirds of donors cite privacy and data security as key concerns regarding AI. This connects to broader anxieties about digital vulnerability — 69% of potential donors express concern that their data could be hacked or stolen when considering giving to a new charity. Donors specifically want to know:
- How donor data is collected and stored
- Whether data is shared with third-party AI systems
- What protections exist against unauthorized access
- How long data is retained and how it can be deleted
The Human Touch
Donors express lingering fear that technology could erode the human connection that makes charitable giving meaningful. This isn't technophobia — it's a legitimate concern about the nature of philanthropic relationships. Their questions are concrete:
- Will staff still personally review and respond to donors?
- Does AI enhance or replace human relationships?
- Are sensitive communications handled with appropriate care?
- Can donors reach a human when they need to?
Impact Measurement and Reporting
A growing priority among donors is understanding how AI enhances impact measurement and reporting — cited by 53% of respondents, up nearly 20 percentage points from the previous year. This represents an opportunity for nonprofits: donors want to know more about where their donations go, and AI-driven insights can provide the proof points they seek.
Organizations that can demonstrate how AI improves their ability to measure and communicate impact have a compelling story to tell. Rather than treating AI as something to hide or minimize, framing it as a tool that enhances accountability resonates with donor values. This connects directly to broader transparency expectations — as we've noted in our article on AI tools that improve nonprofit transparency, technology and openness can reinforce each other when implemented thoughtfully.
Algorithmic bias rounds out the list of major concerns. Donors worry that AI systems might inadvertently discriminate against certain populations or perpetuate existing inequities. For nonprofits serving diverse communities, addressing this concern proactively — by explaining how tools are vetted for bias and how outcomes are monitored — builds confidence that technology serves the mission rather than undermining it.
How Transparency Affects Giving: The Numbers Tell a Story
Does AI transparency actually affect donor behavior? The research provides a nuanced answer that challenges simplistic assumptions. When asked how AI would influence their giving, donors split into distinct camps: 14% say they would give more to an AI-enabled organization, 32% would give less, and the remainder are neutral or conditional.
This breakdown might initially seem discouraging — more donors say they'd reduce giving than increase it. But the picture changes significantly when you examine specific segments and, crucially, when you factor in how AI use is communicated.
Generational Differences
Enthusiasm for AI concentrates among younger donors and those with greater AI familiarity. Among Gen Z donors (ages 18-29), 24% report being more likely to give to AI-enabled organizations — nearly double the overall average. Additionally, 30% of Gen Z donors appreciate personalized appeals powered by AI technology.
This generational pattern suggests that resistance to AI will likely diminish over time as younger donors become a larger share of the giving population. Organizations building AI capabilities now are positioning themselves for the future donor landscape.
High-Value Donor Attitudes
Perhaps surprisingly, the more generous the donor, the more likely they are to support nonprofits using AI. Research shows 30% of high-dollar donors are more likely to give to AI-enabled organizations, compared to 19% of medium donors and just 13% of smaller donors.
Major donors may have more exposure to AI in their professional lives and better appreciate how technology can drive organizational effectiveness. For development teams focused on major gifts, this data suggests that AI use — when transparent — may actually strengthen relationships rather than weaken them.
The Conditional Middle Ground
The 25% of donors who say their response "would depend on how AI is implemented" represent the key audience for transparency efforts. These conditional supporters aren't opposed to AI — they're waiting to see how organizations handle it. Their giving decisions hinge not on whether AI is used, but on how thoughtfully and transparently it's deployed.
When asked to imagine charity appeals that include AI-generated images, 54.5% of participants say they would be discouraged from giving if they knew the appeal was not verified for accuracy by a staff member. The key phrase is "not verified" — the problem isn't AI involvement, it's the absence of human oversight. Donors who learn that AI assists content creation but that staff review and approve everything may move from skeptical to supportive.
These patterns suggest that the 32% who say AI would reduce their giving may not represent fixed opposition. Many likely fall into the conditional category when given more information. The key variable is trust, and trust is built through transparency. As we've explored in our article on the donor AI paradox, the negative reactions often stem from fear of the unknown rather than informed objection. Organizations that proactively address concerns often find resistance melting away.
Building a Transparency Framework: From Policy to Practice
Moving from understanding donor expectations to meeting them requires a structured approach. The Fundraising.ai Collaborative's Responsible AI Framework identifies five core tenets that donors care about: privacy and security, data ethics, inclusiveness, accountability, and transparency. A comprehensive transparency framework addresses each of these while remaining practical enough for implementation.
Published AI Policies
The foundation of organizational transparency
Publishing an AI ethics statement on your website signals commitment to responsible use. This document should be easily accessible — not buried in legal disclaimers — and written in plain language that donors can understand. The goal isn't comprehensive legal protection but clear communication about your organization's approach. At a minimum, the statement should cover the following (a structured sketch follows the list):
- Guiding principles: What values guide your AI use (efficiency, accuracy, human oversight)?
- Scope of use: Where AI is and isn't used in fundraising operations
- Human oversight commitments: What staff review exists for AI outputs
- Data protection: How donor information is safeguarded
- Feedback mechanisms: How donors can raise concerns or questions
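To keep a published policy current and consistent across your site, some teams capture these components as structured data that the policy page renders. Here is a minimal sketch in TypeScript; the interface, field names, and values are illustrative assumptions, not a standard.

```typescript
// Illustrative sketch: a structured AI policy document a site could
// render as a public-facing page. All names here are hypothetical.
interface AiPolicy {
  guidingPrinciples: string[]; // values that govern AI use
  scopeOfUse: {
    function: string;          // e.g. "email drafting"
    usesAi: boolean;
    humanReview: boolean;      // is staff sign-off required?
  }[];
  dataProtection: string[];    // plain-language safeguards
  feedbackContact: string;     // how donors raise concerns
  lastReviewed: string;        // ISO date of the last policy review
}

const policy: AiPolicy = {
  guidingPrinciples: [
    "AI assists staff; it never replaces human judgment.",
    "Donors are told when AI is involved in an interaction.",
  ],
  scopeOfUse: [
    { function: "Appeal and email drafting", usesAi: true, humanReview: true },
    { function: "Major gift conversations", usesAi: false, humanReview: true },
  ],
  dataProtection: ["Donor data is never used to train third-party models."],
  feedbackContact: "privacy@example.org", // hypothetical address
  lastReviewed: "2025-06-01",
};

console.log(`Policy last reviewed: ${policy.lastReviewed}`);
```

Keeping the policy in one structured record makes the "last reviewed" date and the scope-of-use table easy to update in a single place as practices evolve.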
For organizations just beginning this work, our guide to creating an AI acceptable use policy provides templates and examples that can be adapted to your context.
Proactive Communication
Transparency shouldn't be limited to static policy documents. Regular communication about AI keeps supporters informed as your practices evolve. Useful channels include:
- Newsletter updates when adopting new AI tools
- Blog posts explaining how AI supports your mission
- Annual reports including AI impact data
- Board communications about governance decisions
Opt-In/Opt-Out Mechanisms
Research consistently shows that donors appreciate having choice in how they engage with AI-powered systems. Common mechanisms include the following; a minimal sketch of a preference model that honors them appears after the list.
- Disclosure on donation forms about AI personalization
- Option to receive non-AI-personalized communications
- Clear process for data deletion requests
- Preference management in donor portals
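Honoring these choices comes down to checking a stored preference wherever AI-personalized content would otherwise be used. A minimal sketch, assuming a simple preference record; all field and function names are hypothetical.

```typescript
// Illustrative sketch of a donor preference record and a guard that
// respects an AI-personalization opt-out.
interface DonorPreferences {
  donorId: string;
  aiPersonalizationOptOut: boolean; // set via donation form or portal
  dataDeletionRequested: boolean;
}

// Choose content for a mailing: AI-personalized only when permitted.
function selectAppealContent(
  prefs: DonorPreferences,
  aiPersonalized: string,
  standard: string
): string {
  // Honoring the opt-out here is what makes offering it meaningful
  // (see "Ignoring Opt-Out Preferences" later in this article).
  return prefs.aiPersonalizationOptOut ? standard : aiPersonalized;
}

const prefs: DonorPreferences = {
  donorId: "D-1042",
  aiPersonalizationOptOut: true,
  dataDeletionRequested: false,
};

console.log(
  selectAppealContent(prefs, "Dear Maria, your 2024 gift...", "Dear friend,...")
);
```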
Implementation doesn't require perfection on day one. Many organizations start with a simple disclosure statement and build toward more comprehensive frameworks as their AI use matures. The key is beginning the conversation with donors rather than waiting for them to ask — or worse, waiting until a problem forces disclosure.
What to Disclose and How: A Practical Guide
Transparency doesn't mean overwhelming donors with technical details about every algorithm. It means providing the information that matters to them in accessible formats. Based on research into donor priorities, here's a framework for what to disclose and how.
Areas Where AI Is Commonly Used
Be specific about which functions involve AI assistance; a sketch of a disclosure-list generator follows the two lists below.
Often Disclosed
- Donor research and prospect identification
- Email and appeal content drafting
- Data analysis and reporting
- Communication personalization
Consider Also Disclosing
- Ask amount optimization
- Predictive modeling for retention
- Chatbot or virtual assistant interactions
- Image or video content generation
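One way to keep disclosures in sync with actual practice is to generate the public list from an internal registry of functions, so the transparency page cannot drift from reality. A sketch under that assumption; the registry shape and wording are illustrative, not a standard.

```typescript
// Illustrative sketch: derive plain-language disclosure lines from a
// registry of fundraising functions. Entries are hypothetical examples.
type AiFunction = { name: string; usesAi: boolean; staffReviewed: boolean };

const registry: AiFunction[] = [
  { name: "email and appeal drafting", usesAi: true, staffReviewed: true },
  { name: "ask amount optimization", usesAi: true, staffReviewed: true },
  { name: "major gift outreach", usesAi: false, staffReviewed: true },
];

// Render one disclosure line per AI-assisted function.
function disclosureLines(fns: AiFunction[]): string[] {
  return fns
    .filter((f) => f.usesAi)
    .map(
      (f) =>
        `We use AI to assist with ${f.name}` +
        (f.staffReviewed ? "; staff review all outputs before use." : ".")
    );
}

disclosureLines(registry).forEach((line) => console.log(line));
```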
Human Oversight Commitments
Address the top donor concern by explaining your review processes
Since "AI bots portrayed as humans" is donors' number one concern, clearly explaining human oversight builds immediate trust. Consider commitments like:
- "All donor communications are reviewed by staff members before sending." This addresses accuracy concerns while acknowledging AI assistance.
- "Our chatbot is clearly labeled as AI-powered, and you can always request to speak with a staff member." This provides transparency and choice.
- "Major gift conversations and personalized outreach are always handled by our development team." This reassures high-value donors about relationship quality.
Data Protection Assurances
Address privacy concerns with specific commitments
Given that two-thirds of donors cite data security as a concern, specific assurances about how data is handled in AI systems are essential (a data-minimization sketch follows the list):
- Data minimization: "We only share the minimum data necessary with AI tools to accomplish specific tasks."
- Vendor vetting: "We evaluate AI vendors for security certifications and compliance with data protection regulations."
- Training data: "Your personal information is not used to train AI models without explicit consent."
- Retention limits: "Data processed by AI tools is subject to the same retention policies as all donor information."
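The data-minimization commitment in particular translates directly into code: strip identity fields before anything leaves for a vendor. A sketch under the assumption of a simple donor record and a stubbed vendor call; nothing here reflects any specific vendor's API.

```typescript
// Illustrative sketch of data minimization: remove identifying fields
// from a donor record before sending it to a third-party AI service.
interface DonorRecord {
  name: string;
  email: string;
  address: string;
  givingHistory: number[]; // gift amounts
  interests: string[];
}

// Only the fields needed for the task (here, suggesting an ask amount)
// leave the organization; identity fields never do.
function minimizeForAskModeling(record: DonorRecord) {
  return {
    givingHistory: record.givingHistory,
    interests: record.interests,
  };
}

async function sendToAiVendor(payload: unknown): Promise<void> {
  // Stub standing in for a vetted vendor API call.
  console.log("Sending minimized payload:", JSON.stringify(payload));
}

const donor: DonorRecord = {
  name: "Maria Chen",
  email: "maria@example.com",
  address: "123 Elm St",
  givingHistory: [50, 75, 100],
  interests: ["youth programs"],
};

void sendToAiVendor(minimizeForAskModeling(donor));
```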
The specificity of these commitments matters. Vague statements like "we take privacy seriously" don't build trust the way concrete policies do. Organizations that have developed comprehensive data governance approaches, as we discuss in updating your data governance policy for the AI era, can communicate more confidently about their practices.
The Business Case for Transparency: Why Openness Pays
Beyond meeting donor expectations, transparency about AI use delivers tangible organizational benefits. Research from Give.org consistently shows that transparency in operations correlates with donor trust, which in turn correlates with giving behavior. In an environment where trust in institutions — including nonprofits — has declined, proactive openness becomes a differentiator.
Competitive Differentiation
With only 15% of nonprofits disclosing AI use, transparency represents an opportunity to stand out. Early adopters of transparent AI practices position themselves as leaders — organizations that donors can trust to use technology responsibly.
This differentiation matters particularly for donor retention. When supporters feel confident about how their data is used and how AI enhances rather than replaces human connection, they're more likely to maintain long-term relationships with your organization.
Risk Mitigation
Proactive transparency reduces the risk of negative reactions when donors discover AI use on their own — or worse, when something goes wrong. Organizations that have already communicated their AI practices have a foundation of trust to draw on.
The alternative — being caught using AI without disclosure — carries significant reputational risk. As AI-generated content becomes easier to identify and as media attention on AI in fundraising increases, the chances of undisclosed use becoming public grow.
Strengthening Donor Relationships
Transparency about AI can actually deepen donor relationships by demonstrating organizational values in action. When you explain how AI helps you understand donor interests better, measure impact more accurately, or operate more efficiently, you're showing that technology serves the mission.
This approach transforms AI from something potentially concerning into evidence of good stewardship. Donors who understand that AI-driven efficiency means more of their gift reaches programs may view technology adoption positively — but only if you tell them.
For organizations focused on building lasting donor relationships, transparency about AI connects to broader trust-building efforts. As we explore in building donor confidence in AI-powered personalization, thoughtful disclosure creates opportunities for deeper engagement rather than barriers to connection.
Common Mistakes to Avoid
As nonprofits navigate AI transparency, certain pitfalls repeatedly undermine well-intentioned efforts. Avoiding these mistakes is as important as implementing best practices.
Hiding AI Use
The most damaging mistake is attempting to conceal AI involvement, particularly in donor communications. This approach backfires when discovered and validates donors' concerns about authenticity and deception.
Instead: Acknowledge AI assistance matter-of-factly. Donors are generally comfortable with AI as a tool — it's the secrecy that creates problems.
Overclaiming AI Capabilities
Some organizations swing to the opposite extreme, touting AI capabilities they don't actually have or implying that AI drives results it doesn't deliver. This creates expectations that can't be met and may prompt uncomfortable questions.
Instead: Be accurate about what AI does in your organization. If it helps draft emails that staff then review, say that — don't claim sophisticated personalization you haven't implemented.
Ignoring Opt-Out Preferences
Offering opt-out options but then failing to honor them destroys trust faster than never offering the choice. If donors request non-AI-personalized communications, systems must actually deliver that experience.
Instead: Only offer options you can operationally support. It's better to have limited choices that work than comprehensive options that fail.
Abandoning Human Oversight
Efficiency gains from AI can tempt organizations to reduce human review of outputs. But donors' concerns about accuracy and authenticity make human oversight essential — and transparency about that oversight is what builds trust.
Instead: Maintain clear human review processes and communicate them to donors. The assurance that "staff review all communications" addresses the top donor concern directly.
These mistakes often stem from treating transparency as a one-time task rather than an ongoing commitment. As AI capabilities evolve and your organization's use matures, transparency practices must keep pace. Regular reviews of your disclosure practices — perhaps annually alongside AI policy updates — help prevent drift into problematic patterns.
Looking Ahead: Transparency as Standard Practice
The trajectory of donor expectations points clearly toward transparency becoming table stakes rather than a differentiator. Organizations that establish transparent practices now position themselves to meet evolving expectations smoothly, while those that delay face increasingly difficult transitions.
Several trends will accelerate this shift. Regulatory attention to AI disclosure is increasing globally, with the EU AI Act and emerging state-level requirements in the US creating new compliance considerations. Media coverage of AI in fundraising will likely intensify, raising donor awareness and scrutiny. And as AI tools become more sophisticated and visible, the gap between informal use and formal disclosure will become harder to maintain.
The AI tool that fundraisers actually embrace, according to industry analysis, won't simply be the most accurate or articulate — it will be the most trustworthy. Donors are increasingly protective of their personal data, and fundraisers can't afford to jeopardize relationships through opaque algorithms or tone-deaf automation. The winning approach will therefore be transparent, safe, and ethical, extending a fundraiser's empathy rather than eliminating it.
For nonprofits, this means transparency isn't just about avoiding risk — it's about positioning for success in the emerging landscape. Organizations that can demonstrate thoughtful, transparent AI use will attract donors who value both effectiveness and ethics. Those that treat transparency as an afterthought may find their AI investments delivering diminishing returns as donor expectations rise.
Conclusion: Meeting Donors Where They Are
The research is clear: donors want to know about AI use in fundraising. They have specific concerns — about authenticity, privacy, and the human touch — that can be addressed through thoughtful disclosure. And their giving decisions are increasingly influenced by how organizations handle these questions.
The 93% of donors who rate transparency as important aren't asking nonprofits to abandon AI. They're asking for honesty about how it's used, assurance that human oversight exists, and confidence that their data is protected. These are reasonable expectations that align with broader values of accountability and stewardship.
Meeting these expectations doesn't require perfect AI systems or comprehensive policies from day one. It starts with simple steps: acknowledging AI use, explaining human review processes, and inviting donor feedback. As your organization's AI capabilities mature, transparency practices can evolve alongside them.
The window for proactive transparency remains open. With only 15% of nonprofits currently disclosing AI use, organizations that lead with openness still stand out. But as awareness grows and expectations rise, this advantage will diminish. The time to build trust is now — before donors start asking why you didn't tell them sooner.
Ultimately, transparency about AI isn't separate from good fundraising practice — it's an extension of it. The same values that guide ethical donor relationships — honesty, respect, stewardship — apply to technology adoption. Organizations that treat AI transparency as an expression of their values, rather than a compliance burden, will find that openness strengthens rather than complicates their donor relationships.
Ready to Build Donor Trust Through AI Transparency?
Let us help you develop AI policies and communication strategies that meet donor expectations while advancing your mission. Our consulting team specializes in helping nonprofits navigate technology adoption with confidence.
