
Do Donors Care If You Use AI? What the 2025 Research Says About Trust and Giving

The question is no longer hypothetical. As nonprofit AI adoption reaches the mainstream, donors are forming opinions about AI use by the organizations they support. The research paints a more nuanced picture than the alarming headlines suggest, with important strategic implications for how your organization communicates about AI.

Published: March 16, 2026 · 9 min read · Fundraising & Donor Relations

A statistic has been circulating in nonprofit fundraising circles: 31% of donors say they would be less likely to give to an organization that uses AI. Taken at face value, this number is enough to make any development director nervous. It seems to suggest that AI adoption carries significant fundraising risk, and that a substantial portion of your donor base might defect if they learn you are using AI tools.

The reality is considerably more complex, and considerably more manageable. The 31% figure comes from the Fundraising.AI "Donor Perceptions of AI 2025" report, which surveyed 1,031 donors who had made charitable contributions in the previous 12 months. But that same survey found that 14% of donors would be more likely to give if a nonprofit uses AI, 29% say it would make no difference, and a crucial 25% say their response would depend on how AI is implemented. In other words, 68% of donors are either neutral, positive, or open to persuasion, compared to the 31% who express initial skepticism.

    The implications are significant. The data does not suggest that nonprofits should avoid AI to protect fundraising. It suggests that how organizations communicate about AI use matters enormously, and that transparency and context can convert cautious donors into accepting or even supportive ones. This article examines what the research actually shows, where donor concerns are most concentrated, how different donor segments respond differently, and what practical steps organizations can take to build rather than erode donor trust through AI adoption.

    What the Research Actually Shows

    The Fundraising.AI 2025 study is the most comprehensive piece of research to date on donor attitudes toward nonprofit AI use. It deserves careful reading rather than headline extrapolation. Alongside the top-line giving likelihood question, the survey asked donors what they find most concerning about nonprofit AI use, what they find most valuable, and what would change their minds.

    Donor Giving Likelihood If Nonprofit Uses AI

    Fundraising.AI Donor Perceptions Survey, August 2025 (n=1,031)

• Less likely to give: 31%
• No difference: 29%
• Depends on implementation: 25%
• More likely to give: 14%
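
As a quick sanity check on how these categories combine, here is the arithmetic behind the 68% "open or neutral" figure, worked in plain Python. The percentages come from the chart above; the comment about the unreported remainder is an assumption, since the public summary does not break it out.

```python
# Response shares from the chart above
# (Fundraising.AI Donor Perceptions Survey, August 2025, n=1,031).
shares = {
    "less_likely": 31,
    "no_difference": 29,
    "depends_on_implementation": 25,
    "more_likely": 14,
}

# Donors who are neutral, positive, or open to persuasion.
open_or_neutral = (
    shares["no_difference"]
    + shares["depends_on_implementation"]
    + shares["more_likely"]
)
print(f"Open or neutral: {open_or_neutral}%")            # 68%
print(f"Initially skeptical: {shares['less_likely']}%")  # 31%
# The four categories sum to 99%; the survey summary does not say how
# the remaining 1% responded (presumably "unsure" or no answer).
```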

    The Disclosure Gap

    What donors expect vs. what nonprofits provide

• 92% of donors say nonprofits should plainly disclose where and why AI is used
• 15% of charitable organizations currently disclose their use of generative AI

    The gap between donor expectations and organizational practice is the central risk, not AI use itself.

    The disclosure gap is arguably the most important finding in the research. Donors overwhelmingly expect transparency about AI use, and almost no nonprofits are providing it. This creates a specific risk: not the risk that donors will object to AI use, but the risk that they will discover it without context and interpret the lack of disclosure as deception. Undisclosed AI adoption, when discovered, is far more damaging to donor relationships than disclosed adoption handled thoughtfully.

Additional findings from the survey reinforce the importance of context. Donor familiarity with AI jumped approximately 10 percentage points between the 2024 and 2025 surveys. Donors are becoming more informed and more nuanced in their views, not less. The "conditional" response category, where donors say their reaction depends on how AI is implemented, is likely to grow as donor sophistication increases. This means organizations that invest in clear, specific AI communication are positioned to turn undecided donors into supporters of thoughtful AI use.

    Where Donor Concerns Concentrate

    Not all AI use generates equal donor concern. The research reveals a clear pattern: donors draw a sharp line between AI as a back-office efficiency tool and AI as a replacement for authentic human relationships. Understanding where this line falls is essential for designing a communication strategy that addresses real concerns rather than generic anxiety about technology.

    Highest Concern: AI Impersonating Human Relationships

The single most concerning practice for donors is AI bots posing as humans representing the organization. According to the Fundraising.AI survey, 34% of donors rank this as their top concern, and 50% place it in their top three. This finding is consistent across all donor demographics and should be treated as a non-negotiable boundary.

    The concern is not that AI is used in communications, but that AI is used deceptively, specifically that a donor believes they are speaking with or receiving a personal message from a human being when they are not. Any chatbot, automated response, or AI-drafted appeal that could plausibly be mistaken for personal human communication must be clearly identified. This is both an ethical requirement and a practical one: discovery of undisclosed AI impersonation is among the fastest ways to destroy long-term donor relationships.
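
One practical way to enforce that identification is to attach the disclosure at the software layer rather than relying on each campaign or chat flow to remember it. The sketch below is illustrative only: the function and the disclosure wording are hypothetical, not taken from any specific chatbot platform.

```python
AI_DISCLOSURE = (
    "You are chatting with an automated assistant. "
    "A staff member reviews these conversations and is available on request."
)

def send_automated_reply(draft_reply: str) -> str:
    """Prefix every bot-generated reply with a plain-language AI disclosure.

    Centralizing the disclosure here means no individual chat flow or
    campaign can accidentally omit it.
    """
    return f"{AI_DISCLOSURE}\n\n{draft_reply}"

# Example: an automated answer to a routine donor question.
print(send_automated_reply(
    "A copy of your 2025 tax receipt has been emailed to the address on file."
))
```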

    Significant Concern: Privacy and Data Security

    Two-thirds of donors name privacy and data security as key concerns related to nonprofit AI use. This reflects broader societal awareness of how AI systems consume and process personal data. Donors understand, even if imprecisely, that AI personalization requires data, and they are concerned about what data is being collected, how it is stored, and who has access to it.

Nearly 40% of survey respondents (39.8%) express specific discomfort with data profiling used to drive personalized fundraising appeals, meaning the use of browsing behavior, giving history, and demographic inferences to tailor outreach. This does not mean organizations should avoid personalization. It means donors are more comfortable being profiled when they have consented to that use of their data and understand what it enables.

    Where Donors See Genuine Value

    The research also identifies where donors perceive meaningful benefit from nonprofit AI use. The types of AI applications donors are most comfortable with are exactly those that improve organizational effectiveness without replacing human connection.

    • Fraud detection and financial oversight (48.3% see high value)
    • Improving operational efficiency (44.7%)
    • Impact measurement and reporting accuracy
    • Grant writing and administrative automation (internal, not donor-facing)
• Enhancing fundraising program effectiveness (61% identify this as the top perceived benefit)

    Generational Differences: A Donor Segmentation Lens

    Donor attitudes toward AI are not uniform across age groups, and the generational differences have practical implications for how organizations communicate AI use to different donor segments. A blanket communication approach that works for Millennial donors may actively alienate Boomer donors, and vice versa.

    Millennials and Gen Z (18-44)

    Younger donors are the most open to AI-driven nonprofit operations. They engage primarily through digital platforms and carry a general comfort with technology into their giving behavior. They are more likely to respond positively to AI-enabled personalization when it is transparent and adds genuine value to the giving experience.

    Millennial charitable participation increased 16% since 2021, and 65% prefer giving multiple times per year, making personalized AI communications particularly relevant for this segment. The key for younger donors is transparency about what data is used and why, not avoidance of personalization.

    Generation X (45-59)

    Gen X donors sit between the comfort of younger generations and the skepticism of older donors. They are generally pragmatic about technology and respond well to explanations that frame AI use in terms of efficiency and mission impact rather than innovation for its own sake. For Gen X, the question is usually "does this help you serve your mission better?" rather than a principled objection to AI. Clear, matter-of-fact communication about how AI helps the organization run more effectively typically lands well.

    Baby Boomers (60+)

    Boomer donors are the most resistant to AI in their giving relationships. Only 9% of Boomers say they are more likely to give when AI is involved, and this segment places the highest value on traditional human interaction in donor stewardship.

    For major and legacy donors, who tend to skew older, the most important reassurance is that AI has not replaced the human relationships at the center of their connection to the organization. Boomer retention requires prioritizing regular non-fundraising communication, impact storytelling, and visible human touchpoints from staff and leadership.

    Higher-income donors, defined as those with household incomes above $100,000 annually, show a different pattern regardless of age. This group demonstrates greater comfort with AI in financial management and impact measurement contexts while remaining cautious about AI in direct communications. This makes sense: high-capacity donors typically have professional exposure to AI in their own organizations and understand its capabilities and limitations. Their concerns are less about AI as a category and more about specific applications that feel manipulative or depersonalized.

The practical implication is that segmented communication strategies produce better outcomes than uniform ones. Communicating AI use to a Millennial digital donor through your app or social channels, using accessible language about personalization, is appropriate. Addressing AI use in a major donor relationship requires a more personal conversation in which the donor understands that AI supports, but does not replace, the human relationship at the center of their giving.
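
As a concrete sketch of what that segmentation might look like inside a communications system, the snippet below routes AI-disclosure messaging by the generational bands discussed above. The channel choices mirror the guidance in this section, but the specific rules are illustrative assumptions, not survey findings.

```python
from dataclasses import dataclass

@dataclass
class Donor:
    name: str
    age: int
    is_major_donor: bool

def disclosure_channel(donor: Donor) -> str:
    """Choose how to communicate AI use, mirroring the segments above."""
    if donor.is_major_donor:
        # Major and legacy donors: a personal conversation, whatever their age.
        return "personal conversation with relationship officer"
    if donor.age >= 60:
        # Boomers: human-signed communication stressing that staff
        # relationships are unchanged.
        return "staff-signed letter emphasizing human touchpoints"
    if donor.age >= 45:
        # Gen X: matter-of-fact framing around efficiency and mission impact.
        return "email explaining how AI helps the organization run effectively"
    # Millennials and Gen Z: digital channels, plain language about
    # personalization and data use.
    return "app or social messaging explaining personalization and data use"

print(disclosure_channel(Donor(name="A. Rivera", age=34, is_major_donor=False)))
```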

    Building a Donor AI Transparency Strategy

    The research makes clear that the organizations best positioned to maintain and grow donor trust through AI adoption are those that take a proactive, specific, and context-rich approach to disclosure. The organizations most at risk are those that either avoid AI disclosure entirely or address it so vaguely that donors cannot understand what it actually means for their giving relationship.

    "We use AI to advance our mission" is not disclosure. It is a deflection that will strike informed donors as evasive. Effective AI transparency tells donors specifically what AI does in your organization, why you chose to use it, what it does not do, and how humans remain in control of decisions that affect them.

    Write and Publish an AI Policy

    The foundation of donor trust in an AI-using organization

    An accessible AI policy, linked from your website and referenced in your annual report, signals organizational intentionality about AI governance. It should describe what categories of AI your organization uses, what decisions are made with AI assistance versus human judgment, what data is used and protected, and who in the organization is accountable for AI oversight. It need not be a technical document. A well-written two-page policy is more valuable than an exhaustive technical appendix that no donor will read.

    Organizations that have invested in building AI governance frameworks find that the policy development process itself clarifies internal norms and makes external communication easier. When staff have agreed on where AI is and is not appropriate, explaining that to donors becomes straightforward rather than uncomfortable.

    Keep Humans Visible in Relationship-Critical Touchpoints

    AI can draft content, but relationships require human presence

    The most important tactical implication of the research is that donor-facing touchpoints that carry relationship weight should remain visibly human, even when AI assists with preparation. A thank-you letter that was drafted by AI but reviewed, personalized, and signed by a real person is different from a message that is entirely AI-generated with no human judgment applied.

    This applies especially to major donor stewardship, where the personal relationship is often the primary reason for giving. Relationship officers can use AI to prepare for calls, research donor interests, or draft follow-up notes, but the donor interaction itself should be authentically human. The same principle applies to crisis communications, condolence messages, and any outreach where the emotional weight depends on genuine human presence.
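
One way to operationalize this boundary is a review gate: AI-drafted messages cannot be sent until a named person has edited and approved them. A minimal sketch, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class StewardshipMessage:
    donor_name: str
    body: str
    ai_drafted: bool = False
    reviewed_by: str | None = None  # staff member who personalized and approved it

def can_send(msg: StewardshipMessage) -> bool:
    """Block AI-drafted messages until a named human has reviewed them."""
    return not (msg.ai_drafted and msg.reviewed_by is None)

draft = StewardshipMessage(
    donor_name="J. Okafor",
    body="Thank you for your generous gift...",
    ai_drafted=True,
)
assert not can_send(draft)      # blocked: no human judgment applied yet
draft.reviewed_by = "M. Chen"   # officer edits, personalizes, and signs
assert can_send(draft)
```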

    Frame AI Around Mission Impact, Not Technology

    Donors respond to outcomes, not capabilities

    The most effective AI communications lead with what AI enables, not what it is. "We use AI to identify donors who may be at risk of lapsing, so our team can reach out before the relationship fades" communicates the same thing as "we use AI-powered donor analytics" but in a way that makes the mission relevance immediate and legible.

Organizations that describe AI through the lens of their work (reduced administrative overhead means more money for programs, faster grant research means more funding applications, AI-assisted impact measurement means more accurate reporting to funders) tend to encounter far less donor resistance than those that frame AI as an organizational efficiency or innovation story. The donor question is always "does this help you help more people?" Leading with that answer changes the conversation.
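
To make the lapse-risk example above concrete, here is one minimal recency-and-frequency heuristic. It is a sketch under stated assumptions, not the method any particular donor-analytics product uses; the thresholds are illustrative, and a production model would be trained on the organization's own retention data.

```python
from datetime import date

def lapse_risk(last_gift: date, gifts_last_two_years: int, today: date) -> str:
    """Flag donors for human outreach based on giving recency and frequency.

    Thresholds are illustrative only; calibrate against your own
    retention history before acting on them.
    """
    months_since_gift = (
        (today.year - last_gift.year) * 12 + (today.month - last_gift.month)
    )
    if months_since_gift >= 18:
        return "high"      # likely already lapsed; personal outreach needed
    if months_since_gift >= 12 or gifts_last_two_years <= 1:
        return "elevated"  # fading engagement; reach out before it cools
    return "low"

print(lapse_risk(date(2025, 1, 10), gifts_last_two_years=1, today=date(2026, 3, 16)))
# -> "elevated"
```

The output is a prompt for a human relationship officer to reach out, not an automated message: the model surfaces the at-risk donor, and the person makes the call.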

    Offer Data Control and Opt-Out Options

    Agency reduces resistance to personalization

    For AI-driven personalized communications, giving donors clear control over how their data is used substantially reduces the discomfort that data profiling generates. An explicit preference center where donors can indicate how they want to be communicated with and what types of outreach they prefer, including the ability to opt out of AI-personalized messaging, shows respect for donor autonomy and provides a constructive outlet for privacy concerns.

    Very few donors will actually opt out of personalization when given the choice, but the availability of the option itself signals that the organization is not treating their data as something to be exploited. This is the difference between personalization that donors experience as convenient and personalization that donors experience as invasive.
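
Honoring those preferences ultimately comes down to filtering every AI-personalized send against the preference center. A minimal sketch, assuming a simple opt-out flag (use whatever field your CRM actually exposes):

```python
from dataclasses import dataclass

@dataclass
class DonorPreferences:
    email: str
    allow_ai_personalization: bool = True  # default on, with a visible opt-out

def ai_personalized_outreach(donors: list[DonorPreferences]) -> list[str]:
    """Return only donors who have not opted out of AI-personalized messaging."""
    return [d.email for d in donors if d.allow_ai_personalization]

donors = [
    DonorPreferences("a@example.org"),
    DonorPreferences("b@example.org", allow_ai_personalization=False),  # opted out
]
print(ai_personalized_outreach(donors))  # ['a@example.org']
```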

    What This Means for Your Fundraising Strategy

The research should not produce paralysis. It should produce strategic clarity. Nonprofits that are already using AI for donor analytics, prospect research, personalized communications, or impact measurement (and the vast majority are) need not undo any of that work. They need to bring their communications about that work in line with what donors expect.

    The organizations most exposed to donor trust risk are those at the two extremes: organizations using significant AI in donor-facing contexts without any disclosure, and organizations so concerned about donor perception that they avoid AI entirely and fall behind peers in effectiveness. The optimal position is in the middle: intentional, thoughtful, transparent AI use communicated clearly and consistently.

For organizations developing AI-powered annual fund strategies or donor scoring models, the transparency question should be built into program design from the start rather than addressed as an afterthought. What would you want your donors to know about how you are using their data and AI to inform your outreach? Starting from that question produces better programs and better donor relationships than starting from "how do we avoid this topic?"

The research also suggests a proactive opportunity. Organizations that establish genuine transparency leadership on AI (publishing clear policies, explaining AI governance in annual reports, and acknowledging the ethical questions involved) are positioned to differentiate themselves positively in donor perception. As the sector moves toward broader AI adoption, the organizations that led on transparency rather than trailed on disclosure will have built the credibility that matters when donors eventually compare their options.

    AI Transparency Checklist for Fundraisers

    • Does your organization have a written AI policy accessible to donors and the public?
    • Are all AI-powered chatbots and automated donor communications clearly identified as AI-assisted?
    • Do major donor relationship officers have guidance on when to disclose AI's role in relationship management?
    • Does your privacy policy address AI data use specifically, not just general data collection?
    • Are donor preference centers or opt-out mechanisms available for personalized AI communications?
    • Does your annual report or impact report address AI governance and ethical use?
    • Are your AI disclosure communications segmented appropriately for Boomer versus younger donor audiences?

    Conclusion

    The question "do donors care if you use AI?" has a clear answer from the 2025 research: some do, many are undecided, and the difference between those outcomes depends substantially on how organizations communicate. The 31% figure that has generated concern in the sector does not represent a fixed group of irreconcilable AI opponents. It represents donors whose current concerns are concentrated in specific, addressable areas: undisclosed AI impersonation, opaque data use, and a felt sense that human care is being outsourced to algorithms.

    The 25% who say their response depends on implementation are the most strategically important segment. These donors are waiting to see whether organizations use AI in ways that reflect their values and serve their missions, or use it in ways that feel extractive or disrespectful of the relationship. Clear communication, maintained human presence in relationship-critical moments, and genuine transparency about AI governance are the factors that determine how this group responds.

    The organizations that will navigate this transition best are those that treat AI transparency not as a risk management exercise but as an expression of the same values that make them trustworthy partners for donors in the first place. Honesty about how you work, including the tools you use and the judgments you make about them, is not separate from mission integrity. It is an expression of it.

    Build Donor Trust Through Thoughtful AI Adoption

    One Hundred Nights helps nonprofits design AI strategies that strengthen rather than strain donor relationships. From AI policy development to donor communication frameworks, we help you navigate the trust questions that matter.