How Funders Are Evaluating AI Use in Grant Applications: A 2026 Update
Nearly three out of five grantseekers now use AI when writing grant proposals, yet two-thirds of foundations have not established a formal policy on AI-assisted applications. Understanding where funders actually stand, and how to navigate the gap between adoption and governance, has become essential knowledge for every nonprofit development professional.

The grant writing landscape has shifted beneath everyone's feet. AI tools have moved from experimental novelty to routine practice for nonprofit development teams, yet the philanthropic sector's response has been strikingly uneven. According to research from Candid and DH Leonard Consulting, 58.8% of grantseekers now use AI either regularly or infrequently when writing grant applications. At the same time, 67% of foundations remain undecided on whether to formally accept or reject AI-generated content. This gap between grantseeker behavior and funder policy creates a genuinely uncertain environment for nonprofits trying to do the right thing.
The stakes are significant on both sides. For nonprofits, grant funding is often the difference between sustaining programs and cutting services. For funders, grant proposals are the primary window through which they evaluate organizational capacity, mission alignment, and the authentic connection between a nonprofit and its community. When AI enters that process, it raises real questions about authorship, authenticity, and whether the voice in the proposal actually represents the organization.
This article provides a clear-eyed look at where the funding sector actually stands in 2026, how government and private funders differ in their approaches, what funders are looking for when they do evaluate AI-assisted applications, and how your organization can use AI ethically and effectively while maintaining the trust that makes long-term funder relationships possible. Whether you are using AI extensively in your development work or still approaching it cautiously, understanding the current landscape helps you make better decisions.
This discussion connects closely to broader conversations about getting started with AI in your nonprofit and the growing gap between AI adoption and AI strategy that many organizations are experiencing. Thoughtful AI use in grant writing is not separate from your overall AI approach; it is one of its most visible expressions.
Where Funders Actually Stand: The 2026 Policy Landscape
The most important thing to understand about the current funder landscape on AI is that the majority have not made up their minds. Research from Candid's 2025 Foundation Giving Forecast Survey breaks the picture down with clarity: 23% of foundations will not accept grant applications with AI-generated content, 10% explicitly will accept them, and 67% remain undecided. This is not a sector that has coalesced around a standard; it is a sector still working through fundamental questions about what AI use means and how to evaluate it.
- 23% will NOT accept AI-generated applications. These funders have drawn a clear line and, in some cases, are actively screening for AI-generated content.
- 67% have NOT established a formal policy. The vast majority of funders are in the undecided category, creating ambiguity for applicants trying to do the right thing.
- 10% explicitly accept AI-generated proposals. A small minority have formally indicated openness to AI-assisted applications, often with quality and transparency expectations.
The disclosure landscape is similarly unsettled. Research shows that 90.7% of grantseekers say they have not yet encountered an AI disclosure question on an application. Of those who have seen such a question, 48.3% expressed uncertainty about how to answer it, largely because no one has clearly defined what counts as reportable AI use. Does using AI-assisted grammar checking require disclosure? What about asking an AI to summarize a foundation's publicly available grant guidelines before writing? The definitional problem is real and the sector has not yet solved it.
For nonprofits, this ambiguity has a practical implication: you cannot rely on funders to tell you the rules clearly, because many funders do not know what rules they want to apply. What you can do is adopt internal standards around transparency and quality that will serve you well regardless of how any individual funder's policy eventually evolves. The organizations that build those standards now will be better positioned when funder requirements become more consistent.
Government Funders Have Moved First: NIH and NSF
While private philanthropies are still working through their positions, federal research funders have issued some of the sector's clearest AI policies. These policies offer a preview of where private funders may eventually land, and they set an important benchmark for thinking about authenticity and disclosure.
NIH: Restrictive Approach
Policy Notice NOT-OD-25-132, effective September 2025
- Applications "substantially developed by AI" will not be considered
- AI detection technology is being actively deployed to screen submissions
- Post-award detection of AI use may trigger referral to the Office of Research Integrity
- Individual PI submissions capped at six per year to discourage AI-enabled mass applications
NSF: Transparency Approach
Emphasis on disclosure rather than prohibition
- Encourages researchers to indicate AI use in project descriptions
- Reviewers are prohibited from uploading proposal content to non-approved AI tools
- Focus on protecting applicant intellectual property during the review process
- Balanced approach that acknowledges AI as a tool while emphasizing human authorship
The NIH approach is the more restrictive of the two, taking the position that AI-generated content undermines the fundamental premise that grant applications represent the original ideas and work of the applicants. The NSF approach is more nuanced, treating transparency as the primary obligation rather than prohibition. Both positions share an underlying concern: that AI should not substitute for human intellectual contribution, organizational expertise, or authentic representation of a nonprofit's work and vision.
For nonprofits seeking government grants, these policies require careful attention. More broadly, they signal the direction that stricter private funders are likely to follow. If you apply for government grants in any scientific or research-adjacent field, you need to understand your funder's specific AI policy before beginning any grant writing work that involves AI tools.
Major Private Foundations: What We Know and What Is Still Open
Major private foundations including the Ford Foundation, MacArthur Foundation, Gates Foundation, and others have been vocal about their views on AI's role in society, but their public communication has focused primarily on how they fund AI-related causes, not on whether grant applicants may use AI in their proposals. As of early 2026, no major private foundation had issued a comprehensive publicly available policy specifically governing applicant use of AI in grant writing.
This does not mean foundations are indifferent to the question. The Humanity AI initiative, announced in October 2025 and backed by Ford, MacArthur, Mellon, Packard, and several other major funders representing $500 million in committed funding, signals deep engagement with how AI shapes society and values. These foundations care intensely about AI's impact on equity, democracy, and the communities they serve. It would be surprising if that concern did not eventually translate into how they evaluate grantee AI use.
What Foundations Are Likely to Ask in 2026
Sector analysts predict foundations will increasingly incorporate AI-related due diligence questions that reflect genuine interest in grantee capacity, not gatekeeping.
- Mission alignment: Does your AI use support rather than undermine your mission and community values?
- Responsible implementation: Are human judgment and oversight maintained in your AI-assisted processes?
- Equity considerations: Does your AI use introduce bias, and who benefits or is excluded from your AI-enhanced programs?
- Data stewardship: How are you protecting community data and client privacy in your AI workflows?
- Financial sustainability: Is your AI use operationally embedded and funded in a sustainable way?
- Learning orientation: Are you documenting and sharing what you learn from your AI implementation?
The framing of these anticipated questions is broadly positive. Foundations asking about your AI use are more likely doing genuine organizational due diligence than looking for reasons to reject your proposal. Nonprofits that have thought carefully about how they use AI, why, and what guardrails they maintain will be able to answer these questions in ways that demonstrate organizational maturity and trustworthiness. This is an area where thoughtful preparation serves you well.
This connects directly to the importance of having an AI strategy that aligns with your mission rather than treating AI as a collection of unconnected tools. Funders who ask about AI use are essentially asking whether your organization thinks strategically about technology and its implications.
How Funders Are Using AI on Their Side of the Table
While the conversation often focuses on whether nonprofits can use AI to write proposals, it is worth noting that funders themselves are beginning to deploy AI in their grantmaking processes. Understanding what tools foundations use and what they are looking for helps you understand how your proposal will actually be evaluated.
Grant Guardian
Developed by the Patrick J. McGovern Foundation
Nearly 200 grantmakers including GitLab Foundation and United Way have adopted this tool, which extracts financial data from Form 990s and audited financial statements, generates a scorecard, and flags financial health indicators.
Crucially, it keeps a human in the loop at all times and does not make funding decisions or recommendations; it only surfaces information for program officers to evaluate.
Foundant Technologies
AI-powered features for program officers
Automatically condenses applicant data into quick summaries for program officers, can pre-screen applications for alignment with award requirements, and flags inconsistencies across application materials.
These features are optional and can be toggled off based on foundation policy, reflecting the sector's cautious approach to AI adoption on the funder side.
The adoption of AI on the funder side remains limited but growing. Only 1% of foundations currently use generative AI to screen applicants, though 19% are actively considering it. What this means practically is that the information your organization presents in financial disclosures, Form 990s, and application materials is increasingly being processed and analyzed automatically. Accuracy, consistency across documents, and clear financial health indicators matter more than ever.
There is an irony worth noting: the sector is simultaneously cautious about applicants using AI while beginning to adopt AI tools on the funder side. This asymmetry is likely to resolve over time as both sides of the grantmaking relationship develop clearer norms and expectations. For now, the most important takeaway is that whatever you put in your application is increasingly likely to be processed and compared in ways that were not possible a few years ago.
What Reviewers Actually Notice: Positive and Negative Signals
Program officers who review many applications have developed pattern recognition for AI-generated content. Understanding what raises red flags, and what indicates thoughtful AI use, helps your organization navigate this landscape with integrity.
Red Flags for Reviewers
Signs that AI was used as a substitute for authentic organizational voice
- Generic, formulaic language that could apply to any organization in any field
- Technically answers every question but misses the funder's underlying priorities
- Fabricated citations: AI frequently generates convincing but nonexistent research sources
- Lack of community specificity and authentic connection to the populations served
- Overstated impact claims or capacity assertions not supported by verifiable evidence
- Heavy bullet-point structure with robotic phrasing and abstract ideas
Positive Signals for Reviewers
Signs that AI supported rather than replaced authentic organizational work
- AI used as a drafting and structuring tool, fed with real organizational data and impact outcomes
- Strong alignment with the specific funder's stated priorities, demonstrating human-led research
- Externally verifiable claims backed by public datasets, letters of commitment, or audited outcomes
- Authentic organizational voice maintained throughout, with real community specificity
- AI served as editor and structure-giver, with human expertise driving the substance
- Proactive transparency when AI assistance is noted, without prompting
The fabricated citation problem deserves special attention. AI language models frequently generate what appear to be real research citations, complete with plausible author names, journal titles, and publication years, that do not actually exist. Submitting a grant proposal with invented citations is not just an AI problem; it is a misrepresentation that can permanently damage your relationship with a funder. Every statistic, every research reference, and every citation in your proposal needs to be independently verified before submission, regardless of how it was generated.
The pattern that emerges from funder feedback is consistent: AI used as a sophisticated drafting assistant that helps organize, refine, and improve writing that originates from genuine organizational expertise is generally acceptable, while AI used as a substitute for that expertise is not. The difference is visible to experienced reviewers, and it matters for whether your proposal is competitive.
The Equity Problem Nobody Is Talking About Enough
There is an uncomfortable dynamic developing in the grant writing landscape that deserves serious attention. Larger, better-resourced nonprofits with established development departments are able to invest in AI tools, train staff to use them effectively, and produce high-volume, polished proposals. Smaller, grassroots organizations serving the most marginalized communities often lack the resources, staff capacity, or technical infrastructure to use AI tools thoughtfully, if at all.
The result is a potential widening of the advantage already held by well-resourced organizations. If AI-assisted proposals are perceived as more polished, more complete, and more competitive, organizations that can harness AI effectively will win more grants. Organizations that cannot, whether for reasons of cost, capacity, or infrastructure, will face a harder road. This is a genuine equity concern that funders, infrastructure organizations, and the sector broadly need to address.
The Funder Support Gap
The disconnect between what funders expect and what they provide
Research from the Center for Effective Philanthropy highlights a striking contradiction: foundations are increasingly asking about grantees' technology capacity while simultaneously providing almost no support for developing that capacity.
- Nearly 90% of foundations provide no AI implementation support to grantees
- Only 15% of foundations are actively discussing AI policies and support needs with their grantees
- Only 20% of funders provide grantees with money for technology tools and resources
- Bridgespan and others have called for treating technology as a core operating cost
For nonprofits navigating this reality, the equity dimension of AI use is both a practical concern and a values question. If your organization serves communities that are already marginalized and under-resourced, how you think about technology equity internally, and how you communicate about it with funders, can itself become part of your mission narrative. This is an area where authenticity and mission alignment are inseparable.
Best Practices for Ethical AI Use in Grant Writing
Given the current landscape, what does responsible AI use in grant writing actually look like? The following framework draws on guidance from sector advisors, grant writing professionals, and funder feedback to provide practical direction for nonprofits at any level of AI adoption.
Principle 1: Transparency Without Apology
If a funder asks whether you used AI in your application, answer honestly and without embarrassment. Proactive disclosure, even when not asked, is appropriate when you have reason to believe a funder is particularly sensitive to AI use. The reputational risk of undisclosed AI use being detected later far outweighs any perceived advantage from staying quiet. Build a simple internal log of how AI assisted in each proposal so you can answer process questions if they arise.
Transparency also means being specific rather than vague. "We used an AI tool to help organize our program description and refine our budget narrative" is more useful and credible than a blanket statement. This level of specificity demonstrates that you have thought carefully about your process.
Principle 2: AI as Editor, Not Author
The most sustainable approach to AI in grant writing treats AI as a sophisticated editorial assistant rather than a content generator. This means your development team generates the core substance: the program descriptions drawn from real implementation experience, the outcomes data from your actual evaluation systems, the community context that reflects genuine relationships, and the organizational narrative that comes from your leadership's vision.
AI then helps you organize that content effectively, refine the language for clarity and persuasion, check for consistency across sections, identify gaps in your argument, and match the proposal's structure to what funders have indicated they are looking for. This division of labor produces proposals that are both efficient to create and authentically representative of your work.
Principle 3: Research Cannot Be Delegated
Reading a funder's strategic plan, reviewing their recent grants list, understanding their community focus, and identifying alignment between their priorities and your work are human research tasks that cannot be effectively outsourced to AI. This research forms the foundation of a competitive application. AI can help you organize and articulate what you learn from that research, but it cannot do the learning for you.
AI tools used without genuine funder research produce generic proposals that technically meet all the formal requirements while missing the underlying question every funder is really asking: why are you the right organization for this investment, and why does this funder's mission connect specifically to yours?
Principle 4: Verify Everything
Every statistic, every citation, and every claim in your proposal needs independent verification before submission. AI tools are fluent at generating content that sounds authoritative, including research citations that do not exist and statistics that were never measured. A single fabricated citation can end a funder relationship permanently and damage your organization's reputation across the funding community.
Develop a verification step in your proposal review process specifically focused on checking any data or research generated with AI assistance. This is not optional; it is a baseline quality control requirement for any AI-assisted grant writing process.
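One way to make that verification step systematic is to flag every claim in a draft that typically requires checking before a human reviewer signs off. The sketch below is a minimal illustration, not a complete solution: the regular expressions are assumptions about what verifiable claims look like (percentages, parenthetical citations, dollar figures), and the output is a to-verify list for a human, never an automated pass/fail.

```python
import re

# Patterns that typically signal a claim needing human verification.
# These are illustrative, not exhaustive; tune them to your own drafts.
VERIFY_PATTERNS = {
    "statistic": re.compile(r"\b\d{1,3}(?:\.\d+)?%"),
    "citation": re.compile(r"\([A-Z][A-Za-z-]+(?: et al\.)?,? \d{4}\)"),
    "dollar figure": re.compile(r"\$[\d,]+(?:\.\d+)?(?: ?(?:million|billion))?"),
}

def verification_checklist(draft: str) -> list[tuple[str, str]]:
    """Return (kind, matched text) pairs a human reviewer must verify."""
    findings = []
    for kind, pattern in VERIFY_PATTERNS.items():
        for match in pattern.finditer(draft):
            findings.append((kind, match.group()))
    return findings

draft = (
    "Our program reached 1,240 families, a 32% increase over last year "
    "(Smith et al., 2023), with $1.2 million in documented savings."
)
for kind, text in verification_checklist(draft):
    print(f"VERIFY [{kind}]: {text}")
```

A script like this does not verify anything itself; it simply ensures no statistic or citation slips through the review step unexamined.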
Principle 5: Protect Sensitive Information
Public and free-tier AI tools may use your inputs for model training. Do not enter confidential client data, donor-specific information, sensitive financial details beyond what you would publicly disclose, or proprietary program information into public AI tools. Develop clear guidelines for your development team about what categories of information can and cannot be used in AI-assisted writing processes.
This connects to the broader imperative of having an organizational AI policy before you use AI routinely for grant writing. The majority of nonprofits still lack AI policies, but the risk exposure from grant writing without clear guidelines is real and avoidable.
Preparing for Increased Funder Scrutiny
The current period of policy ambiguity is likely to be temporary. As AI use in grant writing becomes more prevalent and as funder awareness increases, formal policies will follow. The organizations that build good practices now, during the period of ambiguity, will be far better positioned when scrutiny increases.
Think of the current moment as an opportunity to define your organization's approach on your own terms rather than being reactive to funder requirements you did not anticipate. Nonprofits that can articulate clearly how they use AI, what human oversight they maintain, and why their approach serves their mission and community will have a compelling story to tell any funder who asks.
Build Your AI Grant Writing Documentation
Create internal records that demonstrate thoughtful, ethical AI use
- Document which aspects of each proposal involved AI assistance (drafting, editing, structure, research summarization)
- Record how you verified statistics, citations, and factual claims in AI-assisted sections
- Note which team members reviewed AI-generated drafts and what changes they made
- Track what information types were excluded from AI tools due to sensitivity
- Develop a brief "AI use disclosure" template you can add to applications that ask about it
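The documentation steps above can be kept as a simple structured record per proposal. The sketch below is one hypothetical shape for such a record; the field names, the example proposal, and the funder name are all illustrative assumptions, and the generated disclosure sentence is a starting template to adapt, not standard sector language.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AIUseRecord:
    """One proposal's AI-assistance log. Field names are illustrative."""
    proposal: str
    funder: str
    ai_assisted_tasks: list  # e.g. "editing", "structure", "research summarization"
    excluded_data: list      # information types kept out of AI tools
    verified_by: str         # team member who checked statistics and citations

    def disclosure(self) -> str:
        """A short, specific disclosure sentence for applications that ask."""
        tasks = ", ".join(self.ai_assisted_tasks)
        return (f"AI tools assisted with {tasks}; all content was reviewed, "
                f"and all data and citations verified, by {self.verified_by}.")

# Hypothetical example entry.
record = AIUseRecord(
    proposal="Youth Literacy Program 2026",
    funder="Example Community Foundation",
    ai_assisted_tasks=["editing", "structure"],
    excluded_data=["client records", "donor information"],
    verified_by="Development Director",
)
print(record.disclosure())
print(json.dumps(asdict(record), indent=2))
```

Keeping records in a structured form like this means that when a funder's application asks an AI disclosure question, the specific, credible answer recommended above is already written.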
Your organization's approach to AI in grant writing should also connect to your broader organizational AI strategy. Funders who ask about AI are often asking whether you have thought systematically about technology and its implications, not just whether you used a particular tool. Being able to describe your approach in the context of a coherent organizational perspective on AI is much stronger than treating grant writing AI use as an isolated question.
Consider reviewing your current practices against the nonprofit AI maturity curve to understand where your organization stands and what next steps would strengthen your position. Organizations at higher levels of AI maturity will find funder questions about AI capacity easier to answer with confidence.
Conclusion
The current AI and grant writing landscape is genuinely uncertain, but it is not unknowable. The fundamental principles that make grant proposals competitive have not changed: authentic organizational voice, specific community connection, clear alignment with funder priorities, verifiable impact data, and honest representation of organizational capacity. AI can support all of these things when used thoughtfully, and it can undermine all of them when used carelessly.
The organizations that will navigate this landscape most successfully are those that treat AI as a serious tool requiring thoughtful governance rather than a shortcut to be deployed without reflection. That means developing an AI policy that includes grant writing guidance, training development staff on what responsible AI use looks like, building verification steps into the review process, and being prepared to discuss your approach honestly with funders who ask.
Funder policies will become clearer over time. When they do, organizations that built good practices during the period of ambiguity will have a genuine advantage. The question is not whether to use AI in your development work but how to use it in ways that reflect your organizational values, serve your mission, and maintain the trust-based relationships that make long-term funding partnerships possible.
Ready to Build a Thoughtful AI Strategy?
Developing clear guidelines for AI use in your development work is part of a broader organizational AI strategy. We can help you build an approach that serves your mission and meets evolving funder expectations.
