How Funders Are Using AI to Evaluate Grant Applications (What You Need to Know)
The philanthropic landscape is transforming: 81% of foundations now report using AI tools in some capacity, and 189 philanthropies have adopted AI-powered evaluation systems for grant applications. While only 30% of foundations have formal AI policies, individual staff members across the sector are already using AI for research, application review, and funding decisions. For nonprofits seeking funding, this shift raises crucial questions: How are funders actually using AI in grantmaking? Will AI-generated proposals be accepted or rejected? How can you adapt your grant strategy to this new reality? This comprehensive guide explores what's happening behind the scenes in foundation offices, what funders themselves are uncertain about, and how nonprofits can navigate this evolving landscape while maintaining authenticity and competitive advantage.

The phone call came as a surprise to Maria, a grants manager at a mid-sized education nonprofit. Her organization had been a long-time grantee of a regional foundation, submitting successful applications for the past seven years. But this year, the foundation's program officer wanted to have an unusual conversation: "We want you to know that we're starting to use AI tools in our review process. We're not entirely sure what that means for how you should write proposals, but we wanted to be transparent." The program officer sounded uncertain—clearly, the foundation was adopting AI before fully understanding its implications.
This conversation is happening across the nonprofit sector as foundations rush to adopt AI capabilities without clear frameworks for how to use them ethically, effectively, or equitably. The statistics reveal rapid adoption: according to recent surveys, 81% of foundations report some degree of AI usage, with the most common applications being research activities, writing and summarizing reports, and—increasingly—evaluating grant applications. Tools like Grant Guardian, a free AI-powered evaluation system, have been adopted by 189 philanthropies including notable names like United Way and GitLab Foundation. The Missouri Foundation for Health used AI evaluation tools for "more effective, efficient, and equity-focused application review processes."
Yet despite this widespread adoption, formal governance is lagging dramatically: only 30% of foundations have AI policies in place, and just 9% have both an AI policy and an AI advisory committee. This means the majority of foundations using AI in grantmaking are operating without clear guidelines, ethical frameworks, or documented processes. Program officers are experimenting individually, trying to figure out what works, and making decisions about AI use that directly impact nonprofit funding opportunities—often without their organizations having decided what's appropriate.
For nonprofits, this creates both opportunity and uncertainty. On one hand, AI evaluation tools might democratize access to funding by reducing bias and enabling foundations to review more applications thoroughly. On the other hand, these same tools could introduce new forms of bias, disadvantage organizations that lack AI literacy, or favor proposals that game algorithmic evaluation criteria. Perhaps most importantly, there's the question of authenticity: 67% of funders remain undecided about whether to accept AI-generated proposals, only 10% have explicitly said they will, and 23% have said they won't, so the guidance nonprofits need most remains unclear.
This article cuts through the uncertainty to provide concrete intelligence about how funders are actually using AI in grantmaking, what concerns and limitations they're experiencing, and how nonprofits can adapt their grant strategies without compromising authenticity or playing manipulative games with algorithmic systems. We'll explore both the opportunities and risks of this transition, examine what funders are learning as they experiment, and provide practical guidance for nonprofits navigating this new landscape. Most importantly, we'll address the ethical dimensions—because how philanthropy adopts AI will shape equity and access in the sector for years to come, and nonprofits have legitimate stakes in ensuring that transformation benefits mission-driven organizations rather than simply advantages those with the most technical sophistication.
How Funders Are Actually Using AI in Grantmaking
Understanding what funders are doing with AI requires looking beyond the headlines about AI adoption to examine specific applications. Foundation staff are using AI across the grantmaking lifecycle in ways that directly affect how they evaluate applications, assess organizational capacity, and make funding decisions. Here are the primary use cases that nonprofits should understand.
Initial Screening and Application Triage
Many foundations receive far more applications than program staff can thoroughly review. Previously, this meant some applications received only cursory attention or were eliminated based on surface-level factors. AI is increasingly being used for initial screening—reading applications, identifying those that clearly don't meet basic eligibility criteria, flagging incomplete submissions, and categorizing applications by program area, geographic focus, or funding type. This allows program officers to focus their attention on applications that merit deep review.
What AI is evaluating: AI screening tools analyze whether applications address the required questions, include necessary attachments, provide requested financial information, and demonstrate basic organizational capacity. Some systems flag applications with inconsistencies—for example, if budget narratives don't align with budget numbers, or if claimed outcomes seem implausible given proposed activities. These are objective, rules-based evaluations that AI can perform consistently.
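To make the screening step concrete, here is a minimal sketch of the kind of rules-based consistency checks described above. The field names, application schema, and tolerance are illustrative assumptions, not the design of any actual screening tool.

```python
# Hypothetical rules-based screening checks: completeness, required
# attachments, and budget-narrative consistency. All field names and
# thresholds are illustrative assumptions.

def screen_application(app: dict) -> list[str]:
    """Return a list of flags; an empty list means the application passes."""
    flags = []

    # Completeness: every required question needs a non-empty answer.
    for q in app.get("required_questions", []):
        if not app.get("answers", {}).get(q, "").strip():
            flags.append(f"missing answer: {q}")

    # Required attachments must be present.
    for doc in ("budget", "form_990"):
        if doc not in app.get("attachments", []):
            flags.append(f"missing attachment: {doc}")

    # Internal consistency: the total stated in the budget narrative
    # should match the sum of budget line items (within rounding).
    line_total = sum(app.get("budget_lines", {}).values())
    narrative_total = app.get("narrative_budget_total", line_total)
    if abs(line_total - narrative_total) > 1.0:
        flags.append(
            f"budget mismatch: lines sum to {line_total:,.0f}, "
            f"narrative says {narrative_total:,.0f}"
        )

    return flags


application = {
    "required_questions": ["need", "outcomes"],
    "answers": {"need": "After-school tutoring gap...", "outcomes": ""},
    "attachments": ["budget"],
    "budget_lines": {"staff": 60000, "supplies": 8000},
    "narrative_budget_total": 75000,
}
print(screen_application(application))
```

Every check here is mechanical, which is exactly why AI handles it consistently, and exactly why it cannot judge whether an apparent inconsistency has a reasonable explanation.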
What this means for nonprofits: Basic application quality matters more than ever. Applications that are incomplete, internally inconsistent, or poorly formatted are more likely to be filtered out by AI screening before human reviewers see them. However, this also means that applications that meet basic quality standards are more likely to receive genuine human attention rather than getting lost in overwhelming volume. The bar for "good enough to advance" may be clearer and more consistent than when individual program officers made subjective decisions about what to review carefully.
The nuance: While AI can identify technical deficiencies, it can't recognize potential in unconventional proposals or understand context that makes apparent inconsistencies actually reasonable. Foundations using AI screening responsibly ensure that organizations can appeal automated rejections and that program officers can override AI recommendations when warranted. Nonprofits should pay attention to whether funders disclose their use of automated screening and provide mechanisms for human review when needed.
Financial Analysis and Organizational Assessment
Tools like Grant Guardian have been designed specifically to analyze organizational financial health and capacity. These systems extract financial data from Form 990s or audited financial statements, analyze revenue trends and stability, calculate financial health metrics (like months of operating reserves, revenue concentration, administrative expense ratios), identify financial red flags or concerning trends, and generate scorecards based on criteria the foundation defines as important. This provides program officers with objective financial analysis that would previously have required dedicated finance expertise or external consultants.
What AI is evaluating: Financial sustainability indicators, consistency between financial statements and narrative descriptions, trends over multiple years that suggest growth or decline, reliance on single funding sources (a risk factor), and comparative financial health against sector benchmarks. United Way of Greater Philadelphia and Southern New Jersey, for example, uses AI analysis to ensure grantees have the financial capacity to successfully implement proposed programs and continue operations beyond the grant period.
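The indicators named above are simple arithmetic on figures available from a Form 990. Here is a back-of-envelope sketch of how they might be computed; the example numbers are invented and these are not Grant Guardian's actual formulas or scoring criteria.

```python
# Illustrative calculations of common financial health indicators.
# Input figures are invented; formulas follow the plain definitions
# of each metric, not any specific tool's methodology.

def months_of_reserves(unrestricted_net_assets: float,
                       annual_expenses: float) -> float:
    """Operating reserves expressed in months of average spending."""
    return unrestricted_net_assets / (annual_expenses / 12)

def revenue_concentration(revenue_by_source: dict[str, float]) -> float:
    """Share of total revenue coming from the single largest source."""
    total = sum(revenue_by_source.values())
    return max(revenue_by_source.values()) / total

def admin_expense_ratio(admin_expenses: float,
                        total_expenses: float) -> float:
    """Management-and-general spending as a share of total expenses."""
    return admin_expenses / total_expenses


revenue = {"foundation_a": 400_000, "individuals": 150_000, "events": 50_000}
reserves = months_of_reserves(180_000, 540_000)    # 4.0 months
top_share = revenue_concentration(revenue)         # 400k / 600k ≈ 67%
admin = admin_expense_ratio(81_000, 540_000)       # 15%

print(f"reserves: {reserves:.1f} months, "
      f"top-source share: {top_share:.0%}, admin ratio: {admin:.0%}")
```

In this invented example, the two-thirds reliance on a single foundation is the number most likely to be flagged as a risk factor, which is why the article advises addressing revenue concentration proactively in the narrative.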
What this means for nonprofits: Financial transparency and consistency are critical. Organizations should ensure that their Form 990s accurately reflect their financial position, that narrative descriptions of financial health align with actual financial statements, and that they can explain concerning trends or unusual financial circumstances. Rather than viewing this as threatening, financially healthy organizations should recognize that objective financial analysis can work in their favor—demonstrating capacity that might not be obvious from organizational reputation alone.
The nuance: AI financial analysis can disadvantage young organizations without long financial track records, nonprofits experiencing legitimate but temporary financial transitions, and organizations serving marginalized communities where traditional financial metrics may not reflect organizational effectiveness. Responsible funders using these tools should ensure that AI-generated financial assessments are contextualized by program officers who understand the nonprofit's circumstances and sector realities.
Proposal Analysis and Scoring
Some foundations are experimenting with AI tools that analyze proposal narratives for clarity, coherence, alignment with funding priorities, evidence of community engagement, realistic goals and outcomes, and appropriate budget-to-activities alignment. The Missouri Foundation for Health's use of AI for "more effective, efficient, and equity-focused application review" suggests they're using AI to ensure consistent evaluation standards and potentially identify promising proposals from organizations that might not have professional grant writers.
What AI is evaluating: Whether proposals clearly articulate problems, proposed solutions, and expected outcomes; demonstrate understanding of the communities they serve; provide evidence or logic connecting activities to claimed outcomes; and align with the foundation's stated funding priorities and theory of change. AI can identify proposals that use foundation language and priorities, demonstrate cultural competency, or show innovation—at least at a surface level.
What this means for nonprofits: Clarity and specificity matter. Vague proposals with generic language may score poorly in AI analysis even if the underlying programmatic work is strong. Conversely, proposals that clearly articulate community needs, use specific examples, demonstrate understanding of context, and explicitly connect proposed activities to expected outcomes are more likely to be flagged as strong by AI systems. This actually advantages grassroots organizations with authentic community connections over those using grant-writing templates with impressive but empty language.
The concern: AI analysis of proposal quality raises serious questions about bias and fairness. AI systems trained on successful historical proposals may reinforce existing patterns—favoring organizations, writing styles, or approaches that have succeeded in the past, which could disadvantage innovative approaches or organizations led by people from marginalized communities whose communication styles differ from dominant norms. This is perhaps the most ethically fraught application of AI in grantmaking, and nonprofits should pay attention to whether funders acknowledge these risks and how they're mitigating them.
Research and Due Diligence
Foundation staff report that the most common use of AI is for research activities—and this includes research about applicant organizations. Program officers are using AI tools to quickly scan news articles and public information about applicant organizations and their leaders, synthesize information from organization websites and social media, identify connections between organizations and other funded entities, and generate summaries of an organization's history and public reputation. This dramatically accelerates the due diligence process that previously required hours of manual research.
What AI is evaluating: Public reputation and media coverage (both positive and negative), consistency between what an organization claims in their application and what's visible publicly, leadership stability and transitions, partnership networks and collaborative relationships, and any controversies, legal issues, or concerns that appear in public records or news coverage.
What this means for nonprofits: Your digital footprint and public presence matter more than ever. Organizations should ensure that their website clearly communicates their mission, programs, and impact; social media presence is professional and aligned with organizational values; public-facing information is consistent with what appears in grant applications; leadership and staff information is current and accurate; and they're aware of what appears when someone searches for their organization online. Conversely, organizations should address any past controversies or negative coverage proactively in applications rather than hoping foundations won't discover them.
The risk: AI research tools can surface information out of context, give undue weight to negative coverage (which is often more prominent than positive work), and miss important context about organizational evolution or lessons learned from past challenges. Organizations that have navigated difficulties successfully and emerged stronger may be unfairly penalized if AI research flags past problems without recognizing growth and improvement.
Peer Review and Consensus Building
Some foundations use AI to synthesize feedback from multiple reviewers, identify consensus and disagreement across reviewers, highlight applications where reviewers had significantly different assessments, and generate comparative analyses across similar proposals. A Spanish foundation using AI tools for grant screening describes it as making "peer-review more efficient"—though this claim has generated controversy among researchers who see it as potentially undermining the human judgment that peer review requires.
What AI is doing: Analyzing reviewer comments to extract common themes, quantifying areas of agreement and disagreement, identifying applications that scored consistently high or low across reviewers versus those with divergent assessments, and surfacing outlier opinions that might otherwise be overlooked in consensus-driven processes.
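The score-synthesis part of this can be sketched simply. The following is a hypothetical illustration of flagging applications where reviewers diverged, using the standard deviation of scores; the threshold and the choice of measure are assumptions, not how any particular foundation's system works.

```python
# Illustrative consensus/divergence summary over reviewer scores.
# The divergence threshold is an invented assumption.
from statistics import mean, stdev

def summarize_scores(scores_by_app: dict[str, list[float]],
                     divergence_threshold: float = 1.5) -> dict:
    summary = {}
    for app_id, scores in scores_by_app.items():
        spread = stdev(scores) if len(scores) > 1 else 0.0
        summary[app_id] = {
            "mean": mean(scores),
            "spread": spread,
            # High spread marks applications reviewers disagreed on,
            # which merit discussion rather than averaging away.
            "needs_discussion": spread >= divergence_threshold,
        }
    return summary


scores = {
    "app-101": [4, 4, 5],   # broad agreement, strong
    "app-102": [2, 2, 2],   # broad agreement, weak
    "app-103": [1, 5, 4],   # sharp disagreement
}
for app_id, s in summarize_scores(scores).items():
    print(app_id, s)
```

Note what the averaging hides: app-103's mean is ordinary, but the one-versus-five split is precisely the pattern worth surfacing, since a dissenting high score sometimes signals unconventional potential that consensus would bury.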
What this means for nonprofits: The impact is indirect but potentially significant. If AI synthesis of reviewer feedback helps foundations identify promising but unconventional proposals that might otherwise be dismissed, this could benefit innovative approaches. However, if AI systems prioritize consensus over dissenting voices that recognize potential others miss, it could disadvantage truly innovative work that doesn't fit established patterns.
The debate: The use of AI in peer review processes is generating significant debate in both academic and philanthropic contexts. The U.S. National Institutes of Health banned the use of AI tools in grant review processes in 2023, partly due to concerns about confidentiality but also due to fundamental questions about whether AI can meaningfully synthesize expert judgment. Nonprofits should be aware that this is a contested application of AI, and funders themselves are uncertain about its appropriateness.
What Funders Are Concerned About (And Still Figuring Out)
While headlines focus on AI adoption, foundation leaders are wrestling with significant uncertainties about how to use AI responsibly in grantmaking. Understanding these concerns helps nonprofits anticipate how funder AI practices might evolve and where advocacy for equitable practices might be most effective.
Bias, Equity, and Fairness Concerns
Foundation leaders are acutely aware that AI systems can perpetuate or amplify existing biases in philanthropic funding patterns. Research shows that organizations using AI trained on historical data tend to favor patterns similar to past success—which in philanthropy means potentially reinforcing funding patterns that have historically disadvantaged organizations led by people of color, serving marginalized communities, or using unconventional approaches. One analysis notes that "AI systems trained on biased data could favor specific individuals, groups, and institutions based on previous limitations—for instance, inadvertently screening for applicants who have previously received grants, which may perpetuate unfair funding outcomes."
This concern is particularly acute for foundations committed to equity and social justice. The Missouri Foundation for Health explicitly sought to use AI for "equity-focused application review," suggesting that thoughtful implementation might actually reduce bias—but this requires intentional design, constant monitoring, and willingness to override AI recommendations when they produce inequitable outcomes. Foundations are uncertain about how to achieve equity goals when using AI trained on inherently biased historical data.
What nonprofits should watch for: Funders who are transparent about their AI use and explicitly address equity concerns in their implementation are more likely to be using AI responsibly. Red flags include foundations that adopt AI evaluation tools without acknowledging bias risks, that don't provide mechanisms for human override of AI recommendations, or that can't articulate how they're monitoring for inequitable outcomes. Nonprofits—particularly those led by marginalized communities—have legitimate reasons to ask funders how they're ensuring AI doesn't disadvantage organizations that already face barriers to funding.
Confidentiality and Data Security
Grant applications often contain sensitive information: financial details, proprietary program models, information about vulnerable populations served, strategic plans not yet public, and sometimes information about individuals receiving services. When foundations use AI tools—particularly cloud-based generative AI systems—to analyze applications, that information may be exposed to third-party systems. The National Institutes of Health's 2023 ban on AI in grant review was partly motivated by confidentiality concerns: if reviewers use ChatGPT or similar tools to analyze proposals, that information goes to OpenAI's servers and could potentially be used for training future AI models.
Foundations are uncertain about how to balance the efficiency of AI tools with their obligations to protect confidential information applicants share in trust. Some are developing internal AI systems that don't expose data to third parties, but most foundations lack the technical capacity for this approach. Others are using AI only for information already public (like financial data from Form 990s) rather than for proprietary proposal content. Many are simply proceeding without clear protocols, creating potential confidentiality risks.
What nonprofits should watch for: Nonprofits have the right to ask funders about data protection practices when AI is used in application review. Responsible funders should be able to explain whether they use third-party AI tools to analyze confidential applications, how they protect sensitive information, and whether applicant data is used for AI training. Organizations working with particularly vulnerable populations or with proprietary approaches should be especially vigilant about these questions and consider whether to submit certain sensitive information when funders can't ensure it won't be exposed to AI training datasets.
The AI-Generated Proposal Question
Perhaps the question generating the most uncertainty among funders is whether to accept grant proposals with AI-generated content. A recent Candid survey found that 67% of funders are still undecided on this question, 10% indicated they would accept AI-generated applications, and 23% said they would not. In other words, two-thirds of funders have no established position, leaving nonprofits to guess.
The uncertainty reflects deeper questions about what funders are actually evaluating in proposals. If a foundation sees proposals primarily as sources of information about organizational capacity and proposed activities, AI assistance in writing may not matter—what matters is whether the information is accurate and the proposed work is sound. But if foundations view the proposal-writing process itself as revealing organizational capacity, strategic thinking, and communication skills, then AI-generated content undermines that evaluation. Many funders haven't clarified which perspective they hold, leading to unclear and inconsistent guidance.
There's also the practical challenge of detection: foundations can't reliably determine whether specific proposals used AI assistance. Detection tools are unreliable and produce high false-positive rates. This means that even foundations philosophically opposed to AI-generated proposals lack practical mechanisms to enforce such policies. Some program officers privately acknowledge that they assume nonprofits are already using AI and have accepted this reality even without official foundation policy.
What nonprofits should do: In the absence of clear funder guidance, the safest approach is using AI as a tool that augments human expertise rather than replaces it. Use AI to generate initial drafts, but ensure final proposals reflect genuine organizational knowledge, mission understanding, and strategic thinking. When in doubt, ask program officers directly about their foundation's stance—many will appreciate the transparency and provide guidance even if formal policies don't exist. Most importantly, ensure that proposals, regardless of how they were drafted, accurately represent your organization's work and capacity. The ethics concern isn't whether AI touched the document, but whether the final proposal is truthful and represents genuine organizational capabilities.
Balancing Efficiency with Human Judgment
Foundations are attracted to AI because it promises to help them review more applications more thoroughly, identify patterns across large applicant pools, and allocate staff time more effectively. But there's genuine uncertainty about when AI assistance enhances human judgment versus when it substitutes for the nuanced evaluation that requires lived experience, community knowledge, and contextual understanding. As one analysis notes, while AI may help "quickly parse through applications" and complete reviews "in half the time," this efficiency comes with trade-offs that foundations are still assessing.
Program officers often have deep knowledge of the communities they fund, understanding of local context that isn't visible in applications, and relationships with organizations that inform their assessments. AI can't replicate this contextual knowledge. The challenge is determining where AI genuinely adds value—by handling objective analysis that frees program officers for relationship-based evaluation—versus where it creates false confidence in algorithmic assessment that lacks crucial context.
What nonprofits should hope for: The ideal scenario is foundations using AI for objective, rules-based analysis (financial capacity, completeness, consistency) while preserving human judgment for subjective evaluations (organizational fit, community understanding, innovation potential). Nonprofits should be concerned when foundations appear to be using AI to make final funding decisions or when program officer relationships become less important to funding outcomes. The best funder-nonprofit relationships have always involved mutual understanding built over time—AI should support this rather than replace it.
Adapting Your Grant Strategy for an AI-Augmented Funding Landscape
Understanding how funders use AI is only valuable if nonprofits know how to adapt their strategies accordingly. The goal isn't to game algorithmic systems or compromise authenticity—it's to ensure that your genuine organizational strengths are visible and compelling whether evaluated by human program officers or AI tools. Here's how to adapt your grant strategy for this evolving landscape.
Elevate Clarity, Specificity, and Consistency
AI evaluation tools excel at identifying clarity and consistency—or the lack thereof. Proposals that clearly articulate problems, solutions, and expected outcomes; use specific examples and concrete details; maintain consistency between narrative and budget; and demonstrate logical connections between activities and goals will score well in both AI and human evaluation. The advantage is that these are also characteristics of genuinely strong proposals.
Practical applications: Replace vague language like "we will serve the community" with specifics like "we will provide after-school tutoring to 75 students at Jefferson Middle School." Instead of generic claims like "our program is effective," provide specific outcomes: "In 2025, 83% of program participants improved their reading level by at least one grade." Ensure your budget narrative explains every budget line item and that numbers in your budget match numbers in your narrative. Use consistent terminology throughout the proposal—if you call something "youth leadership development" in one section, don't call it "teen empowerment programming" elsewhere.
Review your proposal with the question: "If someone with no prior knowledge of our organization read this, would they clearly understand what we're proposing and why it matters?" AI has no prior knowledge or context—it evaluates purely based on what's written. But so do many human reviewers, particularly for first-time applicants. Clarity and specificity benefit you regardless of who's evaluating.
Strengthen Your Financial Presentation
With 189 foundations using financial analysis tools like Grant Guardian, presenting clear, healthy financials is more important than ever. This doesn't mean your organization needs perfect finances—it means you need to explain your financial situation clearly and address potential concerns proactively. AI financial analysis tools look for specific indicators: months of operating reserves, revenue concentration, administrative expense ratios, year-over-year trends, and consistency between financial statements and narrative descriptions.
Practical applications: Ensure your Form 990 accurately reflects your financial position and is filed on time. If your financials show concerning trends (declining revenue, low reserves, heavy dependence on single funding sources), address these directly in your proposal with context and mitigation strategies. For example: "Our operating reserves decreased in 2025 due to a one-time facility renovation, but they're rebuilding as planned and we expect to reach 4 months of reserves by end of 2026." Don't make reviewers guess why financial indicators look concerning—explain the context.
Consider having a financial professional review your financial presentation before submitting major proposals, particularly if you're applying to foundations you know use AI financial analysis tools. Small improvements in how you present and explain your finances can significantly impact how both AI and human reviewers assess organizational capacity.
Audit Your Digital Presence
If funders are using AI for research and due diligence—scanning your website, social media, news coverage, and public records—you need to know what they'll find. Conduct the same search a program officer would: Google your organization, review your website as a first-time visitor, check your social media profiles, search news coverage, and review your publicly available financial documents (Form 990, GuideStar profile). Look for inconsistencies, outdated information, or anything that might raise questions.
Practical applications: Ensure your website clearly communicates your mission, programs, and current priorities—and that this aligns with what you're saying in grant proposals. Update leadership and staff information regularly. If there have been leadership transitions, organizational pivots, or past controversies, ensure your public communications address these with appropriate context. Make sure your social media presence is professional and aligned with organizational values—or at least that personal accounts of staff and board members are separate from organizational accounts.
If you discover concerning information that might be found in AI research (negative news coverage, public complaints, financial issues reported elsewhere), consider addressing it proactively in proposals: "You may be aware that our organization experienced challenges in 2024 when [situation]. We've since [what you've done to address it] and are now [current positive status]." Proactive transparency is almost always better than leaving reviewers (AI or human) to interpret concerning information without context.
Use AI as a Tool, Not a Replacement
Given that 67% of funders remain undecided about AI-generated proposals and 23% have explicitly said they won't accept them, the strategic approach is using AI as an augmentation tool while maintaining authentic voice and genuine organizational knowledge. Use AI to overcome blank-page syndrome, generate initial drafts that you then heavily revise, research foundation priorities and language, identify gaps in your logic or narrative, and improve clarity through multiple revisions. But ensure the final product reflects genuine organizational understanding, strategic thinking, and mission commitment.
Practical applications: Try this workflow: Have AI generate an initial draft based on an outline you provide; review and revise extensively, replacing generic language with specific organizational examples; have a colleague who knows your work well review it—if they say "this doesn't sound like us," revise further; ensure every claim in the proposal is accurate and evidence-based, even if AI suggested impressive-sounding language; and run the final version through your normal review process as if AI hadn't been involved.
The ethical line isn't whether AI touched the document—it's whether the final proposal accurately represents your organization's work, capacity, and approach. A proposal drafted by AI but extensively revised to reflect genuine organizational knowledge is more ethical than a proposal written entirely by humans that exaggerates capacity or misrepresents programs. Focus on truthfulness and authenticity, not on whether AI was involved in the process.
Maintain Human Relationships with Program Officers
As foundations adopt AI evaluation tools, there's a risk that program officer relationships become less important—that funding becomes more algorithmic and less relational. The counter-strategy is intentionally strengthening relationships so that program officers become advocates for your work even when AI systems might not fully recognize your strengths. Remember that even foundations using AI extensively still make final decisions through human judgment, and program officers can override AI recommendations when they have strong relationships with organizations and confidence in their work.
Practical applications: Before submitting proposals, reach out to program officers for guidance on priorities and fit. Share your work's impact through regular updates, not just when applying for funding. Invite program officers to see your programs in action—virtual or in-person site visits build understanding that AI can't replicate. When you receive funding, provide excellent reporting that demonstrates impact and builds confidence. Treat program officers as partners in your mission rather than gatekeepers to funding.
Strong relationships matter most when AI evaluation produces ambiguous or negative results. A program officer who knows your work firsthand, has seen your impact, and trusts your organizational capacity will advocate for funding even if AI scoring is lukewarm. Conversely, organizations without relationships are more likely to be judged primarily on AI evaluation. In an increasingly algorithmic funding landscape, human relationships become more valuable, not less.
Ethical Considerations and Advocacy Opportunities
Beyond adapting individual grant strategies, nonprofits have collective interests in how AI transforms philanthropy. The choices foundations make about AI adoption will shape equity, access, and power dynamics in the sector for years to come. Nonprofits aren't passive recipients of these changes—there are opportunities for advocacy and influence over how AI is used in grantmaking.
Demanding Transparency and Accountability
Nonprofits have the right to know when AI is being used to evaluate their applications and to understand how it's being used. Currently, most foundations using AI don't disclose this to applicants, leaving organizations to guess about evaluation criteria and processes. Transparency serves both practical and ethical purposes: it allows organizations to adapt their strategies appropriately, enables accountability when AI systems produce biased outcomes, and respects the relationship between funders and grantees as partners rather than treating nonprofits as subjects of algorithmic evaluation they can't see or understand.
What nonprofits can advocate for: Foundations should disclose in application guidelines when AI tools are used in evaluation and what role AI plays in funding decisions. Organizations should be able to request human review if they believe AI evaluation was flawed or failed to recognize important context. Foundations should publish information about how they're monitoring for bias and what they're learning about AI's impact on funding equity. When foundations reject applications, applicants should be able to understand whether AI played a role in that decision.
Individual nonprofits can ask program officers directly: "Does your foundation use AI in application review? If so, how?" Regional associations of nonprofits can advocate collectively for disclosure policies. National nonprofit networks can elevate transparency as an equity issue that affects whether AI reinforces or disrupts existing disparities in philanthropic funding.
Monitoring for Equity Impacts
The most serious concern about AI in grantmaking is that it could reinforce existing funding disparities—favoring organizations that have historically received funding, perpetuating bias against organizations led by people from marginalized communities, or advantaging those with resources to hire professional grant writers over grassroots organizations with authentic community connections but less polished applications. These aren't hypothetical concerns—they're documented patterns in other contexts where AI has been applied to consequential decisions.
Foundations committed to equity should be monitoring whether their AI implementation is producing equitable outcomes: Are funding patterns for organizations led by people of color changing after AI adoption? Are first-time applicants as successful as they were before? Are smaller organizations without grant-writing capacity competing effectively? Are innovative approaches that don't fit established patterns being recognized? Foundations should be asking these questions—but nonprofits can and should ask them too.
What nonprofits can advocate for: Foundations should publish data about how AI implementation is affecting funding equity across different organization types, sizes, and leadership demographics. Industry associations should develop standards for equity monitoring when AI is used in grantmaking. Foundations using AI should partner with organizations led by marginalized communities to co-design implementation approaches that center equity rather than treating it as an afterthought. When patterns of inequity emerge, foundations should be transparent about them and willing to modify or discontinue AI use if it's producing biased outcomes.
Shaping How AI Is Used, Not Just Adapting to It
How nonprofits can influence foundation AI practices during this formative period
With 70% of foundations lacking AI policies and program officers experimenting individually, there's an unusual window for influence. Foundation AI practices aren't yet locked in—they're still being figured out. Nonprofits that engage program officers in constructive conversations about AI can actually shape how it's used. Program officers are often uncertain and welcome thoughtful perspectives from the organizations they fund about what would be helpful versus concerning.
Productive conversations to have: Share with program officers when application processes are unclear or seem to prioritize form over substance (which AI screening might worsen). Discuss concerns about whether AI can recognize innovative approaches that don't fit established patterns. Offer perspectives on how AI might inadvertently disadvantage organizations like yours. Ask thoughtful questions about how foundations plan to ensure AI enhances rather than replaces relationship-based grantmaking. Share what would actually be helpful—for example, many nonprofits would welcome AI tools that help them write stronger applications more than AI tools that screen them out algorithmically.
The nonprofit sector has more power in this transition than may be obvious. Foundations genuinely want to get AI right, and many are uncertain how to do so. Thoughtful, constructive engagement from nonprofit leaders can influence foundation practices during this formative period in ways that may not be possible once practices become entrenched. This isn't about resisting AI—it's about shaping how it's used so that it genuinely serves both foundations and nonprofits rather than creating new barriers or inequities.
Navigating Transformation with Authenticity and Advocacy
The philanthropic sector's adoption of AI is neither inherently good nor bad—it's consequential. How foundations choose to use AI in grantmaking will shape which organizations receive funding, whether innovative approaches are recognized, and whether existing disparities are reinforced or disrupted. With 81% of foundations already using AI in some capacity but only 30% having formal policies, we're in a critical period where practices are being established that will influence the sector for years to come.
For individual nonprofits, the strategic imperative is adaptation without compromising authenticity. Strengthen the clarity, consistency, and specificity of your proposals—qualities that benefit evaluation whether by AI or humans. Improve your financial presentation and address potential concerns proactively. Audit your digital presence to ensure what funders find through AI research accurately represents your organization. Use AI as a tool that augments your grant-writing capacity while ensuring final proposals reflect genuine organizational knowledge and strategic thinking. Maintain and strengthen relationships with program officers who can advocate for your work even when algorithmic evaluation might miss your strengths.
But individual adaptation isn't sufficient. Nonprofits collectively have interests in how AI transforms philanthropy, and there are opportunities for advocacy and influence. Demanding transparency about when and how AI is used in evaluation, advocating for equity monitoring to ensure AI doesn't reinforce funding disparities, engaging program officers in conversations about responsible AI use, and participating in sector-wide discussions about standards and best practices—these actions can shape how philanthropy adopts AI in ways that serve mission-driven organizations and the communities they serve.
The uncertainty many funders are experiencing—67% undecided about AI-generated proposals, 70% without formal AI policies, program officers experimenting individually without clear guidance—creates both risk and opportunity. The risk is that ad-hoc adoption produces inequitable outcomes and damages trust between funders and nonprofits. The opportunity is that practices aren't yet entrenched, and thoughtful engagement from nonprofit leaders can influence how AI is used during this formative period.
Ultimately, AI in grantmaking should serve the relationship between funders and nonprofits rather than replace it. The best grantmaking has always been relational—built on program officers understanding context, recognizing potential, and making nuanced judgments that consider both data and intangibles. AI can support this by handling objective analysis, freeing program officers for deeper engagement, and potentially reducing bias when used thoughtfully. But it can also undermine it by substituting algorithmic evaluation for human judgment, introducing new forms of bias, and creating distance between funders and the organizations they support. Which future emerges depends on choices being made now—choices nonprofits can influence through adaptation, advocacy, and constructive engagement. For more guidance on navigating AI in the nonprofit sector, explore our articles on developing AI policies, building AI literacy, and creating an AI strategy for your organization.
Need Help Strengthening Your Grant Strategy?
We help nonprofits adapt their grant strategies for an AI-augmented funding landscape while maintaining authenticity and organizational voice. Let's discuss how to position your organization for success with both human and AI evaluation.
