How Foundations Are Using AI to Evaluate Grantees (And What That Means for You)
Foundations are beginning to use AI tools to screen grant applications, analyze financial health, and identify funding-fit before a program officer ever reads your proposal. Here is what nonprofits need to understand about this shift, which tools are already in use, and how to prepare your organization.

The grant application landscape is quietly changing. For decades, the relationship between a nonprofit and a foundation played out almost entirely between humans: program officers reading proposals, reviewing financials, conducting site visits, and making judgment calls shaped by relationships and intuition. That is still largely true today. But a new layer is being added, and nonprofits that understand it will be better positioned for the years ahead.
A growing number of foundations are experimenting with AI tools to help manage the volume of applications they receive, extract data from financial documents, and surface organizational health indicators before human review begins. In some cases, this means AI is summarizing proposals to save program officer time. In others, it means automated systems are calculating financial ratios from your most recent 990 and flagging potential concerns, all before you know you are being evaluated.
The current reality is more measured than the hype suggests. Actual AI adoption in grant evaluation remains early-stage. According to Candid's 2025 Foundation Giving Forecast Survey, only 1% of foundations currently use generative AI to screen applicants or inform funding decisions. However, 19% are actively considering it and 3% plan to implement it soon. The tools being built, and the infrastructure being laid, signal a meaningful shift over the next three to five years.
This article explains what is actually happening, which tools funders are using, what criteria AI systems assess, what the equity risks are, and what nonprofits can do right now to ensure they are presenting the strongest possible profile, whether an AI or a human is doing the first review.
What AI Actually Does in Grantmaking
Before diving into specifics, it helps to clarify what AI can and cannot currently do in the grant evaluation process. The reality is more limited than the narrative sometimes suggests, and understanding those limits helps nonprofits respond proportionally rather than reactively.
What AI Does Well
Current capabilities in grantmaking contexts
- Extracting structured data from 990s, audits, and financial statements
- Calculating financial health ratios and flagging anomalies automatically
- Summarizing lengthy proposals so program officers can triage faster
- Identifying trends across large applicant pools
- Surfacing past giving history and relationship context
What AI Cannot Do
Limitations that keep humans in the loop
- Assess mission alignment, community trust, or cultural competency
- Evaluate leadership quality or organizational culture
- Understand context behind financial irregularities or unusual data
- Make final funding decisions without human oversight
- Accurately evaluate newer organizations with limited data histories
The key insight is that AI is currently functioning as a pre-screening and administrative layer, not as a decision-maker. It is helping foundations manage volume, reduce time spent on routine data extraction, and ensure program officers have structured information before conversations begin. Human judgment remains central to every funding decision at every foundation currently using these tools.
That said, even a filtering and summarizing role carries real consequences. If AI flags a financial concern incorrectly, or if a proposal summary misses critical context, the human reviewer may begin their evaluation with a skewed impression. Nonprofits cannot afford to ignore this layer simply because it is not making final decisions.
Grant Guardian: The Most Concrete AI Evaluation Tool in Use
The most significant example of AI-powered grantee evaluation currently in production is Grant Guardian, built and open-sourced by the Patrick J. McGovern Foundation. As of early 2026, nearly 200 philanthropies have adopted it, including the GitLab Foundation and multiple United Way chapters.
Grant Guardian uses Claude, the AI model from Anthropic, to extract financial data directly from 990 tax filings, balance sheets, and income statements. It then calculates a set of customizable financial health indicators and generates a standardized report that would previously have required a program officer to spend several hours manually reviewing documents.
What Grant Guardian Evaluates
Financial health criteria extracted from public filings
Each foundation using Grant Guardian can customize the specific ratios and thresholds it emphasizes, but the core categories of analysis include:
- Operating reserves: How many months of operating expenses does the organization hold in liquid reserves?
- Revenue diversification: What proportion of revenue comes from grants vs. earned income vs. individual donors?
- Liquidity ratios: Can the organization cover short-term obligations from available assets?
- Deficit or surplus trends: Is the organization running consistent surpluses, occasional deficits, or chronic shortfalls?
- Program expense ratio: What percentage of total expenses goes toward mission delivery vs. administration?
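To make these categories concrete, here is a minimal sketch of how a screening tool might compute such indicators from figures extracted from a 990. This is illustrative only, not Grant Guardian's actual implementation; the field names, thresholds, and the "chronic deficit" heuristic are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Financials:
    """Illustrative figures a screener might extract from a 990 (all USD)."""
    liquid_assets: float          # cash + short-term investments
    current_liabilities: float    # obligations due within a year
    total_expenses: float
    program_expenses: float
    revenue_by_source: dict       # e.g. {"grants": ..., "earned": ..., "donors": ...}
    net_surplus_history: list     # surplus/(deficit) per year, most recent last

def health_indicators(f: Financials) -> dict:
    monthly_expenses = f.total_expenses / 12
    total_revenue = sum(f.revenue_by_source.values())
    return {
        # Months of operating expenses covered by liquid reserves
        "months_of_reserves": round(f.liquid_assets / monthly_expenses, 1),
        # Can short-term obligations be met from available assets?
        "current_ratio": round(f.liquid_assets / f.current_liabilities, 2),
        # Share of spending that reaches mission delivery
        "program_expense_pct": round(100 * f.program_expenses / f.total_expenses, 1),
        # Largest single revenue source as a share of total (concentration risk)
        "top_source_share_pct": round(
            100 * max(f.revenue_by_source.values()) / total_revenue, 1),
        # Hypothetical flag: deficits in a majority of recent years
        "chronic_deficit": sum(1 for s in f.net_surplus_history if s < 0)
                           > len(f.net_surplus_history) / 2,
    }

org = Financials(
    liquid_assets=450_000, current_liabilities=120_000,
    total_expenses=1_200_000, program_expenses=960_000,
    revenue_by_source={"grants": 700_000, "earned": 300_000, "donors": 250_000},
    net_surplus_history=[-15_000, 40_000, 25_000],
)
print(health_indicators(org))
```

Running the example shows 4.5 months of reserves and an 80% program expense ratio, with no chronic-deficit flag. The point is not the exact formulas, which each funder customizes, but that these numbers are mechanically derivable from your public filings.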
Importantly, Grant Guardian is designed with human oversight built in. It flags information and surfaces patterns; it does not score, rank, or exclude organizations. Every foundation using it still relies on program officer judgment to interpret the data.
The McGovern Foundation was transparent about the tool's purpose: it dramatically reduces the time program officers spend on routine financial document review, freeing them to focus on deeper relationship and mission assessment. That is a legitimate efficiency gain. But it also means financial health data is now being extracted and structured in standardized ways before a program officer reads a word of your proposal narrative.
For nonprofits, the implication is clear: the numbers in your 990 and audited financials are no longer just compliance documents. They are the inputs for AI-generated health assessments that shape how funders see you before you even begin a conversation.
Grant Management Platforms Adding AI Features
Beyond purpose-built tools like Grant Guardian, the major grant management platforms that foundations use to process applications are all integrating AI capabilities. This means even foundations not actively pursuing AI evaluation may find their workflows shaped by AI features embedded in systems they already use.
Fluxx
Added AI summarization across the grant lifecycle, including applications, reports, payment records, and attached documents. The design is human-in-the-loop with no automated scoring, but AI-generated summaries may influence how program officers engage with applications, particularly at high volume.
Candid
In January 2026, Candid launched a unified search platform integrating data on 1.9 million organizations and 3 million grant transactions, representing $180 billion in annual grants, with AI-powered funder recommendations. This shapes which organizations surface prominently in funder research and discovery.
Instrumentl
Raised $55 million in April 2025 to refine AI-powered funding-fit algorithms. While Instrumentl primarily serves nonprofits searching for grants, the match-scoring logic it develops also shapes how grantees are surfaced to funders using the platform.
Salesforce Nonprofit Cloud
AI co-pilots embedded in Salesforce can scan incoming reports, recommend budget reallocations, and surface patterns across a foundation's entire grantee portfolio. Foundations using Salesforce may have AI-generated insights about your organization's performance without explicitly deploying a separate evaluation tool.
The broader implication is that AI is becoming ambient in the grantmaking process. Even foundations that have not made a deliberate choice to use AI for evaluation may find AI-generated summaries, match scores, and financial flags appearing in the workflows they already use. The question for nonprofits is not whether AI will be involved in how funders see you, but how to ensure what AI surfaces is accurate and favorable.
The Equity Risks Nonprofits Should Understand
The philanthropic sector has invested significant energy in recent years addressing historical inequities in who gets funded: which communities receive resources, which organizations are trusted with grants, and whose leadership is seen as credible. AI-assisted evaluation introduces real risk of encoding and amplifying those same biases in more opaque and efficient ways.
The core problem is that AI systems learn from historical data. If past grantmaking has systematically underfunded BIPOC-led organizations, organizations in rural areas, or grassroots groups with shorter track records, an AI trained on that data will identify those historical patterns as signals of risk. The system does not distinguish between "this organization is risky" and "this type of organization has historically been treated as risky by biased funders."
Documented Equity Concerns
Risks identified by researchers and sector observers
- Historical pattern replication: AI trained on past funding decisions reproduces biases around organization size, geography, and demographic focus
- Reduced visibility: Biased decisions become harder to identify and challenge when made by automated systems rather than individual humans
- Newer organization disadvantage: Organizations with shorter operating histories, fewer financial documents, and smaller budgets generate less favorable AI assessments, even when doing excellent work
- Awareness-action gap: A 2025 sector survey found 64% of nonprofits are aware of AI bias concerns, but only 36% are implementing equity practices, down from 46% in 2024
- BIPOC-led organization misrepresentation: Candid's own research confirms that AI systems can misrepresent outcomes and organizational strength for marginalized communities when trained on inequitable historical data
These are not theoretical concerns. They represent live risks that the sector is actively debating. For nonprofits that have historically faced discrimination in funding, AI-assisted evaluation could either help (by removing some subjective human bias) or hurt (by systematically encoding historical underinvestment as organizational risk). The outcome depends almost entirely on how foundations design and deploy these tools.
What nonprofits can do is advocate for transparent practices from their funders: ask which AI tools are used in the application process, what criteria they assess, how equity considerations are built in, and what recourse exists if AI-generated assessments are inaccurate. The foundations committed to equitable grantmaking will welcome these questions. Those that are not should be treated accordingly.
How Nonprofits Can Prepare for AI-Assisted Evaluation
The good news is that preparing for AI-assisted evaluation is largely identical to the financial and organizational discipline that good nonprofits already practice. AI does not introduce entirely new requirements; it raises the stakes on existing ones and rewards organizations that are proactively managing their public data.
1. Make Your Financial Data Impeccable
AI tools like Grant Guardian pull directly from public 990 filings and audited financial statements. These documents are your primary profile in any automated evaluation. Errors, delays in filing, or unusual patterns without explanatory context can generate unfavorable signals that no proposal narrative can overcome.
- File 990s on time and ensure they are accurate and clearly categorized
- Keep audited financials current, particularly if applying to larger foundations
- Ensure your Candid/GuideStar profile is up to date with current leadership and program information
- Review your financial ratios the way funders will see them: operating reserves, program expense percentage, revenue diversification
2. Build Toward Healthy Financial Indicators
If you know AI systems are specifically looking at operating reserves, revenue diversification, and deficit trends, those become priorities for your financial management strategy, not just items on a checklist. The tools being used reward specific attributes.
- Prioritize building operating reserves to at least three months of operating expenses
- Actively diversify revenue streams to reduce over-reliance on any single funder or grant
- If your organization ran deficits in prior years, address this proactively in application narratives
If you are working on building your financial infrastructure, articles on AI for nonprofit budget management and managing restricted funds with AI offer practical strategies for strengthening financial reporting.
3. Audit Your Public Digital Presence
AI evaluation tools do not limit themselves to formal financial documents. Some platforms also pull from your website, annual reports, impact data, and public ratings platforms. Your digital presence is part of your financial and organizational profile.
- Ensure your Charity Navigator, BBB Wise Giving, and Give.org profiles are accurate and current
- Publish an annual report or impact summary that is clearly structured and searchable
- Make sure your website clearly articulates mission, programs, outcomes, and leadership
- Respond to any outdated or inaccurate information appearing in public databases
4. Use AI to Prepare Your Applications
One of the most effective responses to AI-assisted evaluation is to use AI yourself. AI tools can help you structure applications to surface key information clearly, anticipate how automated systems will interpret your financials, and ensure your narrative directly addresses the criteria being assessed.
- Use AI to review your draft proposals from the perspective of an automated screening system
- Ask AI to identify any financial data points that might raise concerns and draft explanatory context
- Ensure key metrics (program reach, cost per beneficiary, outcome data) appear in clearly labeled, extractable form
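As a simple self-check on that last point, you can verify that a labeled figure actually sits next to its label, which is a rough proxy for whether an automated parser could extract it. This is a hypothetical sketch: the labels and the 40-character proximity window are assumptions, not any screening tool's real logic.

```python
import re

# Hypothetical metric labels; substitute the criteria your funders actually assess.
REQUIRED_LABELS = ["People served", "Cost per beneficiary", "Operating reserves"]

def extractability_report(narrative: str) -> dict:
    """For each label, check that a number appears shortly after it --
    a rough proxy for whether an automated parser could pull the figure."""
    report = {}
    for label in REQUIRED_LABELS:
        # Label, then a colon or whitespace, then a dollar amount or
        # number within roughly 40 characters
        pattern = re.escape(label) + r"[:\s].{0,40}?[\$]?\d[\d,.]*"
        report[label] = bool(
            re.search(pattern, narrative, flags=re.IGNORECASE | re.DOTALL))
    return report

draft = """People served: 1,240 in FY2024.
Cost per beneficiary: $312.
Our reserves remain healthy."""

print(extractability_report(draft))
```

Here the first two metrics pass, but "Operating reserves" is flagged as missing: the draft mentions reserves in passing without a labeled figure, which is exactly the kind of gap an automated extractor would leave blank.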
For a comprehensive introduction to using AI tools in your development work, see our guide on getting started with AI for nonprofits.
5. Engage Funders in Conversation About Their AI Use
You have both the right and the standing to ask foundation partners about AI in their evaluation process. Foundations committed to equitable, transparent grantmaking should welcome these questions. The conversation itself can build trust and demonstrate your organization's sophistication.
- Ask whether AI tools are used in application review and at what stage
- Inquire about what criteria automated systems assess and how human oversight is maintained
- Ask about the process if automated screening generates concerns that may be explained by context
What the Next Three Years Look Like
The trajectory of AI in grantmaking is clear even if the pace is uncertain. The 1% of foundations currently using AI for application screening will grow. The platforms that process grant applications are integrating AI whether individual foundations choose to or not. The data infrastructure, shaped by tools like Grant Guardian and updated platforms like Candid's unified search, is being built now.
The organizations that navigate this shift most effectively will be those that treat it not as a threat but as a management discipline. Impeccable financial data, clear public documentation, and strong fundamental indicators are not new best practices. They are existing standards that AI is making more consequential.
At the same time, the equity risks are real and need persistent attention from both foundations and the broader sector. AI that embeds historical bias is not a neutral efficiency tool; it is a mechanism for perpetuating inequitable funding patterns with more speed and less accountability. Nonprofits serving marginalized communities, smaller organizations, and newer entities need to stay engaged with the policy and practice conversations happening in the sector around AI governance.
The most important thing is not to be passive. Whether your funders are using AI today or three years from now, the preparation required is the same: maintain rigorous financial health, document your impact clearly, and build relationships that ensure human judgment remains central to decisions about your organization. For more on how AI is reshaping the funder relationship, see our article on using AI for funder research and strategy.
Conclusion
AI-assisted grant evaluation is still early-stage. The vast majority of foundation funding decisions are still made by humans, based on relationships, mission alignment, and program quality that no automated system can fully assess. But the infrastructure is being built, the tools are being deployed, and the platforms are integrating AI in ways that will shape how foundations see your organization, whether or not they have made a deliberate choice to use AI.
The practical implications are not radical: keep your financials in excellent order, ensure your public data is accurate and current, document your outcomes clearly, and use AI yourself to prepare stronger applications. These are the same disciplines that strengthen any development program. AI simply raises the stakes on executing them consistently.
The sector is also at a critical moment for shaping how AI is deployed in grantmaking. The norms being established now, around transparency, equity safeguards, and human oversight, will define whether this technology narrows or widens historical funding inequities. Staying engaged in that conversation is not just good advocacy; it is good strategy for your organization's long-term funding health.
Strengthen Your Grant Strategy with AI
Whether funders are using AI to evaluate you or you are using AI to find and pursue funders, the organizations winning more grants are using technology strategically. We help nonprofits build AI capacity that serves their development goals.
