AI Readiness as a Grantmaking Criterion: Preparing for Foundation Technology Requirements in 2026
Foundations are rapidly shifting their expectations about how nonprofit grantees approach technology and AI. Here is what the research shows about where funders are heading, what they are looking for, and how your organization can demonstrate the AI readiness that will increasingly matter for competitive grants.

The most honest way to describe the current state of AI and grantmaking is this: most foundations are not yet formally requiring AI readiness from grantees, but the window for that to change is shorter than most nonprofit leaders realize. The Center for Effective Philanthropy's landmark September 2025 report, "AI With Purpose," surveyed 215 foundation leaders and 451 nonprofit leaders and found that 90 percent of foundations provide no AI implementation support to their grantees. Yet 91 percent of funders believe AI will positively transform philanthropy within three years. This gap between current practice and stated expectation is not a sign that AI readiness is irrelevant to grant success. It is a sign that the sector is at an inflection point, and organizations that prepare now will have a significant advantage.
The shape of that inflection is becoming clear. October 2025 saw ten of the world's most influential foundations, including Ford, MacArthur, Packard, Mellon, Mozilla, and Omidyar Network, launch Humanity AI, a $500 million, five-year initiative committed to ensuring AI development serves people rather than concentrates power. Their pooled grantmaking began in 2026. The Patrick J. McGovern Foundation awarded $75.8 million across 149 grants in 2025 with explicit expectations around data governance, community participation in AI design, and institutional capacity to evaluate AI systems. OpenAI launched its People-First AI Fund with $50 million in unrestricted grants, explicitly including organizations that are not yet using AI but are positioned to develop that capacity.
For nonprofit leaders navigating this landscape, the question is not whether AI readiness will matter to funders, but how to demonstrate readiness in ways that are authentic, not performative. Funders are sophisticated enough to recognize when organizations are claiming AI sophistication they do not have. The research suggests that honest assessment of where you are, combined with a credible roadmap for where you are going, is more compelling to most program officers than inflated claims of current capability.
This article covers what the research shows about funder expectations, what gaps currently exist between what funders expect and what nonprofits have, which major initiatives are shaping the grantmaking landscape in 2026, and how to build and document AI readiness in ways that strengthen both your operations and your funding position. It draws on the most current available survey data, foundation announcements, and sector analysis available through early 2026.
What "AI Readiness" Actually Means to Funders in 2026
When funders talk about AI readiness, they are not primarily asking whether your organization has deployed chatbots or built machine learning models. The criteria that matter to sophisticated grantmakers in 2026 are fundamentally about organizational capacity, values alignment, and governance. Bonterra's "AI Readiness Path" report, which surveyed 300 funder leaders alongside 547 nonprofit leaders, identified the three most important pieces of advice funders give nonprofits about AI: 93 percent say providing staff with adequate AI training is most important, 83 percent emphasize ensuring transparency in AI decision-making, and 82 percent prioritize data quality and security.
Six specific areas consistently appear across foundation guidance and emerging grantmaking criteria. Mission alignment asks whether AI amplifies your impact rather than simply automating tasks, and whether you can articulate that connection clearly. Responsible implementation asks what safeguards you have built to ensure AI enhances rather than replaces human judgment in your programs. Equity focus examines how your approach addresses potential bias, ensures transparency with communities served, and promotes digital equity rather than creating new barriers. Sustainability asks who maintains and improves your AI systems beyond the grant period, a question that trips up many organizations that frame AI investments as one-time projects rather than ongoing capacity.
Data stewardship asks what governance structures protect the communities you serve, which encompasses privacy policies, access controls, retention practices, and how sensitive data flows through AI systems. Finally, learning orientation asks how your organization will share what works, and what does not, with the broader sector. This last criterion reflects a growing funder interest in the nonprofit sector collectively building AI knowledge, not just individual organizations advancing their own capabilities. Organizations that position their AI work as contributing to sector-wide learning tend to receive more favorable reception from foundations with broad portfolios.
What Funders Evaluate
The six criteria emerging across foundation grantmaking processes
- Mission alignment: How AI amplifies impact, not just automates tasks
- Responsible implementation: Human oversight and safeguards in AI decision-making
- Equity focus: Bias mitigation, transparency, and digital equity commitment
- Sustainability: Plans for maintaining AI systems beyond the grant period
- Data stewardship: Governance protecting communities served
- Learning orientation: Commitment to sharing findings with the sector
Key Data Points from Funder Surveys
What the research shows about foundation expectations in 2025-2026
- 91% of funders believe AI will positively transform philanthropy within three years (Bonterra, 2025)
- 93% of funders say staff training is the most important AI advice they give nonprofits
- Only 17% of nonprofits say their funders have ever engaged them in an AI conversation (CEP, 2025)
- Only 36% of foundation program officers feel confident assessing AI proposals (Project Evident)
- 92% of funders worry about data privacy in grantee AI implementations
The Humanity AI Initiative and What It Signals for Grantees
On October 14, 2025, ten of the most influential foundations in the world announced Humanity AI, a $500 million, five-year initiative. The founding coalition includes the Doris Duke Foundation, Ford Foundation, Lumina Foundation, Kapor Foundation, MacArthur Foundation, Mellon Foundation, Mozilla Foundation, Omidyar Network, David and Lucile Packard Foundation, and Siegel Family Endowment. Rockefeller Philanthropy Advisors serves as fiscal sponsor. Pooled fund grantmaking began in 2026, with five priority areas: democracy, education, humanities and culture, labor and economy, and security.
The significance of Humanity AI goes beyond the dollar amount. When ten major foundations align their grantmaking strategy around a single theme, it sends a powerful signal to the entire field about what organizational work they consider fundable. Organizations working in democracy, civic participation, education access, labor rights, arts and culture, and public safety are now operating in a grantmaking environment where their relationship to AI governance is a relevant factor. This is not primarily a signal that funders want to see AI tools deployed. It is a signal that the leading philanthropic institutions believe nonprofits in their portfolios should be active participants in shaping how AI affects their communities, not passive recipients of whatever technology delivers.
For organizations seeking grants from Humanity AI coalition members, the practical implication is that articulating a thoughtful AI posture, even if that posture is largely cautious and focused on protecting community members from potential AI harms, is more important than claiming extensive AI deployment. A democracy-focused nonprofit that can explain how it is helping communities understand and advocate around AI decision-making in government systems is far better positioned with this funder cohort than one that simply lists AI tools it uses for communications efficiency.
The McGovern Foundation's parallel grantmaking provides additional signal. Its stated requirements for grantees (community participation in AI design, data governance capacity, institutional ability to evaluate AI systems, and a commitment to serving public interests) are detailed enough to suggest a level of organizational capacity that many nonprofits have not yet built. Organizations applying for McGovern grants benefit from having documented their data governance practices, being able to describe their community engagement processes around technology decisions, and having at least one staff member with sufficient AI knowledge to engage substantively with technical evaluation questions.
Major 2026 Funding Initiatives and Their Expectations
What each initiative signals about what funders will look for from grantees
Humanity AI ($500M coalition; grantmaking begins 2026)
Focus areas: democracy, education, humanities, labor, security. Signals that leading funders expect grantees to engage with AI governance as a social justice issue, not just as operational technology. Organizations in relevant sectors should be able to articulate how their work shapes or responds to AI's community impacts.
Patrick J. McGovern Foundation ($75.8M, 149 grants in 2025)
Requires community participation in AI design, data governance capacity, and institutional ability to evaluate AI systems. Provides in-house technical advisory teams to grantees. Sets the clearest documented expectations of any major funder.
OpenAI People-First AI Fund ($50M, 208 nonprofits in 2025)
Explicitly welcomes organizations not yet using AI. Focus on AI literacy, community-led innovation, and economic opportunity. Open to organizations with operating budgets between $500K and $10M. Signals that funders distinguish between "readiness to learn" and "current deployment."
Overdeck Family Foundation AI Accelerator (22 organizations, cohort model)
Organizations with clear AI strategies jumped from 4% to 58% during the program. Strategic AI problem-solving jumped from 29% to 93%. Demonstrates that structured capacity-building programs produce measurable readiness gains funders can see.
The Four Gaps That Undermine Nonprofit AI Readiness
Research across multiple sector surveys paints a consistent picture of where nonprofits fall short of what funders are beginning to expect. Understanding these gaps is the first step toward addressing them, and being honest about which gaps apply to your organization is itself a form of readiness.
The Governance Gap
The largest and most visible gap to funders
While 82 to 92 percent of nonprofits now use AI in some capacity, only 10 to 24 percent have formal AI governance policies. Nearly half of all nonprofits have no written AI policy at all. The vast majority use AI on an ad hoc basis without documented workflows. For funders who have started asking about AI governance in grant conversations, this gap is immediately visible and signals organizational immaturity regardless of how sophisticated the AI tools in use actually are.
What to do:
- Draft a one-page AI acceptable use policy (what is encouraged, what needs approval, what is prohibited)
- Bring AI governance to the board as a standing agenda item, not a one-time presentation
- Assign a named staff member as your organization's AI point of contact
The Data Gap
Foundational infrastructure that enables everything else
More than 52 percent of organizations cite data quality and availability as the biggest barrier to AI adoption. Only 14 percent of nonprofit leaders believe their data maturity can support AI at scale. Nearly 76 percent say their data management capabilities cannot keep up with organizational needs. Funders who prioritize data quality are essentially asking: can this organization learn from its work systematically, and can AI tools trained on or connected to your data actually function reliably?
What to do:
- Conduct a basic data audit: what do you have, how is it structured, who can access it
- Establish a data privacy policy that specifically addresses AI-related data flows
- Prioritize cleaning and standardizing your most strategically important data sets
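A basic data audit does not require specialized tooling. As an illustration, the sketch below (a hypothetical starting point, not a prescribed method) inventories the CSV exports most nonprofits already have, reporting row counts, columns, and empty cells per column, which is often enough to reveal the quality gaps funders ask about. The `data_dir` path and file layout are assumptions for the example.

```python
import csv
from pathlib import Path

def audit_csv_files(data_dir):
    """Inventory each CSV in a directory: row count, column names,
    and the number of empty cells per column."""
    report = []
    for path in sorted(Path(data_dir).glob("*.csv")):
        with path.open(newline="", encoding="utf-8") as f:
            reader = csv.DictReader(f)
            columns = reader.fieldnames or []
            rows = 0
            empties = {c: 0 for c in columns}
            for row in reader:
                rows += 1
                for c in columns:
                    # Count blank or whitespace-only cells as empty
                    if not (row.get(c) or "").strip():
                        empties[c] += 1
            report.append({
                "file": path.name,
                "rows": rows,
                "columns": columns,
                "empty_cells": empties,
            })
    return report
```

Running this against a folder of program or donor exports gives a concrete artifact (which files exist, which fields are sparsely populated) that can anchor both a data privacy policy and a cleanup priority list.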
The Staff Capacity Gap
The gap funders are most concerned about
About 40 percent of nonprofits have no staff formally trained in AI. More than 90 percent of nonprofit professionals report feeling unprepared to fully leverage AI in their roles. When 93 percent of funders say staff training is the most important AI-related advice they give, this gap represents a direct misalignment between organizational reality and funder expectation. The concern is not that every staff member must become an AI expert, but that organizations build baseline literacy across teams and identify champions who can evaluate and guide AI decisions.
What to do:
- Run a baseline AI literacy assessment and create a targeted training plan
- Identify two or three AI champions at different levels of the organization
- Build AI training into new staff onboarding, not just professional development
The Impact Measurement Gap
The difference between adoption and demonstrable value
The Virtuous 2026 Nonprofit AI Adoption Report surveyed 346 nonprofits and found that 92 percent have integrated AI into their daily work, but only 7 percent report it is moving the needle on mission in a major way. Nearly 79 percent have fallen into an "efficiency plateau" where AI helps with administrative tasks but is not producing mission impact. Funders are increasingly distinguishing between organizations that use AI and organizations that can demonstrate AI's contribution to outcomes. The former is table stakes; the latter is a competitive advantage.
What to do:
- Define specific mission-linked metrics before deploying any AI tool
- Document baselines and track changes attributable to AI implementation
- Share outcomes transparently in funder reports, including what did not work
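Tracking baseline versus current values can be as simple as a small record per metric. The sketch below is one minimal, assumed approach (the metric names and numbers are illustrative): capture the pre-deployment baseline, update the current value during the pilot, and generate plain-language lines a program officer can read without technical background.

```python
from dataclasses import dataclass

@dataclass
class MetricRecord:
    """One mission-linked metric tracked across an AI pilot."""
    name: str
    baseline: float  # value measured before the AI tool was deployed
    current: float   # most recent value during or after the pilot

    def change_pct(self):
        """Percent change from baseline; None when the baseline is zero."""
        if self.baseline == 0:
            return None
        return round(100 * (self.current - self.baseline) / self.baseline, 1)

def pilot_summary(metrics):
    """Plain-language summary lines suitable for a funder report."""
    lines = []
    for m in metrics:
        pct = m.change_pct()
        delta = "n/a (zero baseline)" if pct is None else f"{pct:+.1f}%"
        lines.append(
            f"{m.name}: baseline {m.baseline}, current {m.current}, change {delta}"
        )
    return lines
```

The point of the design is the discipline it enforces: a metric cannot be entered without a baseline, which is exactly the documentation gap the research identifies.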
An Underappreciated Dynamic: Funders Are Uncertain Too
One of the most important and underappreciated findings from recent research is that foundations themselves are navigating significant uncertainty about AI. A Project Evident study of 38 major philanthropies found that only 36 percent of program officers felt confident assessing the technical feasibility of AI proposals. The Technology Association of Grantmakers reported that while 81 percent of foundations use AI in some capacity, only 4 percent have enterprise-wide AI adoption. Nearly 45 percent of grantmakers lack even a data privacy policy for themselves.
This "institutional imposter syndrome" among funders has two practical implications for nonprofits. First, foundations may avoid funding AI-heavy proposals entirely if program officers do not feel equipped to evaluate them. This means clear, plain-language explanation of AI strategy matters enormously. Proposals filled with technical jargon that program officers cannot assess are likely to be passed over, regardless of actual organizational quality. Second, organizations that demonstrate informed, thoughtful engagement with AI, including honest discussion of limitations and risks, are more credible to funders than those that present AI as a fully solved problem.
The practical takeaway for grant applications is to write for a reader who is intelligent and mission-focused but not technically expert. Explain what AI tools you are using in plain language, why you chose them, what problem they solve, and what human oversight you maintain. Acknowledge the limitations of AI in your context. Describe your learning approach. This combination of concreteness about what you are doing and appropriate humility about uncertainties tends to build more trust with program officers than confident claims about AI transformation.
What to Include in Grant Proposals About AI
How to communicate AI strategy effectively to program officers at different knowledge levels
Include in proposals:
- Plain-language description of each AI tool and what problem it addresses
- Specific mission outcomes you expect AI to improve, with baseline metrics
- Your governance structure: who owns AI decisions, what oversight exists
- How communities served are protected from potential AI harms
- Sustainability plan: who maintains the system after the grant period
- Honest current stage assessment, including what you do not yet know
Avoid in proposals:
- Claiming AI capabilities you do not currently have or cannot demonstrate
- Framing AI primarily as a cost-cutting strategy without addressing impact
- Technical jargon without substantive explanation for non-expert readers
- Proposing AI solutions without demonstrating data readiness to support them
- Treating AI as a one-time project without ongoing maintenance planning
- Ignoring equity implications for communities most affected by your AI tools
Building Genuine AI Readiness: A Practical Sequence
The Overdeck Family Foundation's experience running AI accelerator programs provides one of the most encouraging data points in this space. Organizations that entered the program with only 4 percent having a clear AI strategy emerged with 58 percent having developed one. Strategic problem-solving using AI jumped from 29 percent to 93 percent. The cohort model proved essential, demonstrating that peer learning alongside structured coaching accelerates readiness faster than either element alone.
For organizations building readiness without access to a structured cohort program, the sequence that produces the best results generally follows the same pattern: governance before tools, data before analysis, literacy before strategy, pilots before scale. Organizations that try to deploy sophisticated AI applications before establishing governance, cleaning their data, and building staff literacy typically encounter problems that damage both their confidence in AI and their staff relationships. Starting with the governance and data foundation feels slower, but it creates the conditions for everything else to succeed.
Two resources are particularly valuable for self-assessment. NTEN's Nonprofit Tech Readiness program includes specific AI readiness assessments and can help organizations understand where they sit on a maturity spectrum. Bonterra's "AI Readiness Path" framework identifies five maturity stages from foundational through transformative, with clear markers for each stage. Using either or both frameworks to honestly assess your current stage gives you a credible answer when funders ask about your AI readiness, one grounded in external frameworks rather than self-assessment alone.
The size disparity in AI adoption, where nonprofits with revenues over $1 million adopt at nearly twice the rate of smaller organizations, reflects real resource constraints that governance and planning frameworks cannot fully address. Smaller organizations should prioritize the highest-leverage, lowest-cost readiness investments: a written governance policy, a named AI point of contact, participation in peer learning communities, and applications to grant programs like OpenAI's People-First AI Fund that explicitly welcome organizations still building AI capacity. The goal is not to compete directly with well-resourced large organizations but to demonstrate that your organization is approaching AI thoughtfully and in a way that reflects your mission values.
AI Readiness Building Sequence
A practical progression from foundational to advanced, aligned with funder expectations
Governance foundation (weeks, not months)
Draft a one-page AI acceptable use policy. Designate an AI point of contact. Brief your board and create a standing AI agenda item. These steps cost nothing and signal organizational seriousness to funders immediately.
Data audit and privacy policy (1-3 months)
Document what data you have, how it is structured, how it is protected, and where gaps exist. Develop a data privacy policy that explicitly addresses AI. Identify the highest-priority data quality improvements for your context.
Staff literacy baseline (1-6 months)
Assess current AI knowledge and confidence across teams. Design targeted training for different roles, not one-size-fits-all sessions. Identify internal champions who can serve as peer resources. Connect staff to free resources from NTEN, TechSoup, and sector-specific networks.
Mission-linked pilots with measurement (3-12 months)
Identify one or two AI use cases directly connected to your most important mission outcomes. Define baselines, implement with documented human oversight, measure results, and share learnings internally and externally.
Formal AI strategy and roadmap (year two onward)
Using pilot learnings and organizational assessment, develop a multi-year technology roadmap that sequences AI investments against foundational data and governance improvements. This is the document that positions you for the most significant foundation technology grants.
The Competitive Landscape: What 2026 Is Establishing
The sector consensus emerging from 2025 and early 2026 research points toward a clear trajectory. AI readiness is becoming a proxy for overall organizational maturity that sophisticated funders read as a signal about management quality and strategic clarity. This does not mean every grant conversation will center on AI. But it does mean that organizations that can demonstrate thoughtful AI governance, connected to their mission and their communities, are beginning to stand out from peers that have not engaged seriously with this domain.
The Bridgespan Group's analysis of the current funding landscape argues that funders who are serious about AI will move toward flexible, multiyear, unrestricted grants that cover the full adoption lifecycle, including training and ongoing support. This model requires funders to trust that grantees are managing AI investments prudently, which in turn requires grantees to demonstrate the governance, data practices, and accountability structures that justify that trust. Organizations building those structures now are better positioned to compete for the grant structures that will emerge as foundations mature their own AI strategies.
For organizations working in the focus areas of the Humanity AI coalition (democracy, education, labor, humanities, and security), the stakes are particularly high and the opportunity is particularly clear. The most prestigious philanthropic dollars available in this space in 2026 will flow to organizations that can demonstrate their work advances human-centered AI governance. That is not a technical requirement. It is a values and organizational capacity requirement, one that most mission-driven nonprofits are far better positioned to meet than they realize, provided they invest in the governance infrastructure that makes their values visible and credible to funders.
Quick Reference: AI Readiness Documentation Checklist
Documents and capabilities funders are increasingly expecting to see
Written documentation
- AI acceptable use policy (even a one-page version)
- Data governance policy covering AI-specific data flows
- AI roles and accountability structure (who decides, who oversees)
- Technology roadmap showing sequenced AI investments
- Impact metrics for any AI tools in active use
Organizational capabilities
- Clean, structured program data that could support AI analysis
- At least one staff AI champion who can evaluate tools and guide decisions
- Board-level AI literacy sufficient to exercise meaningful governance oversight
- Documented pilot experiments with measured outcomes
- Ability to articulate AI strategy in plain language to non-technical audiences
Conclusion: Readiness Is a Journey, Not a Threshold
The most important insight from the current state of AI grantmaking is that funders are not looking for perfection. They are looking for organizations that are engaging thoughtfully, honestly, and systematically with the opportunities and risks AI presents for their missions and the communities they serve. The research consistently shows that organizations rewarded by funders are not those with the most sophisticated AI deployments, but those that demonstrate clear thinking about what AI is for, how it connects to mission outcomes, and what governance structures protect their work.
The governance gap is the most actionable gap in the sector. Writing an AI acceptable use policy costs nothing and takes a few hours. Designating an AI point of contact requires no budget. Briefing your board on AI governance positions you as organizationally mature and forward-thinking. These small steps, compounded over a year of intentional investment in data quality, staff literacy, and mission-linked pilots, produce the kind of AI readiness that funders are beginning to treat as a signal of organizational quality.
For related resources on building the organizational foundation for AI, our articles on incorporating AI into your strategic plan, assessing your organization's AI maturity, and closing the nonprofit AI governance gap provide practical frameworks for the work ahead.
Ready to Build Your AI Readiness?
One Hundred Nights helps nonprofits develop the governance frameworks, data practices, and AI strategies that both strengthen operations and satisfy growing funder expectations. Let us help you build readiness that is genuine, not just documented.
