What the Ford, MacArthur, and Packard Foundation AI Strategy Means for Grantees
Three of America's most influential foundations have joined a historic $500 million coalition to shape AI's future. Here's what that shift means for the nonprofits they fund, and how grantees can position themselves for this new philanthropic landscape.

In August 2025, something remarkable happened in the world of philanthropy. The Ford Foundation, MacArthur Foundation, David and Lucile Packard Foundation, and seven other major funders announced Humanity AI, a five-year, $500 million initiative dedicated to ensuring artificial intelligence advances the public interest. The announcement marked a turning point: America's most influential foundations were no longer simply watching AI develop from a distance. They were now active participants in shaping how the technology evolves and who benefits from it.
For the thousands of organizations that receive grants from these foundations, this strategic pivot carries real implications. When major funders shift their priorities, the entire grantmaking ecosystem eventually follows. Program officers develop new frameworks for evaluation. Grant applications evolve to reflect funder priorities. Organizations that understand these shifts early can position themselves advantageously. Those that miss the signals often find themselves scrambling to explain relevance to funders who have moved on.
This article unpacks what the Humanity AI initiative actually is, what each foundation's specific strategy looks like in practice, and what grantee organizations should realistically expect as these commitments translate into grantmaking decisions. It also offers practical guidance for nonprofits navigating a funding landscape where AI literacy is increasingly relevant to competitive positioning.
The picture that emerges is more nuanced than many headlines suggest. These foundations are not simply rewarding organizations that use AI tools. They are investing in a vision of AI development that centers human agency, equity, and democratic accountability. Grantees who understand that distinction will find themselves with a meaningful advantage.
Understanding the Humanity AI Initiative
The Humanity AI initiative brings together ten major foundations under a shared commitment to influence how AI develops at a societal level. The founding partners include the Ford Foundation, MacArthur Foundation, David and Lucile Packard Foundation, Doris Duke Foundation, Mellon Foundation, Lumina Foundation, Kapor Foundation, Mozilla Foundation, Omidyar Network, and Siegel Family Endowment. Combined, these organizations have committed $500 million over five years, with pooled grantmaking expected to begin in 2026.
The initiative operates on two tracks simultaneously. First, each participating foundation will direct new investments through its own grantmaking portfolio toward AI-related areas that align with its existing mission. Second, Humanity AI will make grants from a pooled fund that supports work explicitly focused on the initiative's four priority areas: democracy and civic life, education, humanities and culture, and labor and the economy.
What distinguishes this initiative from general AI philanthropy is its explicit focus on ensuring people have power and voice in decisions about AI development. The founding coalition has been candid about their concern that AI is being built primarily to serve the interests of technology companies and wealthy investors, rather than the broader public. Their investments are designed to counter that dynamic by supporting civil society organizations, research institutions, and advocacy groups that can articulate public interest perspectives in AI governance conversations.
Four Priority Areas
Humanity AI's core grantmaking focus areas
- Democracy: Supporting frameworks and partnerships that give citizens voice in AI governance and policy
- Education: Shaping AI's role in learning to serve students' best interests, not just efficiency goals
- Humanities and Culture: Protecting artistic expression and human creativity from displacement
- Labor and Economy: Ensuring AI augments rather than displaces the ways people work and earn
Initiative Structure
How the $500 million will flow to grantees
- Pooled fund grants starting in 2026 for work directly addressing the four priority areas
- Individual foundation grantmaking aligned with AI themes across each funder's existing portfolio
- Five-year time horizon with capacity to adapt as the AI landscape evolves
- Applications for the pooled fund expected to open in 2026 as grantmaking strategies are finalized
What Each Foundation Is Prioritizing
While the three foundations share common ground through Humanity AI, each brings its own institutional focus and grantmaking strategy to the coalition. Understanding these individual strategies helps grantees identify which funding streams are most relevant to their work and how to frame their proposals accordingly.
Ford Foundation: AI Through an Equity Lens
Centering racial and economic justice in AI governance
The Ford Foundation's approach to AI reflects its core institutional mission: addressing inequality and injustice. Ford's program officers are evaluating AI work through questions about who builds AI systems, who governs them, and whose interests they serve. Organizations whose work examines how AI systems can perpetuate or deepen existing inequities, or that develop alternative frameworks for equitable AI design, fit naturally into Ford's evolving AI strategy.
Ford is particularly interested in civil society organizations that can participate in AI governance conversations at national and international levels. This includes policy advocacy organizations, legal research institutions, and community-based groups that can translate technical AI debates into accessible language for affected communities. The foundation is also attentive to the intersection of AI and labor rights, particularly as automation affects workers in industries where Ford has traditionally invested.
- Civil society engagement in AI policy and governance processes
- Research on algorithmic bias and discriminatory AI applications
- Organizations building capacity for communities most affected by AI to advocate for their interests
MacArthur Foundation: AI, Opportunity, and Chicago
Building workforce capacity and community-centered AI development
MacArthur has developed a specific AI focus called the AI Opportunity program, which sits within the foundation's broader Big Bets portfolio. The program has three distinct areas of concentration: the intersection of AI, the economy, and the workforce (with particular attention to young people in Chicago); community-centered AI development and use; and nonprofit AI applications. Each of these areas creates distinct opportunities for different types of grantees.
The workforce focus means MacArthur is actively interested in organizations that are helping workers, particularly young workers in Chicago, develop AI skills and navigate an economy being reshaped by automation. The community-centered AI development focus creates opportunities for organizations that are building AI tools specifically with and for underserved communities, rather than simply deploying commercial AI in those communities. The nonprofit AI applications focus is perhaps most broadly relevant, as it signals MacArthur's interest in the sector learning to use AI effectively to advance missions.
- Workforce development programs building AI skills for young adults
- Community-led AI tool development that centers lived experience
- Demonstration projects showing how nonprofits can use AI effectively and responsibly
Packard Foundation: AI Safety, Democracy, and Environmental Applications
Funding responsible AI development across multiple program areas
Packard's AI engagement spans several of its program areas. The foundation has funded the Safe AI Forum for work on international AI safety dialogues, reflecting concern about responsible AI development at a global level. In its democracy program, Packard has supported organizations working on AI's role in civic life, with attention to both opportunities and risks. The foundation is particularly interested in how AI can support rather than undermine democratic institutions and processes.
For Packard's substantial environmental portfolio, AI represents both a tool and a concern. On the tool side, AI offers significant potential for climate modeling, species monitoring, and environmental data analysis. Packard has funded environmental organizations exploring these applications. On the concern side, Packard is attentive to AI's energy consumption and carbon footprint, and to ensuring that AI development doesn't exacerbate environmental harm. Environmental organizations can engage with Packard's AI interests from both directions.
- AI safety research and international governance frameworks
- Environmental AI applications for conservation and climate work
- Organizations at the intersection of AI accountability and democratic health
What This Means for Grantees in Practice
The Humanity AI initiative is still early in its development. As of early 2026, the foundations are finalizing their individual and collective grantmaking strategies, and the pooled fund is not yet accepting unsolicited applications. This means grantees need to think about the medium-term implications rather than immediate application opportunities. Organizations that begin positioning themselves now will be better prepared when grant cycles open.
One of the most significant practical implications concerns how existing grantees relate to these foundations' evolving priorities. Organizations with long-standing relationships with Ford, MacArthur, or Packard should expect that AI will become a more prominent topic in program officer conversations. This doesn't mean that traditional program work is being de-prioritized. It means that funders are increasingly curious about how grantees are thinking about AI in relation to their missions, and what role AI plays (or might play) in their work.
A 2025 study by the Center for Effective Philanthropy found that nearly 90% of foundations don't yet offer AI implementation support to their grantees. This creates both a challenge and an opportunity. The challenge is that organizations may be expected to engage with AI questions without having received any structured support for doing so. The opportunity is that foundations are actively looking for grantees who can demonstrate thoughtful AI engagement, and organizations that can articulate a clear and principled approach will stand out.
Risks to Avoid
- Treating AI as a buzzword in grant applications without substantive engagement
- Ignoring AI questions entirely in applications to these funders
- Framing AI purely as a cost savings tool without addressing equity or accountability dimensions
- Deploying AI in ways that conflict with the foundations' stated values around transparency and human agency
Opportunities to Pursue
- Articulating how your organization is thoughtfully engaging with AI in ways that align with your mission
- Documenting responsible AI practices that can serve as models for the sector
- Identifying where AI governance questions intersect with your existing program work
- Building relationships with program officers now, before grant cycles formally open
What Foundations Actually Evaluate in AI-Inclusive Proposals
As foundations incorporate AI considerations into their grantmaking, a clearer picture is emerging of what rigorous evaluation looks like. Researchers and practitioners who study grantmaker-grantee relationships have identified several dimensions that well-resourced foundations pay attention to when assessing AI-related proposals. Understanding these dimensions helps organizations present their work in ways that resonate with funder priorities.
Mission alignment is typically the first filter. Foundations want to understand how AI work connects to the organization's core purpose, not simply what AI tools are being used. A food bank that uses AI for demand forecasting to reduce waste and better serve clients is telling a mission-aligned story. The same organization describing AI primarily as a way to reduce administrative overhead may generate less enthusiasm, even if the efficiency gains are real. The framing matters as much as the substance.
Equity focus has become a significant factor for all three foundations discussed here. Program officers are asking how AI tools were selected, whether bias in these tools has been assessed, and what the organization has done to ensure AI doesn't harm the communities it serves. Organizations that have thought carefully about these questions, even if they haven't resolved every challenge, tend to fare better than those that treat equity as an afterthought.
Four Evaluation Dimensions for AI Proposals
What rigorous foundation review teams look for
1. Mission Amplification
Does the AI work directly amplify the organization's capacity to achieve its mission? Reviewers distinguish between AI that deepens impact and AI that simply digitizes existing processes.
2. Responsible Implementation
Has the organization established safeguards ensuring AI enhances rather than replaces human judgment in high-stakes decisions? Is there a clear policy governing AI use?
3. Equity Integration
Has the organization addressed potential bias in AI tools? Are the communities served by the organization involved in decisions about AI deployment? Does AI use promote or complicate equity goals?
4. Sustainability and Stewardship
Who will maintain and improve the AI system beyond the grant period? Does the organization have the capacity to use AI responsibly and update its approach as the technology evolves?
Sustainability deserves particular attention because many foundation-funded AI projects struggle once the grant period ends. Funders have seen enough examples of technology initiatives that collapse when grant funding runs out to approach AI proposals with appropriate skepticism. Organizations that can demonstrate a realistic plan for maintaining AI capabilities, whether through earned revenue, reduced-cost tools, or integration into existing operational budgets, will be more credible to reviewers.
It's also worth noting that foundations like MacArthur are explicitly interested in nonprofit AI applications as a program area, not just AI governance work. This means direct service organizations have real opportunities to attract Humanity AI-related funding if they can demonstrate thoughtful, mission-aligned AI use. The key is telling that story in a way that connects to broader themes of equity, accountability, and human empowerment.
Preparing Your Organization for This New Funding Reality
The shift in major foundation priorities toward AI doesn't require nonprofit leaders to become AI experts overnight. It does require a thoughtful organizational stance on AI that can be articulated clearly and credibly. Organizations that approach this proactively, rather than scrambling when a grant application asks about AI use, will be much better positioned.
Developing an AI policy is an important first step, and not just because funders may ask about it. A clear policy helps staff understand what AI tools are appropriate to use, establishes accountability for AI decisions, and demonstrates to stakeholders that the organization takes responsible AI seriously. Building on the work others have done is perfectly reasonable here. The sector has produced a growing library of AI policy templates and frameworks. Adapting one for your context is far better than starting from scratch or having no policy at all.
Staff AI literacy is another foundational investment. When program officers ask questions about how your organization uses AI, the answers that come back will reflect your team's actual level of engagement with the technology. Building AI champions within your staff creates internal capacity that shows up credibly in funder conversations. This doesn't require everyone to become a power user. It requires enough people with substantive hands-on experience that AI becomes a real part of the organization's work rather than a talking point.
The nonprofit AI strategy gap documented in recent research is relevant here. Many organizations have individual staff using AI tools ad hoc, but lack any coherent organizational strategy for AI adoption. Foundations like those in the Humanity AI coalition are looking for thoughtful institutional engagement, not just individual experimentation. Developing even a modest strategic framework for AI at your organization, one that connects technology to mission and addresses equity and accountability questions, represents meaningful preparation.
Short-Term Actions (Next 90 Days)
- Review your organization's current AI tool usage and document it
- Draft or update an AI use policy, even a simple one-page version
- Sign up for updates from the Humanity AI initiative to track grant opportunities
- Identify which of your program areas connects most naturally to Humanity AI's four priorities
Medium-Term Investments (3-12 Months)
- Build staff AI literacy through structured training and experimentation
- Develop a narrative connecting your AI use to mission, equity, and accountability
- Explore convenings and peer learning opportunities in your sector on responsible AI use
- Consider developing a pilot project that could demonstrate AI impact to funders
The Broader Signal: AI Is Changing Philanthropy Sector-Wide
The Humanity AI initiative is the most visible expression of a broader shift in how foundations think about technology. Across the philanthropic sector, AI is moving from a peripheral technology question to a central strategic concern. Foundations are using AI to process grant applications more efficiently. They are asking grantees about AI in their program evaluations. They are beginning to fund AI capacity building in ways they didn't previously.
This shift matters even for organizations that don't receive grants from Ford, MacArthur, or Packard. When major foundations set strategic directions, smaller community foundations and local funders often follow their lead, sometimes explicitly adopting frameworks developed by larger peers. The evaluation criteria, the language of responsible AI, and the expectation that grantees engage thoughtfully with technology questions are likely to spread through the philanthropic ecosystem over the next few years.
At the same time, it would be a mistake to view this shift purely through the lens of competitive positioning. The questions these foundations are asking about AI, including who benefits, who is harmed, who has power, and who is accountable, are legitimate questions that any mission-driven organization should be asking about its own AI use. Engaging with those questions seriously isn't just about impressing funders. It's about ensuring that your organization's technology choices are aligned with your values.
The foundations participating in Humanity AI have been clear that they are investing in a vision of AI that serves the public interest and centers human dignity. Organizations that share those values, and can demonstrate that they are working toward them in their AI practices, will find themselves naturally aligned with this new philanthropic direction. The work of articulating that alignment clearly, to funders and to your own community, is where the real preparation begins.
For organizations currently navigating funding uncertainty or exploring new grantmaker relationships, the strategic planning process offers a natural place to integrate AI considerations. When your next strategic plan addresses how technology, including AI, will support your mission over the coming years, it signals to funders that your organization is thinking proactively rather than reactively about these questions. That kind of thoughtful engagement is precisely what the Humanity AI initiative is designed to support.
Conclusion
The Ford, MacArthur, and Packard Foundations' commitment to Humanity AI represents more than a large funding announcement. It signals a fundamental shift in how major philanthropy thinks about its role in shaping technology's future. These foundations are no longer passive observers of AI development. They are active participants, using their considerable resources and influence to ensure that AI serves broad public interests rather than narrow private ones.
For grantees, the implications are gradually becoming clearer. AI is entering the evaluation framework for major foundation funding, not as a requirement, but as a dimension of organizational capacity and mission alignment that funders will increasingly assess. Organizations that develop thoughtful, values-aligned approaches to AI now will be well-positioned when formal grant cycles open in 2026 and beyond.
The most important thing to understand is that these foundations aren't simply asking whether you use AI. They're asking whether your AI use reflects the values your organization says it holds. That's a question worth taking seriously, regardless of what it means for your next grant application.
Ready to Build Your AI Strategy?
Position your organization for the next generation of foundation funding by developing a thoughtful, mission-aligned approach to AI. Our team works with nonprofits to build AI capacity that aligns with your values and strengthens your funder relationships.
