How Major Foundations Are Investing in People-Centered AI Development
The Ford, MacArthur, and Packard foundations have joined a historic $500 million coalition to reshape AI development around human values. Here is what their strategy means for nonprofits, and how your organization can align with this powerful philanthropic shift.

In the fall of 2025, ten of the world's most influential philanthropic foundations announced something unprecedented: a coordinated, $500 million commitment to ensure that artificial intelligence serves people rather than exploits them. The Humanity AI initiative, anchored by the Ford Foundation, MacArthur Foundation, and David and Lucile Packard Foundation alongside seven other major funders, represents the largest coordinated philanthropic investment in the future of AI to date. For nonprofit leaders navigating an increasingly AI-saturated world, understanding this initiative is not optional. It is essential.
The timing is not coincidental. As technology companies pour billions into AI development with relatively little public accountability, philanthropic leaders recognized a widening gap between AI's technical trajectory and its social impact. The consortium of foundations set out to counterbalance that dynamic by centering communities, workers, artists, students, and democratic institutions in the design of AI systems and governance frameworks. This is philanthropy at its most ambitious and most politically significant.
For nonprofits, the Humanity AI initiative creates both opportunity and expectation. Organizations working in democracy, education, labor, arts, and security will find direct alignment with this initiative's grantmaking priorities. But even nonprofits in adjacent fields should pay attention, because the way foundations think about AI is changing rapidly. The funders who joined this coalition are some of the same organizations that support thousands of nonprofits across the country. Their AI philosophy will increasingly shape how they evaluate grantees, what questions they ask in applications, and what they expect from organizations they fund.
This article explores the Humanity AI initiative in depth, examines what each major foundation is prioritizing, and translates these philanthropic strategies into concrete guidance for nonprofit leaders. Whether you are seeking funding from these foundations or simply trying to understand the broader shift in how philanthropy views AI, this analysis will help you navigate the terrain.
Understanding the Humanity AI Initiative
The Humanity AI initiative launched in the fall of 2025 with a stated mission to ensure that people, not just technology developers, have a meaningful stake in AI's future. The ten founding foundations each committed to directing substantial grantmaking toward five core issue areas over a five-year period. Collectively, they pledged $500 million, with Rockefeller Philanthropy Advisors serving as the fiscal sponsor for the pooled grant fund.
What distinguishes Humanity AI from other philanthropic AI initiatives is its explicitly political framing. The coalition was candid about its goal of counterbalancing the influence of large technology companies in shaping AI policy and development. The initiative is not simply about ensuring AI is used responsibly; it is about ensuring that communities most affected by AI have a voice in how it is designed, deployed, and governed. This framing has significant implications for which types of nonprofits will find the best fit for funding, and how organizations should frame their work when approaching these foundations.
Five Priority Areas
The grantmaking focus of the Humanity AI coalition
- Democracy: Protecting rights, freedoms, and democratic institutions from AI-enabled manipulation or erosion
- Education: Ensuring AI in schools expands access to knowledge and strengthens how people learn
- Humanities and Culture: Protecting artists, creatives, and cultural expression from displacement or exploitation
- Labor and Economy: Ensuring AI enhances how people work rather than replaces human workers at scale
- Security: Protecting individuals and communities from AI-enabled surveillance, fraud, and harm
Coalition Partners
Ten major foundations committed to the initiative
- Ford Foundation
- John D. and Catherine T. MacArthur Foundation
- David and Lucile Packard Foundation
- Doris Duke Foundation
- Lumina Foundation
- Kapor Foundation
- Mellon Foundation
- Mozilla Foundation
- Omidyar Network
- Siegel Family Endowment
The grantmaking timeline is significant for nonprofits planning their funding strategies. Coalition funders began aligned grantmaking in late 2025, with pooled grants from the shared Humanity AI fund scheduled for 2026 and beyond. This means that organizations positioning themselves now, by building relationships with program officers, demonstrating relevant work, and refining their AI-related narratives, are in the best position to benefit from this wave of philanthropic investment.
The Ford Foundation: Equity at the Intersection of AI and Power
The Ford Foundation's involvement in Humanity AI reflects its longstanding commitment to addressing inequality and building inclusive societies. Ford's approach to AI is shaped by its core theory of change: that inequality is fundamentally about the unequal distribution of power, and that AI systems, left unchecked, tend to concentrate power among those who already have it. The foundation sees the Humanity AI initiative as an extension of its existing work in economic justice, democratic participation, and human rights.
Ford's AI grantmaking is likely to focus on organizations that are examining how AI systems affect marginalized communities, advocating for AI governance frameworks that center equity, and developing alternative models for AI development that prioritize community benefit. Nonprofits working in immigration, criminal justice, housing, economic mobility, and civil rights may find alignment if they can clearly articulate how AI either threatens or could be harnessed to advance their missions.
For Ford grantees and prospective applicants, the key is demonstrating that your organization understands AI not just as a tool for efficiency, but as a system with social consequences. Ford program officers will want to know how your organization is engaging with affected communities on AI-related decisions, how you are thinking about the risks AI poses to the populations you serve, and what role your work plays in shaping a more equitable AI landscape.
What Ford Grantees Should Demonstrate
- Community voice in AI-related decisions affecting your constituents
- Analysis of how AI systems may perpetuate or challenge existing inequalities
- Policy advocacy connecting your issue area to AI governance frameworks
- Commitment to using AI responsibly and transparently within your own operations
The MacArthur Foundation: Workforce, Opportunity, and Who Benefits from AI
MacArthur's approach to Humanity AI is grounded in its "Big Bet" model, in which the foundation makes large, concentrated investments in areas where it believes transformative change is possible. For AI, MacArthur has identified workforce and economic opportunity as its primary lens. The foundation is concerned about who creates AI, who benefits from it, and who gets left behind, particularly at the intersection of technology and labor markets.
MacArthur hired a Director of AI Opportunity in 2025 to manage its new Big Bet Program, which will fund work at the intersection of AI and the economy. The foundation's grantmaking under this program will focus on expanding who participates in the AI economy, including workers in industries facing AI-driven disruption, communities historically excluded from technology sectors, and organizations developing AI applications that serve underserved populations rather than primarily wealthy ones.
MacArthur also invested $10 million in a separate initiative to advance AI development by and for people, emphasizing participatory approaches to AI design. This reflects a growing belief among major funders that the best way to ensure AI serves the public is to include the public in its creation, not just its deployment. Nonprofits that are bringing community members into AI-related design or governance processes, rather than simply deploying AI tools to serve communities, will align most strongly with MacArthur's priorities.
MacArthur's AI Opportunity Focus
- Workforce transition support for AI-disrupted industries
- Expanding access to AI-related career pathways for underserved communities
- Participatory AI design that includes affected communities
- Research on AI's economic effects on low-income workers
Strong Alignment Signals
- Workforce development organizations addressing AI displacement
- Research organizations studying AI and economic inequality
- Tech training programs serving historically excluded communities
- Policy organizations advocating for AI accountability standards
The Packard Foundation: Science, Environment, and Responsible Innovation
The David and Lucile Packard Foundation brings a distinctive perspective to the Humanity AI coalition, one rooted in its long history of supporting scientific research, environmental conservation, and children's health and development. Packard's AI investment reflects its belief that powerful technologies must be governed by rigorous scientific principles and strong ethical frameworks, and that the organizations best positioned to guide AI development are those with deep expertise in both the domains AI is entering and the communities those domains serve.
Environmental nonprofits will find particular resonance with Packard's framing. The foundation is attentive to AI's potential role in addressing environmental challenges, from climate modeling and species monitoring to sustainable agriculture and conservation planning. At the same time, Packard is concerned about AI's own environmental footprint, including the energy and water demands of large data centers, and how those costs are distributed across communities. Organizations that are using AI to advance environmental goals while being thoughtful about environmental trade-offs are well positioned to make a compelling case to Packard.
For children and family-focused nonprofits, Packard's interest in AI intersects with its deep concern for child development, education equity, and family well-being. The foundation will be watching how AI is deployed in schools and family service settings, asking whether these applications truly serve children's best interests or whether they introduce new risks, including data privacy concerns, algorithmic bias, and the replacement of human relationships with machine interactions.
Packard's Cross-Cutting AI Concerns
Issues that Packard brings to the people-centered AI conversation
- Environmental sustainability of AI infrastructure and energy use
- Scientific rigor in AI applications for conservation and environmental monitoring
- Child-centered design principles for AI systems used in educational settings
- Data privacy protections for vulnerable populations, including minors and families
- Interdisciplinary research connecting domain expertise with AI capability
What This Philanthropic Shift Means for Nonprofits
The Humanity AI initiative is not an isolated event. It is a signal of a broader shift in how major foundations are thinking about technology, power, and their role in shaping both. For nonprofits, understanding this shift is crucial, whether you are a direct grantee of these foundations or simply operating in a funding landscape that these institutions help define.
One of the most significant implications is that AI is no longer a back-office consideration for funders. Program officers at major foundations are beginning to ask questions about AI in grant applications and site visits, even when the primary focus of the grant has nothing to do with technology. They want to know whether organizations are using AI tools and how, what policies govern that use, and whether staff have the training to use AI responsibly. For more context on how to prepare for these conversations, see our article on how funders are evaluating AI use in grant applications.
Beyond the technical questions, funders are paying increasing attention to whether nonprofits share their values about AI. Organizations that have thought carefully about AI's potential risks to their beneficiaries, that have involved community members in AI-related decisions, and that can articulate a clear values-based AI philosophy will stand out. Those that treat AI purely as a productivity tool, without considering its social dimensions, may find themselves at a disadvantage as funder expectations evolve.
For Education Nonprofits
All ten Humanity AI foundations have committed to ensuring that AI in education serves students first. This creates a significant funding opportunity for organizations working on:
- Equitable access to AI-enhanced learning tools
- Teacher training in AI-augmented instruction
- AI literacy programs for students and families
For Democracy Nonprofits
The democracy priority area is one of the most urgent for the Humanity AI coalition. Organizations focused on voting rights, civic engagement, and democratic accountability can connect their work to AI by addressing:
- AI-generated misinformation and election integrity
- AI surveillance and civil liberties
- Participatory AI governance frameworks
For Arts Organizations
The Mellon Foundation's participation signals strong interest in protecting artists and cultural expression. Arts nonprofits can engage by focusing on:
- Artist rights in the age of generative AI
- AI tools that amplify rather than replace human creativity
- Cultural heritage preservation using AI responsibly
Nonprofits that do not naturally fit into the five priority areas of Humanity AI should not assume this initiative is irrelevant to them. The participating foundations also have their own diverse grantmaking portfolios, and the AI philosophy they are developing through Humanity AI will permeate how they evaluate all applicants, not just those focused explicitly on AI issues. Developing a thoughtful organizational AI strategy, as outlined in our guide to integrating AI into your nonprofit strategic plan, is becoming a baseline expectation rather than an advanced practice.
How to Prepare for Foundation AI Expectations in 2026
As foundation AI expectations evolve, nonprofits that proactively address these changes will be better positioned for funding success. This is not about performatively adopting AI tools to impress funders. It is about genuinely engaging with the questions that thoughtful funders are asking, and being able to demonstrate that your organization has thought carefully about these issues.
The most important thing nonprofits can do is develop an honest, values-grounded AI narrative. This means being clear about what AI tools you use, why you use them, what safeguards you have in place, and how you are involving the people you serve in decisions about AI. Funders are sophisticated enough to see through organizations that adopt AI buzzwords without genuine engagement, and they are increasingly skeptical of nonprofits that claim to be "AI-forward" without evidence of thoughtful implementation.
Building internal AI literacy across your team is also increasingly important. When foundation program officers visit or meet with leadership, they may ask how different staff members think about AI and what skills they have developed. Organizations that have invested in building AI champions across departments, rather than concentrating all AI knowledge in a single tech-savvy staff member, demonstrate an organizational maturity that funders find compelling.
Practical Steps to Align with Foundation AI Values
Actions nonprofits can take now to prepare for foundation AI expectations
- Document your AI use: Create a clear inventory of AI tools your organization uses, how they are governed, and what data they access
- Develop an AI policy: Even a simple policy demonstrates organizational seriousness about responsible AI use
- Engage your community: Consider how beneficiaries, clients, or community members can have a voice in AI-related decisions affecting them
- Identify AI risks in your mission area: Be able to articulate how AI could threaten the populations you serve, not just how it could help you work more efficiently
- Connect AI to equity: Link your AI practices to your broader commitments to equity, inclusion, and community well-being
- Build relationships with program officers: Reach out to foundations whose AI priorities align with your work before a grant cycle opens
The Broader Philanthropic Landscape Beyond Humanity AI
While Humanity AI is the largest coordinated philanthropic AI initiative to date, it is not the only game in town. Several other major philanthropic efforts are reshaping how AI-related funding flows to nonprofits, and understanding this broader landscape helps organizations identify the right funding partners for their specific work.
OpenAI launched its People First AI Fund in 2025, focusing on nonprofits using AI to directly serve underserved communities. Unlike the Humanity AI initiative, which is broadly focused on AI governance and values, the People First AI Fund specifically supports organizations deploying AI in service delivery. This creates a complementary funding opportunity for nonprofits that have already integrated AI into their program delivery and want to scale those efforts.
Microsoft, Google, and Amazon have each maintained nonprofit AI funding programs for several years, typically providing technology credits, training resources, and some direct grants. These corporate philanthropic programs are distinct from the Humanity AI initiative in that they are primarily focused on enabling nonprofits to use AI, rather than reshaping how AI is developed and governed. Nonprofits should engage with both streams, recognizing that corporate and foundation funders often have different expectations and priorities.
Community foundations are also beginning to develop AI-related programs, often tailored to the specific needs and contexts of their local communities. For nonprofits with strong regional roots, engaging with local community foundation AI initiatives may be an even more accessible entry point than applying to large national foundations. Building your case at the local level also generates evidence and stories that can strengthen future applications to national funders.
Key Questions to Ask Before Approaching Foundation Funders
- Does our work directly address any of the five Humanity AI priority areas, even if AI is not our primary focus?
- Can we articulate how AI either threatens or could advance the communities we serve?
- Do we have a documented AI policy that reflects thoughtful values-based decision-making?
- Have we engaged community members in AI-related decisions, or are we planning to?
- Is our organization's AI use consistent with the values we express to funders?
Conclusion: Philanthropy Is Reshaping AI, and Nonprofits Must Engage
The Humanity AI initiative represents a pivotal moment in the relationship between philanthropy, technology, and civil society. For the first time, some of the world's most influential foundations are not simply watching AI development unfold. They are actively investing in reshaping its trajectory, and they are looking to nonprofits to be partners in that effort.
For nonprofit leaders, the message is clear: AI is no longer a topic that can be safely delegated to your IT staff or addressed with a boilerplate policy. It is a values question, a community question, and increasingly a funding question. Organizations that engage thoughtfully and authentically with AI, by understanding both its potential and its risks, documenting their practices, involving communities in decisions, and connecting their work to larger frameworks of equity and accountability, will be better positioned in an increasingly AI-shaped philanthropic landscape.
The good news is that the Humanity AI coalition and its member foundations are deeply committed to ensuring that nonprofits, especially those closest to affected communities, have a real voice in this conversation. The initiative is not simply about funding AI research at elite universities. It is about supporting the organizations that are on the ground, building relationships, delivering services, and advocating for people whose lives will be profoundly shaped by AI. That is where most nonprofits live. And that is exactly where the most important work of the coming decade will happen.
Consider reviewing your organization's current AI practices, developing or updating your AI governance policy, and beginning conversations with program officers at foundations whose AI priorities align with your mission. The moment to engage is now, while the initiatives are forming and grantmaking criteria are still being shaped, rather than after the first round of awards has been announced.
Align Your Organization with Foundation AI Values
One Hundred Nights helps nonprofits develop thoughtful AI strategies that resonate with funders, serve communities, and reflect your organizational values.
