Humanity AI: How a $500 Million Foundation Coalition Is Shaping AI's Future for Nonprofits
Ten of the world's most influential foundations have committed half a billion dollars to ensure artificial intelligence develops in ways that serve people, not just profit. Here is what the Humanity AI Initiative means for nonprofits seeking funding, building programs, and navigating an AI-transformed philanthropic landscape.

In August 2025, ten of philanthropy's most prominent foundations made a collective decision that signals a fundamental shift in how the sector thinks about artificial intelligence. The Doris Duke Foundation, Ford Foundation, John D. and Catherine T. MacArthur Foundation, David and Lucile Packard Foundation, Andrew W. Mellon Foundation, Kapor Foundation, Lumina Foundation, Mozilla Foundation, Omidyar Network, and Siegel Family Endowment jointly committed $500 million over five years to build what they call a "people-centered future for AI." The initiative, known as Humanity AI, represents the largest coordinated philanthropic investment in responsible AI development to date.
For nonprofits, this announcement carries implications that extend well beyond a new grant program. It reflects a growing conviction among major funders that AI development has tilted too far toward the interests of large technology companies, leaving communities, workers, and civil society on the sidelines. By pooling resources and coordinating strategy across five priority issue areas, these foundations are positioning the nonprofit sector as an essential counterweight to unchecked technological power. Nonprofits stand to benefit not just as potential grantees, but as advocacy organizations, service providers, and AI practitioners whose work can demonstrate what people-centered AI actually looks like in practice.
MacArthur Foundation President John Palfrey captured the underlying motivation when he observed that AI often feels like something happening to communities rather than with them and for them. That framing, shared across the founding coalition, explains the initiative's sense of urgency: first grants are expected in 2026, and interest from nonprofits across every sector is already significant. Understanding what Humanity AI is, what it will fund, and what it expects from grantees is now essential knowledge for any nonprofit leader navigating the current philanthropic environment.
Understanding the Humanity AI Initiative
Humanity AI is structured as a pooled philanthropic fund managed by Rockefeller Philanthropy Advisors, which serves as fiscal sponsor. Each of the ten founding foundations contributes to the collective pool while also directing investments according to their own mission priorities within five defined focus areas. This architecture allows coordinated impact at a scale that would be impossible for any single foundation to achieve alone, while preserving each funder's ability to align grants with their existing programmatic work.
Five Priority Focus Areas
Humanity AI targets five interconnected domains where AI's trajectory has the greatest human stakes
- Democracy: Strengthening civic participation, election integrity, and public access to reliable information in an AI-mediated information environment
- Education: Ensuring equitable access to AI tools in learning, preventing new educational divides, and improving outcomes for underserved students
- Labor and Work: Protecting workers from displacement, supporting job transitions, and ensuring workers have a voice in how AI reshapes their industries
- Arts and Culture: Preserving human creativity, protecting artists' rights, and ensuring cultural heritage survives the age of generative AI
- Safety and Security: Mitigating AI harms, advancing responsible development practices, and building governance frameworks that protect vulnerable populations
The Founding Coalition
Ten foundations spanning environmental, arts, democracy, technology, and social justice philanthropy
- Doris Duke Foundation
- Ford Foundation
- John D. and Catherine T. MacArthur Foundation
- David and Lucile Packard Foundation
- Andrew W. Mellon Foundation
- Kapor Foundation
- Lumina Foundation
- Mozilla Foundation
- Omidyar Network
- Siegel Family Endowment
The breadth of the founding coalition is intentional. By bringing together foundations that specialize in environmental conservation, arts and culture, technology policy, democratic participation, and social justice, Humanity AI signals that responsible AI is not a niche concern but a cross-cutting challenge that touches every dimension of civil society. Nonprofits working in any of these areas will find potential alignment with the initiative's priorities, whether they work directly on AI policy or simply use AI tools in their program delivery.
What "People-Centered AI" Actually Means
The phrase "people-centered AI" appears throughout Humanity AI's communications, but its practical meaning is worth unpacking for nonprofits that will be asked to demonstrate alignment with this approach in grant applications and program planning. People-centered AI is not simply a marketing term; it represents a coherent set of design principles and accountability practices that distinguish responsible deployment from the default approach many technology companies take.
Core Principles of People-Centered AI
What foundations mean when they fund "responsible" and "human-centered" AI development
Human Augmentation
AI enhances human capabilities rather than displacing communities or automating away the judgment and relationships that define mission-driven work
Transparency and Explainability
AI systems operate openly so that the people affected by them can understand how decisions are made and meaningfully challenge outcomes
Equitable Outcomes
Benefits reach diverse populations and AI does not exacerbate existing racial, economic, or geographic disparities in access and outcomes
Inclusive Design
AI systems are built with active input from affected communities, not imposed upon people who had no role in shaping the tools that will affect them
Human Oversight
Humans retain meaningful control and can intervene, override, or shut down AI systems when they produce harmful or unintended outcomes
Clear Accountability
Someone is responsible when AI causes harm, and affected people have accessible channels for redress and correction
For nonprofits, these principles translate into practical questions about how you use AI in your own operations and how you advocate for responsible AI on behalf of the communities you serve. A food bank using AI for inventory forecasting would demonstrate people-centered principles by explaining to clients how AI helps stretch resources further, testing whether the algorithm performs equally well across all the zip codes it serves, and maintaining human review of any decisions that affect individual clients. The same principles apply whether AI is deployed in grant reporting, program outreach, or case management.
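To make that zip-code check concrete, here is a minimal sketch, in Python with entirely hypothetical data and a made-up tolerance threshold, of the kind of disaggregated audit described above: compare an AI tool's favorable-outcome rate across the groups you serve and flag any group whose rate diverges from the overall rate.

```python
from collections import defaultdict

def disparity_report(records, group_key, outcome_key, tolerance=0.05):
    """Compare favorable-outcome rates across groups and flag large gaps.

    records: list of dicts, one per AI-assisted decision.
    group_key: field to disaggregate by (e.g. "zip_code").
    outcome_key: boolean field marking a favorable outcome.
    tolerance: maximum acceptable gap between a group and the overall rate.
    """
    totals = defaultdict(int)
    favorable = defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        favorable[r[group_key]] += bool(r[outcome_key])

    overall = sum(favorable.values()) / sum(totals.values())
    report = {}
    for group in totals:
        rate = favorable[group] / totals[group]
        report[group] = {
            "rate": round(rate, 3),
            "flagged": abs(rate - overall) > tolerance,
        }
    return overall, report

# Hypothetical audit of AI-assisted eligibility decisions by zip code
decisions = [
    {"zip_code": "60601", "approved": True},
    {"zip_code": "60601", "approved": True},
    {"zip_code": "60601", "approved": True},
    {"zip_code": "60601", "approved": False},
    {"zip_code": "60623", "approved": True},
    {"zip_code": "60623", "approved": False},
    {"zip_code": "60623", "approved": False},
    {"zip_code": "60623", "approved": False},
]
overall, report = disparity_report(decisions, "zip_code", "approved", tolerance=0.10)
# Both zip codes are flagged: one well above, one well below the overall rate.
```

A real audit would use larger samples and proper statistical tests; the point of the sketch is the habit of routine, disaggregated review rather than any particular method.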
Humanity AI's funders are particularly concerned about the gap between AI adoption and responsible readiness. While adoption rates across the nonprofit sector have climbed rapidly, the proportion of organizations with governance frameworks, equity testing protocols, and meaningful staff training remains much lower. This gap is not just a liability risk; it is a signal to funders that many nonprofits are using AI without the organizational infrastructure to do so responsibly.
Funding Opportunities for Nonprofits
The Humanity AI Initiative is the largest single philanthropic commitment to people-centered AI, but it is not the only funding opportunity nonprofits should be aware of. The broader philanthropic AI landscape has expanded significantly, and organizations that position themselves thoughtfully can access multiple streams of support.
Humanity AI Grants (2026)
First grants will be distributed in 2026 across five priority areas
Organizations with missions aligned to democracy, education, labor and work, arts and culture, or safety and security are the most natural candidates. Strong applications will likely demonstrate:
- Clear theory of change connecting AI governance to people-centered outcomes
- Authentic community voice in program design and AI deployment decisions
- Existing equity frameworks or commitment to building them
- Measurable outcomes tied to mission impact, not just efficiency gains
Other Active AI Funding Sources
Additional philanthropic and technology sector grants available now
- OpenAI Foundation: $40.5 million in unrestricted grants to 208 nonprofits, plus additional board-directed grants for health and transformative AI work
- KPMG U.S. Foundation: $6 million to support nonprofits integrating AI into operations
- Google.org AI for Social Good: $500K to $2M grants for organizations using AI on pressing social challenges
- Microsoft AI for Earth: Funding plus cloud computing access for environmental AI projects
The broader context for all of this funding is a striking structural gap in philanthropic support. Research consistently shows that the vast majority of foundations provide no AI implementation support to their grantees, and only a small fraction plan to increase this support in coming years. Organizations that approach foundations with clear AI governance frameworks and equity practices are positioning themselves for a competitive advantage as the philanthropic sector catches up to the reality that nonprofits need more than money to use AI well.
Three Roles Nonprofits Play in the Humanity AI Ecosystem
The Humanity AI Initiative creates opportunities for nonprofits to engage in three distinct but complementary roles. Understanding all three helps organizations identify where their current strengths align with funder priorities and where they may need to build new capabilities.
Role 1: Grantee and Program Implementer
Receiving funding to demonstrate people-centered AI in practice
The most direct engagement with Humanity AI comes through the grant relationship. Organizations doing direct service, policy advocacy, research, or capacity building that connects to any of the five priority areas can apply for funding to advance people-centered AI. The key is connecting your existing mission to the AI governance questions the initiative cares about most.
Nonprofits working in education can propose programs that increase equitable access to AI tools for underserved students while building community understanding of how AI shapes educational outcomes. Labor-focused organizations can fund worker education programs that help employees understand their rights in AI-transformed workplaces and advocate for job transition support. Arts nonprofits can document and protect cultural heritage while fighting for fair treatment of human artists in an age of generative content.
- Connect existing mission activities to one or more of the five focus areas
- Demonstrate that your AI use or AI programming centers community voice
- Show measurable impact beyond efficiency, focused on equity and access
Role 2: Responsible AI Practitioner
Demonstrating what good AI governance looks like in mission-driven operations
Even nonprofits that do not work directly on AI policy can align with Humanity AI's values by demonstrating responsible practices in their own AI use. Foundations are increasingly interested in how grantees treat the communities they serve when deploying AI, not just what outcomes they produce. An organization that automates client intake screening without testing for demographic bias, for example, creates the kind of AI harm that Humanity AI is designed to prevent, regardless of how beneficial its overall mission may be.
Building responsible AI practices into your operations means developing clear policies about what AI can and cannot be used for, training staff to apply those policies consistently, and regularly auditing AI-assisted decisions for unintended disparities. It means being transparent with the communities you serve about when AI is involved in decisions that affect them. It means maintaining meaningful human oversight rather than simply deferring to algorithmic outputs. These practices take time and resources to develop, but they increasingly differentiate organizations in competitive grant landscapes. You can learn more about the broader governance foundation in our article on AI policy and governance gaps.
- Develop formal AI governance policies before deploying AI in client-facing programs
- Test AI tools regularly for disparate impact on the populations you serve
- Be transparent with communities about AI use and maintain clear accountability structures
Role 3: Advocate and Accountability Agent
Representing community interests in AI policy and holding technology companies accountable
Humanity AI was created in part because major foundations believe that civil society voices are underrepresented in decisions about how AI develops. Nonprofits that work in policy advocacy, community organizing, legal services, or research are well positioned to play the accountability role that funders believe is essential. This means documenting how AI tools affect vulnerable populations, advocating for regulatory frameworks that protect communities, and challenging AI deployments in high-stakes contexts like benefits eligibility, criminal justice, and housing that lack adequate oversight.
Organizations do not need to be technology-focused to play this role effectively. Housing nonprofits that document how AI is used in tenant screening are doing exactly the kind of accountability work Humanity AI values. Labor organizations that negotiate AI use policies in collective bargaining agreements represent workers in AI governance decisions. Civil rights organizations that monitor predictive policing algorithms are defending the communities most at risk from AI bias. The through-line is not technical expertise but community trust and advocacy capacity.
- Document AI impacts on the communities you serve, especially disparate or harmful effects
- Build organizational capacity to engage AI policy debates on behalf of your constituents
- Join coalitions of organizations advocating for public-interest AI governance
How Foundations Are Evaluating AI Readiness
As Humanity AI prepares to make its first grants, the question of how foundations evaluate AI readiness in grant applications is becoming more consequential. The answer is still evolving, but the emerging direction is clear: foundations are moving from asking "do you use AI?" toward asking "do you use AI responsibly, and how do you know?"
The most forward-looking foundations are beginning to ask applicants about their AI governance frameworks, equity testing practices, and staff training investments. Some grantmakers are adopting AI-assisted tools to evaluate applicant financial health and organizational capacity, making it increasingly important for nonprofits to present clear, accurate, and well-organized data about their operations. The Patrick J. McGovern Foundation's Grant Guardian tool, for example, is being adopted by roughly 200 grantmakers to generate instant financial health reports on applicants, a development that rewards organizations with good financial documentation and data hygiene.
Questions Foundations May Ask About AI Readiness
Prepare thoughtful answers to these increasingly common evaluation criteria
- Does your organization have a formal AI governance policy? If so, how is it enforced and reviewed?
- How do you ensure AI tools you use do not produce discriminatory outcomes for the communities you serve?
- Who on your leadership team has responsibility for AI oversight, and what training have they received?
- What is your plan for data security and privacy compliance as you expand AI use?
- How will you measure mission impact from AI, not just efficiency gains?
- How are the communities you serve involved in decisions about how AI affects their experience with your organization?
Nonprofits that can answer these questions clearly and credibly are positioned as serious AI practitioners rather than organizations that have simply adopted tools without reflection. The difference matters because Humanity AI's founding premise is that the nonprofit sector can demonstrate what responsible AI looks like, creating a counterexample to the approach that prioritizes speed and scale over accountability and equity.
For organizations that are still building their AI governance frameworks, the timeline is urgent but not impossible. Starting with a clear AI policy that establishes what tools your organization uses, how data is handled, who reviews AI-generated content or decisions, and how clients can raise concerns creates a governance foundation that can grow more sophisticated over time. The AI policy governance gap article offers a practical starting point for organizations at the beginning of this process.
The Equity Imperative at the Heart of Humanity AI
No aspect of Humanity AI's agenda is more urgent, or more challenging, than the equity dimension. The initiative reflects a conviction that AI bias is not merely a technical problem to be solved by better engineering. It is a social and political problem that reflects existing inequities in data, power, and institutional design, and that the nonprofit sector is uniquely positioned to address because of its proximity to affected communities.
The stakes are particularly high for nonprofits serving vulnerable populations. Healthcare organizations that use AI to triage patients or allocate scarce resources need to know whether their algorithms perform equally well across racial, economic, and geographic lines. Housing nonprofits that use AI to assess client eligibility need to test whether those assessments reflect historical discrimination embedded in training data. Criminal justice reform organizations that engage with predictive policing or risk assessment tools need to understand the equity implications of those systems before deciding how to respond.
Known Equity Risks in Nonprofit AI
- Healthcare algorithms that deprioritize already-marginalized groups due to biased training data
- Eligibility determination systems that perpetuate historical patterns of exclusion from benefits and services
- Hiring and volunteer screening tools that reflect demographic biases in existing workforce data
- Fundraising models that systematically undervalue or overlook donors from particular communities
Equity Practices That Demonstrate Readiness
- Regular audits of AI-assisted decisions disaggregated by race, income, geography, and other relevant demographic factors
- Meaningful community input into AI deployment decisions before launch, not post-hoc consultation
- Clear processes for clients or community members to challenge AI-influenced decisions affecting them
- Ongoing staff training that addresses both technical AI capabilities and their equity implications
The challenging reality is that equity awareness is outpacing equity action across the nonprofit sector. A growing percentage of organizations express concern about AI bias, yet the proportion implementing concrete equity practices has not kept pace. This gap is exactly what Humanity AI and its founding coalition are trying to close, by funding organizations that move from awareness to practice and by building a field of knowledge about what effective equity-centered AI governance looks like in different nonprofit contexts.
Strategic Actions for Nonprofits Right Now
With Humanity AI's first grants expected in 2026 and a broader philanthropic shift toward AI accountability underway, the window to position your organization effectively is open now. The following actions build organizational capacity while aligning with funder priorities, creating compounding returns as the philanthropic AI landscape matures.
Immediate Actions (Next 90 Days)
- Conduct an AI inventory: Document every AI tool your organization currently uses, who uses it, what decisions it informs, and what data it processes. Most organizations discover they are using more AI than they realized once they do this exercise systematically.
- Draft a basic AI policy: Even a two-page policy that establishes your principles, identifies oversight responsibilities, and sets boundaries around sensitive use cases demonstrates governance commitment. Many resources exist to guide this process; the key is starting and iterating rather than waiting for a perfect policy.
- Map your mission to Humanity AI's focus areas: Identify which of the five priority areas (democracy, education, labor and work, arts and culture, safety and security) most closely align with your core programs. Develop a one-paragraph description of how your work connects to people-centered AI outcomes that you can use in grant narratives.
- Identify your AI champion: Designate someone on your leadership team as the point person for AI governance and funder relationships. This does not require technical expertise; it requires organizational authority and the time to stay current on philanthropic AI trends.
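As an illustration of the inventory step above, here is a minimal sketch, with hypothetical tools and fields, of recording every AI tool in one structured place and surfacing which ones warrant governance review first because they are client-facing or handle sensitive data:

```python
# Hypothetical AI inventory: one record per tool currently in use.
inventory = [
    {"tool": "Email draft assistant", "users": "Development team",
     "decisions": "Donor communications", "client_facing": False,
     "sensitive_data": False},
    {"tool": "Intake chatbot", "users": "Client services",
     "decisions": "Routes clients to programs", "client_facing": True,
     "sensitive_data": True},
    {"tool": "Grant-writing assistant", "users": "Programs staff",
     "decisions": "Narrative drafts", "client_facing": False,
     "sensitive_data": False},
]

def review_priorities(inventory):
    """Tools to review first: anything client-facing or touching sensitive data."""
    return [t["tool"] for t in inventory
            if t["client_facing"] or t["sensitive_data"]]

priority = review_priorities(inventory)
```

The same structure works equally well as a spreadsheet; what matters is that the inventory is complete, names who uses each tool and what decisions it informs, and makes the highest-risk uses easy to find.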
Longer-Term Strategic Investments (6 to 18 Months)
- Build equity testing into AI deployments: For every AI tool you use in client-facing work, identify what an equity audit would look like and commit to conducting one within a defined timeframe. Starting with your highest-stakes applications and working down builds accountability without creating unsustainable workload.
- Invest in staff AI literacy: The organizations pulling ahead in the nonprofit AI landscape are those that have moved beyond individual staff experimenting with tools to building shared organizational knowledge about what AI can and cannot do. Even modest investments in structured learning, shared reflection, and peer education create durable capacity that funders increasingly value.
- Measure mission impact, not just efficiency: Document cases where AI use contributed to better outcomes for the people you serve, not just faster processing or cost savings. These impact stories are what differentiate organizations in competitive grant applications and what Humanity AI funders most want to see.
- Engage in sector coalitions: Organizations that participate in peer learning networks, advocacy coalitions, and field-building conversations about responsible AI gain both knowledge and visibility. Funders pay attention to which organizations are contributing to shared knowledge in the field, not just receiving it.
These investments serve your organization beyond their value in securing Humanity AI grants. A comprehensive AI strategy that centers equity and community voice, maintains meaningful human oversight, and documents real mission impact creates organizational resilience in an environment where AI use will face increasing scrutiny from funders, regulators, and the communities you serve.
The Bigger Picture: A Sector-Defining Moment
Humanity AI is not just a grant program. It represents a diagnosis and a bet: a diagnosis that AI development has been shaped too narrowly by technology company interests, and a bet that coordinated philanthropic investment in civil society can produce a more equitable and accountable alternative. Whether or not that bet pays off will depend significantly on what nonprofits do with the opportunity.
The nonprofit sector has a credibility that technology companies lack in many communities. Organizations that have spent decades building trust with vulnerable populations, advocating for policy change, and measuring impact against mission rather than shareholder returns are exactly the kind of institutions that should be shaping AI governance. Humanity AI is making a major investment in creating space for those institutions to exercise that influence. The question for nonprofit leaders is whether their organizations are ready to step into that role.
The organizations that will benefit most from this philanthropic moment are those that see responsible AI not as a compliance burden but as a genuine expression of their values. For an organization whose mission is to serve communities fairly and effectively, committing to equitable AI practices is not extra work. It is an extension of the same accountability and community-centeredness that defines good nonprofit practice in every other domain.
The strategic gap between AI adoption and AI impact across the sector is real, but it is also an opening. Nonprofits that invest now in building the governance frameworks, equity practices, and measurement systems that funders like Humanity AI are looking for will not just access more funding. They will demonstrate that AI can serve people and communities as effectively as it serves the technology companies that created it.
Conclusion
The Humanity AI Initiative marks a turning point in how philanthropy approaches artificial intelligence. By committing $500 million across five years to build people-centered AI development, ten of the world's most influential foundations are signaling that the nonprofit sector has a critical role to play, not just as a beneficiary of AI tools but as an architect of how AI develops in ways that serve communities rather than exploit them.
For nonprofit leaders, the most important action is clarity about which role your organization is best positioned to play. Whether you pursue Humanity AI grants directly, build internal practices that demonstrate responsible AI deployment, or develop your capacity to advocate for AI accountability on behalf of the communities you serve, the window to position your organization is open now. The first grants will be distributed in 2026, and the foundations making those decisions are looking for organizations that have already done the work of becoming serious AI practitioners.
The philanthropic sector's pivot toward AI accountability is accelerating. Nonprofits that engage proactively with this shift, building governance capacity, investing in equity practices, and connecting AI use to measurable mission impact, will find themselves better positioned not just for Humanity AI funding but for the entire emerging landscape of philanthropic investment in responsible technology. The choice between waiting to see how this plays out and building readiness now is itself a strategic decision with compounding consequences.
Ready to Build Your AI Governance Framework?
Position your nonprofit to access Humanity AI funding and meet the growing expectations of funders who care about responsible AI. Our team helps organizations develop governance frameworks, equity practices, and AI strategies that align with philanthropic priorities.
