64% Know About AI Bias, 36% Act on It: Closing the Equity Practice Gap
The nonprofit sector has learned to talk about AI bias. What it has not yet learned is how to systematically act on that knowledge. Understanding the specific barriers between awareness and practice, and the concrete steps that bridge them, separates organizations that use AI responsibly from those that inadvertently harm the communities they serve.

Something troubling is happening in the nonprofit sector's relationship with AI equity. Awareness of AI bias is rising steadily: in recent sector research, 64% of nonprofit professionals reported familiarity with AI bias, up significantly from prior years. Yet the percentage of organizations actively implementing equity practices in their AI work has declined over the same period, dropping to just 36%. More awareness. Less action. The gap between what nonprofit professionals know and what their organizations do is widening precisely as AI adoption accelerates.
This is not a knowledge problem. Nonprofit leaders understand, at least in general terms, that AI systems trained on biased data can produce biased outputs. They know that algorithms used to allocate services, screen job candidates, or prioritize donor outreach can embed and amplify existing inequities. More than half of nonprofits surveyed report concern that AI could harm marginalized communities. The sector is not ignorant of the risk. It is failing to translate that risk awareness into organizational practice.
The consequences are not abstract. Nonprofits exist to serve communities, often including the most vulnerable and historically marginalized populations. When an AI-assisted case management system systematically underserves certain demographic groups, when an AI donor scoring model reflects historical giving patterns shaped by racial wealth gaps, when an AI hiring tool screens out applicants with non-Western name formats, these are not theoretical harms. They represent failures of mission that can undermine decades of trust-building with the communities an organization exists to support.
This article examines why the awareness-to-action gap exists, what specific barriers prevent equity practice in nonprofit AI, and what concrete steps organizations can take to close the gap. The goal is not to induce paralysis about AI adoption, but to help nonprofit leaders build the habits, structures, and accountability mechanisms that make AI use genuinely consistent with their equity commitments.
Why Awareness Doesn't Translate to Action
Several interconnected factors explain why nonprofit organizations that sincerely care about equity still fail to implement equity practices in their AI work. Understanding these barriers is the first step toward designing interventions that actually work.
Vague Awareness Without Concrete Knowledge
Knowing that "AI can be biased" is very different from understanding specifically how to evaluate whether a particular tool used by your organization exhibits bias in a way that affects your beneficiaries. Many nonprofit leaders can articulate the concept but cannot name a single concrete evaluation step they would take to assess a new AI tool for equity problems.
When awareness is general rather than specific, it tends to produce anxiety rather than action. People defer to someone else who presumably knows more, wait for guidance that hasn't arrived, or reassure themselves that the tools they're using probably aren't the problematic kind.
No Clear Ownership or Accountability
Most nonprofits have not assigned responsibility for AI equity to any specific person, team, or role. When equity concerns about an AI tool arise, there is no established process for who evaluates the concern, what criteria they use, or what authority they have to modify or discontinue the tool's use.
Without clear ownership, AI equity issues tend to get discussed in general terms and then shelved, with everyone assuming that someone else is handling it. This is not negligence but a structural gap that allows individual commitment to equity to disconnect from organizational practice.
The Complexity Illusion
AI bias auditing is sometimes presented as requiring technical expertise that most nonprofits don't have: statistical testing, explainable AI tools, algorithmic fairness metrics. This technical framing leads many organizations to conclude that responsible AI equity practice is beyond their reach without specialized staff.
In reality, many of the most important equity questions nonprofits should ask about their AI tools are not technical at all. They are questions about who was involved in building the tool, what populations were represented in the training data, what the tool's outputs mean for different demographic groups, and what recourse exists when the tool produces harmful results.
Missing Community Voice
Equity practice in AI is fundamentally about whose interests and values shape how a technology is built and deployed. When the communities most affected by an AI tool are not consulted about its design, implementation, or outcomes, the result is almost inevitably that the tool serves the interests of the organization deploying it rather than the communities it touches.
Most nonprofits do not have established mechanisms for community input into technology decisions. Building those mechanisms takes time and intentionality that tends to get crowded out by implementation timelines and operational demands.
There is also a deeper structural issue: the 36% of nonprofits implementing equity practices are not evenly distributed across the sector. Organizations with larger budgets, staff with data science backgrounds, or explicit DEI mandates from funders are more likely to have equity practices in place. Smaller organizations, those serving rural communities, and those operating in sectors where AI adoption has been slower are less likely to have them. The equity practice gap is itself inequitably distributed, which means that the communities served by under-resourced nonprofits are at elevated risk of AI-related harms.
What AI Equity Practice Actually Looks Like
The phrase "AI equity practice" can sound like something that requires a dedicated ethics team and a multi-month audit process. For most nonprofits, the practical starting point is much simpler: building specific questions into existing decision-making processes and establishing a minimal baseline of review before deploying AI tools that affect service delivery or beneficiary outcomes.
Oxfam International's January 2025 submission to the UN Working Group on Business and Human Rights offers a useful framing. Grounding AI safeguards in the UN Guiding Principles on Business and Human Rights, Oxfam argues that AI governance for international nonprofits should be rooted in fairness, accountability, and transparency, and should remain attentive to the diverse cultural contexts in which technology operates. This rights-based framing is valuable because it anchors AI ethics in existing human rights commitments that most nonprofits already claim, rather than treating AI ethics as a separate technical domain requiring new expertise.
For most nonprofits, equity practice means three concrete things: asking the right questions before adopting a tool, monitoring outputs for disparate impacts after deployment, and establishing a process for responding when problems are identified. None of these requires a data scientist. All of them require intentionality and organizational commitment.
Pre-Adoption Equity Questions for AI Tools
Ask these before deploying any AI tool that affects beneficiaries, staff, or service decisions
About the Tool's Design
- What populations were represented in the training data?
- Was the tool designed with input from the communities it affects?
- Has the developer published equity or bias testing results?
- What languages, dialects, or cultural contexts does it perform poorly for?
About the Tool's Use in Your Context
- Which demographic groups in our community would this affect?
- What is the consequence if this tool gets it wrong for a beneficiary?
- Is there a human review step before the tool's output affects anyone?
- How would an affected community member appeal or challenge a decision?
These questions do not require technical expertise to ask, but they require organizational discipline to make routine. The organizations that have made the most progress on AI equity have embedded these questions into their technology adoption process, not as a separate equity review, but as part of standard due diligence when evaluating any new tool that touches beneficiary or staff data.
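One lightweight way to make the checklist routine is to capture each tool's answers in a structured record that surfaces unanswered questions before adoption proceeds. The Python sketch below is illustrative only; the question keys, field names, and example tool are assumptions, not part of any published framework.

```python
# A minimal sketch of tracking pre-adoption equity answers in structured
# form. Question keys, field names, and the example tool are illustrative
# assumptions, not a published standard.
from dataclasses import dataclass, field
from datetime import date

# The eight pre-adoption questions above, as short keys.
REQUIRED_QUESTIONS = (
    "training_data_populations",
    "community_involvement_in_design",
    "published_bias_testing",
    "known_weak_contexts",
    "affected_groups_locally",
    "consequence_of_error",
    "human_review_step",
    "appeal_mechanism",
)

@dataclass
class EquityReview:
    tool_name: str
    reviewer: str
    review_date: date
    answers: dict = field(default_factory=dict)  # question key -> documented answer

    def unanswered(self):
        """Questions with no documented answer; adoption should wait on these."""
        return [q for q in REQUIRED_QUESTIONS if not self.answers.get(q)]

review = EquityReview(
    tool_name="Client triage assistant",
    reviewer="Program Director",
    review_date=date.today(),
    answers={"human_review_step": "A caseworker approves every recommendation."},
)
print(review.unanswered())  # the seven questions still needing documented answers
```

A shared spreadsheet serves the same purpose; the point is that unanswered questions become visible and documented rather than implicit.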
Monitoring for disparate impact after deployment is the second pillar of equity practice. This means regularly examining whether the AI tool's outputs differ systematically across demographic groups in ways that disadvantage certain populations. For a client referral algorithm, that might mean checking whether referrals are equally distributed across racial groups or whether certain groups are systematically referred to lower-quality services. For a donor prospect scoring model, it might mean examining whether the model effectively ignores donors from communities with lower historical giving rates even when those donors have genuine capacity and interest.
You do not need sophisticated statistical tools to conduct basic disparate impact monitoring. If your organization collects demographic data on the populations it serves (and many do for reporting purposes), you can segment AI-influenced outcomes by demographic category and examine whether meaningful differences exist. When they do, that is a signal requiring investigation, not necessarily a finding of bias, but a reason to look more carefully at what the tool is doing and why.
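As a concrete illustration of that segmentation, here is a minimal Python sketch, assuming your outcome data can be exported with a demographic field. The file name and column names are hypothetical, and the 80% screening threshold borrows the "four-fifths rule" heuristic from U.S. employment practice; treat any flag as a prompt to investigate, not a verdict.

```python
# Minimal disparate impact screen. File and column names are hypothetical;
# adapt to whatever demographic and outcome fields your organization tracks.
import pandas as pd

df = pd.read_csv("referral_outcomes.csv")  # one row per AI-influenced decision

# Favorable-outcome rate (e.g., referral approved) per demographic group.
rates = df.groupby("demographic_group")["referral_approved"].mean()

# Four-fifths heuristic: flag groups whose rate falls below 80% of the
# best-served group's rate. A flag signals "look closer", not "biased".
flagged = rates[rates < 0.8 * rates.max()]

print(rates.round(3).to_string())
if not flagged.empty:
    print("Investigate:", ", ".join(flagged.index))
```

Running this quarterly, and keeping the printed output alongside the decision log, is enough to turn monitoring from an aspiration into a habit.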
Building Structural Accountability for AI Equity
Individual commitment to equity, however genuine, does not produce consistent organizational practice without structural support. The organizations that have successfully moved from awareness to action have typically done so by building accountability into their processes rather than relying on individuals to remember to ask equity questions when implementing new technology.
The most effective structural intervention is designating someone with explicit responsibility for AI equity review. This does not require creating a new role or hiring a specialist. It means identifying an existing staff member, ideally someone with both mission understanding and reasonable comfort with technology, and giving them clear authority and time to conduct pre-adoption equity reviews, monitor for disparate impacts, and escalate concerns. In larger organizations, this might be a working group that includes program staff, communications, and leadership. In smaller ones, it might be a single person with a defined process.
Assign Ownership
Designate a specific person or team responsible for AI equity review. Without clear ownership, accountability dissipates.
- Name who reviews new AI tools
- Define their authority to raise concerns
- Protect time for this work
Build Processes
Embed equity questions into technology adoption workflows so they happen routinely rather than only when someone remembers.
- Add equity checklist to vendor evaluation
- Require quarterly disparate impact review
- Document decisions and rationale
Involve Community
Create pathways for the communities you serve to provide input on technology decisions and report concerns about AI-influenced outcomes.
- Include community voice in AI governance
- Create accessible feedback mechanisms
- Share what you learn and what you change
Community voice in AI governance deserves particular attention because it is both most important and most consistently absent. Organizations that engage the communities they serve in decisions about how AI is used in service delivery build something more valuable than a compliance checklist: they build the kind of trust and feedback loops that surface equity problems early, before they cause significant harm. This does not mean convening a community advisory board for every technology decision. It means having existing channels (client advisory groups, community listening sessions, feedback mechanisms in service delivery) that are explicitly empowered to surface concerns about how technology affects community members.
There is a meaningful parallel between the work of building AI equity practice and the broader work of building equitable organizations. Both require moving from stated values to embedded practices, from individual commitment to structural accountability, from reactive crisis management to proactive design. Organizations that have already done significant work on equity in hiring, leadership, and program design tend to find AI equity work conceptually familiar even if the specific technical dimensions are new. For organizations earlier in their equity journey, AI equity practice can actually be an entry point for broader conversations about whose voices shape organizational decisions.
Tools and Frameworks for Nonprofit AI Equity
Several organizations have developed frameworks and tools specifically designed to help nonprofits move from AI bias awareness to equity practice. These resources range from conceptual frameworks to practical checklists and technical tools, and most are free for nonprofit use.
Vera Solutions: Nine Principles of Responsible AI
Nonprofit-specific framework for ethical AI implementation
Vera Solutions has published nine principles of responsible AI specifically tailored for nonprofits, addressing transparency, accountability, privacy, and equity in accessible language that does not assume technical expertise.
- Grounded in nonprofit operational context
- Addresses power dynamics in AI deployment
- Free, publicly available framework
ORCAA and Eticas.ai: Independent AI Auditing
Third-party audit services for organizations ready to go deeper
For organizations deploying AI tools that significantly affect beneficiary outcomes, independent bias audits provide an external perspective that internal review cannot fully replace. ORCAA and Eticas.ai both offer audit services focused on algorithmic accountability.
- Statistical testing for disparate impact
- Explainable AI root-cause analysis
- Recommendations for bias reduction
For organizations that want to assess their AI tools systematically without engaging an external auditor, the Algorithmic Justice League publishes educational resources and advocacy tools that help organizations understand and communicate about AI bias. The Center on Race and Digital Justice focuses specifically on how AI affects communities of color and offers frameworks for rights-centered technology evaluation that are directly applicable to nonprofits serving these communities.
It is worth acknowledging that some AI tools present higher equity stakes than others. An AI tool that generates social media captions presents different risks than one that determines which clients are prioritized for housing assistance, which job candidates advance in your hiring process, or which donors are approached for major gifts. Organizations should calibrate the depth of their equity review to the stakes involved. Not every AI tool requires an independent audit. But every AI tool that significantly affects the life chances of people in marginalized communities deserves serious scrutiny before and after deployment.
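One way to make that calibration explicit is a simple tiering rule applied when any new tool enters the evaluation pipeline. The sketch below is a hypothetical starting point; the criteria and tier names are assumptions an organization would adapt to its own context.

```python
# Illustrative risk tiering for AI tools; criteria and tiers are assumptions.
def review_tier(affects_life_outcomes: bool,
                fully_automated: bool,
                serves_marginalized_groups: bool) -> str:
    """Suggest a depth of equity review from a tool's characteristics."""
    if affects_life_outcomes and (fully_automated or serves_marginalized_groups):
        return "deep: pre-adoption questions, ongoing monitoring, consider an independent audit"
    if affects_life_outcomes or serves_marginalized_groups:
        return "standard: pre-adoption questions plus periodic disparate impact review"
    return "light: pre-adoption questions only"

# A caption generator and a housing-prioritization tool land in different tiers.
print(review_tier(False, False, False))  # light
print(review_tier(True, True, True))     # deep
```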
The question of AI governance policy is also relevant here. Organizations that have written AI policies, even simple ones, are more likely to have equity practices in place because the policy development process forces the organization to articulate what it cares about and what commitments it is making. If your organization does not yet have an AI policy, developing one in partnership with staff, leadership, and community stakeholders is itself an equity practice: it creates a shared foundation of values and expectations that makes subsequent decisions about specific tools easier and more consistent. Our article on bridging the AI governance gap provides practical guidance on building that foundation.
Sector-Specific AI Equity Considerations
High-stakes contexts where AI equity practice is most critical
Service Delivery AI
- Client prioritization and triage algorithms
- Housing and resource matching systems
- Case management recommendation tools
Organizational AI
- Recruiting and applicant screening tools
- Donor prospect scoring and wealth screening
- Communications personalization algorithms
The Mission Coherence Argument for AI Equity Practice
There is sometimes a tendency to frame AI equity practice as a compliance concern, something organizations do to avoid criticism or regulatory risk. This framing misses the deeper point. For mission-driven organizations, AI equity practice is not primarily about risk management. It is about mission coherence.
If an organization's mission is to reduce housing insecurity for people experiencing poverty, and it deploys an AI tool that systematically deprioritizes people from certain racial or ethnic backgrounds for assistance, it is not merely violating an equity principle in the abstract. It is failing its mission. The organization exists to serve those people, and its AI tool is working against that purpose. This is not a reputational problem or a policy compliance problem. It is a fundamental organizational failure.
The most compelling argument for closing the AI equity practice gap is not that bias is wrong in principle (though it is) or that regulatory pressure is increasing (though it is). It is that mission-driven organizations cannot afford to use tools that undermine their mission. Every nonprofit that works with marginalized communities and deploys AI without equity practice is taking on mission risk that should be unacceptable to its leadership, its board, its funders, and its community.
The good news is that closing the practice gap does not require resources most nonprofits do not have. It requires asking specific questions before adopting tools, building simple monitoring processes after deployment, designating clear ownership for equity review, and creating genuine channels for community voice. These are habits, not capabilities. They can be developed through intentional practice rather than technical investment. Organizations that build these habits now will be better positioned to use AI effectively and responsibly as both the technology and the regulatory landscape continue to evolve rapidly. For a broader look at how to build an AI-ready organization, our guide to the nonprofit AI maturity curve provides a useful framework for assessing where your organization stands and what comes next.
From Awareness to Accountability
The widening gap between AI bias awareness and equity practice in the nonprofit sector reflects something familiar in organizational change: the distance between knowing something is important and building the structures that ensure it happens consistently. Awareness is necessary but not sufficient. It does not produce equity. Practice does.
Closing the practice gap requires moving from general concern about AI bias to specific organizational commitments: naming who is responsible for equity review, building equity questions into technology evaluation processes, monitoring for disparate impact after deployment, and creating genuine mechanisms for community voice. These are achievable for most nonprofits without hiring specialists or commissioning expensive audits.
The organizations that will use AI most responsibly in the coming years are not necessarily the ones with the largest budgets or the most technical staff. They are the ones that have built AI equity into their organizational culture, treating it as an expression of mission commitment rather than a regulatory burden. For organizations that exist to serve marginalized communities, there is no more important AI investment than making sure that the tools deployed in service of that mission actually work for everyone they are intended to help.
Ready to Build AI Equity Practice in Your Organization?
One Hundred Nights helps nonprofits develop AI governance frameworks that align with their equity commitments and mission. We can help you move from awareness to practice.
