
    Oxfam's Rights-Based AI Framework: A Model for Social Justice Organizations

    Most AI ethics discussions stay at the level of abstract principles. Oxfam and a growing coalition of human rights organizations are demanding something more concrete: enforceable accountability grounded in international human rights law. Here is what their approach means for your organization.

    Published: March 8, 2026 · 14 min read · AI Ethics & Equity

    When Oxfam published its 2025 research on AI deployment in the Middle East and North Africa, the findings were difficult to ignore. Across the region, AI technologies were deepening existing inequalities rather than reducing them. Facial recognition systems were targeting activists. Predictive policing tools were amplifying discriminatory enforcement patterns. Smart city infrastructure was being used to monitor and restrict the movements of women and LGBTQIA+ people. The paper's conclusion was clear: voluntary ethics commitments were not sufficient to prevent these harms.

    This is the central tension that rights-based AI frameworks are designed to resolve. Social justice organizations, more than most nonprofit actors, operate in direct contact with communities who bear the highest risks from poorly deployed AI. They also frequently have the deepest relationships with those communities, the clearest mandate to protect their rights, and the most direct stake in ensuring AI tools advance rather than undermine their missions. The question is no longer whether to engage with AI, but how to do so in a way that is consistent with the values that drive the work.

    Oxfam's approach, developed in parallel with frameworks from Amnesty International, the International Committee of the Red Cross, NetHope, and other humanitarian and human rights organizations, offers a practical model. It moves beyond the abstract language of fairness and transparency that characterizes most AI ethics discourse and grounds accountability in binding international human rights law. It insists that communities most affected by AI must govern it, not merely be consulted about it. And it provides concrete tools for organizations to assess, audit, and challenge the AI systems that touch their work and the populations they serve.

    This article examines what a rights-based approach to AI means in practice, how leading organizations are implementing it, and what steps social justice nonprofits can take to adopt these principles in their own operations. If your organization is already thinking about how to get started with AI, this framework provides the ethical foundation that should underpin every decision you make.

    Ethics vs. Rights: Why the Distinction Matters

    The AI industry has produced an enormous volume of ethics guidelines, responsible AI principles, and fairness frameworks over the past several years. Most major technology companies have published versions of these commitments. Many nonprofit sector bodies have developed their own variations. At first glance, this proliferation might look like progress. Human rights organizations, however, have raised a fundamental concern: voluntary ethics commitments are not enforceable, and they have not prevented documented harms.

    Amnesty International's position captures this critique directly. In its February 2025 statement at the AI Action Summit in Paris, the organization called for "binding and enforceable regulation to curb AI-driven harms," explicitly distinguishing this from the abstract ethics discussions that have failed to prevent real-world harm. Its Algorithmic Accountability Toolkit, published in December 2025, provides a practical complement to this advocacy: concrete methods for investigating and seeking accountability for algorithmic harms in welfare systems, policing, healthcare, and education across multiple countries.

    The rights-based approach differs from ethics-only frameworks in several ways. Rights are not aspirational; they are legal claims grounded in binding instruments like the International Covenant on Civil and Political Rights, the Convention on the Rights of the Child, and regional human rights treaties. Rights create corresponding duties for states, corporations, and organizations that deploy systems affecting people. Rights can be enforced through courts, regulatory bodies, and international mechanisms. And rights belong to specific people in specific circumstances, not to the abstract "users" and "stakeholders" that ethics frameworks typically address.

    Ethics-Only Approaches

    What voluntary frameworks typically offer

    • Aspirational principles (fairness, transparency, accountability)
    • Self-assessment and internal governance
    • Consultation with affected communities
    • Reputational rather than legal accountability

    Rights-Based Approaches

    What enforceable frameworks add

    • Legal rights grounded in international human rights law
    • Enforceable duties for deploying organizations
    • Community governance with real decision-making authority
    • Judicial, regulatory, and advocacy enforcement pathways

    What Oxfam's Framework Actually Looks Like

    Oxfam's work on AI does not take the form of a single policy document. Instead, it reflects an analytical approach developed through concrete research and refined in dialogue with international governance bodies. The October 2025 MENA research paper, for instance, does not simply document harms. It provides a three-part analytical framework organizing AI-related challenges into co-optation (where AI tools are used to advance harmful agendas), engagement (where organizations find ways to use AI within constrained environments), and resistance (where communities and civil society push back against harmful applications).

    Oxfam's January 2025 submission to the UN Working Group on Business and Human Rights extends this analysis into governance recommendations, arguing for AI frameworks grounded in fairness, accountability, and transparency as understood through international human rights law rather than corporate self-governance. This submission aligns Oxfam with a broader movement in the humanitarian and human rights sector that includes the ICRC's Human Rights Impact Assessment protocols, NetHope's Humanitarian AI Code of Conduct, and Vera Solutions' nine-principle framework for mission-aligned AI use.

    For practical adoption by social justice nonprofits, the Oxfam approach can be distilled into four operational commitments: mission alignment as the first filter for any AI adoption decision, human rights impact assessment as a required step before deployment, meaningful community governance rather than tokenistic consultation, and active policy engagement to strengthen external accountability frameworks. Each of these commitments has concrete organizational implications that go well beyond aspirational principle-setting.

    Mission Alignment First

    Before adopting any AI tool, ask whether it advances the rights of the communities you serve. This single question filters out many commonly promoted tools that may be neutral or beneficial in corporate settings but carry specific risks in social justice contexts. A minimal sketch of what such a review gate could look like follows the list below.

    • Map AI use cases to specific programmatic goals
    • Explicitly prohibit uses that could harm beneficiaries
    • Require mission alignment review before any new tool adoption
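
    To make the review requirement concrete, here is one hypothetical way to encode an adoption gate in Python. Everything in it, the class and field names, the prohibited-use entries, and the approval logic, is an illustrative assumption rather than part of Oxfam's framework; a real gate would reflect your own policy and prohibited-use list.

```python
from dataclasses import dataclass, field

# Hypothetical structure for a tool adoption request; field names are
# illustrative, not drawn from any published framework.
@dataclass
class ToolAdoptionRequest:
    tool_name: str
    vendor: str
    intended_use: str
    mapped_program_goals: list[str] = field(default_factory=list)
    touches_beneficiary_data: bool = False

# Placeholder entries; the high-risk list later in this article is a
# natural source for an organization's real prohibition list.
PROHIBITED_USES = {
    "facial recognition",
    "predictive risk scoring",
    "social media surveillance",
}

def mission_alignment_review(req: ToolAdoptionRequest) -> tuple[bool, str]:
    """Return (approved, reason). A request fails if it maps to no
    programmatic goal or matches a prohibited use."""
    use = req.intended_use.lower()
    if any(term in use for term in PROHIBITED_USES):
        return False, "Intended use matches an explicitly prohibited category."
    if not req.mapped_program_goals:
        return False, "No programmatic goal mapped; mission alignment unclear."
    if req.touches_beneficiary_data:
        return True, "Approved, pending a human rights impact assessment."
    return True, "Approved."
```

    The useful property of even a toy gate like this is that it forces every adoption request to state a programmatic goal and to be checked against an explicit prohibition list before anyone evaluates features or price.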

    Human Rights Impact Assessment

    HRIAs are adapted from environmental impact assessment methodology and apply a human rights lens to AI deployment decisions. They are practical tools, not theoretical exercises, and they produce actionable findings about who is affected and how. One way to structure those findings is sketched after the list below.

    • Identify which rights are implicated by the system
    • Assess differential impacts across demographic groups
    • Establish mitigation measures and monitoring commitments
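
    As a sketch of how those findings might be recorded, the structure below is a minimal, assumed schema rather than a published HRIA template; the field names and the 90-day monitoring default are illustrative.

```python
from dataclasses import dataclass, field

# Illustrative HRIA record, loosely following the three steps listed
# above; not an official template from Oxfam, the ICRC, or anyone else.
@dataclass
class HumanRightsImpactAssessment:
    system_name: str
    # Rights implicated by the system (e.g., privacy, non-discrimination),
    # ideally named against specific instruments like the ICCPR.
    implicated_rights: list[str] = field(default_factory=list)
    # Differential impacts, keyed by demographic group.
    differential_impacts: dict[str, str] = field(default_factory=dict)
    # Mitigation measures paired with a named owner, so each
    # commitment is traceable to someone accountable.
    mitigations: list[tuple[str, str]] = field(default_factory=list)
    monitoring_interval_days: int = 90

    def blocks_deployment(self) -> bool:
        """Deployment should not proceed while any section is empty."""
        return not (self.implicated_rights
                    and self.differential_impacts
                    and self.mitigations)
```

    Keeping the assessment in a structured record rather than a narrative document makes it straightforward to block deployment automatically while any section is incomplete.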

    Meaningful Community Governance

    Rights-based frameworks insist that affected communities must govern AI systems that affect them, not merely be consulted. This requires structural mechanisms, not just feedback loops; see the sketch after this list for one way to express that structurally.

    • Formal community representation in AI governance bodies
    • Real decision-making authority, including veto power over harmful uses
    • Accessible grievance mechanisms for those affected by AI decisions
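
    What distinguishes governance from consultation can be expressed in a few lines of decision logic. The rule below, where any community representative can block a proposed use outright, is an illustrative assumption about how veto power might work, not a standard drawn from the frameworks discussed here.

```python
from dataclasses import dataclass

@dataclass
class Vote:
    member: str
    is_community_representative: bool
    approves: bool

def governance_decision(votes: list[Vote]) -> bool:
    """Approve only if a majority approves AND no community
    representative objects: structural authority, not consultation."""
    community = [v for v in votes if v.is_community_representative]
    if not community:
        return False  # no community representation, no decision
    if any(not v.approves for v in community):
        return False  # a community veto blocks the use outright
    return sum(v.approves for v in votes) > len(votes) / 2
```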

    Active Policy Engagement

    Internal governance is necessary but not sufficient. Rights-based organizations also engage external policy processes to strengthen the legal and regulatory environment for everyone.

    • Document and report AI harms through established channels
    • Participate in regulatory comment processes
    • Engage in coalitions advocating for binding AI accountability

    Understanding the Stakes: Where AI Has Caused Harm

    A rights-based approach is not theoretical caution. It is a response to documented, ongoing harms that affect specific communities. Social justice organizations need to understand this landscape not to be paralyzed by it, but to make informed decisions about where AI can safely advance their work and where it poses unacceptable risks.

    Facial recognition technology illustrates the problem clearly. Research has consistently shown that these systems perform significantly less accurately on darker-skinned faces, with Black individuals accounting for a substantial proportion of wrongful arrests attributable to faulty facial recognition matches. For organizations working with communities that have historically experienced discriminatory policing, adopting or advocating for facial recognition systems represents a direct contradiction of mission, regardless of the intended application.

    Predictive policing systems compound this problem by embedding historical discrimination into algorithmic outputs. When these systems learn from arrest records shaped by decades of racially biased enforcement, they produce predictions that systematically over-flag Black and Latino neighborhoods, generating a feedback loop that appears data-driven but is actually replicating human bias at scale. Amnesty International has documented specific cases of this dynamic across multiple jurisdictions.

    Welfare systems are another high-stakes domain. Amnesty's research into Denmark's welfare AI system documented how algorithmic tools used to assess benefit eligibility created mass surveillance risks and systematically disadvantaged disabled people, low-income individuals, migrants, and refugees. Similar concerns have been documented in welfare systems in the United Kingdom, Australia, and the United States. For nonprofits that advocate for or provide services to any of these populations, these findings have direct operational relevance.

    At the same time, AI has demonstrated genuine potential to advance social justice missions when deployed thoughtfully. Learning Equality matched thousands of educational resources with relevant categories in Uganda at a scale impossible without AI assistance. Digital Green has used AI to deliver agricultural information to smallholder farmers in their own languages. Legal aid organizations are using natural language processing to accelerate document review and expand access to legal help. The rights-based framework is not anti-AI; it is pro-accountability.

    High-Risk AI Use Cases for Social Justice Organizations

    These applications warrant heightened scrutiny and often explicit prohibition; a sketch of turning the list into a machine-checkable screen follows it.

    • Facial recognition technology, particularly for identifying or tracking individuals
    • Predictive risk scoring that determines service eligibility for vulnerable populations
    • Social media monitoring or surveillance of community members
    • Sharing beneficiary data with AI vendors who may use it for model training
    • AI systems making consequential decisions without meaningful human review
    • Tools that aggregate location, behavioral, or identity data on community members
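
    One way to operationalize a list like this is to encode it as a screening step in the adoption review sketched earlier. The category names and keyword matching below are illustrative assumptions; real intake should use structured questions, since string matching is both easy to evade and easy to over-trigger.

```python
# Encoding the high-risk list above as a simple screen. Categories and
# keywords are illustrative placeholders, not a vetted taxonomy.
HIGH_RISK_CATEGORIES = {
    "facial_recognition":      ["facial recognition", "face matching"],
    "predictive_risk_scoring": ["risk scoring", "eligibility prediction"],
    "community_surveillance":  ["social media monitoring", "surveillance"],
    "vendor_training_on_data": ["model training", "data sharing"],
    "no_human_review":         ["fully automated decision"],
    "identity_aggregation":    ["location tracking", "behavioral profiling"],
}

def flag_high_risk(use_description: str) -> list[str]:
    """Return the high-risk categories a proposed use appears to touch."""
    text = use_description.lower()
    return [category for category, keywords in HIGH_RISK_CATEGORIES.items()
            if any(k in text for k in keywords)]

print(flag_high_risk("Vendor retains chat logs for model training"))
# -> ['vendor_training_on_data']
```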

    The Broader Ecosystem of Rights-Based AI Frameworks

    Oxfam's approach is part of a wider movement that social justice organizations can draw on for practical tools, standards, and community. Understanding this landscape helps organizations avoid reinventing frameworks that already exist and enables participation in sector-wide accountability initiatives.

    Amnesty International's Algorithmic Accountability Toolkit, released in December 2025, provides detailed guidance for investigating algorithmic harms across six domains: welfare and social protection, policing and criminal justice, healthcare, education, migration and asylum, and employment. The toolkit covers practical methods including freedom of information requests, technical audits, community testimony collection, and strategic litigation. For organizations that work in any of these domains, this is a direct operational resource.

    NetHope's Humanitarian AI Code of Conduct represents a sector-specific governance standard designed for organizations delivering programs in complex, resource-constrained settings. Developed in collaboration with humanitarian organizations operating in conflict-affected and disaster-affected contexts, it addresses the particular challenges of data protection, community consent, and vendor accountability in environments where regulatory protections may be weak or absent. NetHope has complemented this with a Gender Equitable AI Toolkit that specifically addresses how AI systems can reproduce or amplify gender inequalities.

    Vera Solutions, a technology provider serving the impact sector, has published a nine-principle framework covering ethics and community values, transparency, fairness, privacy, environmental impact, robustness, accountability, human oversight, and social value. This framework is designed to be operational rather than aspirational, with each principle connected to specific practices and questions that organizations can use in AI procurement, implementation, and review processes.

    At the international level, the Council of Europe's Framework Convention on AI entered into force as the first binding international human rights treaty specifically addressing AI governance. For organizations that operate internationally or engage in international advocacy, the convention represents a significant new accountability lever, and it is part of the broader regulatory landscape that nonprofits operating in global contexts need to understand.

    Amnesty International

    Algorithmic Accountability Toolkit (Dec 2025)

    Practical investigation methods for documenting and challenging algorithmic harms across welfare, policing, healthcare, education, and other high-stakes domains.

    NetHope

    Humanitarian AI Code of Conduct

    Sector-specific governance standards for humanitarian organizations, including specialized guidance on gender equity and AI training programs.

    Council of Europe

    Framework Convention on AI

    The first binding international human rights treaty specifically addressing AI governance. Applicable to signatory states and increasingly influential globally.

    Practical Implementation: Where to Start

    Adopting a rights-based AI framework does not require beginning with a comprehensive governance overhaul. Most organizations will be better served by a phased approach that builds on existing commitments while progressively deepening accountability structures. The following sequence reflects the priorities identified by Oxfam, NetHope, and other organizations that have navigated this process.

    The first step is clarity about what AI is already in use. Many organizations are surprised to discover how widely AI tools have already been adopted informally by staff using freely available consumer applications. Conducting an AI audit, mapping every AI system touching organizational operations and beneficiary data, provides the baseline information needed for any subsequent governance work. This audit should cover not just tools purchased by the organization but any AI features embedded in existing software subscriptions, including CRM systems, fundraising platforms, and communication tools. Organizations that have already built foundational knowledge management systems may find this process smoother because they have existing documentation practices to build on.
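
    A spreadsheet is enough to get the audit started. The sketch below shows one assumed inventory schema in Python; every column name is illustrative, and the point is simply that each tool or embedded AI feature gets a row with an accountable owner.

```python
import csv

# Hypothetical inventory schema for an AI audit; one row per tool or
# per AI feature embedded in an existing subscription.
FIELDS = [
    "tool_name", "vendor", "where_used",
    "ai_feature",                     # what the AI actually does
    "touches_beneficiary_data",       # yes / no
    "data_used_for_vendor_training",  # yes / no / unknown
    "human_review_in_place",          # yes / no
    "owner",                          # staff member accountable for the entry
]

def write_inventory(rows: list[dict], path: str = "ai_inventory.csv") -> None:
    """Write audit findings to a CSV the governance lead can review."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)

write_inventory([{
    "tool_name": "ExampleCRM",        # hypothetical tool and vendor
    "vendor": "Example Inc.",
    "where_used": "fundraising",
    "ai_feature": "donor email drafting",
    "touches_beneficiary_data": "no",
    "data_used_for_vendor_training": "unknown",
    "human_review_in_place": "yes",
    "owner": "Development Director",
}])
```

    "Unknown" is a legitimate and common answer in a first pass; the audit's job is to make the unknowns visible so vendor follow-up can be prioritized.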

    Vendor accountability is a second early priority. Most nonprofits do not scrutinize the AI-related provisions in their vendor agreements closely, but these agreements determine whose data is used for model training, who has liability when AI systems cause harm, and what transparency obligations the vendor accepts. Requesting bias audit documentation, clarifying data usage terms, and adding explicit AI governance requirements to vendor agreements are practical steps that many organizations can implement without specialized legal expertise.
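
    A lightweight way to track this is a standing question list per vendor. The questions below paraphrase the concerns named above; the wording is an assumption, not model contract language, and a lawyer should translate anything that matters into actual agreement terms.

```python
# Illustrative vendor-review questions; not standard contract clauses.
VENDOR_REVIEW_QUESTIONS = [
    "Is our data used to train the vendor's models, and can we opt out?",
    "Who carries liability when the AI system causes harm?",
    "Will the vendor share bias audit documentation on request?",
    "What transparency obligations does the vendor accept?",
    "What notice do we get when AI features change materially?",
]

def unanswered(answers: dict[str, str]) -> list[str]:
    """Return the questions a given vendor has not yet answered."""
    return [q for q in VENDOR_REVIEW_QUESTIONS
            if not answers.get(q, "").strip()]
```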

    Establishing an internal AI governance structure, even a simple one, creates the organizational capacity to make consistent decisions over time. This does not require a dedicated AI ethics committee at the outset. Designating a staff member to serve as the AI governance lead, establishing a basic policy that identifies prohibited uses, and creating a review process for new AI tool adoptions provides a workable foundation. As the organization's AI use matures, this structure can grow in sophistication. How it connects to your broader AI strategic planning should also be clear from the start.

    Immediate Actions (First 90 Days)

    Build the foundation for rights-based AI governance

    • Conduct an AI audit of all current tools and how they touch beneficiary data
    • Draft a foundational AI use policy that identifies permitted and prohibited uses
    • Designate an AI governance lead with board-level oversight and reporting
    • Review vendor agreements for data usage transparency and bias audit access
    • Train staff on data minimization and when to exercise human override of AI outputs

    Medium-Term Development (3-12 Months)

    Deepen accountability and community governance structures

    • Conduct Human Rights Impact Assessments for any AI tools that affect beneficiaries
    • Establish formal community governance mechanisms with real decision-making authority
    • Request bias audits and fairness documentation from key AI vendors
    • Develop accessible grievance mechanisms for community members affected by AI decisions
    • Join sector-wide accountability coalitions (NetHope, Amnesty, relevant networks)

    Longer-Term Commitments

    Mature governance for established AI programs

    • Develop a comprehensive rights-based AI framework document tied to organizational values
    • Engage in policy advocacy for binding AI accountability legislation
    • Build cross-disciplinary AI governance capacity (program, legal, data, community organizing)
    • Integrate environmental impact assessment into AI procurement decisions
    • Contribute organizational experience to sector knowledge-sharing on rights-based AI

    AI as an Advocacy Tool: The Other Side of the Equation

    A rights-based framework governs both how organizations use AI internally and how they respond to AI deployed by others. Social justice organizations are often in a unique position to use AI tools to advance their advocacy and accountability missions, even as they scrutinize AI in other contexts.

    Natural language processing tools can dramatically accelerate document analysis, enabling organizations to process large volumes of public records, regulatory filings, and court documents that would otherwise require prohibitive staff time. Satellite imagery analysis, used by environmental and land rights organizations, can document patterns of harm at scales impossible through direct field observation. Data visualization tools can make complex quantitative arguments accessible to non-specialist audiences, including policymakers and journalists.
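
    As a deliberately trivial stand-in for the NLP tooling described above, the sketch below ranks a folder of text documents by the density of rights-related terms so that human reviewers know where to start. The term list, scoring, and file layout are all assumptions; production document review would use proper NLP models, but the triage pattern is the same.

```python
import re
from pathlib import Path

# Illustrative term list; a real review would tune this per campaign.
TERMS = ["facial recognition", "biometric", "surveillance",
         "risk score", "automated decision"]

def triage(folder: str) -> list[tuple[float, str]]:
    """Rank .txt documents by rights-related term density, highest first.
    Nothing is excluded automatically: humans review from the top."""
    scored = []
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(errors="ignore").lower()
        hits = sum(len(re.findall(re.escape(t), text)) for t in TERMS)
        words = max(len(text.split()), 1)
        scored.append((1000 * hits / words, path.name))  # hits per 1k words
    return sorted(scored, reverse=True)
```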

    Pattern analysis is particularly powerful for organizations working on systemic rather than individual harms. AI tools that identify disparate patterns in public datasets, housing records, environmental violations, or policing data can surface evidence of structural discrimination that is difficult to establish through anecdote. This kind of data-driven advocacy has been central to several high-profile civil rights campaigns and is increasingly accessible to organizations without dedicated data science teams. AI-powered data visualization is particularly relevant here, as these tools can make quantitative findings compelling and communicable. A minimal example of a disparity calculation follows.
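
    At its simplest, a disparate-pattern check is a grouped rate comparison. The pandas sketch below uses entirely synthetic numbers and assumed column names; a ratio well above 1.0 is a prompt for investigation, not a conclusion, especially since the underlying records may themselves encode biased enforcement.

```python
import pandas as pd

# Synthetic stop-and-population data; every number here is invented
# purely to illustrate the calculation.
df = pd.DataFrame({
    "neighborhood":   ["A", "B", "C", "D", "E", "F"],
    "majority_group": ["white", "white", "Black", "Black", "Latino", "Latino"],
    "stops":          [120, 95, 310, 280, 260, 240],
    "population":     [10000, 10000, 9000, 9000, 9500, 9500],
})

# Aggregate to per-group rates, then compare each rate to the lowest.
grouped = df.groupby("majority_group").agg(
    stops=("stops", "sum"), population=("population", "sum"))
grouped["stop_rate"] = grouped["stops"] / grouped["population"]
grouped["ratio_vs_lowest"] = grouped["stop_rate"] / grouped["stop_rate"].min()
print(grouped[["stop_rate", "ratio_vs_lowest"]])
```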

    The key is to apply the same rights-based standards to advocacy-oriented AI use as to operational AI use. Data sources should be verified and bias-assessed. AI-generated analysis should be reviewed by humans before being used as the basis for public claims. Community members whose experiences inform data analysis should be involved in how that analysis is framed and used. And organizations should be transparent with their audiences about when and how AI has contributed to their work.

    Organizations that are building internal AI capacity should ensure that this advocacy dimension is integrated into how staff learn about and experiment with AI tools. The same champions who support operational AI adoption can often become equally valuable in identifying opportunities to use AI for mission-advancing advocacy.

    Conclusion: Rights-Based AI Is Consistent AI

    The case for a rights-based approach to AI is ultimately a case for consistency. Social justice organizations that work to protect the rights of communities most harmed by structural inequalities are, by definition, committed to those same standards in their own operations. An organization that advocates for algorithmic accountability in public welfare systems while deploying unaudited AI tools in its own service delivery is not being hypocritical out of carelessness. It is operating without a framework that would make the inconsistency visible.

    Oxfam's approach, and the broader rights-based AI movement it is part of, provides that framework. It is not a set of restrictions that makes AI adoption harder. It is a set of standards that makes AI adoption coherent with the values that motivate the work. Organizations that adopt these standards are better positioned to identify AI tools that genuinely advance their missions, exclude tools that pose unacceptable risks, build trust with the communities they serve, and contribute to the broader accountability infrastructure that the entire sector needs.

    The work is not easy. Human rights impact assessments take time. Community governance requires genuine power-sharing. Vendor accountability requires negotiating leverage that smaller organizations may not always have. But none of these challenges are more demanding than the work social justice organizations already do. They are simply extensions of that work into a new domain, one that will shape every other domain of nonprofit operations for the foreseeable future.

    The organizations that engage with this challenge proactively, that develop robust rights-based AI governance before it is required by law or demanded by funders, will be better partners to the communities they serve and more credible advocates for the systemic change they seek. The tools are available. The frameworks exist. The question is whether social justice organizations will extend their commitment to rights-based practice into the AI era.

    Ready to Build an Ethical AI Framework?

    One Hundred Nights helps social justice organizations develop AI governance frameworks that are consistent with their values and accountable to the communities they serve.