The Right to Explanation: Should Beneficiaries Know When AI Influenced Their Service?
When an AI system recommends that a client be placed on a waitlist, flagged for follow-up, or denied a resource, does that person have a right to know? The question is no longer hypothetical. Nonprofits are deploying AI tools in case management, eligibility screening, and resource allocation, and the ethical and legal landscape around beneficiary notification and explanation is evolving rapidly.

Imagine a family applying for emergency housing assistance through a nonprofit that has recently implemented an AI-powered intake system. The system analyzes the family's case history, income verification documents, and geographic data, and assigns a priority score that determines how quickly they receive help. The family waits weeks without a clear explanation of where they stand or why. Eventually, they learn they have been deprioritized, but no one can explain why the score was what it was. The algorithm's logic is opaque, even to the case workers who are supposed to be using it as a tool.
This scenario is a composite, but it is not invented. It reflects real experiences documented across public benefits systems, child welfare agencies, and social service nonprofits that have deployed AI tools without adequate attention to beneficiary communication and accountability. Algorithmic decisions about access to essential services are often invisible to the people they affect, creating conditions in which errors, biases, and systemic inequities persist unchallenged.
The right to explanation is a principle rooted in fundamental ideas about dignity, fairness, and accountability. When consequential decisions about a person's life are made using opaque processes, that person loses the ability to understand, contest, or correct those decisions. The principle holds that people should be able to know the basis for decisions that significantly affect them, regardless of whether a human or an algorithm made those decisions. For nonprofits committed to the communities they serve, this principle should feel intuitive, even when the practical implications are complex.
This article examines what the right to explanation means in the nonprofit service context, what legal frameworks are beginning to formalize it, why transparency about AI use is ultimately in a nonprofit's interest, and how organizations can build practical systems for beneficiary communication and accountability. It connects directly to the broader work of building ethical AI practices and the specific challenges documented in handling algorithmic denials in service delivery.
The Legal Landscape: What Regulations Already Require
The right to explanation is not merely an ethical aspiration. It has legal roots that are becoming more concrete as AI regulation advances around the world, and nonprofits operating internationally or serving populations in jurisdictions with strong AI governance frameworks need to understand their obligations.
The European Union's General Data Protection Regulation established the foundational legal framework with its Article 22 provisions on automated decision-making. When decisions based solely on automated processing produce legal or similarly significant effects on individuals, the GDPR gives those individuals the right to human intervention and requires organizations to provide meaningful information about the logic involved, as well as the significance and likely consequences of the processing. The EU AI Act, whose obligations began phasing in during 2025, strengthens these requirements through Article 86, which mandates that deployers of high-risk AI systems provide "clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken." High-risk AI systems include those used in education, employment, essential public services, and social benefits.
In the United States, federal law has not yet caught up to the European framework. The Algorithmic Accountability Act, which has been introduced in several versions including a 2025 iteration, would require companies to conduct impact assessments when using automated systems that make critical decisions affecting consumers in areas including health care, housing, education, financial services, and comparable domains. This legislation remains pending, but it reflects the direction of policy momentum. Several states are moving independently: Colorado has enacted AI-related consumer protection rules, and other states are considering similar measures.
Beyond formal regulation, nonprofits face an increasingly important practical consideration: funders and accrediting bodies are beginning to ask pointed questions about how AI systems are used in service delivery and what accountability mechanisms are in place. Organizations that cannot answer these questions clearly risk both reputational harm and loss of funding relationships. This intersects with the growing scrutiny documented in how foundations are using AI to evaluate grantees, which includes examining grantees' own AI governance practices.
EU Framework (Currently in Force)
- GDPR Article 22: Right to human review and meaningful explanation for automated decisions with legal or similar effects
- EU AI Act Article 86: Mandatory explanation requirements for high-risk AI systems in social services, education, and employment
- Right to contest: Individuals must be able to challenge automated decisions and request human review
U.S. Regulatory Direction
- Algorithmic Accountability Act (pending): Would require impact assessments for automated systems in health, housing, education, and social services
- State-level rules: Colorado and other states are developing AI consumer protection frameworks independently
- FTC enforcement: The FTC has signaled that existing consumer protection and fairness laws apply fully to AI systems and that it intends to enforce them
What AI Is Actually Doing in Nonprofit Service Delivery
To think clearly about the right to explanation, it helps to understand the specific ways AI is being used in the service delivery chain and which uses create the most significant accountability obligations. Not all AI use in nonprofits raises the same ethical concerns. An AI tool that helps a development director write a more compelling grant letter has different accountability implications than a tool that prioritizes which families on a waitlist receive housing support first.
The accountability concerns intensify when AI systems are involved in decisions that directly determine who receives services, at what level, and on what timeline. These include eligibility screening tools that process intake information and assess whether a client meets program criteria, prioritization algorithms that rank clients on waitlists based on assessed need or predicted outcomes, risk assessment tools that flag clients for intervention or flag cases for additional review, and resource allocation systems that determine how staff time, emergency funds, or specialized services are distributed.
Pennsylvania's Allegheny County provides one of the most studied examples of AI in social service delivery through its Allegheny Family Screening Tool, which assists child welfare hotline screeners in assessing maltreatment risk. The tool has been extensively analyzed by researchers who have documented both its potential to standardize screenings and its risk of perpetuating systemic biases if the training data reflects historical disparities in how different communities have been treated by child welfare systems. The key finding is that AI systems in high-stakes service contexts can amplify existing inequities unless actively designed to detect and correct for them.
The Dutch welfare fraud detection system offers a starker cautionary example. Over 30,000 people were wrongly flagged by an algorithm over multiple years, resulting in benefits terminations, forced repayments, financial crises, and profound harm to families. The system's opacity made it difficult for affected individuals to challenge its outputs and slowed administrators' recognition that something was systematically wrong. Transparency would not have prevented the algorithm from making errors, but it would have created conditions in which errors could be identified and challenged far sooner.
AI Use Cases by Accountability Risk Level
How the accountability obligation varies depending on AI's role in decision-making
High Accountability Obligation
AI directly determines or heavily influences who receives services, at what level, or on what timeline. Includes eligibility screening, waitlist prioritization, risk scoring, and resource allocation. Beneficiaries have the strongest interest in explanation and the most to lose from uncontestable errors.
Moderate Accountability Obligation
AI informs case management decisions, flags cases for review, or generates recommendations that staff regularly act on. Human judgment remains in the loop but may be systematically influenced by AI outputs in ways beneficiaries should understand.
Lower Accountability Obligation
AI supports administrative tasks, internal planning, or communications with no direct service impact on individual clients. The right to explanation is less acute, though transparency about AI use in general remains a good practice.
The Ethical Case for Transparency: Beyond Compliance
Even where legal obligations do not yet require it, the ethical case for transparency about AI use in service delivery is compelling. Nonprofits occupy a particular position of trust in the communities they serve. People who come to a food bank, a homeless shelter, or a domestic violence program are often in vulnerable circumstances, relying on the organization to treat them with dignity and to make decisions about their access to help in ways that are fair and understandable. Deploying opaque AI systems in this context without disclosure or explanation represents a form of disrespect for that trust, even when the AI system is working as intended.
The transparency argument becomes even stronger when you consider the mechanics of how biased AI systems perpetuate harm. An algorithm that systematically disadvantages certain populations can do so in ways that are invisible to the individuals affected, invisible to the staff using the tool, and invisible to leadership reviewing aggregate outcomes. The only mechanism that can surface and challenge these patterns is the ability of affected individuals to understand, question, and contest the decisions being made about them. Transparency is not just a matter of individual rights; it is a structural requirement for ensuring that AI systems in service delivery remain fair over time.
Leading ethical frameworks for AI in the nonprofit sector coalesce around this point. UNESCO's Recommendation on the Ethics of Artificial Intelligence, adopted in 2021, emphasizes explainability as a core requirement alongside fairness, accountability, and human rights protection. Vera Solutions' widely cited Nine Principles of Responsible AI for Nonprofits includes explainability of decisions as a foundational element, alongside community involvement, human oversight, and fairness. IBM's guidance for the social impact sector consistently emphasizes that AI systems affecting vulnerable populations require higher standards of transparency than AI used in lower-stakes contexts.
Practically speaking, transparency about AI use also strengthens trust rather than undermining it. Research on beneficiary attitudes toward AI in service delivery suggests that people are generally willing to accept AI assistance when they understand how it is being used and believe that humans remain accountable for the outcomes. What erodes trust is not AI itself but the experience of being subject to opaque, uncontestable processes. Organizations that openly communicate about their AI use, explain the rationale for decisions, and provide clear channels for questions and appeals are better positioned to maintain the trust relationships that are central to their missions.
Why Transparency Builds Trust
- Beneficiaries can understand and accept decisions made fairly, even unfavorable ones
- Errors surface more quickly when people can identify and report suspicious patterns
- Staff retain meaningful accountability rather than deferring to unexplainable algorithms
- Community relationships remain grounded in honesty about how decisions are made
Why Opacity Creates Risk
- Systematic biases can persist for years without being detected or challenged
- Errors affecting vulnerable people become impossible to contest
- Staff lose the ability to exercise meaningful professional judgment
- Regulatory and reputational risk accumulates as AI governance expectations rise
The Efficiency-Transparency Tension: A Real Tradeoff
Acknowledging the genuine tension between transparency and efficiency is important. More explainable AI systems are sometimes less accurate than more complex ones. The statistical models that produce the most accurate predictions are often the least interpretable, while simpler models that can produce plain-language explanations may sacrifice some predictive precision. For organizations that have adopted AI specifically to improve the quality of their decisions, this tradeoff is not trivial.
Research on this question has found that users who receive only AI recommendations, without context or explanations, tend to over-rely on those recommendations and perform worse when the AI makes errors. Explanation improves the quality of human-AI collaboration by helping human decision-makers understand when to trust the AI's output and when to apply additional scrutiny. This means that the efficiency benefits of high-accuracy opaque AI may be partially offset by the errors that occur when humans blindly defer to outputs they cannot evaluate.
The practical resolution that leading implementations have arrived at is a human-in-the-loop design philosophy where AI systems provide information and recommendations but humans retain decision authority, with documented rationale for both following and departing from AI recommendations. This approach preserves the efficiency benefits of AI pattern recognition while maintaining the accountability that service contexts require. For life-critical or high-stakes decisions, the argument for transparency over maximum predictive accuracy is particularly strong: the cost of an uncontestable error is high, and the accountability benefits of explainability directly outweigh modest accuracy losses.
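To make the documented-rationale element of that design concrete, here is a minimal sketch of what a logged decision record might look like, assuming a Python-based workflow. The field names, the JSON-lines log file, and the helper function are illustrative assumptions, not a reference to any particular case management system.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class DecisionRecord:
    """One service decision that an AI recommendation informed."""
    case_id: str               # internal case identifier (illustrative)
    decision_type: str         # e.g., "housing_waitlist_priority"
    ai_recommendation: str     # what the tool suggested
    ai_score: float            # the raw score or rank the tool produced
    human_decision: str        # what the staff member actually decided
    followed_ai: bool          # whether the decision matched the recommendation
    rationale: str             # documented reason for following or departing
    decided_by: str            # accountable staff member
    timestamp: str = ""

    def __post_init__(self):
        # Stamp the record at creation so the log supports later audits.
        if not self.timestamp:
            self.timestamp = datetime.now(timezone.utc).isoformat()

def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record to a simple JSON-lines file for retrospective review."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a case worker departs from the AI recommendation and documents why.
log_decision(DecisionRecord(
    case_id="C-1042",
    decision_type="housing_waitlist_priority",
    ai_recommendation="standard_queue",
    ai_score=0.42,
    human_decision="expedited_queue",
    followed_ai=False,
    rationale="Household includes a medically fragile infant not captured at intake.",
    decided_by="case_worker_17",
))
```

A log of this kind is also what makes the retrospective bias analysis discussed later in this article possible: without records of what the AI recommended and what humans decided, patterns of over-reliance or systematic override cannot be detected.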
Organizations developing AI governance frameworks should address this tradeoff explicitly. For which decision types will you prioritize interpretability? For which will you accept more complex, less explainable models? What accountability mechanisms compensate for opacity when it is accepted? These questions should be answered in writing, with input from beneficiaries and frontline staff, before AI systems are deployed in service-critical contexts. This connects to the broader challenge of building internal AI champions who can navigate these tradeoffs with both technical and ethical literacy.
Bias, Discrimination, and the Communities Nonprofits Serve
The bias question is where the right to explanation becomes most consequential. AI systems learn from historical data, and historical data in social services often encodes historical inequities. Systems trained on data from communities that have been overpoliced, underserved, or systematically disadvantaged may produce outputs that perpetuate those disadvantages in new forms. The mechanism is often subtle: the AI is not explicitly programmed to discriminate, but the patterns it learns from historical outcomes produce systematically different recommendations for different demographic groups.
Research from 2024-2025 documents specific patterns of algorithmic discrimination across various service contexts. AI-powered screening tools have been found to disadvantage applicants from communities with historically lower wealth levels, not because of explicit programming but because the features the AI treats as predictive are correlated with race and class in ways the system designers did not fully anticipate. In hiring contexts, AI résumé screening has been found in some studies to systematically favor white-associated names. These patterns extend to social service contexts when training data reflects the outcomes of historical systems that themselves embodied discrimination.
For nonprofits whose explicit mission is to serve marginalized or historically disadvantaged communities, deploying AI systems that amplify these patterns is not merely a technical failure. It is a mission failure. Transparency is one of the primary mechanisms for detecting and correcting these patterns. When beneficiaries can see the basis for decisions affecting them and can challenge those decisions, patterns of systematic disadvantage become visible and contestable. When systems are opaque, those patterns can persist indefinitely.
The National Institute of Standards and Technology recommends a socio-technical approach to bias mitigation: rather than treating AI bias as a purely technical problem to be solved by data scientists, organizations should involve the communities whose data will be used in training data design, model evaluation, and ongoing monitoring. For nonprofits, this means bringing community perspectives into AI governance decisions, not just into program design.
Bias Detection and Mitigation Practices
How transparency enables organizations to identify and correct discriminatory patterns
- Demographic outcome auditing: Regularly analyze AI-influenced decisions by demographic characteristics to identify whether certain groups are systematically receiving different outcomes
- Community review panels: Involve representatives from the communities you serve in reviewing AI tool performance and governance decisions
- Algorithmic Impact Assessments: Before deploying any AI tool in service delivery, conduct a structured evaluation of potential harms, bias risks, and mitigation strategies using frameworks from the OECD or Data & Society
- Decision logging: Maintain records of AI recommendations and human decisions, including departures from AI recommendations, to enable retrospective analysis of patterns
- Explainability techniques: For complex models where full interpretability is not feasible, use LIME, SHAP, or similar model-agnostic methods to generate explanations of specific outputs (a minimal sketch follows this list)
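As an illustration of the last practice, here is a hedged sketch of generating a per-decision explanation with the open-source shap library and scikit-learn. The features, data, and model are synthetic inventions for illustration only; a real deployment would apply the same pattern to the production model and a specific client's record.

```python
import numpy as np
import pandas as pd
import shap  # pip install shap
from sklearn.ensemble import GradientBoostingRegressor

# Synthetic stand-in for a prioritization model; features and target are invented.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "household_size": rng.integers(1, 7, 500),
    "monthly_income": rng.normal(1800, 600, 500).round(2),
    "months_on_waitlist": rng.integers(0, 24, 500),
})
y = 0.5 * X["months_on_waitlist"] - 0.001 * X["monthly_income"] + rng.normal(0, 1, 500)
model = GradientBoostingRegressor().fit(X, y)

# shap.Explainer selects an appropriate algorithm for the model and returns
# per-feature contributions to each individual prediction.
explainer = shap.Explainer(model, X)
explanation = explainer(X.iloc[:1])  # explain one client's priority score

for feature, contribution in zip(X.columns, explanation.values[0]):
    print(f"{feature}: {contribution:+.2f}")
```

In practice, per-feature contributions like these would feed the plain-language explanation a case worker shares with a client, rather than being shown raw.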
Practical Implementation: Building Accountable AI Systems
Moving from principle to practice requires concrete decisions about system design, staff training, beneficiary communication, and governance processes. Organizations that have successfully implemented accountable AI in service delivery have typically built these elements in sequence, with governance and communication practices established before AI tools are deployed in high-stakes contexts.
1. Establish Human-in-the-Loop Design
For all high-stakes service decisions, AI should provide information and recommendations, not final decisions. Human staff should retain authority and accountability. Document clearly what AI contributes, what humans decide, and who bears responsibility for outcomes.
2. Conduct Algorithmic Impact Assessments
Before deploying any AI tool in service delivery, complete a structured assessment using OECD or Data & Society frameworks. Identify potential harms, bias risks, affected populations, and mitigation strategies. Document findings and share relevant conclusions with stakeholders.
3. Train Staff on AI's Role and Limits
Staff using AI tools in service delivery need to understand what the AI is actually doing, what it can and cannot know, where its outputs are most likely to be unreliable, and how to explain AI involvement to beneficiaries. Over-reliance on AI recommendations without this understanding degrades decision quality and accountability.
4. Develop a Beneficiary Communication Protocol
Create clear, plain-language materials that explain to beneficiaries when and how AI is involved in decisions about their services. This does not require technical detail; it requires honest communication about the fact that AI assists human decision-makers and a description of how those decisions are made.
5. Create a Clear Appeals Process
Beneficiaries who believe an AI-influenced decision was incorrect should have a clear path to request human review. The process should be accessible, the timeline should be reasonable, and the outcome should reflect genuine re-evaluation rather than automatic affirmation of the original decision.
6. Monitor for Bias on an Ongoing Basis
AI system performance changes over time as client populations shift, program designs change, and the system accumulates more data. Build regular bias audits into your AI governance calendar, not just as a one-time pre-deployment check but as a sustained operational practice. A minimal audit sketch follows these steps.
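The sketch below shows one way an ongoing demographic outcome audit might look in Python with pandas. The demographic categories, column names, and review threshold are illustrative assumptions; real audits should use categories, metrics, and thresholds defined in the organization's AI policy and reviewed with the community panels described earlier.

```python
import pandas as pd

# Illustrative stand-in for an export from the decision log; in practice this
# would be loaded from the case management system, not constructed inline.
decisions = pd.DataFrame({
    "demographic_group":       ["A", "A", "A", "B", "B", "B", "B", "C", "C", "C"],
    "ai_recommended_priority": ["high", "low", "high", "low", "low", "high", "low", "high", "low", "low"],
    "final_priority":          ["high", "low", "high", "low", "high", "high", "low", "high", "low", "low"],
    "service_received":        [1, 0, 1, 0, 1, 1, 0, 1, 0, 0],
})

# How often staff overrode the AI recommendation, by group.
decisions["overridden"] = decisions["ai_recommended_priority"] != decisions["final_priority"]

audit = decisions.groupby("demographic_group").agg(
    n_cases=("service_received", "size"),
    service_rate=("service_received", "mean"),
    ai_high_priority_rate=("ai_recommended_priority", lambda s: (s == "high").mean()),
    override_rate=("overridden", "mean"),
)

# Flag groups whose service rate falls well below the overall rate. The
# 10-percentage-point threshold is an arbitrary starting point for human
# review, not a statistical or legal standard.
overall_rate = decisions["service_received"].mean()
audit["flag_for_review"] = audit["service_rate"] < overall_rate - 0.10

print(audit)
```

Even a simple audit like this, run on a regular schedule, gives staff and community reviewers a concrete artifact to examine rather than an unexaminable black box.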
The practical challenge of beneficiary communication deserves particular attention because it is where the abstract principle of the right to explanation meets real operational complexity. Communicating about AI involvement in decisions requires language that is honest without being alarming, specific without being technical, and accessible to people who may be in distress, may have limited English proficiency, or may have limited prior exposure to AI concepts.
Effective approaches typically explain AI involvement in terms of its function rather than its mechanics. A statement like "We use a tool that helps our team review intake information and identify clients with the most urgent needs. Our staff review every recommendation before making decisions, and you can always ask to speak with a supervisor if you have questions about how your case was assessed" is more useful than a technical description of the algorithm. The goal is to ensure beneficiaries understand that AI is involved, that humans remain accountable, and that they have recourse if they believe the decision was wrong.
What a Responsible AI Policy Should Cover
The statistics on AI governance in the nonprofit sector are sobering. Research consistently finds that the vast majority of nonprofits using AI have no formal AI policy, and only a small fraction have documented, repeatable AI governance processes. This means that many organizations deploying AI in service-critical contexts are doing so without clear internal rules about how those systems should be used, monitored, and governed.
A responsible AI policy for a service-delivery nonprofit should address at minimum six areas. First, scope: which decisions can AI inform, and which decisions require human judgment without AI input? Second, transparency obligations: what are staff required to tell beneficiaries about AI involvement in their cases? Third, accountability assignment: who is responsible when an AI-influenced decision turns out to be wrong? Fourth, monitoring requirements: how will the organization check whether AI tools are performing equitably across different populations? Fifth, beneficiary rights: what process exists for beneficiaries to question or contest AI-influenced decisions? Sixth, training requirements: what do staff need to understand about the AI tools they use?
Developing this policy should involve frontline staff who use the tools, program leadership who understand service context, and representatives of the communities served. A policy developed only by leadership or only by technology staff is likely to miss critical operational realities or community concerns. The process of developing the policy is also an opportunity to build the shared understanding that makes ethical AI deployment possible in practice.
For organizations building these policies, the AI vendor evaluation checklist provides useful questions to ask about transparency and explainability before selecting tools for service-critical use. Vendors who cannot explain how their systems make recommendations or who resist the accountability requirements described here are not appropriate partners for nonprofit service delivery contexts.
Essential Elements of a Responsible AI Policy
- Decision scope: Clear rules about which decisions AI can inform and which require human judgment alone (a hedged sketch of one way to encode such rules follows this list)
- Transparency obligations: What staff must communicate to beneficiaries about AI involvement in their cases
- Accountability assignment: Who is responsible when AI-influenced decisions turn out to be wrong
- Monitoring requirements: How and how often AI tools will be audited for bias and equitable performance
- Beneficiary rights and appeals: Process for questioning or contesting AI-influenced decisions, including timelines and escalation paths
- Staff training requirements: What staff must understand about AI tools before using them in service-critical contexts
- Vendor requirements: Minimum standards for explainability, audit access, and accountability that vendors must meet
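For organizations that want the decision scope element to be more than a statement of intent, here is a minimal sketch of how scope rules could be encoded and checked inside an intake workflow. The decision types, rule fields, and helper function are hypothetical illustrations, not a standard schema.

```python
# Hypothetical encoding of a policy's decision scope rules.
DECISION_SCOPE_RULES = {
    "eligibility_screening":   {"ai_may_inform": True,  "human_decides": True},
    "waitlist_prioritization": {"ai_may_inform": True,  "human_decides": True},
    "benefit_termination":     {"ai_may_inform": False, "human_decides": True},
    "newsletter_segmentation": {"ai_may_inform": True,  "human_decides": False},
}

def check_ai_use(decision_type: str, ai_involved: bool, human_signed_off: bool) -> list[str]:
    """Return any policy violations for a proposed use of AI in a decision."""
    rule = DECISION_SCOPE_RULES.get(decision_type)
    if rule is None:
        return [f"No policy rule defined for decision type '{decision_type}'."]
    violations = []
    if ai_involved and not rule["ai_may_inform"]:
        violations.append(f"Policy does not permit AI to inform '{decision_type}' decisions.")
    if rule["human_decides"] and not human_signed_off:
        violations.append(f"'{decision_type}' requires a documented human decision.")
    return violations

# Example: an AI-assisted termination without human sign-off triggers two violations.
print(check_ai_use("benefit_termination", ai_involved=True, human_signed_off=False))
```

Encoding scope rules this way does not replace the written policy; it simply makes violations visible at the moment a decision is being recorded.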
Conclusion
The question posed in this article's title has a clear answer for organizations that take their missions seriously: yes, beneficiaries should know when AI influenced their service, and they should have the ability to understand the basis for that influence and to contest it when they believe it was wrong. This is not a burdensome regulatory requirement to be complied with grudgingly. It is a natural expression of the values that most nonprofits already hold about treating the people they serve with dignity, transparency, and genuine accountability.
The Dutch welfare fraud detection scandal, the Allegheny Family Screening Tool debates, and countless smaller examples from service delivery organizations around the world share a common lesson: AI systems in high-stakes service contexts require robust accountability mechanisms from the beginning, not after harm has been done. Building those mechanisms is harder than deploying the AI tool itself. It requires policy development, staff training, beneficiary communication systems, bias monitoring protocols, and the organizational courage to acknowledge that AI tools can make mistakes.
But the organizations that do this work are better positioned not just to avoid harm but to deliver genuinely better services. Human-AI collaboration that preserves human judgment, maintains accountability, and creates conditions for bias detection produces better outcomes over time than automated systems that optimize for efficiency at the expense of explainability. The right to explanation, properly implemented, is not just an ethical obligation. It is a quality improvement mechanism for the people and communities nonprofits exist to serve.
Ready to Build Ethical AI Practices?
We help nonprofits develop AI governance frameworks, design accountable service delivery systems, and build the staff capacity to use AI responsibly. Connect with us to discuss your organization's AI accountability needs.
