
    Informed Consent for AI: How to Tell Clients Their Data Feeds Machine Learning

    Most nonprofits that use AI have not told their clients about it. That silence is becoming both a legal problem and an ethical one, as new state laws, sector-specific regulations, and evolving client expectations create pressure to make AI use transparent in meaningful ways.

    Published: March 13, 2026 | 13 min read | AI Ethics

    Imagine you are a client at a social services organization. You fill out an intake form with your housing history, health conditions, family situation, and prior service use. You assume that information will be used by a case manager to help you. What you may not know is that the same data is being fed into an algorithm that rates your housing stability risk, influences which services are offered to you first, and may predict how likely you are to become a repeat client. An AI system is making consequential judgments about your life, using your own words to do it, and no one ever mentioned that this was happening.

    This scenario is not hypothetical. The vast majority of nonprofits using AI tools have not developed formal processes for informing clients about how their data feeds machine learning systems. Research from Whole Whale in 2025 found that only 10% of nonprofits have formal AI policies at all, and an even smaller fraction have addressed client disclosure as part of those policies. This gap exists not because organizations are indifferent to their clients' rights, but because the question of how to explain AI to vulnerable populations in plain language, without causing confusion or undermining trust, is genuinely difficult.

    That difficulty is no longer a reason to defer the conversation. Multiple state laws enacted in 2025 and 2026 now require explicit disclosure of AI use in healthcare, social services, and employment contexts. The EU AI Act's transparency obligations take full effect in August 2026. The proposed federal AI CONSENT Act, if passed, would require opt-in consent before using personal data to train AI models. The legal landscape is moving, and nonprofits that have not built AI disclosure into their intake and consent processes are increasingly at risk of being caught unprepared.

    This guide addresses the practical challenge: what informed consent for AI actually means in nonprofit contexts, what the law currently requires, how to communicate complex AI concepts to clients with varying literacy levels, and what distinguishes genuine disclosure from the kind of checkbox consent that creates liability without creating understanding. The goal is not just legal compliance. It is maintaining the trust relationships that are the foundation of effective service delivery.

    Why Traditional Consent Frameworks Break Down for AI

    Informed consent has been a cornerstone of ethical practice in healthcare, social work, and research for decades. The principle is straightforward: before doing something that affects someone, explain what you plan to do, obtain their agreement, and give them a real option to say no. Traditional consent works well for discrete, bounded activities. It struggles with AI for three specific reasons that the Harvard Petrie-Flom Center articulated clearly in 2025.

    Black-Box Opacity

    Most AI systems are opaque even to their developers. A case manager can describe to a client exactly what will happen during an intake assessment: what questions will be asked, how answers will be recorded, who will see the information. A machine learning system cannot be described this way. Even if you fully explain that an algorithm analyzes housing stability risk, you cannot explain the internal reasoning by which it reaches any particular score. Clients and even technically sophisticated practitioners cannot fully understand what they are consenting to, because the system itself cannot be fully explained. This creates a genuine tension between the spirit of informed consent, which requires meaningful understanding, and the technical reality of how modern AI works.

    Data Persistence and Irrevocability

    Traditional consent can be revoked. A client who decides they no longer want their health records shared can withdraw consent, and the records can be removed from a shared system. Once personal data has been used to train an AI model, however, it becomes virtually impossible to remove. The model has encoded the patterns in that data into its weights, and there is no practical mechanism to "unlearn" what it has learned from a specific individual's records. This means that retroactive consent withdrawal, a core right under GDPR and state privacy laws, is largely symbolic when it comes to AI training. Organizations should be candid about this limitation when obtaining consent, rather than implying a level of revocability they cannot actually deliver.

    Evolving Systems and Scope Creep

    GDPR and most privacy frameworks were designed for static data uses: data collected for purpose A should only be used for purpose A. AI models continuously incorporate new data and can be applied to purposes beyond what was originally envisioned. A model trained to predict shelter bed demand might later be repurposed to predict which clients are likely to exit homelessness quickly, a very different use with different equity implications. Consent obtained at intake for one AI application does not necessarily cover derivative uses. Organizations need to think carefully about what they are actually seeking consent for, and whether their consent processes will remain appropriate as their AI capabilities evolve.

    What the Law Currently Requires

    The legal landscape for AI disclosure is fragmentary and evolving rapidly. There is no single federal law in the United States that governs AI consent across all sectors. Instead, requirements emerge from a patchwork of sector-specific regulations, state consumer protection laws, and international frameworks. Nonprofits operating across multiple states or serving populations covered by sector-specific regulations may face multiple overlapping obligations.

    Key U.S. State Laws (2026)

    • Colorado SB24-205 (effective June 2026): Requires deployers of high-risk AI to provide plain-language disclosures of the system's purpose, the nature of consequential decisions, and instructions for accessing disclosures. Adverse decisions must include the principal reasons AI contributed to the outcome, plus the right to appeal.
    • Texas TRAIGA (effective January 2026): Licensed healthcare practitioners must provide conspicuous written disclosure of AI use in diagnosis or treatment before or at the time of interaction.
    • California AB 3030: Healthcare providers must disclose AI use in patient care and obtain explicit consent before using AI-powered systems with patients.
    • New Mexico: Counselors, therapists, and mental health practitioners must share information about AI tools (including intended use, purpose, and risks) and obtain informed consent before use.

    International and Federal Framework

    • EU AI Act (transparency obligations, August 2026): For limited-risk AI (chatbots, content generation tools), deployers must ensure users know they are interacting with AI. High-risk AI in healthcare, education, and employment faces more stringent requirements.
    • GDPR: Requires clear explanation of how personal data will be used in AI training and deployment, specification of automated processing types, and granular consent options. Explicit consent is the most stringent lawful basis; bundling AI consent into general terms of service is insufficient.
    • Federal AI CONSENT Act (proposed): Would require opt-in consent before using personal data to train AI models. Consent must be obtained separately from terms of service and cannot be inferred from inaction. Not yet passed as of early 2026, but its introduction signals federal direction.
    • HMIS Regulations: Clients in homeless services systems already have the right to a copy of their data, the right to change consent at any time, and the right to be served even if they decline to consent to data collection.

    Beyond specific legal requirements, professional associations are increasingly addressing AI disclosure in their ethical standards. The National Association of Social Workers' Code of Ethics (updated in 2021 with ongoing guidance) specifies that social workers who use technology must obtain informed consent during initial screening or prior to initiating services, and clients must be informed of relevant benefits and risks. Similar guidance is emerging from nursing, counseling, and medical professional associations. For nonprofits in licensed service areas, professional ethics codes may create obligations that go beyond what any single state law requires.

    The practical implication for most nonprofits is that you should be building AI disclosure into your consent and intake processes now, even where you are not yet legally required to do so. The trend in every jurisdiction is toward more transparency requirements, not fewer. Organizations that have already developed thoughtful disclosure practices will be well ahead of compliance deadlines, and will have built the internal expertise to update those practices as requirements evolve. See our overview of new state AI laws taking effect in 2026 for more detail on specific regulatory timelines.

    Three Levels of AI Disclosure: Matching Depth to Risk

    Not every AI application requires the same depth of disclosure. Using an AI tool to draft internal emails is categorically different from using an algorithm to determine which clients receive priority housing placement. Research in the healthcare AI consent literature has developed a useful tiered framework that translates well to nonprofit contexts.

    Level 1: Basic Notification

    Appropriate for low-stakes AI applications that do not affect individual client outcomes

    At this level, clients are simply informed that AI tools are part of the organization's operations. This is appropriate for AI uses that affect organizational efficiency but do not directly influence decisions about individual clients, such as AI-assisted scheduling, internal document drafting, or general communications support.

    An example disclosure at this level:

    "Our organization uses technology tools, including AI-powered software, to support our administrative work and improve how we serve our community. These tools help our staff work more efficiently so we can focus more time on supporting you."

    Level 2: Education and Trust-Building

    Appropriate for AI that informs but does not determine individual service decisions

    At this level, clients receive a fuller explanation of how AI tools are used in their service experience. This is appropriate when AI provides recommendations to staff (housing resource suggestions, program matching, risk screening) but a human professional retains decision authority. The goal is calibrated trust: clients who understand the role of AI in their care are better positioned to ask questions, provide corrections to inaccurate data, and participate meaningfully in their own service planning.

    An example disclosure at this level:

    "When you complete our intake process, the information you share helps our staff understand your situation and identify what resources might help you most. We also use a computer program that looks at patterns across many cases to help our case managers think about what has worked for people in similar situations. Your case manager reviews those suggestions and makes the actual decisions about your care. You can ask your case manager at any time to explain what resources are being considered for you and why."

    Level 3: Full Informed Consent

    Required when AI significantly influences or determines consequential individual decisions

    The most demanding level, appropriate when AI plays a substantial role in high-stakes decisions: housing eligibility scoring, priority placement in competitive programs, risk classification that triggers different service pathways, or any algorithmic process that determines what a client receives. Colorado's SB24-205 essentially mandates this level of disclosure for high-risk AI in consequential decisions.

    Full informed consent requires:

    • A plain-language explanation of what the AI tool does and what decision it influences
    • Identification of what data is used as input and how that data affects the output
    • A description of the client's rights: to see the decision, to understand the reasons, to challenge the outcome, and to request human review
    • A genuine mechanism to opt out without losing access to services (where legally required and operationally feasible)
    • Contact information for follow-up questions or appeals

    Sector-Specific Challenges and Considerations

    The appropriate approach to AI disclosure varies significantly across nonprofit service areas. Power dynamics, literacy levels, legal requirements, and the nature of AI use differ enough across sectors that a single template cannot cover every context. The following sections cover the most common challenges in the sectors where AI use is growing fastest.

    Homeless Services and Coordinated Entry

    Coordinated Entry systems for homeless services use assessment tools and matching algorithms to prioritize individuals for scarce housing resources. The stakes for individual clients are extraordinarily high: a lower algorithm score can mean months or years in shelter rather than permanent housing. Yet clients participating in these systems often have limited information about how the process works.

    A 2025 qualitative study on AI matching in homeless services found a troubling tension: some providers deliberately withhold information about how matching criteria work to prevent clients from "gaming" the system. While this concern has some operational logic, it creates a direct conflict with transparency principles. The appropriate resolution is to explain what the assessment measures and how it connects to housing placement, without revealing specific algorithmic weights that would allow manipulation.

    Under existing HMIS regulations, clients already have the right to a copy of their data, the right to change their consent at any time, and the right to receive services even if they decline to provide consent for data collection. Extending these principles to AI disclosure, and making clients aware that these rights exist, is an important first step for housing-focused nonprofits. This connects to the broader principles covered in our article on AI for homeless services organizations.

    Mental Health and Counseling Services

    Mental health contexts are receiving particular regulatory attention. Florida's proposed law would require written, informed consent obtained at least 24 hours before AI is used to transcribe therapy sessions. New Mexico already requires therapists to share information about AI tools and obtain consent before use. The therapeutic relationship depends fundamentally on trust and confidentiality, making AI disclosure in this context especially sensitive.

    For mental health nonprofits, AI disclosure conversations are best framed within the existing informed consent discussion that ethical practitioners already conduct with new clients. Explaining that the organization uses AI-assisted documentation tools, what those tools do with session content, and what protections are in place fits naturally into the conversation about confidentiality limits, emergency protocols, and record-keeping practices that happens at the start of a therapeutic relationship. Clients entering crisis services, outpatient therapy, or peer support programs should have explicit, plain-language information about any AI tools involved in their care.

    Child Welfare and Family Services

    Child protective services AI tools are among the most ethically contested in the sector. Predictive risk scores are used in some jurisdictions to inform decisions about child removal, family service intensity, and foster care placement. These are among the highest-stakes decisions any system makes, and they often involve families who have limited power to challenge outcomes.

    ABA guidance on algorithmic decision-making in child welfare emphasizes that parents need to know when an algorithm was used in their case, and that families and communities should be engaged before information is entered into an algorithm, not just notified after decisions have been made. For nonprofits in this sector, AI disclosure should be part of a broader commitment to family engagement and participatory practice, not simply a compliance exercise. This means explaining in plain language how AI tools inform (but should not replace) the judgment of trained professionals, and providing parents with a genuine mechanism to question or challenge assessments that feel incorrect.

    Community Health Centers

    FQHCs and community health centers operate under overlapping obligations from HIPAA, state AI disclosure laws in California and Texas, and the general duty of informed consent that governs clinical practice. California AB 3030 and Texas TRAIGA have both created specific requirements around AI disclosure in clinical settings that took effect in 2025 and 2026 respectively.

    For health-focused nonprofits, the disclosure conversation is somewhat more familiar, because the concept of explaining technology to patients is already embedded in clinical practice. The challenge is ensuring that AI-specific disclosure is not buried in the existing paperwork but is instead given appropriate prominence. Research on health literacy consistently shows that patients who understand their care have better outcomes, and the same principle applies to AI use in clinical contexts.

    Communicating AI to Clients Who Aren't Technical

    The most common failure mode in AI disclosure is not deception but inaccessibility. Organizations draft technically accurate disclosures that describe AI systems in ways that clients with limited education, low health literacy, or limited English proficiency simply cannot understand. Research on plain language and health equity consistently shows that unintelligible consent is not meaningfully different from no consent at all.

    Plain Language Framework for AI Disclosure

    How to explain AI to clients without technical jargon

    Avoid technical terms entirely

    Replace "machine learning algorithm," "predictive model," "training data," and similar terms with everyday language. Clients do not need to understand how AI works technically. They need to understand what it does to them and what rights they have as a result.

    Use the analogy approach

    Concrete analogies help clients build accurate mental models without requiring technical explanation. For example: "We use a computer program that has learned from thousands of past cases to help our staff think about what resources might work for you. It's a bit like asking a very experienced colleague for a second opinion, except that the computer learned from data instead of years of working directly with people."

    Focus on impact, not mechanism

    Tell clients what the AI tool will do for them or to them, not how it works internally. "This tool may influence which housing options our case manager recommends to you" is more useful than any description of the algorithm's architecture.

    Use a three-part structure

    (1) What the tool does, in one plain sentence. (2) What data it uses from the client. (3) What the client can do if they have questions or disagree with a recommendation. This structure is memorable and covers the minimum a client needs to understand to exercise meaningful consent.

    Target 8th-grade reading level or below

    Plain language standards recommend an 8th-grade reading level for documents intended for the general public. Research shows that most AI-generated explanations, ironically, score above this threshold. Organizations must manually simplify AI disclosures rather than generating them with AI tools and assuming the result is accessible.
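
    One way to spot-check whether a draft disclosure meets that target is to run it through an automated readability score before it goes in front of clients. The sketch below is a minimal example, assuming a Python environment with the third-party textstat package installed; the sample disclosure text is illustrative only.

```python
# Minimal readability spot-check for a draft AI disclosure.
# Assumes the third-party "textstat" package is installed (pip install textstat).
import textstat

draft_disclosure = (
    "We use a computer program that looks at patterns across many past cases "
    "to help our case managers think about what resources might help you. "
    "Your case manager reviews those suggestions and makes the final decisions. "
    "You can ask at any time what is being considered for you and why."
)

grade = textstat.flesch_kincaid_grade(draft_disclosure)
print(f"Flesch-Kincaid grade level: {grade:.1f}")

if grade > 8:
    print("Above the 8th-grade target: shorten sentences and swap in simpler words.")
else:
    print("Within the 8th-grade target.")
```

    An automated score is a screening tool, not a substitute for review by front-line staff and clients; it catches long sentences and complex vocabulary, not confusing concepts.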

    Provide oral options for low-literacy clients

    For clients with very low literacy, a written form is not sufficient. Train front-line staff to explain AI disclosure verbally, using the teach-back method ("Can you tell me in your own words what this means?") to confirm comprehension before proceeding. Document that oral disclosure was provided.

    Language access is a separate and equally important dimension. Any client who does not communicate primarily in English is disadvantaged by an English-only AI disclosure. Organizations should translate disclosure materials into the primary languages of their client communities. AI translation tools can assist with initial drafts, but consent materials require human review to ensure accuracy, particularly for concepts that do not have direct equivalents across cultures.

    Visual communication can bridge literacy gaps that text cannot. A simple diagram showing the data flow (client fills out intake form, information enters our system, computer analyzes patterns, case manager reviews suggestions, together we make a plan) makes the process concrete in a way that prose descriptions often cannot. Some organizations working with clients who have cognitive disabilities or communication challenges have developed visual consent tools that use images and simple symbols to communicate key concepts about data use.

    What Genuine Consent Requires vs. Checkbox Compliance

    The gap between compliant disclosure and genuine informed consent is wide, and closing it requires confronting uncomfortable questions about power dynamics and the voluntary nature of consent when clients are dependent on services. Organizations that rush toward compliance without grappling with these questions will end up with consent processes that satisfy auditors but do not meaningfully inform clients.

    Checkbox Compliance

    What creates legal exposure without genuine transparency

    • Hiding AI disclosure in a submenu or multi-page privacy policy
    • Using ambiguous language like "empathetic digital assistant" without clarifying that it is automated
    • Pre-checked consent boxes that clients must actively uncheck to opt out
    • Bundling AI consent into general terms of service rather than providing a standalone notice
    • Obtaining blanket consent at intake that covers all future AI uses without specificity

    Genuine Informed Consent

    What actually respects client autonomy and builds trust

    • Standalone, plain-language AI disclosure separate from general intake paperwork
    • Active opt-in that requires a deliberate positive action from the client
    • Granular consent options for different AI uses (data analysis, automated communications, training future models), as illustrated in the sketch after this list
    • A genuine ability to decline specific AI uses without losing access to services
    • Clear, accessible mechanism to ask questions, request human review, or appeal AI-influenced decisions
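
    To make the contrast with checkbox compliance concrete, the sketch below shows one way an organization might record granular, per-use consent: each AI use gets its own deliberate opt-in, nothing is pre-checked, and a decline is recorded without affecting services. The field names and example uses are illustrative assumptions, not requirements drawn from any specific law or case-management system.

```python
# A minimal sketch of a per-client consent record with a separate, deliberate
# opt-in for each AI use. Field names and uses are illustrative only.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIConsentRecord:
    client_id: str
    disclosed_by: str          # staff member who explained the disclosure
    disclosure_method: str     # e.g. "written", "oral (teach-back)", "visual aid"
    # Each AI use gets its own entry; nothing is pre-checked or inferred from inaction.
    consents: dict[str, bool] = field(default_factory=dict)
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def grant(self, use: str) -> None:
        """Record an active opt-in for one specific AI use."""
        self.consents[use] = True

    def decline(self, use: str) -> None:
        """Record a decline; services continue regardless."""
        self.consents[use] = False

record = AIConsentRecord(client_id="C-1042", disclosed_by="case_manager_17",
                         disclosure_method="oral (teach-back)")
record.grant("risk_screening_recommendations")   # client agreed to this use
record.decline("training_future_models")         # and declined this one
```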

    The voluntary consent paradox deserves honest acknowledgment. When a client seeking emergency shelter must accept AI-driven intake processes or lose access to a bed, their consent is not truly voluntary. No disclosure process can fully resolve this structural reality. What it can do is ensure that the power dynamic is transparent rather than hidden, that clients understand what is happening to their data, and that the organization has a genuine commitment to human review and appeal processes that give clients a meaningful voice even within a constrained situation.

    Comprehensive AI consent practices align directly with the kind of responsible AI governance that foundations and regulators are increasingly expecting from nonprofit grantees. Building your AI disclosure processes now, as part of a broader commitment to ethical AI use, positions your organization well for both the legal landscape ahead and the trust relationships that make your programs effective.

    Building AI Consent Into Your Organization

    A Practical Implementation Checklist

    • Inventory your AI tools. Before you can disclose AI use, you need a complete list of every AI tool currently used in your organization, what data each tool accesses, and what decisions or recommendations each tool influences. Many organizations discover through this process that AI use is more extensive than leaders realized. A simple structured inventory, sketched after this checklist, makes the result easy to maintain and review.
    • Categorize by risk level. Apply the three-level disclosure framework to each tool. Not every AI application requires full informed consent. Distinguishing low-stakes operational tools from high-stakes client-facing systems helps you prioritize where to invest in more robust disclosure processes.
    • Draft plain-language disclosures. For each AI application that affects individual clients, write a disclosure at the appropriate level. Have front-line staff and, where possible, clients from your community review drafts for comprehensibility before finalizing.
    • Integrate into intake and consent processes. Add AI disclosures as a distinct component of your intake paperwork, separate from general terms of service. Train front-line staff to introduce and explain these disclosures as part of the intake conversation, not simply hand them to clients as additional forms to sign.
    • Build appeal and review mechanisms. For any AI tool that significantly influences service decisions, establish a clear process by which clients can request human review, see the reasons for a decision, and challenge outcomes they believe are incorrect. This is both ethically necessary and legally required under Colorado's law for high-risk AI.
    • Create a review cycle. AI tools change, regulations change, and your organization's AI use will evolve. Build an annual review of your AI disclosure practices into your organizational calendar, and assign ownership to a specific role so the review actually happens.
    • Include community voice in AI governance. Multiple frameworks, including Oxfam's rights-based approach and the AI Social Worker Governance Framework, recommend involving clients and affected communities in AI governance decisions before tools are deployed. Consider creating client advisory structures that give community members input into how AI is used in your programs.
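
    As a starting point for the first two items above, the sketch below pairs each entry in a hypothetical tool inventory with the client data it touches, the decisions it influences, and a disclosure level from the three-level framework. The tool names, data fields, and categories are illustrative assumptions, not a prescribed taxonomy.

```python
# A minimal sketch of an AI tool inventory mapped to the three disclosure levels
# described earlier. Tool names, data fields, and categories are illustrative only.
from dataclasses import dataclass
from enum import Enum

class DisclosureLevel(Enum):
    BASIC_NOTIFICATION = 1       # low stakes, no effect on individual client outcomes
    EDUCATION = 2                # informs staff recommendations; a human decides
    FULL_INFORMED_CONSENT = 3    # significantly influences consequential decisions

@dataclass
class AIToolEntry:
    name: str
    client_data_used: list[str]
    decisions_influenced: list[str]
    level: DisclosureLevel

inventory = [
    AIToolEntry("email_drafting_assistant", [],
                ["none (internal drafting only)"], DisclosureLevel.BASIC_NOTIFICATION),
    AIToolEntry("program_matching_tool", ["intake responses", "service history"],
                ["resource suggestions reviewed by a case manager"], DisclosureLevel.EDUCATION),
    AIToolEntry("housing_priority_score", ["intake responses", "housing history"],
                ["priority placement for housing resources"], DisclosureLevel.FULL_INFORMED_CONSENT),
]

# Flag every tool that needs standalone disclosure, active opt-in, and an appeal pathway.
for entry in inventory:
    if entry.level is DisclosureLevel.FULL_INFORMED_CONSENT:
        print(f"{entry.name}: requires Level 3 disclosure, opt-in, and human review")
```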

    Conclusion

    The question of AI informed consent is ultimately a question about what kind of organization you want to be. Compliance is a floor, not a ceiling. You can technically satisfy the disclosure requirements of Colorado's law, California's healthcare AI rules, or GDPR while still leaving clients with no genuine understanding of how AI affects their relationship with your organization. The organizations that will build lasting trust are those that treat AI disclosure as a genuine commitment to transparency rather than a legal checklist.

    That commitment requires facing some uncomfortable truths. The voluntary consent paradox in service-dependent relationships is real and cannot be fully resolved through better paperwork. The technical opacity of AI systems means that truly informed consent is aspirational in many contexts. The power imbalance between clients and organizations that control access to essential services shapes every consent interaction. Acknowledging these realities honestly, rather than pretending that a well-designed consent form resolves them, is itself a form of transparency that clients and communities deserve.

    What nonprofit leaders can control is whether clients know that AI is involved in their care, whether they understand at a basic level what that means, whether they have genuine recourse when AI-influenced decisions feel wrong, and whether the organization treats their trust as something to be earned rather than assumed. The legal requirements are coming regardless. The organizations that build thoughtful AI consent practices now, grounded in genuine respect for client autonomy, will be better positioned for the regulatory environment ahead and more worthy of the trust their missions depend on.

    Pair AI consent practices with a clear AI governance framework and transparent ethical AI use practices to build an organization that clients, funders, and the communities you serve can genuinely trust.

    Build AI Programs Your Clients Can Trust

    One Hundred Nights helps nonprofits develop responsible AI governance, including client disclosure frameworks, ethics policies, and training for front-line staff. Let's design an approach that fits your organization.