Illinois, Nevada, New York, Utah: The State-by-State Patchwork of AI Mental Health Laws Nonprofits Must Track
Four states enacted major AI mental health legislation in 2025, each taking a different approach. Whether you run a crisis hotline, a peer support program, or a licensed counseling service, or simply deploy an AI chatbot that touches emotionally sensitive conversations, this guide explains what changed and what you need to do.

In March 2026, the family of Jonathan Gavalas filed a federal lawsuit against Google, alleging that months of unmonitored interactions with Gemini drove him into delusional states, encouraged an emotional dependency on the AI, and ultimately contributed to his death by suicide. No self-harm detection triggered. No human counselor intervened. No escalation protocol activated. The lawsuit landed like a shockwave across the nonprofit sector, because the scenario it describes (an AI chatbot deployed without adequate safeguards in an emotionally sensitive context) is not hypothetical for many organizations. It is the current state of affairs.
Legislators were already moving. In 2025, Illinois, Nevada, New York, and Utah each passed laws directly regulating AI in mental health contexts. California joined them with SB 243, effective January 2026. Washington, Iowa, and Oregon followed with their own requirements. By early 2026, 11 states had enacted 20 laws meaningfully regulating AI mental health interactions, and legislators in 43 states had introduced over 240 health-AI bills in 2026 alone. The regulatory patchwork is now real and varied, with penalties that reach $15,000 per violation per day.
For nonprofits, the stakes are particularly acute. Crisis hotlines, peer support organizations, licensed counseling agencies, faith-based counseling programs, social service case managers, and community mental health centers all sit somewhere on this regulatory map. Some face outright prohibitions on autonomous AI therapy. Others must implement disclosure requirements, crisis detection protocols, or human escalation systems. A handful have access to safe harbor provisions that reward proactive compliance. Understanding exactly which law applies to your organization, and what it demands, is no longer optional.
This guide walks through each state law in plain language, explains who is and is not covered, identifies what compliance actually requires, and provides a practical starting framework for nonprofits assessing their exposure. It is not a substitute for legal counsel. Given the pace of change, with 240+ bills moving through state legislatures in 2026, ongoing legal monitoring is essential. But it is a starting point for understanding a landscape that affects virtually every nonprofit touching human wellbeing.
Why 2025 Was the Year States Moved
The legislative wave of 2025 was not spontaneous. It built on years of accumulating concerns: AI chatbots being marketed to vulnerable users as therapeutic tools, teen mental health apps collecting sensitive data without meaningful safeguards, and chatbot interactions that mimicked clinical relationships without any of the professional oversight that protects real therapy clients.
Several trigger events accelerated the timeline. Character.AI faced lawsuits alleging its chatbot encouraged self-harm in teenagers. Woebot Health shut down its consumer-facing CBT chatbot on June 30, 2025, citing the regulatory environment and business viability, a signal that standalone AI therapy chatbots face extreme headwinds. The FTC issued orders to seven major AI chatbot companies in September 2025, requesting information about safety assessment processes and data practices. And then the Gavalas lawsuit arrived in March 2026, providing legislators with the exact cautionary narrative they had been warning about.
The American Medical Association and American Psychological Association both issued formal guidance in 2025-2026 urging transparency requirements, clinical oversight mandates, crisis detection protocols, and restrictions on AI representing itself as a licensed clinician. Their frameworks now track closely with what states are legislating.
The Gavalas v. Google Case: Key Implications for Nonprofits
The March 2026 wrongful death lawsuit turned deploying AI without safeguards in emotionally sensitive contexts into a live question of legal liability
- Nonprofits cannot rely on vendor indemnification alone: the organization deploying the AI system bears a duty of care to users in crisis
- "No safeguards triggered, no human intervened" is now a legally documented failure mode, not just a theoretical risk
- Following the lawsuit, Google added crisis detection modules and one-touch crisis hotline connections, now the minimum standard the field is converging around
- The case reinforces that AI presenting as emotionally engaged or sentient, without human oversight, creates liability exposure regardless of the deploying organization's nonprofit status
The Four State Laws: Side by Side
Each state took a meaningfully different approach. Understanding the distinctions matters because nonprofits serving clients across multiple states must comply with all applicable laws, and what is permitted in Utah may be prohibited in Nevada.
Illinois: Wellness and Oversight for Psychological Resources Act (Public Act 104-0054)
Enacted August 4, 2025. Illinois was the first state to explicitly prohibit autonomous AI therapy.
Illinois drew the clearest line of any state: AI may not independently provide, advertise, or offer therapy or psychotherapy services. The law prohibits AI from making independent therapeutic decisions, directly engaging in therapeutic communication with clients, generating treatment plans without licensed review, or detecting emotions or mental states autonomously. Before any AI tool is used in recorded or transcribed therapeutic sessions, licensed professionals must obtain informed patient consent.
Penalties reach $10,000 per incident. The law explicitly requires that a licensed professional conduct the session, meaning AI can support a clinician but cannot substitute for one.
Who Is Covered
- Mental health counseling agencies with licensed therapists
- Behavioral health clinics using AI in clinical workflows
- Any nonprofit where AI interacts directly with therapy clients
Who Is Exempt
- Peer support programs (explicitly carved out)
- Religious and faith-based counseling organizations
- Self-help materials and programs
Key nonprofit implication: If your organization runs peer support, 12-step facilitation, or faith-based counseling, Illinois's law does not apply. If you employ licensed therapists and use AI in their workflow, it does.
Nevada: Assembly Bill 406
Enacted June 5, 2025. Effective July 1, 2025. The broadest prohibition approach of the four states.
Nevada took the most restrictive approach: it broadly prohibits offering Nevada residents interactive AI systems that provide or claim to provide professional mental or behavioral healthcare services. Unlike Illinois, which focuses on the clinical relationship, Nevada's prohibition extends to any interactive AI system that presents itself as providing mental health services, regardless of whether a licensed professional is technically involved.
For licensed healthcare professionals, Nevada restricts AI use to administrative functions only. Any AI output used in a clinical context requires independent review by the licensed provider. Misrepresentations about AI capabilities in mental health contexts are separately prohibited. Penalties reach $15,000 per violation, and no formal safe harbor exists.
High risk: Nevada's categorical prohibition leaves little room for compliant AI mental health products serving Nevada residents. Nonprofits deploying AI chatbots that could be construed as providing behavioral health services (including crisis text services, AI-assisted counseling intake, and wellness apps) face significant exposure under this law.
New York: AI Companion Safeguard Law (A6767)
Enacted May 2025 as part of NY Budget Bill. Effective November 5, 2025.
New York's law targets "AI companions," interactive AI systems designed for social or emotional engagement. Rather than prohibiting them outright, the law requires disclosure and crisis detection. Operators must clearly disclose that users are engaging with AI rather than a human, both at the start of an interaction and at least once every three hours during continuing interactions (a sketch of this timing logic follows the list below). Governor Hochul personally notified AI companion companies in writing that safeguards were in effect as of November 2025.
The crisis detection requirement is the most operationally demanding element: operators must maintain reasonable protocols to detect expressions of suicidal ideation or self-harm, and upon detection, must refer users to crisis service providers, including the 988 Suicide and Crisis Lifeline. Penalties reach $15,000 per day for violations, enforced by the New York Attorney General.
- The law targets the "operator" of the AI system, so nonprofits deploying third-party chatbots bear compliance responsibility, not just the vendor
- Separate pending legislation (S.8484) would extend clinical oversight requirements to licensed professionals using AI in therapy, mirroring Illinois's approach
- Any nonprofit operating AI-assisted peer emotional support or crisis chat platforms serving New York residents should treat this law as applicable
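To make the disclosure cadence concrete, here is a minimal TypeScript sketch of the timing logic referenced above. Every name in it is illustrative rather than drawn from the statute or any vendor SDK; it assumes a simple per-session state object and is a starting point, not a compliance implementation.

```typescript
// Sketch of New York's disclosure cadence: disclose at the start of an
// interaction and at least once every three hours while it continues.
// All names are illustrative; the statutory text governs, not this code.

const THREE_HOURS_MS = 3 * 60 * 60 * 1000;

interface SessionState {
  lastDisclosureAt: number | null; // epoch ms of the most recent AI disclosure
}

function disclosureDue(session: SessionState, now: number = Date.now()): boolean {
  if (session.lastDisclosureAt === null) return true; // first message: always disclose
  return now - session.lastDisclosureAt >= THREE_HOURS_MS; // re-disclose every 3 hours
}

function withDisclosure(session: SessionState, reply: string): string {
  if (!disclosureDue(session)) return reply;
  session.lastDisclosureAt = Date.now();
  return "Reminder: you are talking with an AI program, not a human.\n\n" + reply;
}
```

The design point is that disclosure timing lives in session state alongside the conversation, so it survives long interactions and cannot be skipped by any individual response path.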
Utah: House Bill 452
Enacted March 2025. Effective May 7, 2025. The most nonprofit-friendly framework with an explicit safe harbor.
Utah did not ban AI mental health chatbots. Instead, it created a disclosure-first framework with an explicit safe harbor for compliant organizations. Mental health chatbots must clearly and conspicuously disclose they are AI at first access, when a user returns after seven or more days of inactivity, and any time a user directly asks whether they are talking to AI (these three triggers reduce to simple logic, sketched after the safe-harbor checklist below). The law also prohibits selling or sharing individually identifiable health information with third parties, except as necessary for the chatbot's core function or with explicit user consent consistent with HIPAA.
Utah's Safe Harbor: Organizations that create, maintain, and implement a written compliance policy filed with Utah's Division of Consumer Protection, and maintain documentation of their AI development, training data, HIPAA compliance measures, and data sharing practices, receive an affirmative defense against civil and administrative liability. This is the clearest compliance pathway any state has offered. To qualify:
- File a written compliance policy with Utah's Division of Consumer Protection
- Maintain documentation covering AI development, training data sources, HIPAA compliance, and data sharing practices
- Implement the required AI disclosure language at all specified touchpoints
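As promised above, Utah's three disclosure triggers reduce to a small decision function. The sketch below is a simplified, illustrative reading of the statute with hypothetical names throughout, not legal advice or a production classifier.

```typescript
// Sketch of Utah HB 452's three disclosure triggers. All names illustrative.

const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;

interface UserRecord {
  firstAccess: boolean;        // user has never interacted with the chatbot
  lastActiveAt: number | null; // epoch ms of the user's previous activity
}

// Naive check for a direct "am I talking to AI?" question; a real system
// would need a far more robust intent classifier than a regex.
function asksIfAI(message: string): boolean {
  return /are you (an? )?(ai|bot|robot|human|real person)/i.test(message);
}

function utahDisclosureRequired(
  user: UserRecord,
  message: string,
  now: number = Date.now()
): boolean {
  if (user.firstAccess) return true;                         // trigger 1: first access
  if (user.lastActiveAt !== null &&
      now - user.lastActiveAt >= SEVEN_DAYS_MS) return true; // trigger 2: 7+ days away
  return asksIfAI(message);                                  // trigger 3: direct ask
}
```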
Beyond the Four: Other States Nonprofits Must Watch
Illinois, Nevada, New York, and Utah are the most discussed, but they are not alone. California's SB 243, effective January 1, 2026, requires chatbot operators to detect mental health crises and suicidal ideation, mandates user disclosures, and includes protections for minors. It also creates a private right of action for injured users, meaning individuals can sue operators directly, not just wait for state enforcement. Washington, Iowa, and Oregon have each passed requirements stipulating that chatbot operators implement mental health crisis detection capabilities and refer users to crisis resources and suicide hotlines.
New Jersey's Assembly Bill 5603, which cleared committee as of mid-2025, would ban advertising presenting an AI system as a licensed mental health professional. Other states are following their own paths. The critical operational point for nonprofits is that compliance is determined by where your users are located, not where your organization is headquartered. A national crisis text platform based in Illinois, serving users across 50 states, must track and comply with the law of every state it serves.
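Because obligations follow the user's location, multi-state operators commonly maintain a requirements matrix keyed by state and consult it per interaction. The sketch below encodes this article's rough summary of each law as boolean flags; it is an illustrative simplification, not an authoritative legal encoding, and any real matrix should be built and maintained with counsel.

```typescript
// Illustrative state-requirements matrix. The flags summarize this article's
// reading of each law and are NOT an authoritative legal source.

interface StateRequirements {
  prohibitsAutonomousAITherapy: boolean;
  requiresAIDisclosure: boolean;
  requiresCrisisDetection: boolean;
  offersSafeHarbor: boolean;
}

const REQUIREMENTS: Record<string, StateRequirements> = {
  IL: { prohibitsAutonomousAITherapy: true,  requiresAIDisclosure: true,  requiresCrisisDetection: false, offersSafeHarbor: false },
  // Nevada's prohibition is broader than autonomous therapy alone.
  NV: { prohibitsAutonomousAITherapy: true,  requiresAIDisclosure: false, requiresCrisisDetection: false, offersSafeHarbor: false },
  NY: { prohibitsAutonomousAITherapy: false, requiresAIDisclosure: true,  requiresCrisisDetection: true,  offersSafeHarbor: false },
  UT: { prohibitsAutonomousAITherapy: false, requiresAIDisclosure: true,  requiresCrisisDetection: false, offersSafeHarbor: true  },
  CA: { prohibitsAutonomousAITherapy: false, requiresAIDisclosure: true,  requiresCrisisDetection: true,  offersSafeHarbor: false },
};

// Look up obligations by where the user is, not where the org is headquartered.
function obligationsFor(userState: string): StateRequirements | null {
  return REQUIREMENTS[userState.toUpperCase()] ?? null;
}
```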
Federal vs. State: Understanding the Gap
No comprehensive federal AI mental health law exists as of May 2026. The patchwork of state laws is the operative compliance framework.
HIPAA
Mental health data collected by AI chatbots is subject to HIPAA when the operator is a covered entity or business associate. Nonprofits running licensed counseling services or partnering with health plans are generally covered entities. Peer support organizations, crisis hotlines, and wellness apps outside the clinical context may not be covered entities, meaning HIPAA's full protections do not apply to their AI data practices. This creates a significant gap in protection for some of the most vulnerable users.
42 CFR Part 2 (Substance Use Disorder Records)
Stricter than HIPAA. Nonprofits doing substance use counseling or peer recovery support must apply Part 2 protections to any AI tools processing SUD-related disclosures, including chatbot conversations.
FTC Authority
The FTC's authority covers consumer-facing AI chatbots not run by HIPAA-covered entities. Many nonprofit mental health apps fall into this gap: not HIPAA-covered, but FTC-regulated. The FTC's September 2025 enforcement orders signal active interest in AI chatbot safety practices. Deceptive claims about AI capabilities, such as implying an AI is a therapist, can constitute unfair or deceptive trade practices under FTC Act Section 5.
Which Nonprofits Are Most Affected
Not all nonprofits face equal exposure. The laws draw important distinctions based on what kind of service you provide, whether you employ licensed professionals, and what your AI tools actually do in conversations with users.
High Exposure
- Licensed counseling agencies using AI in client-facing workflows
- Crisis hotlines and text lines operating AI chatbot pre-screening or routing
- Nonprofits offering AI companion or emotional support tools
- Community mental health centers using AI in any client-facing capacity
- Substance use disorder programs using AI tools that process SUD disclosures
Lower Exposure (But Not Zero)
- Peer support organizations in Illinois (explicitly exempt), but still subject to NY/UT chatbot laws
- Faith-based counseling nonprofits in Illinois (explicitly exempt)
- Social services and case management organizations using AI for non-clinical tasks
- Organizations where AI is used only for administrative tasks, not user interactions
A critical nuance: peer support organizations are explicitly exempt from Illinois's prohibition, but that exemption does not extend to New York's AI companion law or Utah's chatbot disclosure requirements. A peer support nonprofit operating an AI chatbot that engages with users about emotional wellbeing still needs to implement disclosure requirements and crisis detection in states where those laws apply. The Illinois exemption is narrow and specific to that state's definition of therapy and psychotherapy.
The 988 Suicide and Crisis Lifeline, as a federally funded program, maintains human counselors at the center of its response. AI is used in supporting and routing roles only. SAMHSA's 2026 988 Lifeline Administrator grant requirements maintain human counselor primacy. If your organization operates or partners with a 988-affiliated crisis line, this operational model, human-led with AI in a support role, is the appropriate framework.
What the Medical Community Is Asking For
The AMA testified before Congress in 2026, urging five safeguards that closely mirror what states are legislating. The APA issued a November 2025 advisory urging human oversight requirements and restrictions on AI chatbots positioning themselves as therapeutic replacements. These frameworks matter for nonprofits because they represent the emerging professional and regulatory standard of care. Organizations that align with AMA and APA guidance now are less likely to face liability later, regardless of which specific state laws apply.
AMA's Five Safeguards for AI Mental Health Chatbots
- Transparency: Require clear AI disclosure; prohibit systems from impersonating licensed clinicians at any point in the conversation
- Limit Commercial Influence: Ban advertising within mental health chatbots, with heightened protections for interactions involving minors
- Data Protection: Strict limits on collection, retention, and sharing of sensitive mental health data; BAAs required for any vendor processing health-related conversations
- Regulatory Boundaries: No AI diagnosis or treatment of mental health conditions without regulatory oversight; AI supports humans, never replaces them
- Safety Monitoring: Mandatory crisis-detection protocols with escalation to human resources and referral to crisis services when self-harm signals are detected
Practical Compliance Steps for Nonprofits
The regulatory landscape is complex, but the compliance path for most nonprofits begins with the same foundational steps. Before you can determine which laws apply, you need to know what AI your organization is actually using and how it is being used. That sounds obvious, but many organizations discovered during 2025 that individual program staff had deployed AI tools without organizational awareness, let alone legal review. The use-case audit is the essential first step.
Immediate Actions (30 Days)
- Conduct a use-case audit: map every AI tool touching mental health interactions, including intake screeners, chatbots, routing tools, and staff-facing AI assistants (a sample inventory schema follows this list)
- Determine your state exposure: identify which state laws apply based on where your users are located, not where you are headquartered
- Classify your organization: are you a HIPAA covered entity? Do you process 42 CFR Part 2 substance use data? Do you operate AI companions under New York's definition?
- Execute Business Associate Agreements with all AI vendors that process health-related conversations
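To make the use-case audit referenced in the first bullet concrete, one workable approach is a structured inventory with one entry per AI touchpoint. The schema and example entry below are hypothetical, meant only to show the kinds of fields worth capturing.

```typescript
// Hypothetical inventory schema for the use-case audit: one entry per AI
// touchpoint, with fields that mirror the questions these laws ask.

interface AIUseCaseEntry {
  toolName: string;                    // what the tool is called internally
  vendor: string | null;               // null for tools built in-house
  userFacing: boolean;                 // does it interact directly with clients?
  touchesMentalHealthContent: boolean;
  processesSUDDisclosures: boolean;    // triggers 42 CFR Part 2 analysis
  statesServed: string[];              // drives which state laws apply
  baaSigned: boolean;                  // Business Associate Agreement in place?
  crisisEscalationPath: string | null; // e.g. "988 referral + on-call counselor"
}

const inventory: AIUseCaseEntry[] = [
  {
    toolName: "Intake pre-screening chatbot", // hypothetical example entry
    vendor: "ExampleVendor",
    userFacing: true,
    touchesMentalHealthContent: true,
    processesSUDDisclosures: false,
    statesServed: ["NY", "UT"],
    baaSigned: false,           // immediately flags a gap to close
    crisisEscalationPath: null, // immediately flags a missing protocol
  },
];
```

Even a single entry like this can surface gaps, here a missing BAA and a missing escalation path, before a regulator or plaintiff does.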
Implementation Actions (90 Days)
- Implement AI disclosure language at the start of every chatbot interaction (required in multiple states; best practice everywhere)
- Build crisis detection and escalation protocols: at minimum, connect users to 988 and local crisis resources when self-harm language appears (a minimal escalation sketch follows this list)
- If serving Utah residents, file a written compliance policy with Utah's Division of Consumer Protection to access safe harbor protection
- Document that peer support or faith-based programs are not clinical therapy if operating under Illinois's exemptions
- Engage legal counsel for ongoing monitoring: 240+ health-AI bills are moving through state legislatures in 2026 alone
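For the crisis detection bullet above, the sketch below shows the minimal shape of the requirement: detect self-harm language, surface 988, and bring a human in. It is deliberately simple and entirely illustrative; keyword matching is a floor, not a ceiling, and production systems layer trained classifiers and human review on top.

```typescript
// Minimal, illustrative sketch of crisis detection with a 988 referral and
// human escalation. Keyword matching alone is NOT sufficient in production.

const CRISIS_PATTERNS: RegExp[] = [
  /suicid/i,
  /kill (myself|me)/i,
  /self[- ]?harm/i,
  /end (my life|it all)/i,
];

function detectCrisisLanguage(message: string): boolean {
  return CRISIS_PATTERNS.some((pattern) => pattern.test(message));
}

// notifyOnCall is a hypothetical hook into your on-call counselor system.
async function handleMessage(
  userId: string,
  message: string,
  notifyOnCall: (userId: string) => Promise<void>
): Promise<string | null> {
  if (!detectCrisisLanguage(message)) return null; // normal handling continues

  await notifyOnCall(userId); // escalate to a human immediately
  return (
    "If you are thinking about suicide or self-harm, you can call or text 988 " +
    "(the Suicide and Crisis Lifeline) right now. A human counselor from our " +
    "team is also joining this conversation."
  );
}
```

Whatever detection approach you use, log every trigger and escalation: the documentation expectations in Utah's safe harbor, and the failure mode alleged in Gavalas, both turn on being able to show what your system did and when.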
One practical note on vendor relationships: do not assume your AI vendor handles compliance on your behalf. As the Gavalas case reinforces, the organization deploying the AI bears a duty of care to users. Vendors like Wysa have pivoted toward hybrid models that integrate licensed professional oversight, the direction in which most compliant AI mental health products are now moving. When evaluating or renegotiating vendor contracts, ask specifically what crisis detection capabilities are built in, how escalation is handled, whether the vendor will sign a BAA, and what documentation they provide for Utah's safe harbor requirements. These are no longer nice-to-have negotiating points; they are essential compliance criteria.
What Is Coming Next
The regulatory momentum is accelerating, not slowing. State legislatures have discovered that AI mental health is politically popular territory for action: it combines visible technology risks with sympathetic harms involving vulnerable people. The 240+ bills moving through state houses in 2026 represent every variation of approach: disclosure requirements, crisis detection mandates, clinical oversight requirements, data protection rules, age-specific protections for minors, and outright prohibitions.
Federal action remains uncertain. Congress has held hearings and received AMA testimony, but has not passed comprehensive legislation. The FTC's enforcement interest is growing. SAMHSA is increasingly specific about AI's role (supporting, not replacing human counselors) in federally funded crisis services. The overall trajectory is toward greater requirements, not fewer.
For nonprofits, the practical response to regulatory uncertainty is to build toward the highest defensible standard rather than the minimum current requirement. Organizations that implement genuine AI disclosure, meaningful crisis detection with human escalation, HIPAA-compliant data practices, and documented compliance programs will be well-positioned regardless of which specific laws apply to them. Those waiting for a single federal law to clarify everything may be waiting through years of worsening patchwork while their exposure grows.
For context on how AI compliance intersects with other legal and governance considerations, see our overview of what the Gavalas lawsuit means for nonprofits deploying AI chatbots in mental health contexts. For organizations thinking about how to structure their overall AI governance approach, our guide on responsible AI practices for nonprofits covers the broader framework.
Conclusion
The AI mental health regulatory landscape of 2026 is genuinely complex, and the stakes for getting it wrong are high. Illinois prohibits autonomous AI therapy outright. Nevada prohibits interactive AI systems that present as mental health providers. New York mandates disclosure and crisis detection for AI companions. Utah offers a workable safe harbor for organizations that build documented compliance programs. California creates a private right of action. The patchwork is real, and it applies wherever your users are located, not just where you operate.
The Gavalas lawsuit against Google crystallized what was already becoming legally and ethically clear: deploying AI in emotionally sensitive contexts without adequate safeguards is not a compliance gap that can be ignored. It is a risk to real people. For nonprofits whose missions center on human wellbeing, the alignment between legal compliance and mission integrity is unusually direct. Building AI into crisis services, counseling programs, and peer support in a way that genuinely protects users is not just a legal obligation; it is the right way to do the work.
Start with the use-case audit. Know what AI is in your systems and what it is doing. Then map your state exposure. Then build the infrastructure (disclosure language, crisis detection, human escalation, BAAs, and documentation) that positions your organization as a trustworthy operator in this rapidly evolving space. Given that 240+ state bills are in motion in 2026 alone, do not attempt to navigate this without legal counsel. But do not wait for perfect regulatory clarity before moving either. The direction is clear, even when the destination is still being mapped.
Need Help Assessing Your AI Compliance Exposure?
One Hundred Nights helps nonprofits understand and navigate AI governance, compliance, and responsible deployment. Let us help you build a framework that protects your organization and the people you serve.
