California's AI Transparency Laws: Compliance Guide for Nonprofits
California has enacted the most sweeping state-level AI regulations in the United States. Here is what nonprofit leaders need to understand about which laws actually apply to their organizations, what compliance looks like in practice, and where to focus their limited time and resources.

When California's Governor signed 16 AI-related bills in 2024 and 2025, nonprofit leaders across the state faced a wave of headlines warning about sweeping new compliance obligations. The reality is more nuanced, but also more urgent in specific areas than many organizations realize. Some California AI laws apply broadly to nonprofits, others don't, and a few that seem irrelevant have critical implications for vendor relationships.
Understanding which laws genuinely apply to your organization, and what they actually require, is the essential first step. A nonprofit's compliance exposure looks very different from a tech company's. Most of California's high-profile AI legislation targets developers of AI systems, which means the obligations flow primarily to vendors rather than to the nonprofits that use their tools. The exception, and it is a significant one, is the area of AI in employment decisions, where California has enacted landmark regulations that apply directly to all employers, including nonprofits.
This guide organizes California's AI regulatory landscape into what you actually need to act on, what requires vendor attention, and what is currently not applicable to most nonprofit organizations. It also includes a practical compliance checklist so your team can assess your current position and prioritize next steps. For context on the broader regulatory environment, see our overview of new state AI laws taking effect in 2026 and the federal versus state AI regulation landscape.
The Law Most Nonprofits Haven't Heard Of, But Should Have
While high-profile bills like SB 1047 dominated AI news coverage in California, the most consequential AI law for nonprofits is one that received far less attention: the California Fair Employment and Housing Act (FEHA) Automated Decision Systems Regulations, finalized in June 2025 and in effect since October 1, 2025. Unlike many California AI laws that target large tech developers, FEHA applies to any California employer with five or more employees.
If your nonprofit uses AI tools in any part of the hiring process, including resume screening software, applicant ranking systems, job matching platforms, interview scheduling tools, or performance evaluation systems, these regulations apply to you right now. The FEHA regulations are also not limited to tools your organization builds. Off-the-shelf AI software that your HR or recruiting team uses to make or support employment decisions falls within scope.
What makes these regulations particularly impactful is their prohibition on adverse impact. Even if an AI system treats all candidates the same on its face, if it produces discriminatory outcomes for protected groups under California law, including race, national origin, sex, gender, disability, religion, pregnancy, age, and sexual orientation, that constitutes an impermissible employment practice unless the employer can demonstrate job-relatedness and business necessity. This is a high standard borrowed from decades of employment discrimination law, now applied directly to algorithmic systems.
FEHA AI Employment Regulations: Key Requirements
Effective October 1, 2025 for all California employers with 5+ employees
- Bias prohibition: AI systems used in hiring, screening, performance evaluation, or pay/benefits decisions must not produce discriminatory adverse impact on any FEHA-protected class
- Proactive testing: Employers are expected to audit AI employment tools for bias and document those testing efforts (documentation matters for legal defense)
- Four-year record retention: All inputs, outputs, and configuration settings for automated decision systems used in employment must be retained for four years
- Vendor liability extended: AI vendors providing employment decision tools on an employer's behalf may face direct exposure, but employer responsibility is not eliminated
- Existing FEHA remedies apply: Violations can result in compensatory damages, back pay, reinstatement, and attorney's fees under existing employment law frameworks
In practice, compliance starts with an honest audit of your current hiring technology. Many nonprofits use applicant tracking systems with built-in AI scoring features they may not even realize are active. Others use LinkedIn's algorithmic candidate suggestions or Indeed's smart matching features. Some use interview scheduling platforms or video interview analysis tools. Each of these tools falls within the FEHA scope if it contributes to employment decisions.
Once you identify these tools, the next step is requesting documentation from vendors about their bias testing practices and outcomes. Vendors who cannot provide this documentation present a compliance risk. For new vendor contracts, include explicit language assigning FEHA compliance responsibility and requiring ongoing bias audit reporting.
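The four-year record retention requirement is often the most concrete place to start. The regulation does not prescribe a record format, so the following is only a minimal sketch of what capturing ADS inputs, outputs, and settings might look like; the tool name, field names, and file layout are all hypothetical:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Append-only log for automated decision system (ADS) records,
# kept to support FEHA's four-year retention requirement.
RETENTION_LOG = Path("ads_retention_log.jsonl")

def log_ads_decision(tool_name: str, inputs: dict, output: dict,
                     settings: dict) -> dict:
    """Append one timestamped ADS decision record to the retention log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool_name,
        "inputs": inputs,      # e.g. the applicant fields fed to a screener
        "output": output,      # e.g. the score or ranking it returned
        "settings": settings,  # configuration active at decision time
    }
    with RETENTION_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record a hypothetical resume-screening decision
rec = log_ads_decision(
    tool_name="acme_resume_screener",  # illustrative vendor tool name
    inputs={"applicant_id": "A-1042", "years_experience": 6},
    output={"score": 0.82, "recommendation": "advance"},
    settings={"model_version": "2025-09", "threshold": 0.75},
)
print(rec["tool"])
```

In practice, vendor-hosted tools may hold these records on your behalf, which is why the retention obligation belongs in vendor contracts as well as in any internal logging your team controls.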
California AI Transparency Laws for Content and Communications
California's AI Transparency Act (originally SB 942, amended by AB 853 in 2025) addresses a different kind of transparency: disclosure when AI generates or substantially modifies visual and audio content. For nonprofits that use AI to create or edit images, videos, or audio for their communications, fundraising campaigns, or program delivery, understanding this law is important.
The law requires covered providers of generative AI systems to embed invisible "latent" disclosures in AI-generated image, video, and audio content, provide a free detection tool so users can check whether content was AI-generated, and offer users the option to include visible disclosure markers. A "covered provider" is defined as one whose generative AI system has over one million monthly users and is publicly accessible within California. This means major AI image and video tools your organization might use are likely covered providers with these obligations baked in.
The key implication for nonprofits is not a direct compliance obligation in most cases, but a due diligence responsibility: confirm your AI content vendors are complying with this law, and understand what disclosures are embedded in the content you distribute. If your organization runs a platform with over one million monthly visitors that uses AI-generated media, a more careful review is warranted. The law phases in through August 2026, when its requirements expand to large online platforms, hosting services, and camera manufacturers.
What SB 942/AB 853 Requires of AI Providers
- Embed invisible latent disclosures in AI-generated image, video, and audio
- Provide free public detection tool to check if content is AI-generated
- Offer users option to add visible AI-generated content labels
- Contractually require licensees to maintain disclosure capabilities
What This Means for Your Nonprofit
- Confirm AI image/video vendors comply with disclosure requirements
- Understand that AI-generated content you publish carries embedded metadata
- Develop editorial guidelines for when your team should add visible AI labels
- Review exposure if operating a platform with 1M+ California visitors
Training Data Transparency: AB 2013 and What It Means for AI Vendors
California's AB 2013, the Generative AI Training Data Transparency Act, took effect January 1, 2026. It requires any entity that designs, codes, produces, or substantially modifies a generative AI system for public use to publish detailed disclosures about the data used to train their models. The required disclosures cover data sources, whether personal information is included, IP and copyright considerations, data acquisition details, and processing history.
For most nonprofits, AB 2013 does not create a direct compliance obligation because it applies to AI developers, not users. If your organization uses ChatGPT, Claude, Gemini, or other commercial AI tools, you are not the developer and are not required to file disclosures. However, the law creates an important new lever for nonprofit vendor due diligence: the AI tools you use should now have published training data disclosures on their websites that you can review.
This matters because training data transparency is directly connected to questions nonprofits frequently ask about their AI tools: Was the model trained on personal information? Could it reproduce copyrighted content? Does it embed biases from historical data? AB 2013 compliance disclosures won't answer every question, but they create a new baseline of information that thoughtful nonprofits should incorporate into their AI selection and governance processes.
One important caveat: as of early 2026, AB 2013 faces an active First Amendment legal challenge. Elon Musk's xAI filed a federal lawsuit in late December 2025 arguing the law constitutes compelled speech. The law remains in effect while litigation proceeds, but the legal landscape may shift. Organizations should monitor developments while continuing to incorporate training data disclosure review into their vendor practices regardless of the law's ultimate fate.
AB 2013 Vendor Due Diligence Checklist
Questions to ask AI vendors about their training data transparency compliance
- Has the vendor published an AB 2013-compliant training data disclosure on their website?
- Does the disclosure indicate whether personal information (as defined under CCPA) was included in training data?
- Are training data sources identified with sufficient specificity to assess relevance to your use case?
- Does the vendor have a process for updating disclosures when models are substantially modified?
- If a vendor operates in California and has not published disclosures, what is their plan for compliance?
The CCPA Exemption: What Nonprofits Are Generally Protected From
The California Privacy Rights Act (CPRA) and the California Consumer Privacy Act (CCPA) contain one of the most significant protections for nonprofits in the state's regulatory framework: a broad statutory exemption from their requirements. The CCPA generally applies only to for-profit businesses meeting revenue or data processing thresholds. Most nonprofits, as tax-exempt organizations, fall outside this definition.
This exemption is significant in practice because the California Privacy Protection Agency (CPPA) finalized sweeping Automated Decision-Making Technology (ADMT) regulations in July 2025, effective January 1, 2026. These regulations, which would otherwise require businesses to provide pre-use notices before using AI in consequential decisions, give consumers opt-out rights, and conduct detailed risk assessments for high-stakes AI applications, do not apply to most nonprofits because of the CCPA exemption.
However, the exemption is not absolute and should not breed complacency. Nonprofits that operate commercial enterprises alongside their charitable missions, or that meet the revenue thresholds through business activities, may have exposure. Any nonprofit uncertain about whether it qualifies for the CCPA exemption should consult with legal counsel. Additionally, the exemption covers the nonprofit as an organization but does not eliminate the compliance obligations of AI vendors who are for-profit businesses. This means data governance in your vendor contracts remains important regardless of your exempt status.
Generally NOT Applicable to Most Nonprofits
- CCPA/CPRA Automated Decision-Making Technology (ADMT) regulations
- AB 1008 (CPRA expansion to AI model outputs)
- SB 53 (Frontier AI Transparency) - applies only to large frontier model developers
- AB 2013 (Training Data Transparency) as a direct obligation - applies only to AI builders
Does Apply to Nonprofits
- FEHA AI Employment Regulations (all CA employers with 5+ employees)
- AB 316 (no "AI acted autonomously" defense in civil litigation)
- AB 489 (healthcare AI disclosure requirements) for health/social service nonprofits
- SB 243 (companion chatbot disclosure) for youth-serving organizations using AI chat
Sector-Specific Laws That Affect Certain Nonprofits
Beyond the broadly applicable FEHA regulations, several California AI laws target specific use cases that are particularly relevant to mission-driven organizations. Understanding these is especially important for nonprofit leaders in health, social services, and youth programming, where AI tools are increasingly common and where the stakes of misuse are highest.
AB 489: Healthcare and Social Services AI Disclosure
Effective January 1, 2026
AB 489 prohibits AI systems from using terms, titles, or phrases that imply a user is receiving care from a licensed healthcare professional when no such human oversight exists. For nonprofits operating health clinics, community health programs, mental health referral services, crisis lines, or social service navigation tools that use AI chat interfaces, this law creates a direct obligation: your AI tools' user-facing communications must clearly disclose when users are interacting with an AI system and cannot imply professional licensure.
The practical implication is an audit of any AI chat or conversational tool used in health or social service contexts. Chatbot introductions, automated response systems, and AI-powered screening tools should include clear, conspicuous disclosure of their AI nature and the absence of human professional oversight. This is also simply good practice from an ethical standpoint, independent of legal requirements.
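For teams that configure their own chat tools, the disclosure can be as simple as a fixed preamble shown before any AI response. The wording and function below are purely illustrative, not statutory language:

```python
# Hypothetical disclosure text; actual wording should be reviewed by counsel.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a licensed "
    "healthcare professional. For urgent needs, please contact our staff."
)

def open_chat_session(user_message: str) -> list[str]:
    """Return a session's opening messages, with the AI disclosure first."""
    return [AI_DISCLOSURE, f"How can I help with: {user_message!r}?"]

messages = open_chat_session("finding a clinic near me")
print(messages[0])
```

Placing the disclosure in the session-opening logic, rather than relying on staff to remember it, makes the practice auditable and consistent across every conversation.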
SB 243: Companion Chatbot Transparency for Youth-Serving Organizations
Effective January 1, 2026
SB 243 requires operators of AI companion chatbots to be transparent with users, and particularly with minors, that they are interacting with an AI rather than a human. For nonprofits running after-school programs, youth development organizations, mentorship platforms, or any program that deploys conversational AI tools accessible to children, this law creates clear obligations around disclosure and transparency.
Even if your organization's AI tools are not designed as companion applications, the law's spirit applies to any AI-powered conversation system accessible to minors. Youth-serving nonprofits should establish a clear policy requiring explicit AI identity disclosure in all such tools and conduct an audit of any digital tools used in programming that involve AI conversation, screening, or support features.
AB 316: You Cannot Blame the AI
Effective January 1, 2026
AB 316 closes a potential loophole that might have allowed organizations to disclaim liability for AI-caused harms by arguing the system acted autonomously. Under this law, defendants in civil cases cannot assert that "the AI acted on its own" as a defense to liability for harms caused by AI systems they developed, modified, or deployed.
For nonprofits, this reinforces a principle that should already guide AI governance: organizational responsibility for AI decisions does not diminish because an algorithm made them. If your AI-powered case management system misclassifies a client's need, or your AI-generated recommendation leads to a poor outcome, your organization retains accountability. This underscores the importance of maintaining meaningful human review of consequential AI-generated decisions, a practice that is both legally prudent and ethically required.
Vendor Management as the Central Compliance Strategy
A recurring theme across California's AI regulatory framework is that nonprofits are primarily AI users rather than AI developers, which means their most important compliance surface is not internal systems but vendor relationships. The laws that apply most directly to the AI companies whose tools nonprofits use, including AB 2013 training data transparency, SB 942 content detection requirements, and SB 53 frontier model disclosures, create a new category of information that nonprofits should be requesting and reviewing as part of standard vendor due diligence.
This matters for several reasons beyond strict legal compliance. A vendor's compliance with California transparency laws provides useful signals about their operational practices, the nature of their training data, and their overall approach to responsible AI development. An AI company that cannot provide training data disclosure documentation, or whose content tools lack the detection capabilities required by SB 942, may have other governance gaps that create risk for your organization.
Vendor contracts signed before 2025 may not include provisions addressing California's new AI laws. This is the right moment to review existing agreements, especially with AI vendors in high-stakes domains like hiring, client services, and health applications. For new vendor agreements, consider including AI governance representations, compliance certification requirements, and clear liability allocation for AI-related compliance failures. This is also related to the broader AI liability and risk management considerations nonprofits should address.
AI Vendor Contract Provisions for California Compliance
- FEHA compliance representation: Vendor warrants their AI employment tools comply with FEHA's anti-discrimination requirements and will provide bias audit results on request
- Training data disclosure: Vendor certifies AB 2013 compliance and will maintain publicly accessible training data disclosures
- Content detection: For generative AI image/video/audio tools, vendor certifies SB 942 compliance including detection tool availability
- Incident notification: Vendor will notify organization within specified timeframe if AI system produces harmful, discriminatory, or non-compliant outputs
- Data retention support: Vendor will retain or make available records necessary for FEHA's four-year ADS record retention requirement
- Regulatory monitoring: Vendor agrees to notify organization of material changes in California AI law compliance that affect their services
Special Considerations for Government-Funded Nonprofits
Nonprofits that receive funding from California state agencies, or that operate under state contracts to deliver services, face an additional layer of consideration beyond the laws described above. Governor Newsom's Executive Order N-12-23, issued in September 2023, directed state agencies to establish AI risk assessment procedures, AI procurement guidelines, and mandatory staff training requirements. While this order applies to state agencies rather than nonprofits directly, its influence is increasingly felt by organizations that work with those agencies.
State agency AI procurement guidelines now require executive-level oversight of AI tools, continuous monitoring, and demonstrated alignment with responsible AI principles. As these standards become embedded in state contract requirements, nonprofits that deliver state-funded services may find themselves asked to demonstrate AI governance practices that parallel what state agencies are required to do internally. This is a trend worth monitoring closely, particularly for organizations in health and human services, child welfare, workforce development, and other heavily state-contracted service areas.
Even without explicit contractual requirements, building AI governance infrastructure now positions your organization well for the direction regulations are heading. This includes maintaining an AI use register documenting what tools are deployed and for what purposes, establishing an internal AI point of contact, creating an incident response protocol for AI failures, and developing staff policies around acceptable AI use in program delivery. For more on building this foundation, our guide on building AI champions within your organization offers practical starting points.
Your California AI Compliance Checklist for 2026
Compliance with California's AI regulatory framework is achievable for nonprofits with focused effort. The most common mistake is treating AI governance as a single project to be completed rather than an ongoing practice to be embedded. The regulatory landscape is evolving rapidly. More than 22 California AI bills were in the legislative pipeline heading into the second half of the 2025-2026 session, meaning additional requirements are likely before the year is out.
Immediate Actions (Already Required)
- Audit all AI tools used in hiring, screening, performance evaluation, and pay/benefits decisions for FEHA compliance
- Document bias testing efforts for any AI tools used in employment decisions
- Establish four-year record retention for ADS inputs, outputs, and settings used in employment
- Update AI vendor contracts to address FEHA compliance and liability
- Ensure AI chat tools in health or youth services clearly disclose their AI nature (AB 489, SB 243)
- Brief leadership and program staff that AI tools do not eliminate organizational liability (AB 316)
Vendor Due Diligence (Ongoing)
- Request AB 2013 training data disclosures from generative AI vendors
- Confirm AI image/video/audio vendors comply with SB 942/AB 853 content detection requirements
- Add California AI compliance questions to all new vendor RFPs and renewals
- Review existing AI vendor contracts for California AI compliance provisions and gaps
Governance Infrastructure (Build Over Time)
- Designate an internal AI point of contact responsible for monitoring regulatory developments
- Create an AI use register documenting what tools are in use, for what purposes, and who is responsible
- Develop an AI incident response protocol for harmful or non-compliant outputs
- Establish a staff acceptable use policy that addresses client data, content disclosure, and human oversight requirements
- For state-funded programs, monitor government contracts for emerging AI governance requirements
Navigating a Moving Target
California's AI regulatory landscape is genuinely complex, and it is still evolving. But for most nonprofits, the compliance picture is clearer and more manageable than the volume of legislative activity might suggest. The most urgent and universally applicable obligation is the FEHA employment AI regulations, which require action on any AI tools used in hiring and employment decisions. Beyond that, sector-specific laws apply to organizations in health, social services, and youth programming. And across all areas, vendor due diligence has become a central compliance practice.
The organizations that will navigate this landscape most successfully are those that approach AI governance not as a compliance checkbox but as an organizational capability. Building internal practices around AI transparency, vendor accountability, and human oversight of consequential decisions serves your compliance obligations and your mission simultaneously. When AI is used thoughtfully, with clear accountability and appropriate safeguards, it genuinely does improve organizational capacity and service quality.
California's regulatory environment is being watched closely by other states, and federal AI governance frameworks are developing in parallel. The infrastructure you build for California compliance today is likely to serve you well as the national regulatory picture clarifies. Use this moment to establish the governance habits and vendor relationships that will position your organization to remain compliant, trustworthy, and competitive as AI continues to reshape the sector. For further reading on responsible AI adoption, explore our guidance on ethical AI implementation and creating an AI policy for your nonprofit.
Get Help With Your AI Compliance Strategy
Navigating California's AI regulatory landscape requires both legal awareness and practical AI expertise. Our team helps nonprofits assess their current AI use, identify compliance gaps, and build governance frameworks that work in practice.
