
    New State AI Laws Taking Effect in 2026: What Your Nonprofit Needs to Know

    California, Colorado, Texas, and a growing list of states have passed AI laws that affect how nonprofits can develop, deploy, and communicate about AI tools. This practical guide explains what changed, what it means for your organization, and the steps you should take now.

Published: February 21, 2026 · 12 min read · AI Regulation

    For most of AI's recent history, the regulatory environment in the United States has been a patchwork of guidance documents, voluntary frameworks, and executive orders. That era is ending. States across the country have now passed substantive AI legislation that creates real legal obligations, and 2026 is the year those obligations come into force.

Nonprofit organizations face a compliance landscape that is simultaneously simpler and more complicated than the one for-profit businesses face. Simpler, because many of the most stringent requirements target commercial AI developers with large user bases. More complicated, because nonprofits often operate across multiple states, serve vulnerable populations where AI errors have serious consequences, and may use AI in ways that touch healthcare, housing, education, and employment: exactly the sectors most heavily regulated under the new AI laws.

    This article provides a practical overview of the most significant state AI laws taking effect in 2026, how they apply to nonprofit organizations specifically, and the concrete steps your organization should take to achieve and maintain compliance. We will cover California's AI transparency requirements, Colorado's high-risk AI framework, Texas's governance act, and the broader trend these laws represent.

    This article provides general educational information, not legal advice. The laws discussed are complex and their application depends on your organization's specific situation. Consult with legal counsel specializing in AI and nonprofit law for guidance specific to your organization.

    Why the Regulatory Shift Is Happening Now

    State legislatures have moved on AI regulation for several interconnected reasons. The rapid mainstream adoption of AI tools following the launch of ChatGPT in 2022 made AI a technology that ordinary citizens interact with regularly, not just a matter for technical specialists. High-profile failures, including AI-generated misinformation, discriminatory hiring algorithms, and erroneous decisions in healthcare settings, created political pressure for oversight. And the absence of federal legislation created a vacuum that states moved to fill.

    The result is a state-by-state regulatory environment that resembles the early years of privacy law, when California's CCPA drove many organizations to adopt national compliance standards even if they weren't legally required to. Organizations that adopt good AI governance practices now, prompted by state laws, are likely to be in a stronger position when federal AI regulation eventually arrives, which most observers expect within the next few years.

    For nonprofits, the regulatory pressure intersects with a broader accountability context. Donors, foundations, and the communities nonprofits serve are increasingly asking about how organizations use AI responsibly. Regulatory compliance is increasingly part of stakeholder expectations, not just a legal matter. This connects directly to questions of building public trust in your nonprofit's AI implementation that go beyond legal requirements.

    The Compliance Stakes for Nonprofits

    Noncompliance with state AI laws can carry real consequences, including civil penalties, enforcement actions, and reputational damage. For nonprofits, the stakes extend beyond fines to include potential impacts on funding relationships, government contracts, and the trust of the communities you serve.

    • Civil penalties for violations of California's AI Transparency Act
    • Enforcement by state attorneys general for Colorado and Texas violations
    • Private rights of action for consumers harmed by non-disclosed AI decisions in some states
    • Potential impact on government contracts and funding eligibility

    California: AI Transparency Requirements

California enacted two major AI transparency laws that take effect in 2026: the AI Transparency Act (SB 942) and the Generative AI Training Data Transparency Act (AB 2013). Understanding which of these laws applies to your nonprofit requires careful analysis of your role in the AI ecosystem.

    SB 942: California AI Transparency Act

    Effective August 2026 (extended from January 2026)

    SB 942 requires large generative AI providers to make their AI-generated content identifiable through visible labels and embedded metadata. It also mandates a free, publicly accessible detection tool so that content generated by covered AI systems can be identified by third parties.

The key question for nonprofits is whether you qualify as a "covered provider." The law defines a covered provider as an entity that creates, codes, or otherwise produces a publicly accessible generative AI system with more than one million monthly visitors or users in California. Very few nonprofits will meet this threshold. However, if your organization uses AI tools built by covered providers, those providers must label AI-generated content and make detection tools available.

    What this means for most nonprofits:

    • You are likely a user of covered AI systems, not a covered provider yourself
    • Your AI tool vendors (OpenAI, Anthropic, Google, etc.) bear the primary compliance obligations
    • You should still consider voluntary AI disclosure practices for content you publish

    AB 2013: Generative AI Training Data Transparency Act

    Effective January 1, 2026

    AB 2013 requires developers of AI systems to publicly disclose information about the training data used to build their models. Unlike SB 942, AB 2013 has no minimum user threshold, meaning it applies broadly to AI developers regardless of scale.

    For nonprofits, the critical question is whether you are "developing" AI in a meaningful sense. If your organization is training custom AI models using your own data, or significantly fine-tuning foundation models, you may have disclosure obligations under AB 2013. If you are simply using commercially available AI tools without modifying their underlying models, you are a user, not a developer, and AB 2013's requirements fall primarily on your AI vendors.

    Questions to ask your legal counsel:

    • Does our use of AI tools constitute "developing" an AI system under California law?
    • If we fine-tune or customize AI models, what are our disclosure obligations?
    • How do these requirements interact with our data privacy commitments to clients and donors?

    Colorado: High-Risk AI Systems

Colorado's Artificial Intelligence Act (SB 24-205) is the most significant state AI law for nonprofits that work in human services, healthcare, education, housing, or employment. The law takes effect June 30, 2026, after being delayed from its original February 1, 2026, effective date. It targets "high-risk AI systems" that make consequential decisions about people in regulated domains.

    What Qualifies as a "High-Risk AI System"

    Colorado's definition is broad and directly relevant to many nonprofit operations

    A high-risk AI system, under Colorado's law, is one that makes or substantially factors into "consequential decisions" affecting consumers. The law defines consequential decisions as those that have significant effects on people's access to education, employment, financial services, healthcare, housing, insurance, and legal services.

    For nonprofits, this definition is striking because it encompasses much of what mission-driven organizations do. A nonprofit running a housing program that uses AI to prioritize waitlist placement may be deploying a high-risk AI system. An organization using AI to screen job applicants in hiring is likely subject to the law. A human services agency using predictive analytics to allocate case management resources may fall within its scope.

    • Education: AI that influences student placement, admission, or educational program access
    • Employment: AI used in hiring, screening, performance evaluation, or termination decisions
    • Healthcare: AI that influences clinical decisions, care access, or resource allocation
    • Housing: AI that factors into housing program eligibility, waitlist prioritization, or placement
    • Legal services: AI that influences access to legal aid or case prioritization

    Colorado Compliance Requirements for Deployers

    Organizations that deploy high-risk AI systems in Colorado must meet several requirements. These obligations apply to "deployers," meaning organizations that put high-risk AI into operational use, even if they didn't build the AI themselves.

    • Risk management policy: Implement a written policy governing the deployment and use of high-risk AI systems
    • Annual impact assessments: Complete annual impact assessments that evaluate the potential for algorithmic discrimination
    • Consumer disclosure: Inform consumers when they are interacting with or being evaluated by an AI system
    • Adverse decision notice: Notify consumers when an AI system has made a decision adverse to their interests
    • Anti-discrimination obligation: Use reasonable care to protect consumers from known or foreseeable algorithmic discrimination

    Special Consideration for Service-Providing Nonprofits

    If your nonprofit serves clients in Colorado and uses any AI system that helps prioritize services, screen eligibility, or make recommendations that affect service access, carefully evaluate whether that system falls within Colorado's definition of high-risk AI. The law's breadth means that predictive tools used in case management, housing programs, or employment services may require compliance action. Engage legal counsel familiar with both Colorado's law and nonprofit operations to assess your specific situation before the June 30, 2026, effective date.

    Texas: The Responsible AI Governance Act

    Texas Governor Greg Abbott signed the Responsible Artificial Intelligence Governance Act (TRAIGA) into law in June 2025, with most provisions taking effect January 1, 2026. TRAIGA takes a different approach from Colorado's law, focusing less on high-risk systems and more on prohibiting specific harmful AI uses while imposing transparency requirements for certain applications.

    TRAIGA's core prohibitions cover uses of AI that most nonprofits would never engage in: behavioral manipulation, creation of deepfakes for deceptive purposes, AI that infringes on constitutional rights, and AI used for discriminatory purposes. These prohibitions apply broadly, including to nonprofit organizations operating in Texas.

    TRAIGA's Biometric Data Requirements

    One area of TRAIGA that may affect nonprofits more directly involves biometric data and AI. The law expanded Texas's existing biometric privacy rules to address AI-specific issues, including requirements around consent for AI systems that process biometric identifiers like facial recognition data. Nonprofits that use any AI system that captures or processes biometric data, even something as common as a security camera system with AI-powered features, should review their practices against TRAIGA's biometric provisions.

    Government Interaction Transparency

    TRAIGA explicitly applies to government agencies that use AI to interact with the public. For nonprofits that partner with or receive contracts from Texas government agencies to deliver public services, this creates an important consideration. If your organization is delivering services under a government contract and using AI in those interactions, understand how your contractual relationship with the government agency affects your compliance obligations.

    Beyond California, Colorado, and Texas: The Broader Trend

    California, Colorado, and Texas represent the leading edge of state AI legislation, but they are not the only states that have moved in this direction. According to tracking from organizations like the International Association of Privacy Professionals (IAPP), dozens of states have introduced or passed AI-related legislation, with many more expected in 2026 legislative sessions.

    For nonprofits operating in multiple states, the patchwork nature of state AI law creates complexity. A nonprofit headquartered in New York but operating programs in California, Colorado, and Texas faces potential obligations under multiple legal frameworks simultaneously. The operational reality is that organizations often establish a compliance approach based on the most demanding applicable standard and apply it consistently, rather than trying to maintain separate compliance programs for each state.

    There is also a federal dimension to this landscape. The Trump administration's executive order on AI policy in early 2025 emphasized reducing regulatory burdens on AI development and signaled skepticism toward state-level AI regulation. This has created uncertainty about whether federal preemption might ultimately supersede state AI laws. Legal observers have varying views on how this will play out. For now, state laws are in effect and organizations must comply with them regardless of federal regulatory direction.

    States to Watch: Emerging AI Legislation

    Additional state AI laws that may affect nonprofits in the coming year

    • Illinois: Has active AI bills addressing algorithmic discrimination in employment and insurance decisions
    • New York: Proposed legislation addressing automated employment decisions and AI transparency in hiring
    • Virginia: High-risk AI legislation modeled partly on Colorado's framework is advancing through the legislature
    • Washington: Active AI bills addressing automated decision systems and data protection for AI-assisted services

    Practical Compliance Steps for Nonprofits

    The good news for nonprofits is that many of the practices required by state AI laws align with responsible AI governance that organizations should be pursuing anyway. If your organization has been thoughtful about AI adoption, you may already be taking steps that address many compliance requirements. The following framework helps nonprofits systematically assess and build their compliance posture.

    Step 1: Conduct an AI Inventory

Before you can assess your compliance obligations, you need a clear picture of every AI tool and system your organization uses. This includes obvious AI tools like chatbots and content generators, but also AI-powered features embedded in existing software: CRM platforms, email marketing systems, hiring tools, and program management systems. A simple structured inventory, like the sketch after the checklist below, keeps this information in one place.

    • List all software tools used across the organization, noting which have AI-powered features
    • Document what decisions each AI tool influences or makes
    • Identify which tools affect clients, program participants, employees, or other stakeholders
    • Note which states your programs operate in, to determine applicable legal frameworks
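    There is no required format for this inventory; a shared spreadsheet is enough. For teams that prefer something structured, here is a minimal sketch in Python. The field names, the example tool, and the notes are illustrative assumptions, not terms drawn from any statute.

```python
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    """One row in a nonprofit's AI inventory (illustrative fields, not a legal standard)."""
    tool_name: str                   # the tool or AI-powered feature
    vendor: str                      # who builds and maintains the underlying AI
    decisions_influenced: list[str]  # outcomes the tool influences or makes
    affected_groups: list[str]       # clients, applicants, staff, donors, etc.
    states_in_use: list[str]         # where programs using the tool operate
    notes: str = ""

# A hypothetical entry -- not a statement about any real product.
inventory = [
    AIToolRecord(
        tool_name="Intake chatbot",
        vendor="Example Vendor, Inc.",
        decisions_influenced=["initial service eligibility screening"],
        affected_groups=["program applicants"],
        states_in_use=["CO", "TX"],
        notes="Touches service eligibility in Colorado; flag for high-risk review.",
    ),
]

for record in inventory:
    print(f"{record.tool_name}: {', '.join(record.decisions_influenced)}")
```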

    Step 2: Assess High-Risk AI Exposure

Using your AI inventory, evaluate whether any of your AI tools meets the definition of a high-risk AI system under applicable state laws. Pay particular attention to tools that influence service eligibility, resource allocation, hiring decisions, or any outcome that materially affects an individual's access to programs, employment, or benefits. A rough screening pass, sketched after the checklist below, can help you decide which tools to bring to counsel first.

    • Review Colorado's definition of "consequential decisions" against each AI tool's function
    • Ask vendors whether their AI products have been designed with Colorado and other state law compliance in mind
    • Engage legal counsel to assess specific tools for high-risk status in your operational context
    • Document your assessment with rationale, for both compliance purposes and organizational learning
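    As a rough first pass before you involve counsel, you can screen the inventory for tools whose decisions touch the domains Colorado associates with consequential decisions. The keyword list and matching logic below are a deliberate simplification for triage, not a legal test, and the function name is our own.

```python
# Domains that Colorado's SB 24-205 ties to "consequential decisions,"
# reduced here to a keyword triage -- the actual determination belongs to legal counsel.
CONSEQUENTIAL_DOMAIN_KEYWORDS = [
    "education", "employment", "hiring", "housing", "healthcare",
    "insurance", "financial", "legal", "eligibility", "waitlist",
]

def flag_for_high_risk_review(decisions_influenced: list[str]) -> bool:
    """Flag a tool for closer review if any decision it touches mentions a regulated domain."""
    text = " ".join(decisions_influenced).lower()
    return any(keyword in text for keyword in CONSEQUENTIAL_DOMAIN_KEYWORDS)

# The intake chatbot from the Step 1 sketch would be flagged.
print(flag_for_high_risk_review(["initial service eligibility screening"]))  # True
```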

    Step 3: Develop or Update Your AI Policy

    Colorado's law specifically requires deployers of high-risk AI to have a written risk management policy. Even if you are not legally required to have one, an AI policy is a fundamental building block of responsible AI governance. It sets expectations, guides decision-making, and demonstrates to funders and stakeholders that your organization takes AI accountability seriously.

    This is an area where your internal AI champions can play an important role. They can help translate legal requirements into operational language and ensure that policies reflect how AI is actually used in your organization.

    • Define acceptable and unacceptable AI uses for your organization
    • Establish processes for evaluating new AI tools before adoption
    • Include requirements for human review of consequential AI-assisted decisions
    • Define disclosure practices for AI-generated content and AI-assisted decisions

    Step 4: Build Transparency and Disclosure Practices

Multiple state AI laws emphasize transparency: telling people when they're interacting with AI, disclosing when AI has influenced a decision that affects them, and being clear about AI-generated content. Building these practices into your operations is both a compliance action and a trust-building measure with your stakeholders. A small chatbot disclosure example follows the checklist below.

    • Add disclosures to client-facing AI interactions, such as chatbots or automated intake systems
    • Establish processes for notifying clients when AI has played a significant role in decisions affecting their services
    • Consider disclosing AI use in published content, particularly for communications that represent the organization
    • Update privacy notices and terms to reflect AI data processing practices
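    As one concrete example, a client-facing chatbot can carry its disclosure in the interface itself rather than relying on staff to remember it. The sketch below simply prepends a standing disclosure to whatever reply the underlying AI produces; the wording and function names are placeholders your organization would adapt.

```python
AI_DISCLOSURE = (
    "You are chatting with an automated assistant. A staff member reviews "
    "requests that affect your services, and you can ask for a person at any time."
)

def respond_with_disclosure(ai_generated_reply: str, first_message: bool) -> str:
    """Prepend the standing AI disclosure to the first reply in a conversation."""
    if first_message:
        return f"{AI_DISCLOSURE}\n\n{ai_generated_reply}"
    return ai_generated_reply

# Example usage with a placeholder reply from whatever AI service powers the chatbot.
print(respond_with_disclosure("Here are this week's food pantry hours...", first_message=True))
```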

    Connecting Compliance to Broader AI Governance

    Legal compliance and responsible AI governance are related but not identical. Compliance means meeting the minimum standards set by applicable law. Responsible governance means operating in ways that protect your mission, your stakeholders, and your organization's integrity, which often goes beyond what the law requires.

    For nonprofits, responsible AI governance matters for reasons that go beyond legal risk. Your mission is to serve people and communities. Using AI in ways that discriminate, mislead, or harm the people you serve undermines your mission regardless of whether a law has been broken. Building a robust AI governance framework, including the AI policy work discussed above, creates a foundation that serves both compliance and mission.

    The AI policy governance gap that many nonprofits are still addressing is increasingly a compliance issue as well as a strategic one. Organizations that have not yet developed AI governance policies should treat that work as an urgent priority, particularly those operating in California, Colorado, or Texas.

    If your board has not yet engaged with AI governance questions, this is an important time to bring them into the conversation. Board members need to understand your organization's AI use, the legal landscape, and the governance policies you're putting in place. This kind of board engagement, which we explored in the context of communicating about AI with your board, becomes even more important when legal obligations are involved.

    Conclusion: Act Now, Before You Must

    The state AI laws taking effect in 2026 mark a meaningful shift in the regulatory environment for organizations that use AI. The era of treating AI as a compliance-free technology is ending. Whether your nonprofit is directly subject to California's transparency requirements, Colorado's high-risk AI framework, Texas's governance act, or simply operating in a state where similar legislation is advancing, the direction of travel is clear: AI use will require governance, documentation, and accountability.

For most nonprofits, the compliance obligations themselves are manageable. The fundamental requirements (having AI policies, conducting impact assessments, disclosing AI interactions, and protecting against algorithmic discrimination) are things responsible organizations should be doing regardless of legal requirements. The challenge is not the difficulty of compliance; it is the organizational work of building the practices, documentation, and culture to comply systematically.

    Organizations that begin this work now, before regulatory enforcement increases and before your funders and stakeholders begin asking harder questions, will be in a fundamentally stronger position. Proactive compliance is always less costly and disruptive than reactive remediation.

    The regulatory landscape will continue to evolve. Watching for new state legislation, monitoring federal regulatory developments, and staying connected to sector-wide guidance through organizations like the National Council of Nonprofits and the Alliance for Nonprofit Management will help your organization stay ahead of requirements as they emerge.

    Need Help Building Your AI Governance Framework?

    One Hundred Nights helps nonprofits develop AI policies, assess compliance exposure, and build the governance practices that state laws increasingly require and that responsible AI use demands.