Texas TRAIGA and Colorado's AI Act: State-by-State Compliance for Multi-State Nonprofits
Two major state AI laws are reshaping how nonprofits must govern their use of artificial intelligence. This guide breaks down what Texas TRAIGA and Colorado's AI Act actually require, how they differ, and how to build a compliance strategy that works across both states.

The era of informal AI governance is ending. While the federal government has moved cautiously on AI regulation, states are acting independently, creating a patchwork of laws that multi-state nonprofits must now navigate. Two of the most consequential are Texas's Responsible AI Governance Act (TRAIGA) and Colorado's AI Act (SB 24-205). Both are now in effect or imminent, both carry real penalties, and neither provides blanket exemptions for nonprofits.
For nonprofit leaders, this creates a practical challenge. Organizations that deliver services across state lines, partner with national funders, or employ staff in multiple states face overlapping and sometimes conflicting compliance obligations. A homeless services organization operating in Denver and Austin may simultaneously be subject to both Colorado's impact assessment requirements and Texas's prohibited practices rules, each with different triggers, different documentation expectations, and different enforcement mechanisms.
Understanding these laws deeply, rather than at a surface level, is essential for effective governance. The good news is that the laws share an important common thread: both recognize the NIST AI Risk Management Framework as a path to safe harbor protection. Organizations that build their AI governance around this framework can satisfy both states' requirements while also aligning with broader international standards. This guide walks through each law, compares them, and offers a practical roadmap for multi-state compliance.
Before we dive in, a note about scope: this article focuses on Texas TRAIGA and Colorado's AI Act specifically. California's approach and the EU AI Act are covered in companion articles. For nonprofits that encounter AI regulation across even more jurisdictions, see our overview of new state AI laws taking effect in 2026 and our analysis of the federal versus state AI regulation landscape.
Texas TRAIGA: Already in Effect
Texas Governor Greg Abbott signed HB 149, the Texas Responsible AI Governance Act (TRAIGA), into law on June 22, 2025. It became effective January 1, 2026, making it one of the first comprehensive state AI governance laws in the United States to take full effect. Texas also passed a companion healthcare-focused law, SB 1188, which addresses AI use in clinical settings and added data localization requirements that also took effect January 1, 2026.
What TRAIGA Prohibits
Texas uses a prohibited practices model focused on intent
- Intentional discrimination: AI systems cannot be deployed with the sole intent to unlawfully discriminate against a protected class. Disparate impact alone does not establish a violation; intent must be proven.
- Behavioral manipulation: AI cannot be designed to incite self-harm, harm to others, or criminal activity.
- Minor exploitation: AI cannot be used to produce child sexual abuse material or explicit deepfakes involving minors.
- Constitutional rights violations: AI cannot be deployed with the intent to infringe, restrict, or impair an individual's constitutional rights.
Government Entity Rules
Government-adjacent nonprofits face stricter obligations
- No social scoring: Government entities (and potentially entities acting as government contractors) cannot use AI to rank or score people based on behavior or personal characteristics.
- No non-consensual biometric identification: Using publicly available data for biometric identification that would infringe constitutional rights is prohibited.
- Mandatory disclosure: AI use must be disclosed to individuals before or at the time of interaction.
Enforcement and Penalties Under TRAIGA
Enforcement authority rests exclusively with the Texas Attorney General. There is no private right of action, meaning individuals cannot sue organizations directly under TRAIGA. Before pursuing enforcement action, the AG must provide written notice giving organizations a 60-day cure period. This is a meaningful protection: organizations that discover and correct a violation before it becomes uncurable can avoid the heaviest penalties.
Civil penalties under TRAIGA scale significantly based on whether a violation is curable. Violations that are cured within the 60-day window avoid penalties entirely. Uncured violations carry penalties of $10,000 to $12,000 per violation. Violations that the AG determines are inherently uncurable carry penalties of $80,000 to $200,000 per violation, with ongoing violations accumulating $2,000 to $40,000 per day. For nonprofits accustomed to relatively limited financial exposure from regulatory violations, these numbers deserve serious attention.
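To make the penalty tiers concrete, here is a minimal sketch of how exposure scales under the figures described above. This is an illustrative back-of-envelope calculation only; actual penalties are determined by the Attorney General and the courts, and the function name and structure are our own, not anything in the statute.

```python
def traiga_exposure(violations: int, cured: bool, curable: bool,
                    days_ongoing: int = 0) -> tuple[int, int]:
    """Rough (low, high) civil-penalty exposure range under the TRAIGA
    tiers described in this article. Illustrative only, not legal advice."""
    if cured:
        # Violations cured within the 60-day window avoid penalties entirely
        return (0, 0)
    if curable:
        low, high = 10_000, 12_000    # uncured but curable violations
    else:
        low, high = 80_000, 200_000   # violations the AG deems uncurable
    # Ongoing violations accumulate an additional daily amount
    low_total = violations * low + days_ongoing * 2_000
    high_total = violations * high + days_ongoing * 40_000
    return (low_total, high_total)

# e.g. three uncurable violations, ongoing for 10 days
print(traiga_exposure(3, cured=False, curable=False, days_ongoing=10))
# → (260000, 1000000)
```

Even this rough sketch shows why the cure period matters so much: the difference between a cured violation and an uncurable one is the difference between zero and six figures per violation.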
Importantly, TRAIGA creates an affirmative safe harbor for organizations that substantially comply with the NIST AI Risk Management Framework or an equivalent recognized standard. This means the best protection against TRAIGA liability isn't just avoiding the prohibited practices; it's building a documented compliance program that can demonstrate you've implemented responsible AI governance aligned with established frameworks.
TRAIGA's Regulatory Sandbox: An Opportunity for Innovating Nonprofits
TRAIGA creates an optional regulatory sandbox administered by the Texas Department of Information Resources. Approved participants can develop and test AI systems for up to 36 months with temporary exemption from certain state licensing requirements. The AG cannot file charges against sandbox participants during the active testing period.
For nonprofits building custom AI tools, developing AI-powered service matching systems, or piloting novel uses of AI in social services, the sandbox could provide valuable runway to innovate without full regulatory exposure. Applications require a system description, benefit assessment, consumer and privacy impact analysis, and mitigation plans. Participants submit quarterly performance reports.
Note that core TRAIGA prohibitions apply even within the sandbox. The exemption covers certain licensing requirements, not the fundamental prohibited practices rules.
Healthcare Nonprofits: Additional Requirements Under SB 1188
Nonprofits operating healthcare services face additional obligations under SB 1188, which took effect September 1, 2025. Healthcare practitioners must personally review all AI-generated diagnostics or treatment suggestions before acting on them. Providers must disclose AI involvement to patients when AI contributes to diagnosis or treatment decisions. Electronic health records must be physically stored within the United States, a data localization requirement that took effect January 1, 2026. Violations carry civil penalties of $5,000 to $250,000 per incident, making compliance investment clearly worthwhile.
Community health centers, mental health nonprofits, addiction treatment organizations, and any nonprofit that provides health-related services to clients in Texas should conduct a specific SB 1188 compliance review separate from their general TRAIGA assessment.
Colorado's AI Act: A Compliance-Oriented Model
Colorado Governor Jared Polis signed SB 24-205 on May 17, 2024, making Colorado the first state to enact broad restrictions on private companies' use of AI. The original effective date was February 1, 2026, but the governor subsequently signed a delay bill pushing implementation to June 30, 2026. That deadline is now firm, and nonprofits serving Colorado residents have a clear target for compliance readiness.
Colorado's approach differs fundamentally from Texas's. Rather than focusing on prohibited bad acts, Colorado's AI Act is a compliance-oriented framework that requires organizations to build ongoing governance infrastructure, maintain documentation, and provide consumers with specific rights around AI-influenced decisions that affect their lives.
What Triggers Colorado's AI Act
The law applies when AI makes or substantially influences a "consequential decision"
Colorado's AI Act applies to deployers of "high-risk AI systems" that make or are "a substantial factor in making a consequential decision" about a Colorado resident. The consequential decision categories are broad and directly relevant to nonprofit operations:
- Education enrollment or educational opportunity
- Employment or employment opportunity
- Financial or lending services
- Essential government services
- Healthcare services
- Housing
- Legal services
- Insurance
For many nonprofits, this list is a near-complete description of their work. Organizations providing housing assistance, case management, employment training, healthcare navigation, or educational programs may all find themselves operating high-risk AI systems even if they never thought of their technology that way.
Full Deployer Obligations Under Colorado's AI Act
Organizations that deploy high-risk AI systems in Colorado must satisfy seven categories of requirements by June 30, 2026. Each represents a distinct compliance obligation requiring planning and preparation well in advance of the deadline.
1. Risk Management Policy and Program
Organizations must implement a written policy and program describing the principles, processes, and personnel used to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The program must align with the NIST AI Risk Management Framework or ISO/IEC 42001. This is not a one-time document but an ongoing governance system that must be maintained and updated as your AI use evolves.
2. Annual Impact Assessments
Before deploying a high-risk AI system, annually thereafter, and within 90 days of any significant modification, organizations must complete and retain documented impact assessments. These assessments must evaluate the system's purpose, potential risks of algorithmic discrimination, and the organization's risk mitigation measures. Assessments must be retained for three years and are subject to disclosure to the Colorado Attorney General upon request.
3. Pre-Decision Consumer Notice
Before any consequential AI-influenced decision is made about a consumer, the organization must provide plain-language notice disclosing that a high-risk AI system is involved, the purpose of the system, the nature of the consequential decision, contact information, and how the consumer can access documentation about the system. This notice must be available in all languages the organization uses for contracts and agreements with clients.
4. Adverse Decision Notice and Appeal Rights
When an AI system makes an adverse consequential decision affecting a consumer, the organization must provide notice and an opportunity for the consumer to correct any incorrect personal data that contributed to the decision. Consumers must also have the opportunity to appeal adverse decisions, including through human review "if technically feasible." For nonprofits, this often means building or updating intake and appeals processes to explicitly account for AI-influenced decisions.
5. Public Website Statement
Organizations deploying high-risk AI must publish a statement on their website describing the types of high-risk AI systems they deploy, how they manage algorithmic discrimination risks, and the nature and source of data used in those systems. This creates a public accountability mechanism and gives funders, clients, and partners visibility into an organization's AI governance posture.
6. AG Disclosure for Discovered Discrimination
If an organization discovers that its AI system has caused algorithmic discrimination, it must notify the Colorado Attorney General within 90 days. This proactive disclosure obligation encourages organizations to actively monitor for discrimination rather than wait for complaints, and it rewards good-faith self-reporting with the opportunity to avoid enforcement action through voluntary correction.
7. Data Rights Disclosure
Organizations must provide consumers information about their right to opt out of personal data processing consistent with Colorado's existing consumer privacy law. For many nonprofits, this links AI governance to data privacy governance, requiring coordination between programs teams who use AI tools and administrative or IT staff who manage data policies.
The Small Business Exemption: Does It Apply to Your Nonprofit?
Colorado's AI Act includes a partial exemption for small deployers: organizations with fewer than 50 full-time employees are exempt from the most demanding requirements (the website statement, impact assessments, and risk management policy) if they meet two conditions. First, they must not train the AI system on their own data. Second, they must use the system only for its intended purpose as specified by the developer and provide consumers with any impact assessment furnished by the developer.
This exemption is meaningful for small nonprofits using off-the-shelf AI tools as intended by their vendors. However, the exemption evaporates the moment an organization customizes, fine-tunes, or trains an AI system on its own data. Nonprofits that have fed their own client records, program data, or historical information into AI tools to improve performance are operating as developers under Colorado's framework, and the small business exemption does not protect them.
Enforcement mirrors Texas: the Colorado AG has exclusive enforcement authority, must provide 60 days' notice before taking action, and violations are treated as unfair trade practices under the Colorado Consumer Protection Act. Maximum penalties reach $20,000 per violation, and because violations are counted separately for each affected consumer, a single discriminatory AI decision affecting many clients could result in substantial cumulative liability.
How These Laws Compare: Key Differences Multi-State Nonprofits Must Understand
Texas and Colorado represent two philosophically distinct approaches to AI governance, and understanding the difference is essential for building a compliance strategy that actually works across both states.
Texas TRAIGA
Conduct-prohibition model, intent-based liability
- Focuses on what AI cannot be used for (prohibited bad acts)
- AG must prove wrongful intent to establish most violations
- No mandatory ongoing compliance programs for general AI use
- Innovation-friendly: regulatory sandbox available
- Already in effect as of January 1, 2026
- Penalties: up to $200,000 per uncurable violation
Colorado AI Act
Compliance-oriented model, outcome-based liability
- Requires ongoing compliance infrastructure: risk programs, assessments, disclosures
- Focuses on whether people were harmed, not on intent
- Consumers have rights: pre-decision notice, appeals, human review
- Impact-based standard: disparate impact can establish liability
- Effective June 30, 2026 (preparation window closing fast)
- Penalties: up to $20,000 per violation, per affected consumer
The most important difference is the burden of proof. Under Texas TRAIGA, the AG must demonstrate that an organization acted with discriminatory or harmful intent. That is a high bar, and organizations that document legitimate purposes for their AI systems are well positioned to defend against claims. Under Colorado's AI Act, the question is whether discrimination occurred and whether the organization took reasonable care to prevent it. Documenting good intentions is necessary but not sufficient; organizations must demonstrate ongoing, systematic risk management.
Both laws share a critical common element: the NIST AI Risk Management Framework activates a safe harbor or affirmative defense under both statutes. This creates a powerful incentive for multi-state nonprofits to adopt NIST AI RMF as their compliance backbone. Organizations that build AI governance around NIST's four functions (Govern, Map, Measure, Manage) are simultaneously better protected in Texas, better positioned for Colorado compliance, and increasingly aligned with international expectations as the EU AI Act's high-risk system requirements take full effect in August 2026.
Building a Multi-State Compliance Strategy
For nonprofits operating across state lines, the most practical approach is to build compliance infrastructure that satisfies the most stringent applicable requirements rather than trying to maintain separate compliance programs for each jurisdiction. In practice, Colorado's compliance-oriented model is more demanding, so building to Colorado's standards while incorporating Texas's documentation requirements gives organizations the broadest protection.
Immediate Actions: Texas Compliance (Already Required)
- Conduct an AI inventory: List every AI system in use, including embedded AI in third-party SaaS tools such as donor management platforms, CRM systems, HR software, and grant management tools. Document each system's purpose, what decisions it influences, and what data it processes. Many nonprofits are surprised to discover how many vendor tools contain AI components they did not explicitly select.
- Document legitimate intent: Maintain written records of why each AI system is deployed, what legitimate organizational purpose it serves, and what policies restrict its use to lawful purposes. This documentation is your primary defense against TRAIGA enforcement inquiries.
- Establish internal review processes: Create and document protocols for testing AI systems and monitoring for prohibited outputs. Record testing procedures and results.
- Prepare Civil Investigative Demand readiness: Texas TRAIGA explicitly allows the AG to issue demands requiring nonprofits to disclose documentation about AI systems, including purpose, training data, and inputs and outputs. Organize this documentation now so you can respond promptly if a demand arrives.
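The inventory and intent documentation steps above can be captured in a simple structured record. This is one possible shape, not a required format; all field names (and the vendor name in the example) are illustrative, not statutory terms.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI inventory. Field names are illustrative, not statutory."""
    name: str                           # e.g. "Donor CRM lead-scoring module"
    vendor: str                         # supplier, or "internal" if built in-house
    purpose: str                        # documented legitimate purpose (TRAIGA defense)
    decisions_influenced: list = field(default_factory=list)
    data_processed: list = field(default_factory=list)
    trained_on_own_data: bool = False   # relevant to Colorado's small-deployer exemption
    last_reviewed: str = ""             # ISO date of the last internal review

# Example entry for a hypothetical vendor-supplied tool
inventory = [
    AISystemRecord(
        name="Volunteer screening tool",
        vendor="ExampleVendor Inc.",    # hypothetical vendor
        purpose="Initial screening of volunteer applications",
        decisions_influenced=["volunteer placement"],
        data_processed=["application forms", "reference checks"],
    ),
]
```

Keeping the purpose and review-date fields current is what turns the inventory from a one-time list into the documented, ongoing record both states' safe harbors reward, and into the material you would assemble to answer a Civil Investigative Demand.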
Pre-June 30, 2026: Colorado Compliance Preparation
- Determine high-risk AI system classification: Review each AI tool in your inventory and assess whether it is a "substantial factor" in any of the consequential decision categories. Client intake systems, eligibility determination tools, hiring platforms, and housing match systems are the most likely candidates.
- Check the small business exemption: If you have fewer than 50 full-time employees, evaluate whether you qualify. If you have customized or fine-tuned any AI tools using your own organizational data, you do not qualify for the exemption regardless of staff size.
- Develop a risk management program: Create a written policy and program aligned with the NIST AI RMF. This document should describe how your organization identifies, classifies, and mitigates AI risks across the four NIST functions. It does not need to be elaborate, but it must be genuine and operational.
- Complete initial impact assessments: For each high-risk AI system, document its purpose, the populations it affects, potential risks of algorithmic discrimination, and the mitigation measures you have taken. Retain these records for three years.
- Design client-facing disclosure processes: Draft plain-language notice templates for each AI-influenced consequential decision. Ensure they cover all required disclosure elements and are available in the languages your clients use.
- Build appeals and human review mechanisms: Establish formal procedures for clients to challenge adverse AI-influenced decisions and request human review. Document how these processes work and train relevant staff.
- Review and update vendor agreements: Request documentation from AI vendors covering training data summaries, risk disclosures, and any known discrimination risks. Update contracts to allocate liability and require vendors to notify you of compliance-relevant changes to their systems.
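The classification step above can be approximated with a first-pass screen against the consequential-decision categories listed earlier. This sketch only flags candidates for closer review; whether a system is actually "a substantial factor" in a consequential decision is a legal judgment, and the category labels here are shorthand for the statutory categories, not the statutory text.

```python
# Shorthand for Colorado's consequential-decision categories as listed above
CONSEQUENTIAL_CATEGORIES = {
    "education", "employment", "financial or lending services",
    "essential government services", "healthcare", "housing",
    "legal services", "insurance",
}

def needs_colorado_review(decision_areas: set) -> bool:
    """True if any decision area a system influences overlaps a
    consequential category. Flags candidates only; not a legal determination."""
    return bool(decision_areas & CONSEQUENTIAL_CATEGORIES)

# A housing-match tool touches "housing", so it warrants a full high-risk assessment
print(needs_colorado_review({"housing", "volunteer scheduling"}))  # → True
```

Running every item in your AI inventory through a screen like this is a cheap way to build the shortlist of systems that need full impact assessments before June 30, 2026.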
The NIST AI RMF as Your Unifying Framework
The single most powerful compliance investment a multi-state nonprofit can make is adopting the NIST AI Risk Management Framework as its operational AI governance standard. Both Texas and Colorado recognize it as a path to safe harbor protection. The EU AI Act, while using its own terminology, is substantially aligned with NIST's approach. Organizations that genuinely implement NIST AI RMF are simultaneously better protected in Texas, Colorado, California, and increasingly in international contexts.
NIST AI RMF is organized around four functions: Govern, Map, Measure, and Manage. Govern establishes the organizational policies, roles, responsibilities, and culture for AI risk management. Map identifies the AI systems in use and their potential risks. Measure develops methods for evaluating and tracking risk. Manage responds to identified risks with appropriate mitigation strategies. For nonprofits with limited resources, starting with the Govern and Map functions, which produce the documentation and inventory work already required for compliance, creates immediate value even before the more sophisticated measurement and management functions are fully operational.
Board-level engagement matters here. Nonprofits with strong AI governance tend to have boards that understand AI as an organizational risk topic, similar to how boards engage with financial controls and data security. The AI governance frameworks emerging for nonprofit boards provide useful guidance on how to structure this oversight. Establishing board-level visibility into AI governance and assigning a staff lead responsible for compliance monitoring are foundational steps that cannot be delegated to technology staff alone.
Unique Challenges for Nonprofits: Why Compliance Is More Complex Than It Looks
Nonprofit organizations face several structural challenges in AI compliance that commercial businesses may not encounter to the same degree. Understanding these challenges is the first step to addressing them strategically.
Multi-Category Service Delivery
Many nonprofits use AI across multiple program areas simultaneously: volunteer screening (employment-adjacent), client intake (services), benefits navigation (government services), and housing assistance. Each use case may independently trigger high-risk classification under Colorado's law, requiring separate compliance processes for what feels like a unified operation.
Third-Party AI in Common Tools
Nonprofits often use AI embedded in vendor-supplied donor management platforms, volunteer screening tools, case management systems, and HR software. As deployers, nonprofits may bear compliance responsibility even when the AI is fully embedded in a vendor's product. This makes vendor contract review and documentation requests essential.
Resource Constraints
Impact assessments, risk management programs, consumer disclosure systems, and annual reviews require staff time, legal expertise, and technical capacity. Organizations without dedicated IT or legal staff must find creative ways to build compliance infrastructure, including using AI governance templates and frameworks developed by sector-specific legal aid organizations.
Sensitive Client Data
Nonprofits handle some of the most sensitive personal information in existence: mental health records, immigration status, domestic violence situations, substance use histories, and financial distress. AI compliance requirements that mandate consumer disclosures and impact assessments must be designed with extraordinary care to protect client confidentiality and trust.
The most practical approach for resource-constrained nonprofits is to leverage existing compliance investments. Organizations already building data governance infrastructure for privacy law compliance, HIPAA requirements, or funder data management requirements can often extend that work to cover AI governance with relatively modest additional effort. The documentation practices, data inventory work, and staff training that support privacy compliance are closely aligned with what AI governance requires.
Sector associations and nonprofit legal aid organizations are increasingly developing AI compliance resources specifically for the nonprofit context. Engaging with these resources, rather than trying to build compliance programs from scratch, is a practical way to reduce the burden while still meeting legal obligations. Peer learning with similar organizations in your sector, where appropriate, can also help share the intellectual work of interpreting how these laws apply to common nonprofit scenarios.
For nonprofits concerned about the EU AI Act's implications as well, it is worth noting that the compliance infrastructure required for Colorado's AI Act is substantially transferable to EU compliance contexts. The investment in risk management programs, impact assessments, and consumer disclosure systems is not state-specific; it is the foundation of responsible AI governance that regulators worldwide are converging around.
Looking Ahead: A Changing Regulatory Landscape
Texas TRAIGA and Colorado's AI Act represent two points on a spectrum of emerging state AI regulation, not the end of the story. Other states are active: California passed training data transparency requirements that took effect January 1, 2026. Additional states have bills moving through legislatures. The regulatory landscape for AI will continue to evolve, and organizations that build adaptive governance infrastructure now are better positioned to respond to future requirements without rebuilding from scratch.
The most important insight from these two laws, taken together, is that neither state views nonprofits as exempt from AI accountability. The size, mission orientation, or tax status of an organization does not reduce the obligation to use AI responsibly. If anything, the populations nonprofits serve (people experiencing housing instability, mental health crises, immigration challenges, or poverty) are precisely the populations that AI regulation is designed to protect from discriminatory or harmful technology outcomes.
For nonprofit leaders, the path forward is clear if not simple: take AI governance seriously as a leadership responsibility, invest in documentation and risk management practices that can scale with AI adoption, align compliance work with the NIST AI Risk Management Framework, and ensure that board oversight includes meaningful visibility into how AI is used and governed across the organization. The compliance window for Colorado is closing. For Texas, it has already closed. The time for action is now.
Build Your AI Compliance Foundation
Navigating multi-state AI regulation requires strategic planning and practical governance frameworks. One Hundred Nights helps nonprofit organizations assess their AI risk exposure, build compliance programs, and establish board-level governance that scales with their AI adoption.
