
    Colorado's AI Act Takes Effect June 30, 2026: A Compliance Deep-Dive for Nonprofits

    Colorado's landmark AI law, SB 24-205, is the first comprehensive state regulation targeting algorithmic discrimination in high-risk AI systems. Originally set for February 2026, the effective date was pushed to June 30, giving organizations a narrow window to prepare. This guide breaks down what the law requires, which nonprofit AI uses qualify as "high-risk," and the specific steps your organization should take before the deadline.

    Published: March 19, 2026 · 14 min read · AI Governance & Compliance

    On May 17, 2024, Colorado Governor Jared Polis signed Senate Bill 24-205, the Consumer Protections for Artificial Intelligence Act, into law. It was the first comprehensive state-level AI regulation in the United States. The law's core goal is straightforward: prevent AI systems from discriminating against people when making decisions that significantly affect their lives, covering areas such as employment, education, healthcare, housing, insurance, and financial services.

    For nonprofits, this law is not a distant concern reserved for technology companies. If your organization operates in Colorado and uses AI tools to screen job applicants, assess client eligibility, prioritize service delivery, or make recommendations that influence people's access to programs, you may be subject to the law's requirements. The fact that you are a 501(c)(3) does not exempt you. The law applies to any entity that deploys high-risk AI systems affecting Colorado consumers, regardless of corporate structure or revenue size.

    The original effective date of February 1, 2026, was postponed during a special legislative session in August 2025, when the Colorado General Assembly passed SB 25B-004 to push enforcement back to June 30, 2026. That five-month delay was the result of intense lobbying from industry groups and a recognition that many organizations, particularly smaller ones, needed more time. The substantive requirements, however, remained unchanged. The clock is now ticking, with roughly three months left.

    This article provides a thorough, practical walkthrough of what the Colorado AI Act requires from nonprofits. We will define the key terms, identify which AI uses are most likely to trigger compliance obligations, explain the specific duties you must fulfill, and outline a step-by-step preparation timeline. If your organization has been following the broader conversation around new state AI laws taking effect in 2026, this is the deep dive into Colorado's specific requirements.

    What the Colorado AI Act Actually Says

    The Colorado AI Act creates obligations for two categories of entities: "developers" who build AI systems and "deployers" who use them. Most nonprofits fall squarely into the deployer category, meaning you use AI tools built by others rather than creating your own models from scratch. Understanding this distinction matters because deployer obligations, while still significant, differ from developer duties.

    The law's central prohibition is against "algorithmic discrimination," which it defines as any condition in which the use of an AI system results in unlawful differential treatment or impact that disfavors a person or group based on protected characteristics. These protected characteristics include age, color, disability, ethnicity, genetic information, national origin, race, religion, sex (including pregnancy, childbirth, and sexual orientation), and veteran status. The standard mirrors existing anti-discrimination frameworks but extends them explicitly to automated decision-making.

    The law does not regulate all AI use. It specifically targets "high-risk artificial intelligence systems," defined as AI systems that make, or are a substantial factor in making, a "consequential decision." This is the threshold question every nonprofit needs to answer: are any of our AI tools making or substantially contributing to decisions that carry real consequences for the people we serve?

    What Counts as a "Consequential Decision" for Nonprofits

    The law defines a consequential decision as one that has a "material legal or similarly significant effect" on the provision or denial, cost, or terms of specific categories of services. The eight covered domains are education enrollment or opportunity, employment or employment opportunity, financial or lending services, essential government services, healthcare services, housing, insurance, and legal services.

    For nonprofits, several of these categories map directly to common operational activities. Understanding where your AI use intersects with these domains is the critical first step in determining whether the law applies to you.

    Employment Decisions

    Hiring, screening, and workforce management

    If your nonprofit uses AI-powered tools to screen resumes, rank job candidates, assess employee performance, or make termination recommendations, those tools are making consequential decisions under the law. This applies even if a human reviews the AI's output before a final decision is made, because the law covers systems that are a "substantial factor" in the decision, not just the sole decision-maker.

    • AI resume screening and applicant ranking tools
    • Automated interview scheduling based on candidate scoring
    • Performance evaluation tools with AI-generated assessments

    Healthcare and Social Services

    Client eligibility, triage, and service allocation

    Health-focused nonprofits using AI for patient triage, treatment recommendations, or service eligibility determinations are clearly in scope. Social service organizations that use AI to prioritize clients for housing, food assistance, or counseling services may also be covered, particularly if the AI's output determines who receives services and who is placed on a waiting list.

    • Client intake scoring and eligibility determination
    • AI-driven coordinated entry systems for homeless services
    • Predictive models for service prioritization

    Education and Training

    Enrollment, assessment, and scholarship decisions

    Education-focused nonprofits that use AI to determine program enrollment, scholarship eligibility, student placement, or academic recommendations are subject to the law. If your tutoring program uses AI to assess student readiness or your workforce development program uses AI to place participants in training tracks, those are consequential decisions.

    • Scholarship application scoring and ranking
    • AI-driven student placement or program matching
    • Automated assessment tools that affect enrollment

    Housing and Financial Services

    Housing placement, lending, and financial assistance

    Nonprofits involved in affordable housing, rental assistance, or emergency financial aid that use AI to screen applicants, score eligibility, or prioritize placement are covered. Community development financial institutions (CDFIs) that use AI-assisted lending decisions also fall under the law's scope.

    • Housing waitlist prioritization algorithms
    • Financial assistance eligibility screening
    • AI-assisted micro-lending or credit assessments

    It is equally important to understand what the law does not cover. General-purpose AI tools used for internal productivity, such as drafting emails, summarizing meeting notes, generating social media content, or creating fundraising copy, are almost certainly not making consequential decisions under the statute. Similarly, using AI for knowledge management or internal research does not trigger compliance obligations unless the output directly feeds into a decision that affects someone's access to a covered service.

    What Deployers Must Do: The Five Core Obligations

    If your nonprofit determines that it deploys one or more high-risk AI systems, the law imposes five categories of obligations. These are not optional best practices. They are legally required duties enforceable by the Colorado Attorney General's office.

    1. Implement a Risk Management Policy and Program

    Deployers must create and maintain a written risk management policy and program that governs how they use high-risk AI systems. This policy must be "reasonable" in light of recognized AI risk management frameworks, specifically the NIST AI Risk Management Framework (AI RMF) or ISO/IEC 42001. The law does not mandate a specific format, but your policy must address how you identify risks of algorithmic discrimination, how you monitor for those risks on an ongoing basis, and what governance structures oversee your AI use.

    For most nonprofits, this means building on any existing AI policy you already have, but going further. Your policy needs to specifically address discrimination risk, not just general AI governance. It should designate who within your organization is responsible for overseeing AI compliance, establish a process for reviewing AI tools before they are deployed, and create a mechanism for ongoing monitoring after deployment.

    The NIST AI RMF, which the law references, is organized around four functions: Govern (establishing policies and accountability), Map (identifying risks in context), Measure (assessing and tracking risks), and Manage (responding to and mitigating risks). Aligning your risk management program with these functions provides both a practical framework and a strong compliance posture.
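    To make that alignment tangible, it can help to track your controls against the four functions in a simple structure your compliance lead can review. The sketch below is a hypothetical starting point in Python: the four function names come from the NIST AI RMF itself, but every control listed is an illustrative assumption, not a requirement drawn from the statute.

```python
# Hypothetical mapping of NIST AI RMF functions to concrete nonprofit
# controls. The four function names come from the framework itself;
# the example controls are illustrative assumptions, not statutory text.
AI_RMF_CONTROLS = {
    "Govern": [
        "Board-approved AI risk management policy",
        "Named AI compliance lead with documented responsibilities",
    ],
    "Map": [
        "Inventory of AI tools and the decisions they influence",
        "Flag tools touching the eight consequential-decision domains",
    ],
    "Measure": [
        "Quarterly audit of outcomes by protected characteristic",
        "Tracking of vendor-reported performance metrics and limits",
    ],
    "Manage": [
        "Documented human-review and appeal path for adverse decisions",
        "Escalation procedure for suspected algorithmic discrimination",
    ],
}

def unmet_controls(completed: set[str]) -> dict[str, list[str]]:
    """Return controls not yet marked complete, grouped by RMF function."""
    return {
        function: [c for c in controls if c not in completed]
        for function, controls in AI_RMF_CONTROLS.items()
    }
```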

    2. Complete Annual Impact Assessments

    Before deploying any high-risk AI system, and annually thereafter, deployers must complete a documented impact assessment. If you make a substantial, intentional modification to how you use the system, you must update the assessment within 90 days. These assessments must be retained for at least three years after the last deployment of the high-risk system.

    The impact assessment must include, to the extent reasonably known by or available to the deployer, the following elements: the purpose and intended use cases for the AI system, an analysis of whether the system poses known or foreseeable risks of algorithmic discrimination, the types of data the system processes (inputs and outputs), performance metrics and known limitations, and a description of the transparency measures you have implemented.

    In practical terms, this means you need to understand what your AI tool does, what data it uses, where it might produce biased outcomes, and how well it performs. If you are using a vendor's AI tool, much of this information should come from the developer's documentation, which they are legally required to provide under their own obligations. However, you are responsible for assessing the tool's risks in the specific context of your organization's use.
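    To make the documentation burden concrete, here is a minimal sketch of an impact assessment record as a Python data structure. The field names track the elements listed above; the structure itself is an assumption on our part, since the law mandates the content of an assessment, not its format.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactAssessment:
    """Minimal record of the elements an impact assessment must cover.
    Field names are illustrative; the statute mandates content, not format."""
    system_name: str
    purpose: str                      # purpose and intended use cases
    discrimination_risks: list[str]   # known or foreseeable risks
    data_inputs: list[str]            # types of data the system processes
    data_outputs: list[str]
    performance_metrics: str          # metrics and known limitations
    transparency_measures: str        # notices and disclosures in place
    completed_on: date = field(default_factory=date.today)
    next_review: date | None = None   # annual, or within 90 days of a
                                      # substantial, intentional modification

    def review_due(self, today: date) -> bool:
        """True once the annual (or post-modification) review date passes."""
        return self.next_review is not None and today >= self.next_review
```

    Keeping these records in one place also simplifies the three-year retention requirement described above.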

    3. Notify Consumers Before Consequential Decisions

    Before using a high-risk AI system to make a consequential decision about someone, you must notify them. This notification must include a statement disclosing that an AI system is being used, the purpose of the system, the nature of the consequential decision being made or informed by the AI, contact information for your organization, and a description in plain language of what the AI system does.

    For a nonprofit running a housing placement program that uses AI to score applicants, this means telling each applicant, before a decision is made, that an AI system is involved in the scoring process, explaining what the system evaluates, and providing a way for applicants to reach a person at your organization with questions. The notification does not need to be a separate document; it can be integrated into existing intake forms or application processes, but it must be clear and accessible.
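    As a concrete illustration of that housing example, here is a sketch of a pre-decision notice generator. The disclosure elements follow the list in the preceding paragraphs; the wording, the function, and the contact details are hypothetical, not statutory language.

```python
def pre_decision_notice(system_purpose: str, decision: str,
                        plain_description: str, contact: str) -> str:
    """Assemble a plain-language notice covering the pre-decision
    disclosure elements described above. Wording is illustrative."""
    return (
        "Notice of AI system use:\n"
        f"We use an artificial intelligence system to {system_purpose}. "
        f"This system helps inform our decision about {decision}. "
        f"In plain terms: {plain_description} "
        f"Questions? Contact us at {contact}."
    )

# Hypothetical housing placement program from the example above:
print(pre_decision_notice(
    system_purpose="score housing applications",
    decision="your placement on our housing waitlist",
    plain_description=("the system compares your application against our "
                       "program criteria and produces a priority score "
                       "that our staff review before any decision."),
    contact="intake@example.org or (303) 555-0100",
))
```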

    4. Provide Adverse Decision Disclosures

    When a high-risk AI system produces an adverse decision (one that denies, limits, or otherwise negatively affects someone's access to a covered service), the deployer must provide additional disclosures. These include the principal reasons for the adverse decision, the degree to which the AI system contributed to the decision, the types of data that were processed, the data sources used, and an opportunity for the consumer to correct any inaccurate data and to appeal the decision.

    This is arguably the most operationally demanding requirement for nonprofits. It means you need to be able to explain why the AI produced a particular output, which requires some level of understanding of the system's logic. It also means you must have a process in place for people to challenge decisions and have them reviewed, likely by a human decision-maker. Organizations that have been exploring ethical frameworks for AI-driven service allocation will find that this requirement aligns with principles they may already be implementing.
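    One way to operationalize this requirement is to treat each adverse decision disclosure as a record that cannot go out until every required element is filled in. The sketch below mirrors the elements listed above; the structure and the completeness check are our assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class AdverseDecisionDisclosure:
    """Illustrative record of the adverse-decision disclosure elements
    described above; the statute requires the content, not this format."""
    principal_reasons: list[str]  # why the adverse decision was reached
    ai_contribution: str          # degree the AI system contributed
    data_types: list[str]         # types of data that were processed
    data_sources: list[str]       # where that data came from
    correction_contact: str       # how to correct inaccurate data
    appeal_contact: str           # how to request human review

    def is_complete(self) -> bool:
        """Block sending until every disclosure element is present."""
        return all([self.principal_reasons, self.ai_contribution,
                    self.data_types, self.data_sources,
                    self.correction_contact, self.appeal_contact])
```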

    5. Report Discrimination to the Attorney General

    If a deployer discovers that a high-risk AI system has produced algorithmic discrimination, the deployer must notify the Colorado Attorney General's office within 90 days of the discovery. This creates a self-reporting obligation that requires nonprofits not only to monitor for discrimination but also to act on what they find.

    The practical implication is that organizations cannot simply deploy AI tools and hope for the best. You need monitoring processes that can detect disparate treatment or disparate impact across protected characteristics. For a nonprofit using AI to screen job applicants, this might mean regularly auditing hire rates by race, gender, and age to ensure the AI system is not systematically disadvantaging particular groups.
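    For the hiring example above, one widely used screening heuristic (drawn from federal employment guidance, not from the Colorado statute) is the four-fifths rule: flag any group whose selection rate falls below 80 percent of the highest group's rate. A minimal sketch of that audit in Python:

```python
def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Compute the selection (e.g., hire) rate per group from
    (group, selected) records."""
    totals: dict[str, int] = {}
    selected: dict[str, int] = {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flags(rates: dict[str, float]) -> list[str]:
    """Flag groups below 80% of the top group's selection rate --
    a screening heuristic, not a legal determination."""
    top = max(rates.values(), default=0.0)
    return [g for g, r in rates.items() if top > 0 and r / top < 0.8]

# Hypothetical audit of hire outcomes recorded as (group, hired) pairs:
records = [("Group A", True), ("Group A", True), ("Group A", False),
           ("Group B", True), ("Group B", False), ("Group B", False)]
rates = selection_rates(records)
print(rates)                     # {'Group A': 0.667, 'Group B': 0.333}
print(four_fifths_flags(rates))  # ['Group B'] -- below 80% of Group A
```

    A flag from a heuristic like this is a trigger for closer human review, not proof of discrimination, but it is exactly the kind of monitoring output a risk management program should be producing.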

    The Small Deployer Conditional Exemption

    The Colorado AI Act includes a conditional exemption for smaller deployers, which many nonprofits may qualify for. If your organization meets all four of the following criteria, you are exempt from the risk management program, impact assessment, and general notice requirements (though not from the adverse decision disclosure or attorney general reporting obligations).

    All Four Conditions Must Be Met:

    • Fewer than 50 full-time equivalent employees. This is a headcount threshold, not a revenue or budget threshold. Contractors and volunteers do not count, though part-time staff contribute to the full-time-equivalent tally.
    • You do not train the AI system with your own data. If you are using an off-the-shelf AI tool without customizing or fine-tuning it on your organization's data, this condition is likely met. If you have fine-tuned a model or built a custom RAG system that feeds organizational data into the AI's decision process, you may not qualify.
    • You limit uses to those previously disclosed by the developer. You cannot repurpose the tool for uses beyond what the developer intended and documented.
    • You provide consumers with the developer's impact assessment. Rather than conducting your own assessment, you share the one provided by the tool's developer.

    This exemption will apply to many small and mid-sized nonprofits that use commercial AI tools for standard purposes. However, it is critical to understand what the exemption does not cover. Even if you qualify, you must still provide adverse decision disclosures when the AI produces a negative outcome, and you must still report discovered discrimination to the attorney general. The exemption reduces your compliance burden but does not eliminate it entirely.

    For larger nonprofits or those that have invested in customized AI systems, the full set of obligations applies. If your organization has been building internal AI capacity with custom-trained models, you should plan for the complete compliance framework.

    How the Law Is Enforced

    The Colorado AI Act grants exclusive enforcement authority to the Colorado Attorney General's office. There is no private right of action, meaning individuals cannot sue your nonprofit directly for violations of this law. This is a significant difference from other regulatory frameworks, and it means your primary enforcement risk comes from a state investigation rather than from individual lawsuits.

    The law also provides an affirmative defense. If a deployer can demonstrate that it has adopted and followed the NIST AI Risk Management Framework or ISO/IEC 42001 (or a substantially equivalent framework), this constitutes a "rebuttable presumption" that the deployer used reasonable care. In practical terms, this means that aligning your compliance program with the NIST AI RMF is not just good practice; it provides legal protection if your organization is ever investigated.

    The attorney general can bring enforcement actions that result in penalties under the Colorado Consumer Protection Act, which allows for injunctive relief, civil penalties of up to $20,000 per violation, and restitution. For nonprofits already operating on tight margins, even a single enforcement action could be financially devastating, making proactive compliance far more cost-effective than reactive remediation. Organizations concerned about the broader AI litigation landscape for nonprofits should view Colorado compliance as part of a larger risk management strategy.

    A Step-by-Step Compliance Timeline for Nonprofits

    With the June 30, 2026 deadline approaching, nonprofits need a clear action plan. The following timeline is designed for organizations that have not yet begun compliance work. If you have already started, use this as a checklist to identify gaps.

    Phase 1: AI Inventory and Classification (Now through April 15)

    Identify all AI tools your organization uses and classify their risk level

    • Catalog every AI tool. Survey all departments and programs to create a comprehensive list of AI tools in use, including tools embedded in existing software platforms (CRM systems, HR platforms, case management software) that may use AI without being explicitly marketed as "AI tools."
    • Classify each tool. For each AI tool, determine whether it makes or substantially contributes to a consequential decision in one of the eight covered domains; a classification sketch follows this list. Tools that only assist with internal productivity (drafting, summarizing, scheduling) can be noted but set aside.
    • Assess the small deployer exemption. Determine whether your organization meets all four conditions for the conditional exemption. Document your analysis in writing, even if you believe you qualify, because you may need to demonstrate this later.
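    A lightweight way to run that catalog-and-classify pass is to record each tool with the domains it touches and whether it influences a consequential decision. In the sketch below, the eight domain names come from the statute; the structure, the classification labels, and the example tools are illustrative assumptions.

```python
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial_or_lending",
    "essential_government_services", "healthcare", "housing",
    "insurance", "legal_services",
}  # the eight covered domains named in the statute

def classify(tool: dict) -> str:
    """Classify an inventoried tool: high-risk if it makes or is a
    substantial factor in a consequential decision in a covered domain."""
    in_scope = bool(set(tool["domains"]) & CONSEQUENTIAL_DOMAINS)
    if in_scope and tool["substantial_factor_in_decision"]:
        return "high-risk: full compliance review required"
    if in_scope:
        return "watch: touches a covered domain but not decisions"
    return "out of scope: internal productivity use"

# Hypothetical inventory entries:
inventory = [
    {"name": "Resume screener", "domains": ["employment"],
     "substantial_factor_in_decision": True},
    {"name": "Meeting summarizer", "domains": [],
     "substantial_factor_in_decision": False},
]
for tool in inventory:
    print(tool["name"], "->", classify(tool))
```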

    Phase 2: Risk Management Framework (April 15 through May 15)

    Build or adapt your AI risk management policy

    • Draft your risk management policy. Use the NIST AI RMF as your framework. The policy should cover governance (who oversees AI), risk identification (how you spot problems), monitoring (how you track performance), and incident response (what you do when something goes wrong).
    • Designate an AI compliance lead. This does not need to be a new hire, but someone in your organization needs clear responsibility for overseeing AI compliance. For smaller nonprofits, this might be the operations director or a senior program manager.
    • Contact your AI vendors. Request their developer documentation, including the statements about purpose, intended uses, known risks, and limitations that they are required to provide under the law. Request their impact assessment documentation if available.

    Phase 3: Impact Assessments and Notifications (May 15 through June 15)

    Complete required assessments and build notification processes

    • Complete impact assessments. For each high-risk AI system, document the purpose, use cases, data inputs and outputs, known risks of algorithmic discrimination, performance metrics, and transparency measures. Use vendor-provided information as a starting point but supplement it with your organization-specific context.
    • Create consumer notification templates. Draft the notifications you will provide to people before using AI for consequential decisions. Make them plain-language, accessible, and specific to each AI use case.
    • Build adverse decision processes. Create a workflow for when the AI produces a negative outcome: who explains the decision, how the person can challenge it, and who reviews appeals. This is where human oversight becomes operationally essential.

    Phase 4: Testing and Go-Live (June 15 through June 30)

    Validate your compliance processes before the deadline

    • Run a compliance drill. Walk through each high-risk AI system's workflow from start to finish. Verify that notifications are triggered at the right points, that adverse decision disclosures are generated correctly, and that your appeal process works.
    • Train affected staff. Every employee who interacts with a high-risk AI system needs to understand the notification requirements, how to handle adverse decisions, and when to escalate concerns. Build this into your existing training processes.
    • Document everything. The law's affirmative defense depends on demonstrating that you exercised reasonable care. Documentation of your process, decisions, assessments, and training is your strongest protection.

    Common Misconceptions Nonprofits Should Avoid

    As organizations begin grappling with the Colorado AI Act, several misconceptions have emerged that could leave nonprofits exposed. Addressing these now will save time and reduce risk.

    "We're a nonprofit, so we're exempt."

    The law contains no exemption based on tax-exempt status, organizational mission, or revenue level. If you deploy high-risk AI systems affecting Colorado consumers, you are subject to the law. The only conditional exemption is for small deployers meeting all four criteria described earlier, and that exemption is based on employee count and usage patterns, not nonprofit status.

    "We don't operate in Colorado, so it doesn't apply."

    The law applies to entities that "do business" in Colorado, which can include organizations headquartered elsewhere that serve Colorado residents. If your nonprofit serves clients, hires employees, or operates programs in Colorado, you may be covered regardless of where your main office is located. This is particularly relevant for organizations following the multi-state compliance landscape, where different states impose overlapping requirements.

    "A human reviews the output, so it's not automated decision-making."

    The law covers AI systems that are a "substantial factor" in a consequential decision, not just systems that make decisions autonomously. If the AI's recommendation or score significantly influences the human decision-maker's final choice, the law applies. Rubber-stamping an AI recommendation does not constitute meaningful human oversight for compliance purposes.

    "We use off-the-shelf tools, so it's the vendor's problem."

    Developers and deployers have separate, independent obligations under the law. Your vendor is responsible for their duties (providing documentation, disclosing known risks, maintaining a public-facing website about their systems), and you are responsible for yours (risk management, impact assessments, consumer notifications, adverse decision procedures). Using a third-party tool does not transfer your compliance obligations to the vendor.

    Practical Tips for Resource-Constrained Nonprofits

    For many nonprofits, the compliance burden of the Colorado AI Act feels disproportionate to their AI use. The good news is that the law's requirements can be integrated into existing governance structures without building an entirely new compliance apparatus. Here are concrete strategies for managing compliance on a nonprofit budget.

    Start with your existing AI policy. If your organization already has an AI use policy, you have a foundation to build on. The Colorado AI Act requires a risk management policy that goes deeper than a general use policy, but the structure, governance model, and approval processes you have already established can be extended rather than replaced.

    Leverage vendor documentation heavily. Developers are required by law to provide deployers with detailed information about their AI systems, including known risks, intended uses, and performance data. Request this documentation proactively and use it as the foundation for your impact assessments. Many enterprise AI vendors are already preparing or distributing compliance packages specifically for the Colorado AI Act.

    Consider whether you actually need high-risk AI. The simplest compliance strategy is to avoid triggering the law entirely. If your AI use is limited to internal productivity tools (writing assistance, meeting summaries, research) and does not influence consequential decisions about people, you may not need to do anything beyond documenting that assessment. Some nonprofits may choose to remove AI from certain decision processes rather than take on the compliance burden.

    Collaborate with peer organizations. Nonprofits in Colorado are likely facing identical compliance challenges. Consider forming or joining a working group to share templates, impact assessment methodologies, and vendor evaluations. State nonprofit associations may be organizing compliance resources. The strategic planning approach that works for individual organizations can also be applied at the sector level through collaboration.

    Budget for ongoing compliance, not just initial setup. The impact assessment requirement is annual, and the monitoring obligation is continuous. Factor these recurring costs into your operating budget. For many nonprofits, the ongoing effort will be modest once the initial framework is in place, but it is not a one-time project.

    Colorado in the Broader State AI Regulation Landscape

    Colorado's AI Act does not exist in isolation. It is part of a wave of state-level AI regulation that accelerated in 2025 and 2026. Understanding how it fits into the broader landscape helps nonprofits build compliance programs that can scale as more states act.

    The Colorado AI Act was modeled in part on the EU AI Act, which also takes a risk-based approach to AI regulation, classifying systems by the severity of potential harm and imposing obligations accordingly. However, the Colorado law is narrower in scope, focusing specifically on algorithmic discrimination in consequential decisions rather than the EU's broader categories of prohibited and high-risk AI practices.

    Other states are following Colorado's lead. The federal versus state AI regulation debate remains unresolved, and in the absence of federal legislation, states are filling the gap. New York, Illinois, and California have all introduced or passed AI-specific legislation, though each takes a different approach. Nonprofits operating across multiple states should build their compliance programs with enough flexibility to accommodate additional state requirements as they emerge.

    The silver lining of Colorado being first is that the compliance infrastructure you build now will serve you well as other states adopt similar frameworks. Investing in a NIST-aligned risk management program, developing impact assessment capabilities, and establishing consumer notification processes are foundational capabilities that will transfer across regulatory regimes.

    Conclusion

    The Colorado AI Act represents a fundamental shift in how organizations, including nonprofits, must think about AI deployment. It moves AI from an unregulated productivity tool to a governed system with legal obligations attached. For nonprofits that use AI to make decisions affecting people's access to employment, education, healthcare, housing, or other covered services, the law requires documented risk management, impact assessments, consumer notifications, adverse decision procedures, and discrimination reporting.

    The June 30, 2026 deadline is close, but the compliance work is manageable, particularly for nonprofits that qualify for the small deployer exemption or whose AI use does not extend into high-risk territory. The key is to start now: inventory your AI tools, classify their risk level, and take action on the gaps you identify. The organizations that will struggle most are those that wait until the last minute or assume the law does not apply to them.

    For nonprofits already committed to using AI responsibly and ethically, much of what the Colorado AI Act requires aligns with best practices you may already be following. The law adds documentation, notification, and reporting requirements that formalize good governance into legal obligations. Viewed through that lens, compliance is not just a regulatory burden; it is an opportunity to strengthen your organization's AI practices and build trust with the communities you serve.

    Ready to Build Your AI Compliance Framework?

    We help nonprofits navigate AI regulation, build risk management policies, and prepare for compliance deadlines. Whether you need a full compliance program or a quick assessment, we can help you get ready before June 30.