    AI Policy & Compliance

    Federal vs. State AI Regulation: Navigating the Preemption Battle as a Nonprofit

    The federal government is pushing to dismantle state AI laws while those laws keep taking effect. For nonprofits operating across state lines, understanding this collision is no longer optional. Compliance obligations exist right now, regardless of how the legal battle resolves.

    Published: February 23, 2026 · 15 min read

    In the span of a few years, AI regulation in the United States has gone from largely theoretical to aggressively contested. Colorado passed the first comprehensive state AI law in the country, Texas enacted its own governance framework, California passed multiple AI transparency requirements, and Illinois strengthened its prohibition on discriminatory AI in employment. Meanwhile, the Trump administration revoked the Biden-era executive order on AI safety, released a deregulatory action plan, and in December 2025 signed an executive order directing the Department of Justice to challenge state AI laws in court.

    For nonprofit organizations trying to operate responsibly in this environment, the situation is genuinely confusing. Federal signals favor deregulation. State laws keep taking effect. Legal challenges have not yet been resolved. And no comprehensive federal AI statute exists to provide clarity. The result is a patchwork compliance environment that particularly burdens organizations operating across multiple states with limited legal and compliance resources.

    This article provides a practical guide to the current landscape. It covers what federal and state rules exist, what the preemption battle means in practice, how these rules affect nonprofits specifically, and what compliance steps organizations should take regardless of how the legal conflict eventually resolves. It also addresses the EU AI Act, which carries its own implications for nonprofits with international operations or donors.

    One important framing note: this article provides general educational information, not legal advice. The pace of change in AI regulation is rapid, and organizations with significant AI use should consult qualified legal counsel about their specific obligations.

    The Federal Landscape: Deregulation Without Replacement

    The most important fact about federal AI regulation in 2026 is that there is no comprehensive federal AI law. Congress has introduced and debated numerous AI bills but has not passed legislation establishing a unified national framework governing how AI can be used, what disclosures are required, or what protections individuals have against AI-driven decisions.

    Federal AI governance currently depends on three mechanisms: agency enforcement of existing laws (the FTC applying deceptive practices doctrine to AI outputs, for example), executive orders from the president, and voluntary frameworks like the NIST AI Risk Management Framework. Of these, only agency enforcement carries legal force, and its scope is narrow.

    Key Federal Actions in 2025-2026

    January 20, 2025: Biden EO 14110 Revoked

    On his first day in office, President Trump revoked the Biden administration's executive order on safe and trustworthy AI, which had established safety testing requirements, government oversight structures, and civil rights guidelines for federal AI use.

    January 23, 2025: EO "Removing Barriers to American Leadership in AI"

    Replaced the Biden order with a framework focused on promoting AI development "free from ideological bias" and establishing a 180-day AI Action Plan directive. Framed national AI policy around maintaining global dominance rather than safety and civil rights.

    July 23, 2025: "Winning the Race: America's AI Action Plan"

    Released 90 policy positions across innovation, infrastructure, and international diplomacy. Recommended revising the NIST AI RMF to remove references to misinformation, DEI, and climate change, signaling a narrower federal framework focused on security and competitiveness.

    December 11, 2025: EO "Ensuring a National Policy Framework for AI"

    The most consequential executive action for nonprofits. Created a DOJ AI Litigation Task Force to challenge state AI laws in court, directed the Commerce Department to identify "onerous" state laws, directed the FTC to issue policy statements on AI preemption, and encouraged federal agencies to condition grants on states not enacting conflicting AI laws.

    The December 2025 executive order represents the most aggressive federal push against state AI regulation to date. But there is a critical legal caveat that every nonprofit leader needs to understand: executive orders cannot preempt state law. Only Congress has that power. The federal government's legal theories for challenging state AI laws, including the Dormant Commerce Clause, conflict preemption, and First Amendment compelled speech claims, will produce years of litigation with uncertain outcomes. Until courts or Congress definitively resolve these questions, state laws remain fully enforceable.

    The grant funding leverage is potentially the most direct federal tool affecting nonprofits. Agencies have been encouraged to condition grants on states not enacting conflicting AI laws. This could create compliance pressure for nonprofits receiving federal grants, particularly if state governments where they operate enact new AI rules. Nonprofits should monitor this development closely and consult legal counsel if federal grant conditions begin incorporating AI regulatory provisions.

    The State Law Landscape: What Is Actually in Effect

    While federal direction has been deregulatory, state legislatures have moved aggressively in the opposite direction. Over 250 AI bills were introduced across more than 34 states in 2025, and several comprehensive frameworks are now in effect or taking effect in 2026. For nonprofits operating across state lines, the cumulative compliance burden from this patchwork can be substantial.

    Colorado (SB 24-205): First Comprehensive State AI Law

    Effective June 30, 2026

    Colorado's law is the most broadly applicable comprehensive AI statute in the US. It applies to "deployers" of "high-risk AI systems" that make or substantially influence consequential decisions in employment, housing, lending, education, healthcare, legal services, and insurance. Nonprofits using AI tools that influence these types of decisions about clients, beneficiaries, or employees are squarely in scope.

    Key Requirements
    • Annual impact assessments for high-risk AI systems
    • Consumer disclosure when AI makes adverse decisions
    • Written risk management policy and governance program
    • Anti-discrimination protections against algorithmic bias
    Penalties and Defenses
    • $20,000 per violation, AG enforcement only (no private right of action)
    • Affirmative defense: compliance with NIST AI RMF

    Texas (TRAIGA): Responsible AI Governance Act

    Effective January 1, 2026

    Texas enacted a narrower framework focused on prohibited practices: behavioral manipulation, unlawful discrimination, CSAM generation, and deepfakes. Healthcare providers specifically must disclose AI use to patients. The law's safe harbor for compliance with the NIST AI RMF mirrors Colorado's approach, suggesting a path to compliance that works across both states.

    • Penalties: $10,000-$12,000 (curable violations) to $80,000-$200,000 (incurable violations) per violation
    • Healthcare-specific: AI use in clinical decisions must be disclosed to patients
    • Safe harbor: documented compliance with NIST AI RMF or equivalent standard

    California: Multiple Laws Taking Effect

    Multiple effective dates in 2025-2026

    California passed several AI laws after Governor Newsom vetoed the more expansive SB 1047. These are narrower but still create obligations for organizations operating in California or making AI-generated content available to California residents.

    SB 53 (AI Transparency Act for Frontier Models)

    Requires developers of large frontier models (trained using 10^26 or more floating-point operations) to publish safety frameworks, report critical safety incidents within 15 days, and implement whistleblower protections. Most nonprofits are deployers, not developers at this scale, so this law primarily affects your AI vendors.

    AB 2013 (Training Data Transparency)

    Requires generative AI developers to publicly post training data documentation. Applies retroactively to systems released since January 1, 2022. Again, this primarily affects AI vendors, but nonprofits should verify their vendors comply before using their tools on California-related work.

    SB 942 (AI Content Detection)

    Providers with 1 million+ monthly California users must offer free AI detection tools and watermarking for audiovisual content. Affects major AI platforms nonprofits use. If you use covered platforms to generate fundraising videos, communications, or other audiovisual content, understand the detection and disclosure obligations.

    Illinois and New York City: Employment AI Focus

    In effect now

    Any nonprofit using AI in employment decisions faces obligations in Illinois and New York City that are distinct from the broader AI governance frameworks above.

    Illinois HB 3773 (effective January 1, 2026)

    Prohibits AI use in employment decisions (hiring, promotion, termination) that discriminates against protected classes under the Illinois Human Rights Act, even if unintentional. Requires notification to employees and candidates when AI is used in employment decisions.

    New York City Local Law 144 (in effect since July 2023)

    Requires annual independent bias audits for Automated Employment Decision Tools (AEDTs), public posting of audit summaries, 10 business days' advance notice to candidates before using AEDTs, and alternative selection processes on request. Any nonprofit with NYC employees using AI-assisted hiring tools is subject to this law.

    How the Regulatory Conflict Affects Nonprofits Specifically

    Nonprofits face a uniquely challenging compliance environment for several reasons that distinguish them from for-profit companies navigating the same landscape. Understanding these distinguishing factors is essential for developing a realistic and proportionate compliance approach.

    Key Challenges for Nonprofits

    No nonprofit exemption exists

    Current state AI laws generally do not carve out nonprofits. A nonprofit that uses AI in hiring, program delivery, client services, or beneficiary eligibility decisions faces the same compliance obligations as a for-profit company of equivalent size. Mission-driven status does not create a legal exception.

    Multi-state operations amplify exposure

    Many nonprofits operate across state lines, serve populations in multiple states, or have staff in different jurisdictions. An organization with Colorado operations, Illinois employees, California donors, and Texas program sites could simultaneously be subject to four different AI compliance frameworks, each with distinct requirements.

    Mission-driven AI often involves high-risk decisions

    The populations nonprofits serve, and the decisions organizations make about them, frequently fall into "high-risk" categories under state AI laws. Client intake screening, eligibility determination, housing match algorithms, case management prioritization, and employment assistance tools all potentially constitute high-risk AI systems under Colorado's definition. Serving vulnerable populations is the nonprofit mission, but it is also where AI regulation focuses most intensely.

    Resource constraints make compliance expensive

    Impact assessments, bias audits, disclosure systems, risk management programs, and legal review all cost money that most nonprofits are not currently budgeting. For large organizations, these are manageable expenses. For small and medium nonprofits, compliance infrastructure competes directly with mission delivery for limited resources.

    Uncertainty is itself a compliance cost

    With the preemption battle unresolved, nonprofits cannot wait for clarity before deciding whether to comply. State laws are in effect now. An organization that waits for the DOJ litigation to resolve before addressing Colorado requirements could face enforcement action while waiting. Organizations must make good-faith compliance investments knowing some requirements may later be overturned.

    The positive dimension of this picture: regulators and legislatures are increasingly aware that enforcement against well-intentioned nonprofits making good-faith compliance efforts is counterproductive. The goal of AI regulation is to protect vulnerable populations, which is also the nonprofit mission. Organizations that demonstrate genuine engagement with compliance, even if implementation is imperfect, are in a materially better position than organizations that ignore regulatory requirements entirely.

    The EU AI Act: What US Nonprofits Need to Know

    For US nonprofits with international operations, EU donors, global program delivery, or any activities affecting EU residents, the EU AI Act carries its own compliance obligations entirely separate from the federal/state regulatory conflict. Like GDPR, the EU AI Act extends well beyond EU borders.

    The EU AI Act's extraterritorial reach applies to any organization whose AI systems are used within the EU or produce outputs affecting EU residents, regardless of where the organization is physically located. A US nonprofit operating health programs in Europe, accepting donations from EU citizens through an AI-assisted platform, or managing EU-based staff with AI-assisted HR tools may be in scope.

    EU AI Act: Key Dates and Obligations

    February 2, 2025 (In Effect)

    Prohibited AI practices took effect. These include social scoring systems, real-time biometric identification in public spaces, AI targeting children for commercial manipulation, and emotion recognition in workplaces and educational settings. Violations can reach 35 million euros or 7% of global turnover.

    August 2, 2025 (In Effect)

    General purpose AI model obligations took effect, requiring technical documentation and copyright compliance summaries from model developers. Primarily affects AI vendors rather than deployer organizations.

    August 2, 2026 (Upcoming Deadline)

    High-risk AI system requirements (Annex III) become enforceable. High-risk categories include employment and recruitment tools, access to essential private services and benefits, education and vocational training systems, and healthcare diagnostics. Nonprofits using AI in any of these areas with EU-connected populations should be preparing now. Note: Proposed EU Digital Omnibus amendments could extend this to December 2027, but organizations should plan for August 2026 as the binding date.

    For US nonprofits uncertain about their EU AI Act exposure, the threshold question is whether any AI system they use makes decisions that affect EU residents. If the answer is yes, even for a subset of program participants, donor communications, or staff management, the Act's obligations likely apply. Organizations that have already done GDPR compliance work have a head start: the data governance infrastructure, documentation practices, and vendor review processes built for GDPR translate meaningfully to EU AI Act compliance.

    Research on public interest provisions suggests some limited exemptions for educational and research organizations conducting scientific research, which may benefit nonprofits with academic or research missions operating in the EU. Legal counsel familiar with EU AI Act compliance should be consulted to evaluate whether these exemptions apply to your specific activities.

    The NIST AI Risk Management Framework: A Cross-Jurisdictional Safe Harbor

    Amid the regulatory patchwork, one voluntary framework stands out as a practical cross-jurisdictional safe harbor: the NIST AI Risk Management Framework. Released in January 2023, the NIST AI RMF is explicitly referenced as an affirmative defense under both Colorado's SB 24-205 and Texas TRAIGA, cited as a compliance standard under the EU AI Act, and referenced (with proposed modifications) in the Trump AI Action Plan.

    This convergence is meaningful for nonprofits. Rather than building separate compliance programs for each jurisdiction, organizations that implement the NIST AI RMF gain documented protections under multiple regulatory frameworks simultaneously. The framework is also flexible and scalable, designed to be adopted proportionally based on organizational size and the nature and scope of AI use.

    NIST AI RMF: The Four Core Functions

    The framework's structure is organized around four functions that can be implemented at any scale

    GOVERN

    Establish organizational structures, policies, and oversight mechanisms for AI risk management. For nonprofits, this means creating an AI policy, defining who is accountable for AI decisions, and ensuring board-level awareness of AI governance. This is where an AI champion or governance lead becomes essential.

    MAP

    Identify and analyze risks associated with specific AI systems and their context of use. Practically, this means cataloging every AI tool in use, understanding what decisions they influence, who is affected, and what could go wrong. A nonprofit's AI inventory becomes the foundation for all other compliance work.

    MEASURE

    Analyze and quantify AI risks using appropriate methods for the context. This includes bias testing, performance monitoring, and impact assessments. For high-risk AI systems, this is where Colorado's annual impact assessment requirement maps most directly to the NIST framework.

    MANAGE

    Put identified risks into context, prioritize responses, and implement ongoing monitoring. For nonprofits, this includes procedures for reviewing AI decisions that affect beneficiaries, escalating concerns, and updating policies as AI tools change. This function supports the human oversight requirements present in most state AI laws.

    NIST maintains a free Playbook with specific procedures and methods for implementing each function, and the framework is available at no cost at nist.gov. For resource-constrained nonprofits, implementing the NIST AI RMF at a scale proportionate to your AI use, even in a simplified form, provides more legal protection and organizational clarity than either ignoring regulation or attempting to build jurisdiction-specific programs.
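A simplified, nonprofit-scale starting point for the four functions might look like the checklist below. The concrete tasks paraphrase this article's own suggestions, not NIST's official Playbook items, and the structure is only an illustrative sketch:

```python
# Simplified Govern-Map-Measure-Manage starter checklist. Tasks paraphrase
# this article's nonprofit-scale suggestions, not NIST's official Playbook.
NIST_RMF_STARTER = {
    "GOVERN": [
        "Adopt a written AI policy",
        "Name an accountable AI governance lead",
        "Brief the board on AI use and risks",
    ],
    "MAP": [
        "Inventory every AI tool in use",
        "Record what decisions each tool influences and who is affected",
    ],
    "MEASURE": [
        "Run bias checks on high-risk tools",
        "Complete annual impact assessments where required (e.g., Colorado)",
    ],
    "MANAGE": [
        "Define human review for AI-influenced decisions",
        "Create an escalation path for concerns",
        "Revisit the policy as tools change",
    ],
}

# A one-line status summary per function, e.g. for a board report
for function, tasks in NIST_RMF_STARTER.items():
    print(f"{function}: {len(tasks)} starter tasks")
```

Even at this level of simplicity, the checklist doubles as documentation: each completed item becomes evidence of the good-faith governance effort that the Colorado and Texas safe harbors reward.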

    What AI Uses Actually Trigger Regulatory Requirements

    One of the most practically useful things a nonprofit leader can understand is which AI uses trigger significant regulatory obligations versus which carry minimal compliance burden. Not all AI use is equivalent under the law, and many of the AI tools nonprofits use most commonly for productivity fall into lower-risk categories.

    Higher-Risk Uses

    Likely to trigger state and EU obligations

    • AI screening job applicants or analyzing resumes
    • Determining client/beneficiary eligibility for services or housing
    • Healthcare diagnostics or treatment recommendations
    • Credit or financial decision support
    • Educational assessment or credentialing
    • Facial recognition or biometric identification

    Medium-Risk Uses

    Disclosure and oversight best practices apply

    • Chatbots interacting with clients or donors
    • AI-generated donor fundraising appeals
    • Predictive analytics for program outcomes
    • Case management prioritization tools
    • AI-assisted grant writing and communications

    Lower-Risk Uses

    Policy needed, but minimal regulatory burden

    • Internal productivity tools (drafting emails, summarizing documents)
    • Administrative automation (scheduling, data entry)
    • Research, information retrieval, background reading
    • Meeting transcription and note summarization
    • Translation and language assistance for staff

    The practical implication: most nonprofits that use AI primarily for internal productivity work, content creation, and communications face limited compliance burden from state AI laws. The regulatory intensity concentrates on AI that influences consequential decisions about individuals. Organizations using AI tools for grant writing, donor communications, and administrative tasks can focus on establishing a good general AI policy, protecting data privacy, and monitoring the regulatory environment, rather than building the full impact assessment infrastructure required for high-risk deployments.

    Practical Compliance Steps for Nonprofits

    Given the complexity and pace of change in AI regulation, the most effective compliance approach for nonprofits is one that provides broad protection across jurisdictions, scales with organizational size and AI use intensity, and remains adaptable as the legal landscape evolves. The following steps represent a practical framework that works regardless of how the federal-state preemption battle resolves.

    Compliance Framework for Nonprofits

    1. Conduct an AI inventory

    Catalog every AI tool in use across the organization, including off-the-shelf software with embedded AI features such as HR platforms with automated resume screening, CRM tools with predictive analytics, or donor management systems with AI-assisted segmentation. Document the purpose, data inputs, affected populations, and how each tool influences decisions. This inventory is the foundation for all other compliance work.
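As a sketch of what an inventory record might capture, the structure below models the fields described above. The field names and the two example tools are illustrative assumptions, not requirements from any statute:

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in a nonprofit's AI inventory (illustrative fields only)."""
    name: str                        # the tool or embedded AI feature
    purpose: str                     # what it is used for
    data_inputs: list[str]           # categories of data fed into it
    affected_populations: list[str]  # whose outcomes its outputs touch
    influences_decisions: bool       # does it make/shape consequential decisions?
    decision_categories: list[str] = field(default_factory=list)  # e.g., "employment"

# Example entries -- hypothetical tools, shown only to illustrate the structure
inventory = [
    AIToolRecord(
        name="HR platform resume screener",
        purpose="Rank incoming job applications",
        data_inputs=["resumes", "application forms"],
        affected_populations=["job applicants"],
        influences_decisions=True,
        decision_categories=["employment"],
    ),
    AIToolRecord(
        name="Meeting transcription assistant",
        purpose="Transcribe and summarize staff meetings",
        data_inputs=["meeting audio"],
        affected_populations=["staff"],
        influences_decisions=False,
    ),
]

# The decision-influencing subset is where assessments and disclosures concentrate
flagged = [t.name for t in inventory if t.influences_decisions]
print(flagged)  # → ['HR platform resume screener']
```

Keeping the inventory in a structured form, even a spreadsheet with these same columns, makes the later steps (risk classification, impact assessments, vendor review) a matter of filtering rather than rediscovery.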

    2. Assess risk levels

    Using the Colorado and EU AI Act frameworks as guides, classify each AI use by risk level. The primary question: does this AI system make or substantially influence consequential decisions about specific individuals, particularly in categories like employment, housing, healthcare, education, or access to services? High-risk uses require more rigorous governance.
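The triage logic described above can be sketched as a small function. The category list paraphrases the consequential-decision areas this article attributes to Colorado's law and the EU AI Act's Annex III; it is an illustration of the reasoning, not a restatement of either statute:

```python
# Illustrative risk triage; the category list paraphrases this article's
# summary of Colorado SB 24-205 and EU AI Act Annex III, not the statutes.
HIGH_RISK_CATEGORIES = {
    "employment", "housing", "lending", "education",
    "healthcare", "legal services", "insurance", "essential services",
}

def risk_tier(influences_consequential_decision: bool,
              decision_categories: set[str]) -> str:
    """Rough triage: which governance track does an AI use fall into?"""
    if influences_consequential_decision and decision_categories & HIGH_RISK_CATEGORIES:
        return "high"    # impact assessments, disclosures, rigorous oversight
    if influences_consequential_decision:
        return "medium"  # disclosure and human-review best practices
    return "low"         # general AI policy and data-handling rules suffice

print(risk_tier(True, {"employment"}))   # resume screening → "high"
print(risk_tier(True, {"fundraising"}))  # donor appeal targeting → "medium"
print(risk_tier(False, set()))           # meeting transcription → "low"
```

The point of the sketch is the shape of the question, not the code: the decisive inputs are whether a tool influences consequential decisions about individuals and whether those decisions fall into a legally sensitive category.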

    3. Develop and publish an AI policy

    A written AI policy should cover: permitted and prohibited uses of AI, data handling rules (what may and may not be shared with AI tools), disclosure obligations when AI influences decisions affecting clients, human oversight requirements, and consequences for non-compliance. Free resources for nonprofit AI policy development include Fast Forward's Nonprofit AI Policy Builder and NTEN's templates. This policy also supports the board governance expectations that funders and regulators increasingly hold.

    4. Implement the NIST AI RMF proportionally

    Even a simplified version of the NIST Govern-Map-Measure-Manage framework provides affirmative defenses under Colorado and Texas law and aligns with global standards. This does not require a large investment: document your governance structure, identify your AI use cases and risks, monitor for issues, and maintain that documentation. Scale the rigor to the risk level of your specific AI deployments.

    5. Review AI vendor contracts

    AI vendors are increasingly subject to their own regulatory obligations as "developers" under state AI laws. But deployers, including nonprofits, retain independent duties. Review vendor agreements for: compliance representations, data retention policies, indemnification provisions, audit rights, and incident notification obligations. Ask vendors whether they comply with applicable state AI laws.

    6. Establish human oversight mechanisms

    State AI laws and the EU AI Act alike require that humans remain accountable for consequential decisions, even when AI supports those decisions. Document who is responsible for reviewing AI-influenced decisions affecting clients, employees, or beneficiaries. Create procedures for how those reviews happen and how concerns are escalated. This protects both the individuals affected and your organization.

    7. Train staff on AI governance

    Policy documents only work if staff understand and follow them. Ensure all staff using AI tools understand the organization's policy, know what types of data cannot be shared with AI tools, understand when AI decisions must be reviewed by a human, and know how to report concerns. As organizations build AI literacy across their teams, integrating compliance awareness into that training reduces risk effectively.

    8. Monitor the regulatory landscape actively

    The Commerce Department's evaluation of state AI laws is due in March 2026. DOJ litigation against state laws will unfold over coming months and years. Congressional action on federal AI legislation remains possible. Organizations should designate someone to monitor regulatory developments, subscribe to relevant legal alerts, and review compliance posture at least annually against the current landscape.

    Planning for a Landscape That Will Keep Changing

    The federal-state preemption battle over AI regulation will not resolve quickly. Courts will take years to adjudicate the DOJ's legal theories. Congress may or may not pass a comprehensive federal AI law. In the meantime, state laws keep taking effect, each with its own obligations, enforcement mechanisms, and compliance timelines. This is not a landscape where nonprofits can wait for clarity before acting.

    The most protective strategy available is one that builds durable compliance infrastructure rather than jurisdiction-specific responses. Organizations that implement the NIST AI RMF, develop clear internal policies, conduct AI inventories, establish human oversight mechanisms, and train staff will be well-positioned regardless of how the preemption battle resolves. This approach satisfies the most demanding current state requirements, provides legal safe harbors in multiple jurisdictions, and aligns with the direction of global regulation.

    Nonprofit organizations serve some of the populations these regulations are most designed to protect: people in vulnerable circumstances whose access to housing, healthcare, employment, and social services can be profoundly affected by algorithmic decisions. That alignment between regulatory purpose and organizational mission creates an opportunity to lead on responsible AI adoption rather than merely comply. Organizations that develop genuine AI governance, not just compliance paperwork, will build the trust with clients, donors, and funders that becomes increasingly valuable as AI's role in service delivery grows.

    For organizations considering how to build broader AI strategy alongside regulatory compliance, the two efforts reinforce each other. Good AI governance requires understanding what AI you are using and why, which is also the foundation for strategic AI adoption. Organizations that invest in compliance infrastructure are simultaneously building the organizational knowledge that makes intentional, mission-aligned AI deployment possible.

    Need Help Navigating AI Compliance?

    Our team helps nonprofits develop AI governance frameworks, build internal policies, and make sense of the regulatory landscape as it evolves. We work with organizations at every stage of AI adoption.