
    New York's AI Consumer Protection Act: What Nonprofits Should Prepare For

    New York is rapidly assembling one of the most aggressive AI regulatory frameworks in the United States. From the RAISE Act signed into law in December 2025 to the proposed NY AI Act (S1169) targeting algorithmic discrimination, the state is building a multi-layered system of oversight that will reshape how organizations, including nonprofits, deploy artificial intelligence. This guide breaks down each piece of legislation, explains which provisions affect nonprofit operations directly, and provides a practical preparation roadmap so your organization is not caught off guard.

    Published: March 20, 2026 · 15 min read · AI Governance & Compliance

    New York has long been a bellwether for consumer protection regulation in the United States, and artificial intelligence is no exception. Over the past eighteen months, the state has moved on multiple fronts simultaneously: signing the RAISE Act into law, expanding the FAIR Act to cover nonprofit organizations, advancing the NY AI Act (S1169) through legislative committees, and proposing new protections for minors interacting with AI chatbots. The cumulative effect is a regulatory environment that demands attention from every nonprofit operating within state lines.

    For nonprofits already tracking Colorado's AI Act and California's AI transparency requirements, New York's approach adds another dimension. While Colorado focused on deployer obligations and algorithmic discrimination in a single comprehensive statute, New York is building its framework through multiple, overlapping pieces of legislation. This means nonprofits cannot simply track one bill. They need to understand how these laws interact, which provisions are already in effect, and which are likely to pass in the current session.

    The stakes are particularly high for New York-based nonprofits because of the state's sheer concentration of social service organizations, healthcare providers, educational institutions, and advocacy groups. Many of these organizations have adopted AI tools for client intake, program eligibility screening, donor analysis, and communications. Each of these use cases falls under different regulatory provisions depending on whether the AI makes consequential decisions, processes personal data, or interacts directly with consumers.

    This article walks through each major piece of New York's AI regulatory landscape, explains which provisions matter most for nonprofits, and provides concrete steps your organization can take right now. Whether you are a small community organization with a handful of AI tools or a large multi-program nonprofit with enterprise-level deployments, the time to prepare is now, not when enforcement begins.

    The RAISE Act: What the Federal-Style Frontier Model Law Means for Nonprofits

    The Responsible AI Safety and Education Act (RAISE Act) was signed by Governor Hochul on December 19, 2025, and took effect on March 19, 2026. It is the first law of its kind at the state level, targeting the developers of "frontier" AI models, defined as systems trained with computational resources exceeding 10^26 floating-point operations (FLOPs) and developed by companies with more than $500 million in annual revenue. In practical terms, this covers the largest AI labs in the world: OpenAI, Google DeepMind, Anthropic, Meta AI, and a handful of others.

    For most nonprofits, the RAISE Act does not create direct compliance obligations. You are a user of these frontier models, not a developer. However, the law matters to your organization for several important reasons. First, the RAISE Act requires frontier model developers to publish safety documentation, including information about known risks, testing results, and the types of outputs the model can produce. This documentation becomes a valuable resource when you are conducting vendor due diligence. If your nonprofit uses GPT-4, Claude, or Gemini through an API or a third-party platform, you can now point to mandatory safety disclosures when evaluating whether the tool is appropriate for your use case.

    Second, the RAISE Act establishes the Empire AI consortium as a formal coordinating body for AI safety research and public sector AI deployment. The consortium, which has received expanded state funding, is specifically tasked with developing resources for government agencies and nonprofit organizations. This means that over the coming months, New York nonprofits can expect to see state-supported guidance documents, training materials, and potentially grant opportunities tied to responsible AI adoption.

    Third, the RAISE Act's safety documentation requirements set a precedent that is likely to extend downstream. If the NY AI Act (S1169) passes, deployers of AI systems, including nonprofits, will need to demonstrate that they evaluated the safety characteristics of their AI tools before deployment. The RAISE Act creates the information infrastructure that makes those deployer evaluations possible. In this sense, the law is laying the groundwork for obligations that will eventually reach your organization directly.

    What the RAISE Act Requires

    Key provisions targeting frontier model developers

    • Mandatory safety documentation for models exceeding 10^26 FLOPs
    • Public disclosure of known risks, testing methodology, and output limitations
    • Revenue threshold of $500M+ ensures only the largest developers are covered
    • Empire AI consortium expansion for public sector and nonprofit AI guidance

    Why It Matters for Nonprofits

    Indirect but significant implications

    • Safety documentation creates a baseline for your vendor due diligence
    • Empire AI resources may include nonprofit-specific implementation guidance
    • Sets precedent for deployer-level obligations in future legislation
    • Strengthens your ability to demand transparency from AI vendors

    The NY AI Act (S1169): Algorithmic Discrimination and the Deployer Problem

    While the RAISE Act targets the developers who build frontier AI models, the proposed NY AI Act (S1169) takes aim at the organizations that deploy AI systems to make decisions about people. This is the bill that nonprofits need to watch most closely. If enacted, it would create obligations that are similar in scope to Colorado's AI Act but with several provisions that go further, including independent audit requirements, a five-day advance notice rule for consumers, and a private right of action that would allow individuals to sue deployers directly.

    The bill defines "high-risk AI systems" as those that are used to make or substantially assist in making "consequential decisions." These decisions cover familiar territory: employment, education, healthcare, housing, financial services, insurance, and legal services. If your nonprofit uses AI-powered tools to screen job applicants, determine client eligibility for programs, prioritize service delivery, or assess risk in any of these domains, you would be classified as a deployer under this bill. The nonprofit status of your organization provides no exemption.

    One of the most significant provisions in S1169 is the requirement for independent algorithmic audits. Unlike Colorado's approach, which relies primarily on self-assessment through impact assessments, the NY AI Act would require deployers of high-risk AI systems to obtain independent, third-party audits examining whether the system produces discriminatory outcomes. For nonprofits with limited budgets, the cost of these audits could be substantial. However, the bill's sponsors have indicated that they are exploring scaled requirements based on organization size and a possible state-funded audit assistance program.

    The five-day consumer notice provision is another element that would change nonprofit operations. Under S1169, before an organization makes an AI-assisted consequential decision about a consumer, it must provide the consumer with at least five business days' notice that AI is being used in the decision-making process. The notice must include a description of the AI system, the data it processes, and the consumer's right to opt out or request human review. For a nonprofit running a high-volume client intake process, implementing this notification requirement would require significant changes to existing workflows.
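    To make the five-day window concrete, the sketch below computes the earliest date an AI-assisted decision could be finalized after notice is sent. This is a minimal illustration, not an implementation of the statute: the function names are invented, "business days" here excludes only weekends, and a production version would need to consult an official holiday calendar and the bill's final definitions.

```python
from datetime import date, timedelta

def add_business_days(start: date, days: int) -> date:
    """Return the date `days` business days after `start`, skipping weekends.

    State holidays are not handled here; a real implementation would
    consult an official holiday calendar.
    """
    current = start
    remaining = days
    while remaining > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return current

def earliest_decision_date(notice_sent: date, notice_days: int = 5) -> date:
    """Earliest date a consequential AI-assisted decision may be finalized,
    assuming a five-business-day advance notice rule like the one in S1169."""
    return add_business_days(notice_sent, notice_days)

# Notice emailed Wednesday, April 1, 2026 -> decision no earlier than
# Wednesday, April 8, 2026 (the intervening weekend does not count).
print(earliest_decision_date(date(2026, 4, 1)))  # 2026-04-08
```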

    Perhaps the most impactful provision for nonprofits is the private right of action. While Colorado's AI Act is enforced exclusively by the state Attorney General, S1169 would allow individuals who believe they were harmed by algorithmic discrimination to bring lawsuits directly against deployers. This creates a different risk profile entirely. Instead of worrying about regulatory enforcement, which tends to be resource-constrained and focused on the largest offenders, nonprofits would face the possibility of individual lawsuits from the people they serve. If you are familiar with the growing landscape of AI litigation risk for nonprofits, this provision significantly raises the stakes.

    Key Risk: Private Right of Action

    Unlike Colorado's AI Act, which relies on Attorney General enforcement, the NY AI Act (S1169) would allow individuals to sue deployers directly for algorithmic discrimination. This means a client who believes your AI-assisted eligibility screening discriminated against them could file a lawsuit against your nonprofit. Even if the claim is ultimately unsuccessful, the legal costs of defending against it could be significant. Nonprofits should begin evaluating their AI-assisted decision-making processes now and ensuring that human oversight mechanisms are robust and documented.

    The FAIR Act Expansion: Consumer Protections Now Apply to Nonprofits

    While the NY AI Act remains proposed legislation, the expansion of New York's FAIR Act is already in effect. As of February 17, 2026, the state's consumer protection framework, which previously applied primarily to for-profit businesses, has been expanded to include nonprofit organizations. This is not AI-specific legislation, but it has direct implications for how nonprofits use AI tools that interact with consumers, clients, and constituents.

    The FAIR Act expansion means that nonprofits are now subject to the same consumer protection standards as for-profit entities when it comes to deceptive practices, unfair treatment, and data handling. In the context of AI, this creates several immediate obligations. If your nonprofit uses an AI chatbot on its website to interact with potential clients or donors, that chatbot must not make misleading claims about your services, programs, or the nature of the interaction itself. And if the responses come from an AI system rather than a human, the interaction must disclose that fact clearly.

    The FAIR Act expansion also intersects with AI-driven communications. Nonprofits that use AI to generate fundraising emails, personalize donor outreach, or create targeted campaign messages need to ensure that these communications do not contain misleading or deceptive content. While AI hallucinations are a known technical limitation, from a legal perspective, sending a donor an AI-generated email that makes factually incorrect claims about your organization's impact could now constitute a violation of consumer protection law.

    For nonprofits that have been operating under the assumption that consumer protection laws apply only to commercial transactions, this is a significant shift. The FAIR Act expansion recognizes that the relationship between a nonprofit and its clients, donors, and service recipients is functionally similar to a consumer relationship in terms of the power dynamics and the potential for harm. If your organization has not yet reviewed its AI-powered communications and client-facing tools through this lens, now is the time.

    Client Interactions

    AI chatbots and virtual assistants on your website must clearly identify themselves as automated systems. Any information they provide about services, eligibility, or programs must be accurate and not misleading.

    Fundraising Communications

    AI-generated donor outreach, fundraising appeals, and impact reports must be factually accurate. AI hallucinations in fundraising communications could constitute deceptive practices under the expanded FAIR Act.

    Data Handling

    How your AI tools process, store, and use client and donor data is now subject to consumer protection oversight. Nonprofits must ensure their AI vendors handle data in compliance with the same standards applied to commercial entities.

    Emerging Legislation: AI Chatbot Restrictions and Companion System Protections

    Beyond the RAISE Act, the NY AI Act, and the FAIR Act expansion, New York legislators have introduced several additional bills that nonprofits should track. The AI Chatbot Ban for Minors legislation, which recently passed committee, would restrict AI chatbot interactions with users under 18 without parental consent. For youth-serving nonprofits, this could mean significant changes to how you deploy AI-powered tools on your website or in your programs.

    If your nonprofit operates after-school programs, youth mentorship services, or educational platforms that use AI chatbots for tutoring, information delivery, or engagement, you would need to implement age verification mechanisms and obtain parental consent before minors interact with these systems. The bill does not distinguish between general-purpose chatbots and purpose-built educational tools, meaning that even a well-intentioned AI tutor on your program's website could fall under the restriction.
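    What such an age gate might look like in code is sketched below: chatbot sessions are blocked for users under 18 unless parental consent is already on file. The User fields and the may_start_chat_session helper are hypothetical, and the bill would almost certainly also dictate how age is verified and how consent is collected and stored, so treat this as a shape for the control, not a compliant implementation.

```python
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    age: int                # must come from verified registration data,
                            # not self-report, if the bill passes as described
    parental_consent: bool  # recorded consent on file for minors

def may_start_chat_session(user: User, minimum_age: int = 18) -> bool:
    """Gate chatbot access for minors, modeled on the proposed NY restriction."""
    if user.age >= minimum_age:
        return True
    return user.parental_consent

# A 16-year-old without consent on file is routed to a human or to a
# consent-collection flow instead of the AI tutor.
student = User(user_id="u-102", age=16, parental_consent=False)
assert may_start_chat_session(student) is False
```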

    New York has also introduced protections around "AI companion systems," which are AI products designed to form ongoing, relationship-like interactions with users. While most nonprofits are unlikely to deploy dedicated AI companion systems, the definitions in the proposed legislation are broad enough to potentially cover AI-powered peer support tools, mental health check-in bots, or ongoing AI-facilitated mentorship interactions. Organizations providing behavioral health or social-emotional support through AI-mediated tools should monitor this legislation closely.

    The common thread across all of these proposals is New York's commitment to regulating AI based on the potential for consumer harm, regardless of the deployer's intent or organizational structure. Nonprofits are not being singled out, but they are not being exempted either. The state views the relationship between a nonprofit and the people it serves as one that carries the same potential for AI-related harm as any commercial interaction.

    How New York Fits Into the National AI Regulatory Landscape

    New York is not operating in isolation. The state's AI regulatory push is part of a broader national trend that includes Colorado's AI Act (SB 24-205), California's AI transparency requirements, and Texas's TRAIGA framework. For nonprofits that operate across state lines, understanding how these laws compare and where they overlap is critical for building a compliance strategy that works nationally rather than state by state.

    Colorado's approach relies on deployer self-assessment through impact assessments and risk management, with enforcement through the Attorney General. California has focused primarily on transparency and disclosure, requiring organizations to inform consumers when they are interacting with AI. Texas has adopted a principles-based framework through TRAIGA that encourages voluntary compliance. New York, by contrast, is pursuing the most aggressive combination: mandatory independent audits, advance consumer notification, AG enforcement, and a private right of action. If the NY AI Act passes as written, New York would have the strongest AI deployer obligations of any state.

    For multi-state nonprofits, this creates a compliance challenge. The most practical approach is to build your AI governance framework around the strictest requirements you face and then scale back for states with lighter obligations. If you prepare to meet New York's potential requirements, including independent audits, five-day consumer notices, and documented human oversight processes, you will almost certainly satisfy the requirements in Colorado, California, and Texas as well. The NIST AI Risk Management Framework provides a useful foundation for building this kind of scalable governance structure.

    What Makes New York Different

    • Independent third-party audits (vs. self-assessment in Colorado)
    • Five-day advance consumer notification before AI-assisted decisions
    • Private right of action allowing individuals to sue deployers directly
    • AG enforcement combined with private litigation creates dual exposure

    Cross-State Compliance Strategy

    • Build governance around the strictest state requirements
    • Use NIST AI RMF as a scalable foundation across jurisdictions
    • Document compliance efforts so they satisfy multiple state laws
    • Monitor legislative sessions quarterly for amendments and new bills

    Building Vendor Accountability and Due Diligence Processes

    One of the most practical steps any nonprofit can take right now, regardless of whether S1169 passes, is to strengthen its vendor due diligence processes for AI tools. Under both the existing FAIR Act expansion and the proposed NY AI Act, nonprofits bear responsibility for how their AI tools perform, even when those tools are built and maintained by third-party vendors. The legal concept is straightforward: you chose to deploy the tool, so you are accountable for its outputs.

    This does not mean you need to audit your vendor's code or understand the mathematical details of their algorithms. It means you need to ask the right questions, document the answers, and make informed decisions about which tools to use and how to use them. The RAISE Act's safety documentation requirements for frontier model developers give you new leverage in these conversations. You can now ask your vendor whether the underlying AI model has published safety documentation, what risks were identified, and how those risks are mitigated in the vendor's product.

    Effective vendor due diligence for AI tools goes beyond the initial procurement decision. It requires ongoing monitoring and periodic reassessment. AI models are updated frequently, and a tool that performed well when you first adopted it may behave differently after a major model update. Your vendor agreements should include provisions for notification when underlying models change, access to performance data, and the ability to opt out of updates that have not been tested against your specific use case. For a broader framework on evaluating AI tools, see our guide on building an AI ethics checklist for nonprofits. The checklist below gathers the key questions, and a short machine-readable sketch of it follows.

    Vendor Due Diligence Checklist for AI Tools

    Questions to ask before deploying and during ongoing use

    Before Deployment

    • Does the vendor provide bias testing results or algorithmic impact assessments?
    • What training data was used, and was it tested for representational bias?
    • Does the underlying model have published RAISE Act safety documentation?
    • What human override mechanisms are built into the product?

    During Ongoing Use

    • Does the vendor notify you before making significant model changes?
    • Can you access performance metrics and outcome data for your specific deployment?
    • Does your contract include a right to independent audit of the system?
    • Is there a clear data retention and deletion policy for your organization's data?
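    One low-effort way to keep this due diligence audit-ready is to capture each vendor review as a structured record rather than scattered notes. The sketch below is one possible shape for that record; the field names mirror the checklist above and are illustrative rather than drawn from any statute.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VendorAIReview:
    """One vendor's answers to the due diligence checklist, kept on file."""
    vendor: str
    product: str
    review_date: date
    bias_testing_provided: bool
    training_data_bias_tested: bool
    raise_act_safety_docs: bool
    human_override_available: bool
    model_change_notification: bool
    deployment_metrics_access: bool
    audit_right_in_contract: bool
    data_retention_policy: bool
    notes: str = ""

    def open_gaps(self) -> list[str]:
        """Checklist items this vendor has not yet satisfied -- a ready-made
        agenda for the next contract renewal conversation."""
        checks = {
            "bias testing results": self.bias_testing_provided,
            "training data bias testing": self.training_data_bias_tested,
            "RAISE Act safety documentation": self.raise_act_safety_docs,
            "human override mechanism": self.human_override_available,
            "model change notification": self.model_change_notification,
            "deployment metrics access": self.deployment_metrics_access,
            "contractual audit right": self.audit_right_in_contract,
            "documented data retention policy": self.data_retention_policy,
        }
        return [item for item, satisfied in checks.items() if not satisfied]
```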

    Practical Preparation Steps for Nonprofits

    Whether S1169 passes in its current form, a revised version, or not at all this session, the direction of New York's AI regulation is clear. Nonprofits that begin preparing now will be in a strong position regardless of the final legislative outcome. The steps below are designed to be proportional to the level of risk your AI use creates. A nonprofit that uses AI only for internal communications has different obligations than one that uses AI to determine client eligibility for housing services. Start by understanding your risk profile, then implement protections that match.

    1. Conduct an AI Inventory and Risk Classification

    Start by cataloging every AI tool your organization uses, including tools embedded in existing software that you may not think of as "AI." CRM systems with predictive donor scoring, email platforms with AI-generated subject lines, HR software with resume screening, and program management tools with automated eligibility checks all count. For each tool, document what data it processes, what decisions it influences, and whether those decisions could be classified as "consequential" under S1169's definitions. A minimal sketch of an inventory record follows the list below.

    • List all AI-powered tools across every department and program
    • Classify each tool as high-risk, medium-risk, or low-risk based on decision impact
    • Identify which tools interact with protected-class data or make eligibility determinations
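    Here is a minimal sketch of what one inventory record and a first-pass triage rule could look like, assuming S1169's consequential-decision framing. The field names and the classification rule are illustrative; your legal counsel should define what counts as consequential for your programs.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    HIGH = "high"      # makes or substantially assists consequential decisions
    MEDIUM = "medium"  # influences staff decisions about people
    LOW = "low"        # internal productivity use only

@dataclass
class AITool:
    name: str
    department: str
    data_processed: list[str]   # e.g. ["income", "housing history"]
    decision_influenced: str    # what the output feeds into; "" if none
    consequential: bool         # would S1169 treat the decision as consequential?
    protected_class_data: bool  # does it touch protected-class attributes?

def classify(tool: AITool) -> Risk:
    """First-pass triage; a human reviewer should confirm every HIGH rating."""
    if tool.consequential or tool.protected_class_data:
        return Risk.HIGH
    if tool.decision_influenced:
        return Risk.MEDIUM
    return Risk.LOW

screener = AITool(
    name="Eligibility pre-screen",
    department="Housing Services",
    data_processed=["income", "household size", "housing history"],
    decision_influenced="program eligibility determination",
    consequential=True,
    protected_class_data=True,
)
print(classify(screener).value)  # "high"
```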

    2. Update Your AI Policy to Reflect New York Requirements

    If your organization already has an AI use policy, review it against the specific requirements in the FAIR Act expansion and the anticipated provisions of S1169. If you do not have one yet, this is the time to create it. Your policy should address consumer notification practices, human oversight requirements for consequential decisions, data handling standards for AI tools, and incident response procedures for when an AI tool produces a harmful or discriminatory output. For guidance on building a comprehensive policy, see our article on updating your AI policy for 2026. A sketch of a machine-readable policy table follows the list below.

    • Define which AI uses require human review before final decisions
    • Establish procedures for notifying clients when AI is used in decisions affecting them
    • Create an incident response plan for AI errors, bias findings, or client complaints
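    One way to keep such a policy enforceable rather than aspirational is to encode it as a lookup table that internal tools and staff workflows consult. The table below is entirely hypothetical: the use-case names and control flags are placeholders, and the five-day notice value tracks the proposed S1169 window, not an enacted requirement.

```python
# Hypothetical policy table; the control names mirror the bullets above.
AI_USE_POLICY: dict[str, dict] = {
    "client eligibility screening": {
        "human_review_required": True,
        "client_notice_required": True,
        "notice_business_days": 5,   # tracks the proposed S1169 window
        "appeal_mechanism": True,
    },
    "donor email drafting": {
        "human_review_required": True,   # factual accuracy check before sending
        "client_notice_required": False,
        "appeal_mechanism": False,
    },
    "internal meeting summaries": {
        "human_review_required": False,
        "client_notice_required": False,
        "appeal_mechanism": False,
    },
}

def controls_for(use_case: str) -> dict:
    """Fail closed: an unlisted use gets the strictest controls until the
    policy owner reviews and classifies it."""
    strictest = {
        "human_review_required": True,
        "client_notice_required": True,
        "notice_business_days": 5,
        "appeal_mechanism": True,
    }
    return AI_USE_POLICY.get(use_case, strictest)
```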

    3. Implement Human Oversight for High-Risk Decisions

    For any AI tool that makes or substantially contributes to consequential decisions, ensure that a qualified human reviews the AI's output before the decision is finalized. "Qualified" means the person understands the AI tool's purpose, its limitations, and the criteria for overriding its recommendation. Simply having a human rubber-stamp the AI's output does not satisfy this requirement. Document who reviews each type of decision, what training they have received, and how often they override the AI's recommendation. A sketch of an override-tracking record follows the list below.

    • Train staff on how to critically evaluate AI recommendations
    • Track and document override rates to demonstrate meaningful human involvement
    • Ensure appeal mechanisms exist for clients who disagree with AI-influenced decisions
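    Documenting meaningful review is easier when every decision is recorded in a form that lets you compute override rates on demand. The record below is a minimal sketch with invented field names; a real system would also capture the reviewer's rationale for each override.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ReviewRecord:
    decision_id: str
    reviewer: str
    ai_recommendation: str
    final_decision: str
    reviewed_at: datetime

    @property
    def overridden(self) -> bool:
        # True when the human changed the AI's recommendation
        return self.final_decision != self.ai_recommendation

def override_rate(records: list[ReviewRecord]) -> float:
    """Share of AI recommendations the human reviewer changed. A rate near
    zero over a long period can look like rubber-stamping, which is exactly
    what a regulator or plaintiff would probe."""
    if not records:
        return 0.0
    return sum(r.overridden for r in records) / len(records)
```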

    4. Prepare for Independent Audit Requirements

    Even if S1169 does not pass immediately, the trend toward independent algorithmic audits is accelerating. Begin documenting your AI system's inputs, outputs, and decision patterns now so that you have the data necessary for an audit when the requirement arrives. Work with your AI vendors to understand what audit-related data they can provide, and include audit cooperation provisions in your next contract renewal. A sketch of a decision log entry follows the list below.

    • Begin logging AI decision inputs and outputs for high-risk systems
    • Identify potential third-party auditors and understand their cost structures
    • Negotiate audit cooperation clauses into vendor contracts
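    A simple append-only log is often enough to start building that audit trail. The sketch below writes one JSON Lines record per AI-assisted decision; the field set is a guess at what an auditor might want, not a prescribed schema, and the inputs should be redacted to match your data retention policy.

```python
import json
from datetime import datetime, timezone

def log_ai_decision(log_path: str, *, system: str, inputs: dict,
                    ai_output: str, reviewer: str, final_decision: str) -> None:
    """Append one AI-assisted decision to an append-only JSON Lines log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "inputs": inputs,           # apply field-level redaction before logging
        "ai_output": ai_output,
        "reviewer": reviewer,
        "final_decision": final_decision,
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example entry for a hypothetical eligibility screener.
log_ai_decision(
    "decisions.jsonl",
    system="eligibility-screener-v2",
    inputs={"household_size": 4, "income_band": "B"},
    ai_output="eligible",
    reviewer="case.manager@example.org",
    final_decision="eligible",
)
```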

    Preparing for a Multi-Layered Regulatory Future

    New York's approach to AI regulation reflects a broader reality: AI governance is not going to arrive as a single, comprehensive law. It is going to arrive as overlapping layers of legislation, each targeting different aspects of AI development, deployment, and consumer interaction. The RAISE Act addresses the developers of the most powerful models. The NY AI Act (S1169) targets the organizations that deploy AI for consequential decisions. The FAIR Act expansion extends consumer protections to cover nonprofit-constituent relationships. And additional bills address specific populations and interaction types.

    For nonprofits, the key takeaway is not panic, but preparation. The organizations that will navigate this regulatory landscape most successfully are those that invest in understanding their AI systems now, build governance structures that can adapt as new requirements emerge, and cultivate vendor relationships that support transparency and accountability. None of this requires massive budgets or technical expertise. It requires intentionality, documentation, and a willingness to treat AI governance as an ongoing operational function rather than a one-time compliance project.

    The regulatory environment is moving quickly, and New York is leading the way. Whether your nonprofit operates exclusively in New York or serves clients across multiple states, the standards being set here will influence AI governance expectations nationally. By preparing now, you are not just protecting your organization from regulatory risk. You are building the kind of responsible AI practices that strengthen your mission, protect the people you serve, and position your organization as a trusted leader in your community.

    Ready to Build Your AI Compliance Strategy?

    New York's multi-layered AI regulations require proactive preparation. We help nonprofits build governance frameworks that satisfy current requirements and adapt to emerging legislation across every state where you operate.