Updating Your AI Policy for 2026: New Requirements Every Nonprofit Should Address
AI governance moved from voluntary best practice to legal requirement in 2026. If your AI policy was written in 2024 or earlier, it almost certainly has gaps that now create real compliance risk. This guide identifies what needs to change and why.

When most nonprofits wrote their first AI policies in 2023 or 2024, the regulatory environment was almost entirely voluntary. Governance frameworks from NIST and the White House offered guidance. Sector organizations shared templates. Board members asked pointed questions. But the legal stakes were modest: no mandatory disclosures, no impact assessments, no civil penalties for getting it wrong.
That changed in 2026. Multiple state AI laws are now in effect or taking effect this year, the EU AI Act's most consequential provisions become fully applicable in August, and the gap between what early AI policies addressed and what compliance now requires has grown into something that demands active attention. The organizations that will be exposed are those treating their 2024 governance document as a finished product rather than a living framework.
The governance gap is significant. According to the 2026 Nonprofit AI Adoption Report from Virtuous and Fundraising.AI, 92% of nonprofits now use AI in some capacity. Yet nearly half (47%) operate with no formal AI governance policy at all. Of those that do have policies, many were written when the tool in question was ChatGPT used for a newsletter draft, not when AI was embedded in their CRM, influencing grant prospect rankings, or routing service applicants based on predictive scores. The scope of what those policies need to cover has grown substantially.
This article walks through what has changed in the regulatory landscape, identifies the twelve most common gaps in existing nonprofit AI policies, and provides a structural framework for updating your governance documentation to reflect the current reality. It is not legal advice, and organizations in regulated industries or states with specific AI laws should consult legal counsel. But it offers a clear starting point for the policy review conversation that most nonprofits need to have this year.
What Changed: The 2026 Regulatory Landscape
Understanding which laws apply to your organization requires knowing where you operate and what your AI tools are used for, not just what state you are incorporated in. Several distinct regulatory frameworks are now live or activating, and they interact in ways that make a simple checklist insufficient. Here is what nonprofit leaders need to understand about the landscape that now surrounds their AI use.
Colorado AI Act: Effective June 30, 2026
The first comprehensive U.S. state AI statute targeting deployers
Colorado's AI Act (SB 24-205) is the most consequential U.S. state law for nonprofits because it targets deployers (the organizations using AI systems), not just the companies building them. If your organization uses AI that plays a "substantial role" in decisions about education enrollment, employment, financial services, healthcare, housing, insurance, or legal services, you are a covered deployer regardless of your sector or size.
The practical implications: you must conduct annual impact assessments of high-risk AI systems, implement a risk management program aligned with NIST or ISO/IEC 42001, provide consumer disclosures when AI influences consequential decisions, and use "reasonable care to prevent algorithmic discrimination." Violations are treated as consumer protection violations with civil penalties up to $20,000 per violation.
Many nonprofits are surprised to discover they qualify as covered deployers. If your organization uses AI to rank grant applicants, screen program participants, assess housing eligibility, or make any similar consequential determination, the law applies. The question is not whether AI makes the final decision, but whether it plays a substantial role in shaping it.
Illinois and New York City: Employment AI Obligations
Notice and disclosure requirements for AI used in hiring and employment decisions
Illinois (HB 3773, effective January 1, 2026) prohibits using AI in employment decisions in ways that result in bias against protected classes under the Illinois Human Rights Act. It applies to employers with at least one Illinois employee for 20 or more calendar weeks, a threshold most nonprofits with Illinois operations meet easily. If you use any AI tool for recruiting, screening resumes, assessing candidates, or evaluating employees, disclosure obligations now apply.
New York City's Local Law 144 requires annual independent bias audits of automated employment decision tools and pre-use notices to candidates. Candidates must receive notice at least 10 business days before an automated tool is used in their evaluation and must be offered an alternative selection process on request. For nonprofits using AI-powered applicant tracking systems, this creates concrete procedural requirements that need to be embedded in hiring workflows, not just mentioned in a policy document.
EU AI Act: Full Implementation August 2, 2026
Extraterritorial reach affects any nonprofit with EU operations or beneficiaries
The EU AI Act's remaining provisions, including those governing high-risk AI systems in education, employment, essential services, and healthcare, become fully applicable on August 2, 2026. The law's extraterritorial reach is broad: it applies whenever an AI system's outputs are used within the EU, regardless of where the organization deploying it is located.
For U.S. nonprofits, the relevant question is not whether you are based in Europe but whether your programs serve EU-based individuals, whether you have EU partners who use your AI-generated outputs, or whether your global operations create any intersection with EU regulatory jurisdiction. International nonprofits, organizations serving diaspora communities with EU family members, and any organization that processes data about EU individuals should assess their exposure and consult legal counsel before August.
California and the Federal Preemption Question
Mandatory disclosures alongside ongoing regulatory uncertainty
California's SB 53 (effective January 1, 2026) primarily regulates large frontier AI developers, but it creates disclosure standards that flow downstream to deployers. California AB 853 requires provenance labeling of AI-generated content, relevant for any nonprofit publishing AI-assisted reports, grant narratives, or communications in California. SB 243 requires disclosure when users interact with AI chatbots in commercial contexts, which may apply to nonprofits deploying AI intake or helpline tools.
The federal picture remains unsettled. The Trump administration's December 2025 executive order directed a task force to challenge state AI laws that conflict with a "minimally burdensome national policy framework." Compliance attorneys uniformly advise that existing state laws remain enforceable until courts rule otherwise. Organizations should comply with applicable state laws now rather than speculating about federal preemption outcomes that may take years to resolve.
The 12 Most Common Gaps in Existing Nonprofit AI Policies
Analysis of nonprofit AI policies from sector organizations including Whole Whale, the NC Center for Nonprofits, and Fast Forward reveals consistent patterns in what early policies addressed and what they missed. If your organization has an AI policy from 2023 or 2024, start your update process by checking whether it addresses each of the following.
1. Embedded AI in Existing Software
Early policies addressed standalone tools like ChatGPT. Most did not address AI features baked into existing software: Salesforce Einstein, Microsoft Copilot in Teams, Mailchimp's AI content suggestions, Canva's AI image generation. These embedded tools now represent the majority of AI use in most organizations, and policies that ignore them leave the most common use cases ungoverned.
2. No Risk Tiering
Policies that treat AI use as a single category, either permitted or prohibited, fail to distinguish between drafting a social media post and using predictive scoring to determine which applicants receive housing support. Colorado's law and other frameworks require treating high-risk uses differently, with additional oversight, documentation, and review requirements. If your policy does not categorize uses by risk level, this is the most important gap to close.
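To make the tiering concept concrete, here is a minimal sketch of how an organization might encode risk tiers as a lookup that staff or an intake form consult before adopting a new use case. The tier names, example uses, and oversight requirements below are hypothetical illustrations, not language drawn from any statute.

```python
# Illustrative risk-tiering lookup. Tiers, examples, and oversight
# requirements are hypothetical; adapt them to your own policy.
RISK_TIERS = {
    "low": {
        "examples": ["drafting social media posts", "summarizing public documents"],
        "oversight": "staff review before publication",
    },
    "medium": {
        "examples": ["donor segmentation", "grant prospect research"],
        "oversight": "supervisor approval of the tool; periodic output spot-checks",
    },
    "high": {
        "examples": ["ranking housing applicants", "screening job candidates"],
        "oversight": "documented human review of every decision; annual impact assessment",
    },
}

def oversight_for(use_case: str) -> str:
    """Return the oversight requirement for a named use case."""
    for tier in RISK_TIERS.values():
        if use_case in tier["examples"]:
            return tier["oversight"]
    # Unlisted uses default to the strictest tier until formally classified.
    return RISK_TIERS["high"]["oversight"]

print(oversight_for("ranking housing applicants"))
```

The design choice worth noting is the default: any use case not yet classified falls into the strictest tier, which mirrors the conservative posture regulators expect for consequential decisions.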
3. No Vendor Assessment Process
Many policies permitted AI use but included no process for evaluating new tools before adoption. As AI features proliferate across software categories, staff regularly encounter new AI capabilities without any organizational guidance on whether or how to use them. A vendor assessment checklist and approval process transforms AI adoption from an individual decision to an organizational one.
4. Values Without Operational Controls
Many early policies stated values like "fairness," "transparency," and "human oversight" without specifying what actions implement those values. A policy that says "we commit to using AI fairly" but does not define what fair use looks like, who checks for it, or what happens when it is not achieved provides no operational guidance and offers no compliance protection.
5. No Bias Monitoring Process
Acknowledging that AI can produce biased outputs is not the same as having a mechanism to detect or address bias in practice. Colorado's law requires deployers to use reasonable care to prevent algorithmic discrimination. Without a defined process for checking AI outputs for disparate impact by protected class or population group, "reasonable care" cannot be demonstrated even if the commitment is genuine.
6. Board-Level Accountability Gap
AI governance was treated as an IT or staff matter in most 2023-era policies. Legal and governance guidance in 2026 makes clear that AI risk is a board fiduciary responsibility. WilmerHale's January 2026 analysis specifically identifies board-level AI literacy and oversight as governance priorities. If your policy does not specify what the board is responsible for regarding AI, it is out of step with current expectations.
7. Static, One-Time Document
A policy written in 2023 or 2024 that has never been reviewed is almost certainly out of compliance with at least some current requirements. The pace of regulatory change, from no mandatory requirements to multiple state frameworks in under two years, makes AI policy a category that demands regular review. Six-month review cycles are now the standard recommendation across legal and governance guidance.
8. No Funder Disclosure Framework
As funders increasingly ask about AI use in grant applications and program reports, organizations without a clear disclosure framework face awkward, inconsistent responses. The Humanity AI initiative ($500 million in philanthropic investment) and major foundations are actively evaluating grantee AI governance as part of due diligence. A policy that does not address funder disclosure leaves a visible gap in organizational positioning.
9. Weak Data Protection Provisions
Most early policies mentioned data privacy generically. Updated policies need specific, operational prohibitions: no personally identifiable information in unapproved AI tools, anonymization requirements before using client examples in prompts, and vendor contract language restricting how organizational data can be used to train models. Generic privacy language does not translate into protection at the staff action level.
10. No Incident Response Provisions
The vast majority of early nonprofit AI policies had no provision for what to do when something goes wrong: a model hallucinates a significant error in a published report, a vendor experiences a data breach, AI-generated content produces a discriminatory output. Without an incident response protocol, organizations discover their response framework only at the moment they most need it.
11. Missing Employment AI Provisions
With Illinois, New York City, and other jurisdictions now requiring notice of AI use in hiring and employment decisions, policies that do not address employment AI create direct compliance exposure. If your organization uses any AI tool for recruiting, resume screening, or performance evaluation, specific language governing notification, documentation, and human review is now legally required in multiple jurisdictions.
12. No Documented Training Requirements
State laws increasingly require evidence that deployers have trained staff who oversee AI systems and who are empowered to override AI recommendations. A policy that states principles without specifying what training staff need, when they need it, and how completion is documented cannot demonstrate compliance even if training actually happens.
Key Policy Elements to Add or Strengthen
Beyond closing the gaps above, several specific policy elements have moved from optional best practices to compliance requirements or sector expectations in 2026. Each of the following sections addresses one area where existing policies typically need new or substantially revised language.
Transparency and Disclosure
Research from the ORR Group reveals that 83% of nonprofits believe they are transparent about their AI usage while only 38% of constituents, members, and partners agree. That gap represents a significant trust risk. Transparency policy needs to operate at two levels: internal (what staff must disclose when AI contributed to their work) and external (what the organization publicly communicates about how AI is used in operations and programs).
Disclosure Language to Adopt
- For AI-assisted content: "We developed this communication with AI assistance, which was then reviewed and edited by our team."
- For chatbot-based services: Disclose at the start of any interaction when users are communicating with an AI system rather than a human staff member.
- For employment processes: Written notice to candidates and employees when AI contributes to any employment decision, as required by Illinois law and NYC Local Law 144.
- For funders: A standard paragraph for grant applications describing how the organization uses AI responsibly in its operations and programs.
Vendor Management and Contract Requirements
A critical development in 2026 governance guidance: organizations cannot contract away their compliance obligations. If a vendor causes a regulatory violation, the deploying organization faces liability. This means vendor relationships require active governance, not just a signed agreement. Every AI vendor contract should now include specific clauses that legal guidance across multiple sources identifies as essential.
Required Vendor Contract Clauses
- Data ownership: all organizational data remains property of the nonprofit and cannot be used for any purpose other than the contracted service.
- Training data restrictions: vendor may not use organizational data to train, fine-tune, or improve any AI model, including models serving other clients.
- Bias testing disclosure: vendor must disclose what bias testing has been conducted on their models, with what results, and how frequently.
- Incident notification: vendor must notify the organization within 72 hours of any AI security incident, data breach, or significant model failure.
- Data deletion on termination: all organizational data must be returned and deleted within 30 days of contract termination, with written certification.
- Regulatory compliance commitment: vendor commits to compliance with applicable AI laws and provides 30-day notice of changes to AI systems that may affect compliance.
Human Oversight for Consequential Decisions
Multiple state laws, and good governance practice generally, require that final decisions affecting beneficiary services, employment, housing, and healthcare involve "meaningful human review" rather than full automation. Policies need to define what meaningful human review looks like in practice, not just state that it is required.
Meaningful review means the reviewer has access to the relevant information, understands what the AI recommended and why, has authority to override the recommendation, and documents their decision. A staff member rubber-stamping an AI output without access to the underlying reasoning does not qualify. Policies should specify what information reviewers need to see, what their override authority encompasses, and how both the AI recommendation and the final decision are documented.
This connects directly to the broader governance gap in the sector. Organizations that have built AI into service delivery workflows without defining human oversight protocols are operating on borrowed time as enforcement mechanisms strengthen.
Data Protection Specifics
The language your policy needs is operational and specific, not aspirational. Staff need to know exactly what they can and cannot put into AI tools, and the examples need to match the data they actually work with. Vague language about "protecting privacy" does not translate into protective behavior at the level of individual staff decisions.
Specific Data Policy Language
Prohibited Data in AI Tools (add examples specific to your programs):
"Staff shall not enter personally identifiable information (PII), protected health information (PHI), donor financial data, beneficiary case details, immigration status, Social Security numbers, or any information that could identify an individual into any AI tool not specifically approved for that data type. This applies regardless of whether the data is entered directly or embedded in documents uploaded to the tool."
Anonymization Requirement:
"When using AI to draft case-related documents, communications, or program content, staff must replace all identifying information with fictional details. Example: Replace 'John Smith, age 12, who came to our shelter on February 14' with 'a 12-year-old client who accessed emergency services last month.'"
Four-Year Retention for Employment AI (required in California):
"Records of all automated decision-making data used in employment processes, including AI tool outputs, shall be retained for four years from the date of the employment action and stored in [designated secure location]."
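Operational rules like the prohibited-data clause above can also be backed by a lightweight technical check. The sketch below shows one way a pre-flight scan might flag obvious PII before a prompt is sent to an external AI tool. The patterns are illustrative only; no pattern list catches all identifying information, so a check like this supplements, and never replaces, the anonymization rule staff are trained on.

```python
import re

# Illustrative PII pre-flight check. These example patterns are not
# exhaustive; policy training remains the primary control.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def pii_findings(text: str) -> list:
    """Return the names of any PII patterns found in the text."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

prompt = "Summarize the case notes for the client with SSN 123-45-6789."
findings = pii_findings(prompt)
if findings:
    print(f"Blocked: prompt appears to contain {', '.join(findings)}")
```

A check like this is best placed wherever prompts leave the organization's boundary, such as a shared wrapper around an approved tool's API, so staff get immediate feedback rather than after-the-fact audit findings.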
The 2026 Policy Structure: What to Include
Based on leading nonprofit AI policies and current legal guidance, a comprehensive 2026 AI policy should include fifteen sections. Organizations updating existing policies should verify each section is present and current. Organizations building policies from scratch can use this as a structural guide.
Purpose and Mission Alignment
Why the policy exists and how AI use connects to organizational mission and values.
Scope
Who it applies to (staff, contractors, volunteers, board) and which tools and activities it covers.
Definitions
Plain-language definitions of AI types and key terms with concrete examples.
Guiding Principles
Core commitments: mission first, human oversight, confidentiality, and equity.
Acceptable Uses by Role
Specific examples of permitted AI use for each staff category.
Prohibited Uses
Explicit list of what is never permitted, with specific data examples.
Risk Classification
Categories of AI use by risk level with different oversight requirements for each.
Data Security Requirements
Specific data handling rules, anonymization requirements, and approved tool list.
Vendor Assessment and Procurement
Required questions, contract clauses, and approval process for new AI tools.
Disclosure and Transparency
When and how to disclose AI use to constituents, funders, and in communications.
Human Oversight for High-Risk Decisions
Specific protocols, reviewer authority, and documentation requirements.
Bias Monitoring and Equity Review
Process, frequency, triggers for deeper audit, and remediation approach.
Incident Response
What qualifies as an AI incident, notification timeline, and post-incident review.
Training Requirements
What training is required by role, how often, and how completion is documented.
Governance and Review Cycle
Who owns the policy, board accountability, and mandatory six-month review cadence.
Several free resources are available for organizations building or updating AI policies. Fast Forward maintains a Nonprofit AI Policy Builder at ffwd.org that generates a customized policy based on organizational responses. Mission Metrics publishes a nine-section AI Usage and Safety Policy template with specific example prompts. The NC Center for Nonprofits curates a collection of actual nonprofit AI policies. The AICPA and CIMA offer a Not-for-Profit Generative AI Policy template from a compliance and governance perspective.
Making the Review Process Happen
Knowing what needs to change is the easier part. The harder part is creating the organizational process that actually produces an updated policy, secures board approval, communicates changes to staff, and builds the review cadence that keeps the policy current. Here is a practical approach to getting this done in 2026.
Step 1: Audit Your Current AI Use
Before you can write a policy that covers your actual AI use, you need to know what that use actually is. Survey staff across departments to identify every AI tool they use, including features embedded in software they use daily. Map the data flows: what information goes into each tool, where it comes from, and what happens to it. This inventory will surface uses your policy currently ignores and data risks your organization may not be aware of.
This audit connects to your organization's broader AI maturity assessment. Understanding where you actually are, not where you think you are, is the prerequisite for meaningful policy work.
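The audit described above produces more useful results when each tool is captured in a consistent record. Here is a minimal sketch of what such an inventory entry might look like; the field names and example tools are suggestions, not a standard schema.

```python
from dataclasses import dataclass

# Illustrative inventory record for the AI-use audit. Field names
# and example entries are hypothetical suggestions.
@dataclass
class AIToolRecord:
    tool: str                  # e.g. "Einstein lead scoring"
    embedded_in: str           # host product, or "standalone"
    used_by: str               # department or role
    data_entered: list         # categories of data that flow into the tool
    influences_decisions: bool # does output shape any consequential decision?
    approved: bool = False     # has the tool passed vendor assessment?

inventory = [
    AIToolRecord("Einstein lead scoring", "Salesforce", "Development",
                 ["donor giving history"], influences_decisions=True),
    AIToolRecord("AI content suggestions", "Mailchimp", "Communications",
                 ["newsletter drafts"], influences_decisions=False),
]

# Surface the entries that need risk review first:
# decision-influencing tools that have not been formally approved.
review_first = [r.tool for r in inventory if r.influences_decisions and not r.approved]
print(review_first)
```

Sorting the inventory this way turns a flat list of tools into a prioritized work queue for the policy review team, with decision-influencing, unapproved tools at the top.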
Step 2: Build the Right Review Team
Effective policy review requires perspectives from: operations or IT leadership (who can speak to technical realities), program staff (who work with beneficiary data), communications or development staff (who use AI in external-facing work), finance or compliance staff (who understand regulatory exposure), and ideally, legal counsel for at least a review of the final draft. If your organization does not have internal legal capacity, pro bono legal resources through organizations like Taproot+ or your state bar's nonprofit legal services may be available.
Board involvement is not optional for 2026 policies. The board's technology or audit committee should review and formally approve the updated policy. This is both a governance best practice and increasingly a compliance expectation under frameworks like the Forvis Mazars nonprofit AI governance guidance.
Step 3: Train Before You File
A policy that exists only as a document is largely theater. The compliance and governance value of an AI policy depends on whether staff actually understand and apply it. Train all staff on the updated policy before it takes effect, with role-specific sessions that focus on the situations each department is most likely to encounter. Document completion.
- Leadership and board: strategic risk, fiduciary responsibilities, and regulatory exposure
- Program staff: acceptable use in client-facing work, data protection specifics, human review protocols
- Communications and development: disclosure requirements, content review standards, funder disclosure language
- HR and hiring managers: employment AI notification requirements, bias audit obligations, documentation
Step 4: Build the Review Cadence In
Schedule the next review before the current one is complete. Calendar a six-month review as a recurring item on the technology or audit committee's agenda. Identify in advance what would trigger an off-cycle review: a significant change in AI tools being used, a new state law taking effect, a policy violation incident, or a major development in the regulatory landscape. The review cadence is itself a governance commitment that needs to be embedded in the policy and in committee work plans.
From Aspiration to Compliance
The transition from voluntary AI governance to legally required AI governance happened faster than most nonprofit leaders expected. Three years ago, writing an AI policy was a forward-thinking gesture. Today, not having one, or having one that does not reflect current legal requirements, is a governance gap with real consequences. The organizations that benefit most from this moment are those that treat their AI policy update as an operational priority rather than a compliance checkbox.
There is a meaningful difference between a policy that demonstrates organizational seriousness and one that exists primarily because sector norms suggest you should have something filed. The former requires genuine organizational conversation about how AI is used, who is accountable for what, what protections beneficiaries and staff deserve, and how those commitments will be maintained over time. That conversation is harder than downloading a template, but it is also the conversation that produces governance that actually functions.
The 47% of nonprofits without any AI policy are the most urgent priority. But the organizations with 2023-era policies that have not been reviewed since may be in almost as precarious a position, unaware of the gap between what they committed to then and what compliance requires now. Closing that gap is achievable, particularly with the growing number of sector resources available to support the process. The window for doing so comfortably, before enforcement mechanisms strengthen and funder expectations crystallize, is narrowing.
Ready to Update Your AI Governance?
One Hundred Nights helps nonprofits build and update AI governance frameworks that reflect current legal requirements and organizational realities. Start with a conversation about where your current policy stands.
