
    The EU AI Act Takes Full Effect in 2026: Implications for US Nonprofits with Global Operations

    The world's most comprehensive AI regulation is now substantially in force, with full compliance obligations beginning in August 2026. US nonprofits with European offices, beneficiaries, donors, or staff need to understand what applies to them, what it requires, and how to prepare before the deadlines arrive.

Published: February 23, 2026 · 16 min read · AI Regulation & Compliance

    When the European Union's Artificial Intelligence Act entered into force on August 1, 2024, it became the first comprehensive legal framework for AI anywhere in the world. The Act's requirements are now rolling out on a phased schedule, with prohibitions on certain AI uses having taken effect in February 2025, general-purpose AI model obligations beginning in August 2025, and the full suite of high-risk AI system requirements taking effect in August 2026.

    For US nonprofit organizations with exclusively domestic operations, the EU AI Act may seem like a distant concern. But for the significant number of American nonprofits that maintain offices in Europe, deliver programs to beneficiaries in EU member states, process personal data of European residents, employ staff in European countries, or receive funding from European foundations, the regulation may create real compliance obligations, regardless of where the organization is headquartered.

    This matters for two reasons. First, the penalties for non-compliance are substantial: fines of up to 35 million euros or 7% of worldwide annual revenue for violations involving prohibited AI practices. Second, the EU AI Act is already influencing how foundations and institutional funders evaluate grantee AI governance, how data protection regulators approach AI systems that process European personal data alongside GDPR obligations, and how the international nonprofit community thinks about responsible AI use. Getting familiar with the framework now, even if full compliance is not immediately required, positions your organization well for a future where AI governance standards continue to tighten globally.

    This article explains the EU AI Act's structure, identifies which US nonprofits are most likely to have compliance obligations, explains what those obligations require in practical terms, discusses the proposed simplifications under the EU's Digital Omnibus package, and offers a framework for assessing your organization's exposure and preparing accordingly.

    Understanding the EU AI Act's Structure

    The EU AI Act organizes AI systems into four risk categories, with different requirements applying to each. Understanding this structure is essential for nonprofits trying to assess whether and how the regulation applies to their operations.

    At the top of the risk pyramid are prohibited AI practices: applications that are banned outright because they pose unacceptable risks to people's rights and safety. These include AI systems that manipulate people through subliminal techniques, systems that exploit the vulnerabilities of specific groups, social scoring systems by public authorities, real-time remote biometric identification in public spaces, and AI systems that predict the likelihood of a person committing a crime based on profiling. These prohibitions took effect in February 2025. Most nonprofits are not building or deploying systems in these categories, but it is worth reviewing your AI use portfolio to confirm.

    High-risk AI systems face the most detailed obligations under the Act. The definition of high-risk is specific and involves two pathways. The first covers AI systems embedded in products already subject to EU safety legislation, such as medical devices, vehicles, and toys. The second covers AI systems in eight specific sensitive domains listed in Annex III of the Act: biometric identification, critical infrastructure, education and vocational training, employment and worker management, access to essential private and public services, law enforcement, migration and border control, and administration of justice.

    This second pathway is where many nonprofits need to pay attention. Organizations that use AI to support decisions about who receives services, who qualifies for programs, or how resources are allocated may be operating high-risk AI systems as the Act defines them. A nonprofit that uses an AI tool to help prioritize clients for limited housing placements, assess educational needs for program enrollment, or screen job applicants may be deploying high-risk AI under the Act's definitions.

    General-purpose AI models, a category that includes the large language models such as GPT-4o and Claude that power many AI tools, face a separate set of obligations focused on transparency, capability documentation, and safety testing. These obligations primarily apply to the companies that develop and provide these models, not to the nonprofits that use them through commercial APIs. However, nonprofits that integrate general-purpose models into their own products and deploy them to others can become subject to provider-level obligations.

    Prohibited AI Practices

    Banned outright as of February 2025

    • Subliminal manipulation of behavior outside a person's awareness
    • Exploitation of vulnerable groups through psychological weaknesses
    • Social scoring by public authorities based on behavior or personal characteristics
    • Real-time remote biometric identification in public spaces (with limited exceptions)
    • Predictive policing AI that profiles individuals for crime likelihood

    High-Risk AI Domains (Annex III)

    Full compliance obligations from August 2026; selected domains most relevant to nonprofits shown below

    • Biometric identification and categorization of natural persons
    • Education and vocational training: access, assessment, performance evaluation
    • Employment and worker management: recruitment, performance assessment
    • Access to essential private and public services and benefits
    • Migration, asylum, and border control management

    Which US Nonprofits Are Actually Affected

    The EU AI Act has extraterritorial reach similar to GDPR. It applies not based on where an organization is incorporated, but based on where its AI systems have an impact. Specifically, the Act applies to providers of AI systems that are placed on the market or put into service in the European Union, and to deployers of AI systems who are established in or located in the EU, or who are located outside the EU but whose AI systems produce output that is used in the EU.

    This extraterritorial scope creates clear obligations for several categories of US nonprofit. An international development or humanitarian organization that maintains program offices in EU countries and uses AI tools to support program delivery or staff management is deploying AI systems in the EU and may be subject to the Act. A human rights organization that uses AI to process case information about European nationals or residents is likely handling personal data subject to GDPR already, and the AI Act adds an additional layer of obligations if those AI systems fall into high-risk categories.

    US environmental nonprofits that have European chapters or affiliates, arts and cultural organizations with touring or exchange programs, educational nonprofits that operate international exchange programs or partner with European educational institutions, and health organizations with clinical research or program partnerships in Europe all potentially have EU AI Act exposure. The key question for each is not simply whether you have a presence in Europe, but specifically whether you are using AI systems that affect EU-resident natural persons in ways that touch the Act's high-risk categories.

    There is an important nuance around roles. The Act distinguishes between AI providers (organizations that develop AI systems or place them on the market) and AI deployers (organizations that use AI systems in their operations). Most nonprofits using commercially available AI tools like Microsoft Copilot, Salesforce Einstein, or standalone AI applications are deployers rather than providers. Deployers have fewer obligations than providers, but they are not exempt. Deployers of high-risk AI systems must conduct fundamental rights impact assessments, maintain usage logs, ensure human oversight of high-stakes decisions, and provide transparency to affected individuals.

    Nonprofit organizations that use AI to inform or support decisions about who receives social services, which individuals are prioritized for limited housing or healthcare resources, or how clients are assessed for program eligibility are most likely to be deploying high-risk AI systems under the Act's definition. These use cases directly parallel the "access to essential private and public services" category in Annex III, which the Act treats as high-risk because of the significant impact such decisions have on people's fundamental rights and life circumstances.

    Assessing Your Organization's EU AI Act Exposure

    Work through these questions to understand whether the Act likely applies to your organization

    Presence and Operations

    • Do you have offices, staff, or registered entities in EU member states?
    • Do you deliver programs or services to beneficiaries located in the EU?
    • Do you process personal data of EU residents (including in your CRM, case management systems, or HR systems)?

    AI Use Cases

    • Do you use AI tools to assess, prioritize, or make decisions about who receives services or benefits?
    • Do you use AI in employment processes: screening, performance monitoring, or scheduling?
    • Do you use AI for educational assessment or vocational training program decisions?
    • Have you developed or substantially modified an AI system that you make available to others?

    What High-Risk AI Compliance Actually Requires

    For organizations that determine they are deploying high-risk AI systems in contexts that affect EU residents, the EU AI Act creates a specific set of compliance obligations. Understanding these obligations in practical terms helps nonprofits assess what preparation is needed.

    Deployers of high-risk AI systems must conduct a Fundamental Rights Impact Assessment (FRIA) before deploying the system. This assessment, required under Article 27 of the Act, applies to public bodies and to private entities providing public services, a category likely to cover many nonprofit service providers; it is similar in spirit to a Data Protection Impact Assessment under GDPR but specifically focused on AI. The FRIA must describe how and when the AI system will be used, who it might affect, what risks it might pose to fundamental rights, how human oversight will be maintained, and what steps will be taken if risks materialize. Once completed, deployers must notify the relevant market surveillance authority of the results. The FRIA requirement is expected to take full effect in August 2026.

    High-risk AI deployers must implement appropriate technical and organizational measures to ensure they can use the system as intended, keep relevant logs of the system's operation, and conduct post-deployment monitoring to identify and address issues. They must also ensure that the people making decisions informed by the AI output have sufficient understanding of the system's capabilities and limitations to exercise meaningful human oversight. This requirement has practical implications: an AI system that recommends which clients receive limited housing placements, for example, must have human decision-makers who understand how the recommendations are generated and can override them.
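    The logging and oversight obligations above can be sketched in code. The following is a minimal illustration, assuming a simple append-only JSON Lines file; the field names and file format are hypothetical choices for this sketch, since the Act specifies what records must support, not a particular format. The key idea is that each AI-informed decision is stored alongside the human reviewer who exercised oversight and whether the AI recommendation was overridden.

```python
# Illustrative sketch of a deployer usage log; field names and the
# JSON Lines format are assumptions, not requirements of the Act.
import json
from datetime import datetime, timezone

def log_ai_decision(path: str, case_id: str, ai_recommendation: str,
                    reviewer: str, final_decision: str) -> dict:
    """Record an AI-informed decision together with its human review."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "case_id": case_id,
        "ai_recommendation": ai_recommendation,
        "human_reviewer": reviewer,          # who exercised oversight
        "final_decision": final_decision,
        # Flag cases where the human decision-maker overrode the AI output
        "overridden": final_decision != ai_recommendation,
    }
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Example: a program officer reviews and overrides an AI housing-priority
# recommendation (names and values are invented for illustration).
entry = log_ai_decision("decisions.jsonl", "case-0042",
                        ai_recommendation="priority: medium",
                        reviewer="program.officer@example.org",
                        final_decision="priority: high")
print(entry["overridden"])  # True
```

    A log structured this way supports both the record-keeping obligation and the post-deployment monitoring obligation: aggregating the `overridden` flag over time, for example, can reveal whether human oversight is genuinely active or has collapsed into rubber-stamping.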

    Transparency to affected individuals is another core requirement. People who are subject to consequential decisions that involve high-risk AI must be informed that AI is being used in the decision process. They must also have access to human review and the ability to challenge decisions. For nonprofits working with vulnerable populations who may already face barriers to advocating for themselves, this requirement is both a legal obligation and an ethical one that aligns with good program practice.

    Providers of high-risk AI systems, which includes organizations that develop or substantially modify AI systems, face more extensive obligations including technical documentation requirements, conformity assessment procedures, registration in a publicly accessible EU database, and post-market monitoring plans. Most nonprofits using commercial AI tools will be deployers rather than providers, but any organization that has developed a custom AI model or substantially modified a foundation model for deployment should carefully assess whether it meets the provider definition.

    Deployer Obligations (August 2026)

    Requirements for nonprofits using high-risk AI systems

    • Fundamental Rights Impact Assessment before deployment
    • Maintain logs of system operation for review
    • Human oversight by people who understand the system's limitations
    • Transparency to affected individuals that AI is being used
    • Right to human review and ability to challenge AI-informed decisions
    • Post-deployment monitoring for unintended effects and risks

    FRIA Requirements

    What a Fundamental Rights Impact Assessment must cover

    • Description of how and when the AI system will be used
    • Identification of the groups of people likely to be affected
    • Assessment of the specific risks to fundamental rights
    • Description of human oversight mechanisms
    • Mitigation measures for identified risks
    • Notification to market surveillance authority

    The GDPR Connection: How the Two Regulations Interact

    Many US nonprofits with European operations are already navigating GDPR compliance. Understanding how the EU AI Act relates to and interacts with GDPR is essential for organizations that need to comply with both.

    The two regulations focus on different entities and different risks. GDPR governs data controllers and processors who handle personal data, with obligations centered on lawful bases for processing, data subject rights, and accountability for data protection. The EU AI Act governs AI system providers and deployers, with obligations centered on risk assessment, transparency, human oversight, and the specific risks that AI systems pose to fundamental rights.

    Where the regulations overlap is in their shared concern with automated decision-making that affects individuals. GDPR's Article 22 already restricts decisions based solely on automated processing that produce significant effects on individuals, requiring human intervention, the ability to express opinions, and the ability to contest decisions. The EU AI Act extends this framework with more specific requirements about what human oversight must look like, what documentation must be maintained, and what assessments must be conducted before deployment.

    For nonprofits that have already conducted Data Protection Impact Assessments under GDPR, there is meaningful overlap with the Fundamental Rights Impact Assessments required by the EU AI Act. Where a GDPR DPIA meets the requirements of an AI Act FRIA, the two assessments can be aligned, reducing duplication. In practice, however, the AI Act's FRIA covers somewhat different ground, particularly around the specific capabilities and limitations of the AI system and the mechanisms for human oversight, so some organizations will need to develop the two assessments in parallel rather than treating one as a substitute for the other.

    The European Commission's Digital Omnibus proposal, published in November 2025, acknowledges the complexity of navigating multiple overlapping EU digital regulations and proposes simplifications intended to reduce compliance burdens, particularly for smaller organizations. The proposal would delay high-risk AI compliance obligations by up to 16 months beyond the original August 2026 deadline, potentially extending them into late 2027 or early 2028. However, the Digital Omnibus is still moving through the EU legislative process and is not yet final law; nonprofits should plan for the original August 2026 timeline until there is greater certainty about whether and when delays will be formally adopted.

    Digital Omnibus: Proposed Simplifications to Watch

    The EU Commission's November 2025 proposal could ease some compliance burdens, but is not yet final

    • Delayed deadlines: High-risk AI obligations originally due August 2026 may be extended by up to 16 months, to late 2027 or early 2028, pending confirmation that compliance support tools are available.
    • Simplified monitoring: The requirement to follow a Commission-specified post-market monitoring template may be replaced with more flexible documentation requirements.
    • AI literacy changes: Mandatory organization-level AI literacy obligations may be replaced with softer requirements for member states and the Commission to encourage training.
    • Registration exemptions: Providers whose AI systems are not classified as high-risk may no longer need to register in the EU database.

    Note: The Digital Omnibus is under ordinary legislative procedure and requires approval by the European Parliament and Council. Until formally adopted, the original AI Act timelines remain in force.

    Penalties, Enforcement, and Proportionality

    Understanding the penalty structure helps nonprofits assess the actual risk of non-compliance relative to the cost of compliance preparation.

    The EU AI Act establishes a three-tier penalty structure. Violations of prohibited AI practice bans can result in fines of up to 35 million euros or 7% of worldwide annual turnover, whichever is higher. Non-compliance with obligations for high-risk AI systems carries fines of up to 15 million euros or 3% of annual worldwide turnover. Supply of incorrect or misleading information to regulators carries fines of up to 7.5 million euros or 1% of annual worldwide turnover.

    These are maximum penalties, not typical ones. The Act explicitly states that for small and medium-sized enterprises, the lower of the two penalty amounts in each tier will be applied, and that penalties must be proportionate to the severity of the infringement and the circumstances of the particular case. Enforcement is carried out by national market surveillance authorities in each EU member state, with the European AI Office coordinating enforcement for general-purpose AI models and cross-border matters. Early enforcement is likely to focus on egregious violations and on market surveillance for prohibited practices rather than on technical documentation deficiencies at small deployers.
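    The "whichever is higher" rule and the SME exception above amount to simple arithmetic, which the following sketch makes concrete. This is illustrative only, using the tier figures from this article; actual penalties are set case by case and must be proportionate, so this computes ceilings, not expected fines.

```python
# Illustrative arithmetic only: computes the maximum fine *cap* per tier
# as described in the article. Not legal guidance.

TIERS = {
    # tier: (fixed cap in euros, share of worldwide annual turnover)
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_noncompliance": (15_000_000, 0.03),
    "misleading_regulators": (7_500_000, 0.01),
}

def max_fine_cap(tier: str, annual_turnover_eur: float, is_sme: bool = False) -> float:
    """Return the fine ceiling for a violation tier.

    For most organizations the cap is the HIGHER of the fixed amount and
    the turnover percentage; for SMEs the Act applies the LOWER of the two.
    """
    fixed_cap, pct = TIERS[tier]
    turnover_cap = annual_turnover_eur * pct
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# A nonprofit with €10M annual revenue facing a high-risk tier violation:
print(max_fine_cap("high_risk_noncompliance", 10_000_000, is_sme=True))   # 300000.0
print(max_fine_cap("high_risk_noncompliance", 10_000_000, is_sme=False))  # 15000000.0
```

    The contrast in the example is the point: for a small organization the SME rule caps exposure at the turnover-based figure (€300,000 here) rather than the €15 million fixed ceiling.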

    For most US nonprofits with limited European operations, the immediate penalty risk is lower than the numbers suggest, particularly if their AI use cases do not involve the most sensitive categories. However, reputational risk, the effect on funder relationships, and the potential for data protection authorities to raise AI-related issues in connection with GDPR enforcement are all real considerations that go beyond formal penalty exposure. Organizations that serve vulnerable populations, work in healthcare or social services, or rely on European institutional funders have particular reasons to take the framework seriously.

    Penalty Structure Summary

    Tier 1

    Prohibited AI Practices

    Up to €35M or 7% of global annual revenue. Prohibitions in effect since February 2025; penalty provisions applicable from August 2025.

    Tier 2

    High-Risk AI System Non-Compliance

    Up to €15M or 3% of global annual revenue. Full obligations from August 2026 (potentially later per Digital Omnibus).

    Tier 3

    Misleading Regulators

    Up to €7.5M or 1% of global annual revenue. Applies to supply of incorrect or incomplete information to authorities.

    Practical Preparation Steps for US Nonprofits

    The following framework is designed for US nonprofit leaders who need to assess their EU AI Act exposure and take proportionate preparation steps. It is not a substitute for legal counsel, particularly for organizations with significant European operations or AI use cases that clearly fall into high-risk categories. But it provides a starting point for understanding where you stand.

    The first step is conducting an AI inventory. Before you can assess compliance risk, you need to know what AI systems your organization uses, in what contexts, and where. This inventory should cover commercial AI tools embedded in your existing software platforms (your CRM, fundraising software, HR system), AI tools your staff use directly (chatbots, content generators, data analysis tools), and any custom AI systems your organization has developed or commissioned. For each tool, document what decisions or activities it informs, who uses it, and whether it processes personal data of EU residents.

    The second step is mapping your European exposure. For each AI system in your inventory, assess whether it is deployed in contexts that affect EU residents. This does not require certainty: if you have EU program offices, EU staff, or EU beneficiaries, you have European exposure and should assess accordingly. If you are exclusively US-based with no European data subjects, your immediate compliance burden is low, though monitoring the regulation remains wise as it influences global standards.

    The third step is assessing risk categories for your EU-adjacent AI use. For the AI systems that do affect EU residents, work through the Annex III domains to assess whether any fall into high-risk categories. The key question is whether the AI informs decisions about access to services, employment, education, or other areas where the Act's high-risk definitions apply. If any do, you have a high-risk deployment and need to plan for the full suite of deployer obligations.
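    The three assessment steps can be sketched as a simple triage over an AI inventory. The data structure and domain keywords below are illustrative assumptions (the domain set mirrors the Annex III categories discussed above, abbreviated to short labels invented for this sketch), and the output is a starting point for legal review, not a legal determination.

```python
# Illustrative triage of an AI inventory against the three assessment
# steps; field names and domain labels are assumptions for this sketch.
from dataclasses import dataclass
from typing import Optional

# Annex III domains most relevant to nonprofit deployers (per this article)
HIGH_RISK_DOMAINS = {
    "essential_services",   # access to essential private and public services
    "employment",           # recruitment, performance assessment
    "education",            # access, assessment, performance evaluation
    "biometrics",           # biometric identification and categorization
    "migration",            # migration, asylum, and border control
}

@dataclass
class AISystem:
    name: str
    purpose: str
    affects_eu_residents: bool      # step 2: EU exposure mapping
    domain: Optional[str] = None    # step 3: Annex III domain, if any

def assess(inventory: list) -> dict:
    """Bucket each inventoried system by its likely obligation level."""
    result = {"likely_high_risk": [], "eu_exposure_low_risk": [], "out_of_scope": []}
    for system in inventory:
        if not system.affects_eu_residents:
            result["out_of_scope"].append(system.name)
        elif system.domain in HIGH_RISK_DOMAINS:
            result["likely_high_risk"].append(system.name)
        else:
            result["eu_exposure_low_risk"].append(system.name)
    return result

# Hypothetical inventory entries for a nonprofit with EU program offices:
inventory = [
    AISystem("housing-prioritizer", "rank clients for placements", True, "essential_services"),
    AISystem("grant-drafter", "draft grant narratives", True),
    AISystem("us-donor-scorer", "score domestic donors", False),
]
print(assess(inventory))
```

    Anything landing in the `likely_high_risk` bucket is a candidate for the full suite of deployer obligations and for legal counsel; the other buckets tell you where lighter-touch monitoring suffices.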

    Throughout this assessment process, align your work with the broader AI governance practices your organization should be developing in any case. Our articles on building AI governance when adoption outpaces strategy and the US federal versus state AI regulation landscape provide complementary frameworks for the domestic side of AI compliance.

    Three-Phase Preparation Framework

    A proportionate approach for nonprofits assessing their EU AI Act exposure

    Phase 1: Assess (Now through Q2 2026)

    • Build a complete inventory of all AI tools and systems your organization uses
    • Map each AI use case to potential EU data subjects or affected individuals
    • Identify AI use cases that touch Annex III high-risk categories
    • Consult legal counsel if significant high-risk EU exposure is identified

    Phase 2: Prepare (Q2 through Q3 2026)

    • For high-risk AI systems: conduct Fundamental Rights Impact Assessments
    • Document human oversight procedures for AI-informed decisions
    • Update transparency notices to inform affected individuals about AI use
    • Train staff who use or oversee AI systems on their responsibilities under the Act

    Phase 3: Maintain (Ongoing from Q3 2026)

    • Implement operational logs for high-risk AI system usage
    • Conduct periodic reviews as AI use evolves and regulations are clarified
    • Monitor Digital Omnibus legislative progress for any timeline adjustments
    • Update AI governance policies annually to reflect regulatory developments

    Beyond Compliance: What This Means for Nonprofit AI Strategy

    The EU AI Act is not just a compliance exercise. It reflects a substantive set of values about how AI should be used in contexts that affect people's fundamental rights: that impacts should be assessed before deployment, that affected individuals should be informed, that meaningful human oversight must be maintained, and that systems should be monitored for unintended effects after launch.

    Many of these values align well with the mission commitments of nonprofit organizations. Nonprofits exist to serve communities, and those communities deserve to know when AI influences decisions about their services, their employment, or their children's education. They deserve the ability to contest those decisions and access to human judgment when it matters. The Act's requirements, in many cases, codify practices that mission-driven organizations should want to implement as a matter of ethical commitment rather than legal obligation.

    For nonprofits thinking about AI strategy, the EU AI Act's framework is a useful organizing tool even where it does not strictly apply. Using the high-risk category definitions as a checklist for your own AI use portfolio helps identify where human oversight is most important. Conducting the kind of impact assessment the Act requires, even as an internal exercise, builds organizational muscle for responsible AI deployment. Training staff on AI literacy and on their responsibilities in the decision chain improves both compliance readiness and decision quality.

    The regulation also has implications for vendor relationships. When your organization uses a commercial AI platform to support high-risk functions, the Act creates specific obligations on providers to share documentation, capability information, and technical access with deployers to enable their compliance. Nonprofits with EU exposure should be asking their AI vendors about their EU AI Act compliance status and what documentation they provide to support deployer obligations. Vendors who cannot answer these questions clearly may not be appropriate partners for use cases with significant regulatory exposure.

    For US nonprofits that currently operate only domestically, the EU AI Act is worth understanding as a leading indicator of where global standards are heading. The regulation is already influencing how major AI providers structure their products, how international funders evaluate AI governance, and how other jurisdictions are approaching their own AI frameworks. Organizations that build AI governance practices aligned with the Act's principles are not just managing regulatory risk: they are building the responsible AI infrastructure that the field is moving toward, at whatever pace regulatory requirements move.

    Key Questions for Your Board and Leadership

    Governance questions that responsible AI deployment requires your leadership to address

    • Do we have a complete picture of what AI systems we use and in what contexts? If not, when will we build one?
    • For AI systems that inform decisions about our clients or beneficiaries, do we have meaningful human oversight in place and documented?
    • Do the people who will be affected by AI-informed decisions in our programs know that AI is being used, and do they have recourse?
    • Have we assessed whether any of our AI use cases carry the kind of fundamental rights risks that warrant the Act's high-risk designation?
    • How are we staying current on AI regulation as both EU and US frameworks continue to evolve?

    Conclusion

    The EU AI Act represents a fundamental shift in how AI use is regulated globally. For the first time, there is a comprehensive legal framework that imposes specific risk assessment, documentation, transparency, and human oversight requirements on organizations that deploy AI systems in contexts that affect people's fundamental rights.

    US nonprofits with global operations need to understand whether and how this regulation applies to them, and they need to understand it before August 2026 when the full suite of high-risk AI obligations takes effect. The assessment is not always simple, but it is also not insurmountable. Most organizations can conduct a meaningful initial review of their AI inventory and EU exposure within a few weeks, and can identify relatively quickly whether they need legal guidance on specific use cases.

    For organizations that determine they have compliance obligations, the preparation work, while significant, builds governance infrastructure that aligns with both legal requirements and ethical best practices. For organizations that determine the Act does not currently apply to them, the process of making that determination builds understanding of where AI governance is heading and positions them well for the increasingly regulated AI future that every nonprofit will navigate.

    The EU AI Act is, in this sense, not just a compliance matter. It is a mirror that reflects back the questions every organization deploying AI should be asking itself, regardless of where its headquarters are located.

    Navigate AI Compliance with Confidence

    One Hundred Nights helps nonprofits build responsible AI governance frameworks that align with evolving legal requirements and ethical best practices, so you can use AI effectively while managing risk.