
    AI Policy and the 2026 Midterms: What Nonprofit Advocates Need to Know

    The 2026 midterm elections have become the most expensive battleground for artificial intelligence policy in American history. With more than $175 million committed by competing super PACs, thousands of federal lobbyists working AI-related issues, and a coordinated push to eliminate state-level AI safety laws through federal preemption, the outcome of these elections will determine whether nonprofits retain the regulatory protections they have spent years building. This guide explains who is spending what, why federal preemption is the central issue, and how nonprofit advocates can engage effectively before November.

    Published: March 20, 2026 · 18 min read · AI Policy & Advocacy

    Artificial intelligence has arrived as a defining political issue in the 2026 midterm elections. It has arrived not in the way most people expected, with candidates debating the merits of different regulatory frameworks on the campaign trail, but through an unprecedented flood of money from AI companies and their investors seeking to shape who writes the rules. The scale of spending is staggering: one super PAC alone has raised $125 million, and the combined investment across all sides now exceeds $175 million. For context, that is more than the entire cryptocurrency industry spent in the 2024 election cycle, and it signals a new era in which AI policy is not just a technology issue but a core political battleground.

    For nonprofit organizations, the stakes could not be higher. Over the past three years, states have been the primary drivers of AI safety and accountability legislation. Colorado's AI Act, New York's consumer protection framework, and similar efforts across dozens of states have created a patchwork of protections that, while imperfect, give nonprofits and the communities they serve meaningful recourse when AI systems cause harm. The central goal of the largest spenders in the 2026 midterms is to replace this state-level activity with a single federal framework, one that many advocates warn would be weaker, slower to update, and more susceptible to industry capture.

    This article provides nonprofit leaders and advocates with a comprehensive overview of the political landscape surrounding AI policy in the 2026 midterms. We will examine the major players and their spending, explain why federal preemption is the central policy fight, analyze how the lobbying infrastructure has expanded, and provide concrete strategies for nonprofit engagement. Whether your organization works on AI policy directly or simply wants to understand how election outcomes might affect the regulatory environment you operate in, this guide will help you make sense of a rapidly shifting landscape.

    Understanding this landscape is essential for any nonprofit that has invested time in building an AI governance framework or adapting to state-level requirements. The rules your organization has prepared for could change dramatically depending on what happens in November 2026, and the time to engage is now.

    The Money: How AI Industry Super PACs Are Reshaping Midterm Races

    The single most important entity in the 2026 AI policy election landscape is Leading the Future, a super PAC that has raised $125 million with $70 million in cash on hand as of early 2026. Backed by some of the most prominent names in Silicon Valley, including Andreessen Horowitz, OpenAI co-founder Greg Brockman, and Palantir's Joe Lonsdale, Leading the Future represents the largest coordinated effort by the AI industry to influence congressional elections. The PAC's stated mission is to support candidates who favor "innovation-friendly" AI policy, but its practical effect is to elect lawmakers who will oppose strict AI regulation and support federal preemption of state laws.

    What makes Leading the Future particularly notable is its bipartisan structure. The PAC operates through two subsidiary organizations: Think Big, which backs Democratic candidates, and American Mission, which supports Republicans. This dual approach allows the PAC to influence races on both sides of the aisle, ensuring that regardless of which party controls Congress after November, there will be sympathetic lawmakers in key committee positions. For nonprofit advocates accustomed to thinking about AI policy in partisan terms, this bipartisan spending strategy is a critical wake-up call. The industry is not betting on one party. It is buying influence across the entire political spectrum.

    On the other side of the spending ledger, Public First Action has emerged as the primary counter-PAC supporting candidates who favor stronger AI regulation. Public First Action has pledged $50 million for the 2026 cycle, with Anthropic contributing $20 million to the effort. While this represents a significant investment, it is less than half of what Leading the Future has raised, creating an asymmetry that concerns many in the AI safety community. The remaining $30 million comes from a coalition of foundations, individual donors, and organizations concerned about AI safety, consumer protection, and workers' rights.

    The Future of Life Institute has added another dimension to this spending environment by announcing an $8 million advertising campaign targeting voters in six states: Iowa, Kentucky, Maine, Michigan, North Carolina, and a sixth yet to be confirmed. Unlike the super PAC spending, which focuses on electing or defeating specific candidates, the Future of Life Institute's campaign is designed to raise public awareness about AI risks and build grassroots support for strong AI regulation. The campaign represents an unusual approach, investing in voter education rather than candidate support, and its effectiveness will be closely watched by advocacy organizations planning their own strategies for future cycles.

    Anti-Regulation Spending

    Leading the Future and aligned organizations

    • $125M raised by Leading the Future super PAC with $70M cash on hand
    • Backed by Andreessen Horowitz, OpenAI's Greg Brockman, Palantir's Joe Lonsdale
    • Bipartisan strategy via Think Big (Democrats) and American Mission (Republicans)
    • Goal: elect lawmakers who support federal preemption of state AI laws

    Pro-Regulation Spending

    Public First Action and aligned organizations

    • $50M pledged by Public First Action for pro-regulation candidates
    • Anthropic contributed $20M to Public First Action
    • Future of Life Institute: $8M ad campaign across six states
    • Coalition includes foundations, AI safety orgs, and consumer protection groups
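    For readers tracking these figures, the headline totals can be reconciled with simple arithmetic. The sketch below is a back-of-envelope check using only the amounts reported in this article (all values in millions of dollars, as reported and rounded):

    ```python
    # Reconciling the spending figures cited in this article
    # (all values in millions of USD, as reported; treat as approximate).
    leading_the_future = 125   # anti-regulation super PAC, $70M of it cash on hand
    public_first_action = 50   # pro-regulation counter-PAC ($20M from Anthropic)
    fli_ad_campaign = 8        # Future of Life Institute voter-education ads

    pac_total = leading_the_future + public_first_action
    all_in_total = pac_total + fli_ad_campaign

    print(f"Combined super PAC spending: ${pac_total}M")   # the $175M figure
    print(f"Including the FLI ad campaign: ${all_in_total}M")
    ```

    The super PAC commitments alone account for the "$175 million" cited throughout the article; adding the Future of Life Institute's advertising campaign pushes the combined total higher still, which is why the text describes spending as exceeding that figure.
    
    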

    Targeted Races: How AI Money Is Picking Winners and Punishing Regulators

    The AI industry's election spending is not distributed evenly across all competitive races. Instead, it is concentrated on a targeted strategy: punishing lawmakers who have championed AI safety legislation and rewarding those who promise a lighter regulatory touch. This approach mirrors tactics used by other industries, from pharmaceutical companies targeting drug-pricing advocates to fossil fuel interests opposing climate legislation, but the speed and scale at which the AI industry has adopted it is remarkable given how recently AI became a political issue.

    The most prominent target is New York Democrat Alex Bores, who authored one of the first comprehensive AI safety laws at the state level. Bores's legislation, which established disclosure requirements and algorithmic impact assessments for high-risk AI systems, drew intense opposition from industry groups during its passage. Now, Leading the Future is spending heavily to defeat him in his 2026 re-election campaign, sending a clear message to other legislators: if you write AI safety laws, the industry will come for your seat. For nonprofit advocates who worked with Bores's office on provisions protecting vulnerable populations from algorithmic discrimination, his targeting represents a direct threat to the legislative relationships they have built.

    On the other side of the equation, Leading the Future is actively supporting candidates like Texas Republican Chris Gober, who has positioned himself as a champion of limited AI regulation and federal preemption. Gober's platform emphasizes reducing what he calls "regulatory fragmentation" across states, a framing that resonates with industry concerns about compliance costs but that many nonprofit advocates see as a euphemism for weakening consumer protections. The PAC's support for Gober includes both direct spending and independent expenditures on advertising that focuses on economic growth and job creation rather than AI policy specifically.

    This last point deserves particular attention. NBC News reported that AI industry advertisements flooding the 2026 elections are largely about everything except AI itself. Ads funded by Leading the Future and its subsidiaries focus on healthcare costs, education, infrastructure, and economic opportunity, rarely mentioning artificial intelligence or the regulatory debates that motivate the spending. This approach makes it difficult for voters to connect the spending to AI policy outcomes, and it creates a challenge for nonprofit advocates trying to raise awareness about the stakes of these races. When the ads your opponents are running talk about lowering prescription drug costs rather than weakening AI safety laws, the advocacy playbook needs to adapt accordingly.

    Why Targeted Races Matter for Nonprofits

    The precedent being set in 2026 will shape AI policy for years to come

    When the AI industry successfully defeats a lawmaker who championed AI safety legislation, it creates a chilling effect that extends far beyond that single race. Other legislators considering similar bills will think twice about introducing them. Committee chairs will hesitate to schedule hearings. And the nonprofit advocates who built coalitions to pass those laws will find it harder to recruit legislative champions in the next session. The targeting strategy is designed to reshape the political incentive structure around AI regulation, making it politically costly to support strong oversight and politically rewarding to defer to industry preferences.

    • Defeating AI safety champions creates a chilling effect on future legislation
    • Industry ads avoid mentioning AI, making it harder for voters to connect spending to policy
    • Nonprofit coalitions lose legislative partners they spent years cultivating
    • Bipartisan PAC structure means no safe harbor in either party for regulation advocates

    Federal Preemption: The Policy Battle That Could Erase State AI Protections

    If there is one policy issue that explains the scale of AI industry spending in 2026, it is federal preemption. Federal preemption refers to the legal principle that federal law supersedes state law when the two conflict. In the AI context, industry groups are pushing for comprehensive federal AI legislation that would include a preemption clause, effectively nullifying the state-level AI safety laws that have been passed over the past three years. For nonprofits that have invested significant time and resources in understanding and complying with state-specific requirements, this is the single most consequential policy outcome of the 2026 elections.

    The industry's argument for federal preemption centers on compliance efficiency. AI companies operating nationally face a patchwork of state regulations with different definitions, different thresholds, and different enforcement mechanisms. The debate between federal and state AI regulation is not new, but the 2026 elections have escalated it from a policy discussion to an existential fight. Industry groups argue that a single federal standard would reduce compliance costs, create regulatory certainty, and allow companies to focus on innovation rather than navigating 50 different regulatory regimes.

    The counter-argument, advanced by a broad coalition of consumer protection groups, labor organizations, AI safety researchers, and nonprofit advocates, is that federal preemption would replace strong state protections with a weaker federal floor. States have been the laboratories of democracy for AI regulation, with different states testing different approaches. Colorado's focus on algorithmic discrimination, New York's consumer protection framework, and California's transparency requirements each address different aspects of AI harm, and together they create a more comprehensive safety net than any single federal law is likely to provide. Furthermore, state laws can be updated more quickly than federal legislation, allowing regulators to keep pace with rapidly evolving technology.

    The Trump administration amplified the preemption push with an executive order that attempted to ban state AI laws outright. While the legal authority for such an order remains contested, it signaled the federal government's alignment with industry preferences and created additional political cover for congressional candidates supporting preemption. Hundreds of organizations responded by submitting formal letters opposing federal preemption, including tech-worker unions, labor groups, AI safety and consumer protection nonprofits, and academic institutions. The breadth of this opposition coalition demonstrates that resistance to preemption extends far beyond the usual regulatory advocacy organizations.

    For nonprofits specifically, federal preemption could undo years of work. Organizations that have built compliance programs around Colorado's AI Act or prepared for New York's consumer protection requirements might find those frameworks superseded by a federal law with different definitions, weaker enforcement mechanisms, and fewer protections for vulnerable populations. Even more concerning, a preemptive federal law could prevent states from passing new AI regulations in the future, freezing the regulatory landscape at whatever level Congress sets in its initial legislation.

    Industry Arguments for Preemption

    • Single compliance framework reduces costs for companies operating nationally
    • Regulatory certainty encourages innovation and investment
    • Prevents "race to the top" in regulation that could stifle emerging technology
    • Consistent definitions and thresholds across all 50 states

    Advocacy Arguments Against Preemption

    • Federal floor would be weaker than existing state protections
    • State laws update faster to match rapidly evolving AI capabilities
    • Different states addressing different AI harms creates comprehensive coverage
    • Prevents future state innovation in AI safety regulation

    The Lobbying Infrastructure: 3,570 Lobbyists and the Machinery of Influence

    Election spending is only one dimension of the AI industry's political strategy. Equally significant is the lobbying infrastructure that operates year-round in Washington and state capitals. As of 2025, 3,570 federal lobbyists reported lobbying on artificial intelligence issues. That figure represents 26% of all registered federal lobbyists, a concentration of lobbying resources on a single technology that is virtually unprecedented. To put this in perspective, the entire pharmaceutical industry, one of the most heavily lobbied sectors in American politics, employs roughly 1,800 federal lobbyists. AI has surpassed it in just a few years.

    This lobbying army serves multiple functions beyond simply advocating for or against specific bills. Lobbyists shape the terms of the debate by defining what counts as "reasonable" regulation, providing technical briefings to lawmakers who may not fully understand AI systems, drafting model legislation that can be introduced with minor modifications, and building relationships with committee staff who control which bills get hearings and which die quietly. For nonprofits trying to influence AI policy, the sheer volume of industry lobbyists creates an asymmetry that is difficult to overcome through traditional advocacy methods alone.

    The lobbying effort extends beyond traditional corporate advocacy. AI companies have invested heavily in think tanks, research organizations, and academic institutions that produce policy papers and recommendations aligned with industry interests. These organizations provide intellectual cover for industry positions, allowing lobbyists to cite "independent research" that supports their preferred policy outcomes. For nonprofit advocates, distinguishing between genuinely independent policy analysis and industry-funded research has become an essential but increasingly difficult task.

    At the state level, the lobbying picture is even more complex. As states have become the primary venue for AI regulation, industry groups have deployed lobbyists to state capitals that previously had limited experience with technology policy. Many state legislators lack the staff and technical expertise to evaluate AI policy proposals independently, making them more reliant on the information provided by lobbyists. This creates both a challenge and an opportunity for nonprofits: organizations that can provide accessible, trustworthy technical expertise to state lawmakers can have an outsized impact on policy outcomes, precisely because so few non-industry voices are present in these conversations.

    AI Lobbying by the Numbers

    Key statistics on the AI industry's lobbying footprint

    • 3,570 federal lobbyists reported lobbying on AI in 2025, representing 26% of all registered lobbyists
    • AI lobbying now exceeds the pharmaceutical industry's lobbying workforce by nearly 2x
    • Industry-funded think tanks and academic programs produce policy research aligned with corporate interests
    • State-level lobbying expanding rapidly as states become primary AI regulation venues

    States as Laboratories: Why State-Level AI Policy Leadership Matters

    To understand why the federal preemption fight matters so much, it helps to appreciate what states have accomplished in AI regulation over the past few years. While Congress has struggled to pass comprehensive AI legislation, states have moved aggressively to address specific harms caused by artificial intelligence systems. This state-level innovation has produced a body of regulatory experience and legal frameworks that, collectively, represent the most advanced AI governance system in the world.

    Colorado's AI Act, which took effect in 2026, established the first comprehensive framework for regulating high-risk AI systems at the state level. It requires deployers of AI systems that make consequential decisions about employment, housing, insurance, education, and lending to conduct impact assessments, notify affected individuals, and provide meaningful explanations of AI-driven decisions. For nonprofits that provide services in these areas, the Act created specific obligations that many organizations have spent months preparing to meet. A federal preemption of Colorado's law would not just eliminate these requirements; it would potentially remove the protections that Colorado residents who interact with nonprofit AI systems currently enjoy.

    New York has taken a different but complementary approach, building a multi-layered regulatory framework that includes the RAISE Act for frontier model developers, proposed algorithmic discrimination protections for consumers, and specific safeguards for minors interacting with AI chatbots. California, Illinois, Texas, and more than a dozen other states have introduced or passed their own AI-related legislation addressing deepfakes, automated employment decisions, biometric data, and AI-generated content labeling. Each of these state efforts represents not just a regulatory requirement but a laboratory experiment in what works, what does not, and how AI governance should evolve.

    The history of consumer protection regulation in the United States strongly supports the state-led approach. Many of the federal protections Americans take for granted, from automobile safety standards to environmental regulations to financial consumer protections, originated as state-level experiments before being adopted and strengthened at the federal level. The pattern is clear: states innovate, test, and refine regulatory approaches, and the best of those approaches eventually inform stronger federal standards. Federal preemption of state AI laws would short-circuit this process, replacing tested state frameworks with an untested federal approach drafted under enormous industry pressure.

    For nonprofits that have been updating their AI policies for 2026, the state regulatory environment has provided valuable structure and guidance. These state requirements have pushed organizations to think carefully about how they use AI, what risks they are creating, and how they can protect the people they serve. Losing these frameworks to a weaker federal standard would not just change compliance requirements; it would remove the external pressure that has driven many organizations to adopt responsible AI practices in the first place.

    The Opposition Coalition: Who Is Fighting to Preserve State AI Protections

    While the AI industry's spending advantage is significant, the opposition to federal preemption has assembled one of the broadest advocacy coalitions in recent technology policy history. Hundreds of organizations have submitted formal letters opposing efforts to override state AI laws, and the diversity of this coalition is itself a powerful argument against the industry's framing of preemption as a simple matter of regulatory efficiency.

    The coalition includes tech-worker unions whose members understand firsthand how AI systems are built and where their risks lie. It includes labor groups concerned about AI-driven workplace surveillance, automated hiring and firing decisions, and the displacement of workers without adequate transition support. AI safety and consumer protection nonprofits provide policy expertise and grassroots organizing capacity. Academic institutions contribute research on AI harms, algorithmic bias, and the limitations of current AI safety techniques. Civil rights organizations bring expertise on how AI systems disproportionately affect marginalized communities, a perspective that is often absent from industry-led policy discussions.

    This coalition's strength lies not just in its breadth but in the specificity of its arguments. While the industry tends to make broad claims about innovation and competitiveness, coalition members can point to concrete examples of AI harms that state laws were designed to address: discriminatory hiring algorithms, biased healthcare screening tools, predatory lending models, and surveillance systems that target vulnerable populations. For nonprofit advocates, the coalition provides both a network for coordinated action and a repository of evidence and arguments that can be adapted for local advocacy efforts.

    The coalition also benefits from growing public awareness of AI risks. Polling consistently shows that voters across the political spectrum support stronger AI regulation, including requirements for transparency, algorithmic auditing, and human oversight of high-stakes AI decisions. This public support creates political space for candidates to champion AI safety without fearing voter backlash, but only if advocacy organizations effectively communicate the connection between election outcomes and AI policy to voters who may not be tracking these issues closely.

    Labor & Workers

    Tech-worker unions and labor organizations fighting against AI-driven workplace surveillance, automated employment decisions, and inadequate worker transition support. Their firsthand experience with AI systems provides credible, technical arguments against weak regulation.

    Safety & Consumer Groups

    AI safety nonprofits, consumer protection organizations, and civil rights groups bringing policy expertise, grassroots organizing capacity, and evidence of specific AI harms affecting marginalized communities. They provide the concrete examples that counter abstract industry arguments.

    Academic Institutions

    Universities and research institutions contributing independent studies on algorithmic bias, AI safety limitations, and the effectiveness of different regulatory approaches. Their research provides the evidence base that policymakers need to resist industry pressure for weaker standards.

    What Nonprofit Advocates Can Do: A Strategic Engagement Framework

    Given the scale of AI industry spending and lobbying, it is easy for nonprofit advocates to feel overwhelmed. But the reality is that nonprofits have several unique advantages in this fight that money cannot buy: credibility with affected communities, on-the-ground evidence of AI harms and benefits, relationships with state and local officials, and the moral authority that comes from serving the public interest rather than shareholder returns. The key is deploying these advantages strategically in the months leading up to November 2026.

    The first and most important step is education, both internal and external. Internally, nonprofit boards and leadership teams need to understand the connection between the 2026 midterms and the regulatory environment their organizations operate in. Many nonprofit leaders track AI policy developments but may not have connected those developments to specific election outcomes. Briefing your board on which races matter most, why federal preemption would affect your organization, and what the spending landscape looks like is essential groundwork for any advocacy effort.

    Externally, nonprofits can play a crucial role in voter education. Remember that AI industry ads are deliberately avoiding the topic of AI, focusing instead on generic economic messages. Nonprofits can fill this information gap by helping their communities understand which candidates support strong AI protections and which are backed by industry money seeking to weaken those protections. This does not require partisan advocacy; it requires transparent information about funding sources, policy positions, and the real-world consequences of different regulatory approaches.

    Second, nonprofits should invest in state-level advocacy infrastructure. Even if federal preemption passes, the legislative battles over implementation, enforcement, and carve-outs will play out over years. Organizations that maintain strong relationships with state legislators, regulatory agencies, and other advocacy groups will be better positioned to influence those battles regardless of the federal outcome. If your organization has already built relationships around state AI legislation, now is the time to deepen those relationships and expand your network.

    Third, nonprofits should document and share their experiences with AI systems. One of the most powerful tools in the advocacy toolkit is concrete evidence of how AI regulation, or the lack of it, affects real people. If your organization has implemented AI tools under a state regulatory framework, your compliance experience (the costs and benefits, the problems you identified and corrected, the protections you were able to offer your clients) is exactly the kind of evidence that policymakers need to resist industry arguments that regulation is unnecessary or burdensome.

    Immediate Actions (Now through Summer 2026)

    • Brief your board on the connection between midterm outcomes and AI regulatory environment
    • Identify which races in your state are being targeted by AI industry spending
    • Join or form coalitions with other organizations opposing federal preemption
    • Document your organization's experience with AI systems under current state regulations
    • Develop voter education materials that connect AI spending to policy outcomes

    Longer-Term Advocacy Strategies

    • Build and maintain relationships with state legislators regardless of federal outcomes
    • Provide accessible technical expertise to lawmakers who lack AI policy staff
    • Submit public comments and testimony on AI legislation at every level of government
    • Train staff and volunteers to communicate AI policy issues to general audiences
    • Track industry-funded research and provide independent analysis as a counter-narrative

    Preparing for Multiple Outcomes: Scenario Planning for Nonprofits

    Given the uncertainty surrounding the 2026 midterms and the subsequent legislative session, nonprofits should prepare for multiple scenarios rather than betting on a single outcome. Each scenario has different implications for organizational strategy, compliance requirements, and advocacy priorities.

    In the first scenario, pro-regulation candidates hold their seats and federal preemption fails. This would preserve the existing state regulatory landscape and likely accelerate the pace of state-level AI legislation. Nonprofits in this scenario should continue deepening their compliance capabilities, investing in AI governance frameworks, and building the internal expertise needed to navigate an increasingly complex multi-state regulatory environment. The advocacy priority would shift to strengthening and harmonizing state laws rather than defending them from federal override.

    In the second scenario, industry-backed candidates win key races and federal preemption legislation advances. This would not happen overnight; even with favorable election results, passing comprehensive federal AI legislation would take months or years. Nonprofits in this scenario should focus on shaping the federal legislation to include the strongest possible protections, pushing for carve-outs that preserve state authority in specific areas (such as civil rights enforcement), and building coalitions that can influence the regulatory agencies tasked with implementing any federal framework. The compliance priority would be maintaining current state-level programs while preparing to adapt to federal requirements.

    In the third and perhaps most likely scenario, election results are mixed, creating a divided Congress where neither side has a clear mandate on AI policy. This scenario would likely result in incremental federal legislation, possibly addressing specific AI applications (such as deepfakes or healthcare AI) without comprehensive preemption. Nonprofits in this scenario would face the most complex regulatory environment: continued state requirements plus new federal obligations in specific areas. The advocacy priority would be ensuring that any incremental federal legislation includes explicit non-preemption clauses preserving state authority.

    Regardless of which scenario unfolds, the fundamental work of responsible AI governance remains the same. Organizations that have invested in understanding their AI systems, documenting their decision-making processes, and building internal oversight capabilities will be well-positioned to adapt to whatever regulatory framework emerges. The organizations that will struggle are those that have been waiting to see what happens before taking action. The regulatory landscape may be uncertain, but the need for responsible AI practices is not.

    Conclusion: The 2026 Midterms Will Define the Next Decade of AI Governance

    The 2026 midterm elections represent a turning point for AI policy in the United States. The more than $175 million in combined spending, the 3,570 federal lobbyists working AI issues, the targeted campaigns against AI safety champions, and the coordinated push for federal preemption all point to an industry that understands the stakes and is investing accordingly. For nonprofit advocates, the question is not whether to engage but how to engage most effectively given the resource asymmetry they face.

    The good news is that nonprofits bring assets to this fight that cannot be purchased: community trust, lived experience with AI's impacts, moral authority, and the ability to mobilize grassroots support. The organizations that will have the greatest impact in the months ahead are those that combine these natural advantages with strategic engagement, clear communication, and sustained coalition building. This means educating boards and communities, documenting AI experiences, strengthening state-level relationships, and helping voters understand what is actually at stake behind the generic campaign advertisements.

    Whatever happens in November, the work of building responsible AI governance does not stop. If state laws survive, they need to be implemented effectively. If federal preemption advances, it needs to be shaped to include the strongest possible protections. If the outcome is mixed, nonprofits need the flexibility to navigate complexity. In every scenario, the organizations that have invested in understanding AI policy, building internal governance capabilities, and engaging in the political process will be the ones best positioned to protect the communities they serve. The time to start that investment is now.

    Prepare Your Organization for What Comes Next

    Whether the 2026 midterms preserve or reshape the AI regulatory landscape, your organization needs a governance framework that can adapt. We help nonprofits build resilient AI strategies that protect their missions regardless of the political environment.