The Carbon Footprint of Your AI Tools: An Updated Environmental Assessment for 2026
As nonprofits deepen their reliance on AI platforms, the environmental cost of those tools deserves honest scrutiny. This guide examines what the latest data shows, how different tools compare, and what responsible AI use looks like for mission-driven organizations in 2026.

When your fundraising team uses an AI platform to draft donor appeals, or your program staff runs queries through a large language model to analyze survey data, those interactions carry an energy cost. Data centers consume electricity. That electricity, in many regions, still comes partly from fossil fuels. And the scale at which AI is now deployed means the aggregate impact is no longer trivial. For nonprofits that care about sustainability, this creates a genuine tension: the tools helping organizations operate more efficiently may themselves carry an environmental burden.
The picture in 2026 is more nuanced than either AI alarmists or technology optimists typically acknowledge. Some providers have made substantial investments in renewable energy and efficiency improvements. Others remain opaque about their environmental commitments. The type of AI task matters enormously: a simple text query consumes a fraction of the energy required to generate a short video. And the aggregate case for AI's climate impact may ultimately be positive, because the same technologies are accelerating clean energy research, optimizing logistics, and improving resource allocation in ways that reduce emissions far beyond their own operational footprint.
None of that means nonprofits should ignore the question. Organizations with environmental or climate-related missions have a particular accountability to their donors and stakeholders when it comes to their technology choices. And even for nonprofits whose missions lie elsewhere, understanding the environmental implications of AI adoption is part of responsible stewardship. This article provides an updated, evidence-based assessment of AI's carbon footprint, what the leading providers are doing to address it, and how nonprofits can make more informed decisions about the tools they choose and how they use them.
The goal here is not to induce guilt over every AI query or to suggest that nonprofits should abandon helpful technology for environmental reasons. It is to provide the honest framework that responsible organizations need to make intentional choices, evaluate vendor claims, and engage with this dimension of AI governance thoughtfully.
How Much Energy Does AI Actually Use?
Understanding AI's energy footprint requires distinguishing between two very different types of consumption: training and inference. Training is the one-time process of building a model, during which a massive amount of computation is performed to adjust billions of parameters. Inference is what happens every time you send a query to that model, the ongoing operational cost that scales with usage.
Training large frontier models is extraordinarily energy-intensive. Estimates for training GPT-3 put the energy consumption at roughly 1,287 megawatt-hours, generating around 552 tonnes of CO2. Estimates for GPT-4 suggest consumption in the range of 50,000 megawatt-hours, a figure equivalent to powering a small city for several days. These are significant numbers, but they represent a fixed, one-time cost that is then amortized across billions of queries over the model's deployment lifetime.
Inference, by contrast, is the running cost of AI at scale, and it now accounts for over 80% of total AI electricity consumption. As more people and organizations use AI tools daily, this inference load is growing rapidly. A December 2025 study estimated that AI systems could produce between 32 and 80 million tonnes of CO2-equivalent in 2025 alone, placing the industry's footprint in the same range as some mid-sized countries. The International Energy Agency projects that global data center electricity consumption could nearly double to around 945 terawatt-hours annually by 2030, with AI-focused computing representing nearly half the growth.
For nonprofits trying to contextualize their own usage, the most practical unit is energy per query. A standard text query to a frontier model like GPT-4o or Google Gemini consumes roughly 0.24 to 0.34 watt-hours of electricity. A traditional Google search consumes around 0.3 watt-hours. So at the individual query level, the footprint is in the same general range as familiar digital activities. The difference is that AI usage is growing at a rate that far outpaces the efficiency gains being made, and the aggregate infrastructure buildout required to support that growth is where the real environmental stakes lie.
| Task | Energy per output | Context |
| --- | --- | --- |
| Text query | ~0.3 Wh | Per text query to a frontier LLM, similar to a traditional web search |
| Image generation | ~1 Wh | Per AI-generated image, roughly 3x a text query depending on resolution |
| Video generation | ~1,000 Wh | For a 5-second AI-generated video clip, roughly 3,000x a text query |
Not All AI Tasks Are Created Equal
One of the most practically important insights from current research is the enormous variation in energy consumption across different types of AI tasks. The comparison table above makes the point dramatically: generating a short video clip consumes roughly the same energy as 3,000 text queries. This means the mix of AI tasks your organization performs matters far more to your overall footprint than raw usage frequency alone.
For most nonprofits, the dominant AI use cases involve text: drafting communications, summarizing documents, answering staff questions, analyzing feedback, and generating reports. These tasks are in the low-impact category and should not be a source of significant environmental concern at the scale typical of a nonprofit organization. An organization where five staff members each conduct 20 text queries per day would consume roughly 30 watt-hours, less than a single household light bulb running for an hour.
The calculus changes when organizations move into AI video generation, which is increasingly marketed to nonprofits as a tool for donor storytelling and impact communications. Generating fundraising video content through AI tools like Sora, Veo, or Runway carries a footprint that is orders of magnitude higher than text work. A nonprofit producing even a handful of AI-generated video clips per week would accumulate a carbon cost that dwarfs all other AI activities combined. This does not mean AI video is off-limits, but it does mean the decision to adopt it should include an honest environmental accounting alongside the mission and cost considerations.
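The arithmetic in the last two paragraphs generalizes into a quick back-of-envelope estimator. Here is a minimal sketch in Python, using the per-task figures quoted in this article (~0.3 Wh per text query, ~1 Wh per image, ~1,000 Wh per short video clip); these are rough estimates, not measurements:

```python
# Back-of-envelope AI energy estimator using the per-task figures
# cited in this article (estimates, not measurements).
WH_PER_TASK = {
    "text_query": 0.3,     # ~0.24-0.34 Wh for a frontier-model text query
    "image": 1.0,          # ~1 Wh per generated image
    "video_clip": 1000.0,  # ~1,000 Wh per 5-second generated clip
}

def daily_energy_wh(task_counts: dict[str, int]) -> float:
    """Total daily energy in watt-hours for a mix of AI tasks."""
    return sum(WH_PER_TASK[task] * n for task, n in task_counts.items())

# Five staff members each running 20 text queries per day:
text_only = daily_energy_wh({"text_query": 5 * 20})
print(f"Text-only office: {text_only:.0f} Wh/day")       # 30 Wh/day

# The same office adding just two AI video clips per day:
with_video = daily_energy_wh({"text_query": 100, "video_clip": 2})
print(f"With two video clips: {with_video:.0f} Wh/day")  # 2030 Wh/day
```

The second result illustrates the point above: two short video clips multiply the office's daily AI energy use by a factor of nearly 70, dwarfing all text work combined.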
Model size is the other major variable. Frontier models like Claude 3 Opus, which consumes an estimated 4 watt-hours per query, are significantly more energy-intensive than smaller, task-optimized alternatives. For routine nonprofit tasks, the performance difference between a frontier model and a smaller, purpose-built model is often negligible, while the energy difference can be substantial. Researchers have found that choosing a task-appropriate smaller model over a frontier model for equivalent work can reduce energy consumption by 80 to 90 percent. This is one of the highest-leverage choices available to nonprofits trying to reduce their AI footprint without sacrificing capability.
What the Major Providers Are Doing About It
Provider commitments to clean energy vary significantly in ambition, transparency, and verifiability. Here is what the leading platforms have committed to and where questions remain.
Google (Gemini)
Most transparent renewable commitments
Google has pursued 24/7 Carbon-Free Energy matching, a more rigorous standard than annual renewable offsets. The median carbon footprint of a Gemini text query fell roughly 44-fold over the twelve months ending August 2025, largely due to this renewable sourcing. Google quietly removed explicit net-zero goals from its website in mid-2025, though the company states its commitment to net-zero by 2030 remains intact.
- Hourly renewable energy matching (not just annual offsets)
- Regular public reporting on AI inference energy and emissions
- Removed explicit net-zero language from public-facing commitments
Microsoft (Copilot / Azure OpenAI)
Ambitious targets with strong water focus
Microsoft has set goals to be carbon negative, water positive, and zero waste by 2030. The company is piloting zero water evaporation data centers opening in 2026, each saving over 125 million liters of water per year. Microsoft is also investing in advanced nuclear and fusion energy as future clean power sources.
- Carbon negative and water positive targets by 2030
- 39% improvement in water usage effectiveness since 2021
- Massive data center expansion underway to support AI demand
OpenAI (ChatGPT)
Rapid growth, limited public transparency
OpenAI runs its infrastructure primarily through Microsoft Azure, meaning its environmental footprint is largely tied to Microsoft's commitments. GPT-4o consumes roughly 0.30 to 0.34 watt-hours per text query. OpenAI does not publish separate environmental reports or make standalone energy sourcing commitments, making independent assessment difficult.
- Runs on Azure infrastructure with Microsoft's sustainability backing
- No standalone environmental reporting or transparency
- Rapid model release cadence increases aggregate infrastructure load
Anthropic (Claude)
High capability, least public transparency
Anthropic publishes less public information about its energy sourcing and environmental commitments than Google or Microsoft. Claude 3 Opus is one of the most energy-intensive publicly available models at approximately 4 watt-hours per query. Newer, smaller Claude models consume significantly less. Anthropic's infrastructure runs across multiple cloud providers.
- Smaller Claude models offer lower-energy alternatives for routine tasks
- Limited public environmental reporting compared to peers
- Claude 3 Opus has among the highest per-query energy costs of major models
The Net Positive Case: When AI Helps the Climate
The environmental conversation around AI is not simply about its direct carbon footprint. There is a compelling and well-documented case that AI's net effect on climate could be substantially positive, because of what the technology enables in other domains. A study published by the LSE Grantham Institute found that AI applications in power systems, transportation, food production, and industrial processes could reduce global emissions by 3.2 to 5.4 billion tonnes of CO2-equivalent annually by 2035. That figure would dwarf the technology's operational footprint many times over.
The mechanism is straightforward: AI dramatically accelerates optimization in systems that generate the majority of global emissions. In the electricity sector, AI is being used to improve the integration of intermittent renewable energy, predict demand with greater precision, and reduce curtailment of wind and solar generation. In transportation, route optimization, traffic management, and fleet efficiency improvements reduce fuel consumption and emissions. In agriculture, precision AI applications reduce fertilizer use, water consumption, and food waste while maintaining yields. These are not hypothetical future applications; they are deployed today at scale by environmental organizations, utilities, and logistics companies.
For nonprofits working in environmental spaces, this net-positive framing provides an important anchor. The question is not simply "does using AI emit carbon?" but "does using AI help our organization achieve more environmental impact per dollar spent?" If your conservation nonprofit uses AI to improve grant writing efficiency, analyze satellite imagery for land cover change, or optimize your field team routing, the mission-multiplying effect likely exceeds the operational carbon cost by a wide margin.
That said, the net positive case should not become a blanket excuse for unlimited and unexamined AI consumption. The honest version of this argument requires that the AI use actually be mission-advancing, not merely convenient. Using frontier models for tasks that a simpler tool would handle equally well, adopting AI video generation for content that does not meaningfully improve mission outcomes, or using AI in ways that do not genuinely save time or improve quality are cases where the environmental cost is not offset by proportional benefit. Intentionality is the key variable.
What Your Nonprofit Can Do: A Practical Framework
Responsible AI use from an environmental perspective does not require abandoning helpful technology. It requires making intentional choices across three dimensions: tool selection, usage patterns, and organizational governance.
1. Choose Providers with Verifiable Clean Energy Commitments
The most impactful single decision is which provider you use
The electricity grid powering your AI queries matters more than your individual query count. A provider running on renewable energy can deliver the same output with a fraction of the carbon emissions of one relying on fossil fuels. When evaluating AI vendors, ask directly about their energy sourcing, published sustainability reports, and whether they use annual renewable energy certificates (a weaker standard) or hourly carbon-free energy matching (the more rigorous approach). Google currently offers the most transparent and aggressive renewable energy commitments among major AI providers.
- Ask vendors for their most recent sustainability or ESG report
- Prefer providers using hourly carbon-free energy matching over annual offsets
- Include sustainability criteria in your AI vendor selection rubric
- For cloud compute, choose data center regions with cleaner electricity grids
2. Match Model Size to Task Requirements
Smaller models are often sufficient and dramatically more energy-efficient
Not every task requires a frontier model. Using a task-appropriate smaller model instead of a large frontier model for equivalent work can reduce energy consumption by 80 to 90 percent. For routine nonprofit tasks like drafting templated communications, summarizing meeting notes, categorizing responses, or answering frequently asked questions, smaller and more efficient models typically produce indistinguishable results. Reserve frontier model usage for genuinely complex reasoning, nuanced analysis, and tasks where the quality difference is meaningful.
- Use smaller models (Haiku, mini, flash variants) for routine text tasks
- Reserve large frontier models for complex analysis where quality matters
- Consider open-source small models (Llama, Mistral, Phi) for high-volume tasks
- Train staff to use the lightest tool that meets the need
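The "lightest tool that meets the need" guidance can be encoded as a simple routing rule. Here is a minimal sketch of a hypothetical two-tier setup; the model names, task categories, and per-query energy figures below are illustrative assumptions, not measurements or product recommendations:

```python
# Hypothetical model router: send routine tasks to a small, low-energy
# model tier and reserve the frontier tier for complex work.
ROUTINE_TASKS = {
    "draft_template_email",
    "summarize_meeting_notes",
    "categorize_survey_response",
    "answer_faq",
}

# Illustrative (tier name, estimated Wh per query); real figures vary widely.
SMALL_MODEL = ("small-model", 0.05)
FRONTIER_MODEL = ("frontier-model", 0.3)

def route(task_type: str) -> tuple[str, float]:
    """Pick the lightest model tier that fits the task."""
    return SMALL_MODEL if task_type in ROUTINE_TASKS else FRONTIER_MODEL

print(route("summarize_meeting_notes"))         # routed to the small tier
print(route("multi_document_grant_analysis"))   # routed to the frontier tier
```

Even a crude rule like this, applied consistently across an organization, captures most of the 80 to 90 percent savings described above, because routine text tasks dominate typical nonprofit usage.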
3. Apply Special Scrutiny to High-Impact Modalities
AI video generation carries a footprint orders of magnitude larger than text
Given that video generation consumes roughly 3,000 times more energy per output than a text query, this modality deserves specific attention in any organizational AI policy. This does not mean prohibiting AI video; it means making those decisions intentionally. Where AI video genuinely improves donor engagement, program communication, or reach in ways that justify the cost, it may well be worth using. Where the video would primarily serve convenience rather than mission impact, the environmental calculus weighs more heavily against it.
- Include video generation in your AI policy with explicit approval criteria
- Evaluate whether AI video is genuinely improving outcomes or just adding novelty
- For environmental mission organizations, document the rationale for AI video use
- Consider lower-impact alternatives: AI-assisted editing of existing footage is greener than generation
4. Build Environmental Criteria into Your AI Governance
Policy and governance are how organizations create accountability
Individual staff choices about AI usage are difficult to govern at scale. The more effective lever is organizational policy that embeds environmental considerations into how AI is adopted, evaluated, and used. This means including sustainability questions in vendor evaluation rubrics, training staff on the environmental differences between task types and model sizes, and, for environmentally focused organizations, reporting on AI's environmental footprint in the same way you might report on office energy use or travel.
- Add environmental criteria to your AI ethics checklist
- Include sustainability in your organizational AI policy
- Train staff to distinguish high-impact from low-impact AI tasks
- For environmental nonprofits, consider annual reporting on AI footprint
Tools for Estimating Your AI Carbon Footprint
If your organization wants to go beyond qualitative assessment and estimate its actual AI-related carbon footprint, several tools are available. It is important to understand what these tools can and cannot do. Most estimate inference energy only, excluding the training costs and the energy consumed by users' devices. Actual emissions depend heavily on the electricity grid serving the data center, which varies by region and hour of day. All calculators produce estimates, not precise measurements.
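The grid-dependence point can be made concrete with a small conversion: grams of CO2 equal energy in kilowatt-hours times the grid's carbon intensity. A minimal sketch, where the two grid intensity values are hypothetical placeholders chosen for illustration, not regional data:

```python
def query_emissions_g(energy_wh: float, grid_g_co2_per_kwh: float) -> float:
    """Grams of CO2 for one query, given grid carbon intensity (gCO2/kWh)."""
    return (energy_wh / 1000.0) * grid_g_co2_per_kwh

# Illustrative grid intensities in gCO2 per kWh (placeholders, not data):
CLEAN_GRID = 50.0     # e.g. a hydro- or nuclear-heavy region
AVERAGE_GRID = 400.0  # closer to a fossil-heavy mix

per_query_wh = 0.3  # the per-text-query figure used in this article
print(query_emissions_g(per_query_wh, CLEAN_GRID))    # ~0.015 g
print(query_emissions_g(per_query_wh, AVERAGE_GRID))  # ~0.12 g
```

The same query emits roughly eight times more carbon on the dirtier grid, which is why provider and region choice can matter more than query counts.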
For nonprofits building AI applications or using cloud compute directly, CodeCarbon (codecarbon.io) provides a developer-oriented library for tracking energy consumption of AI code in real time. ML CO2 Impact (mlco2.github.io/impact) allows you to estimate training emissions for custom models. For organizations assessing the footprint of their overall AI tool portfolio, Climatiq offers a GHG Protocol-compliant carbon intelligence platform, and Greenly Earth provides quick estimates for AI usage.
For most nonprofits, however, the most meaningful step is not running a carbon calculator but rather making better structural choices: choosing providers with genuine renewable energy commitments, using smaller models for routine tasks, and applying particular care to high-impact modalities like video generation. The data available in 2026 is sufficient to make informed directional decisions even if precise measurement remains challenging.
Organizations that want to communicate transparently with donors about AI's environmental implications can reference their vendor sustainability reports, their policy choices around model selection, and their broader organizational sustainability commitments. This kind of honest accounting builds trust, particularly with donors who care about environmental stewardship, far more effectively than either ignoring the issue or overstating the problem.
Special Considerations for Environmental Mission Organizations
Nonprofits with environmental or climate-related missions face a particular version of this challenge. Using AI tools with significant carbon footprints while advocating for emissions reductions creates a potential credibility gap, one that donors and stakeholders may increasingly notice and question. At the same time, AI represents some of the most powerful tools available to environmental organizations for land monitoring, species identification, climate modeling, and grant research.
The most defensible position for environmental nonprofits is not to avoid AI but to use it strategically and transparently. This means choosing providers with the strongest and most verifiable renewable energy commitments, preferring tools that are demonstrably mission-advancing over those that are merely convenient, and being willing to communicate openly with donors about how you have evaluated and addressed AI's environmental implications.
It also means taking advantage of AI's potential to amplify environmental mission. Tools like SpeciesNet for wildlife monitoring, climate modeling platforms, satellite imagery analysis, and research tools for tracking deforestation or pollution can dramatically extend an organization's monitoring and advocacy capacity. As we noted earlier, research suggests that well-deployed AI in environmental applications could reduce global emissions by billions of tonnes annually, an impact that would dwarf any plausible operational footprint from nonprofit AI use.
Consider including a brief note in your annual report or donor communications about how your organization has evaluated and addressed AI's environmental footprint. For environmental organizations, this kind of transparency is not just a nice-to-have; it is increasingly an expectation from sophisticated donors and foundation funders who are thinking carefully about the technology practices of the organizations they support.
Conclusion: Intentional AI Use Is Sustainable AI Use
The carbon footprint of AI tools is a real and legitimate concern, but it is also a manageable one for nonprofits that approach it thoughtfully. The most important insight from the 2026 data is that not all AI use is created equal. Text queries have a modest footprint, comparable to familiar digital activities. Video generation is orders of magnitude more impactful and deserves specific scrutiny. Provider choice matters enormously, with renewable energy sourcing making the difference between clean and fossil-fuel-powered AI infrastructure.
For most nonprofits, the appropriate response to AI's environmental footprint is not restriction but intentionality. Build environmental criteria into your vendor evaluation process. Train staff to match model size to task requirements. Apply particular care to high-impact modalities. For environmental mission organizations, go a step further by reporting transparently on AI's footprint and ensuring that your use of AI is genuinely advancing the mission it is supposed to serve.
The organizations that will handle this well are those that treat AI environmental accountability the same way they treat other forms of organizational accountability: not as a box to check, but as a genuine expression of their values. In a sector that asks donors to trust its integrity and commitment to mission, extending that integrity to how you use technology is both the right thing to do and, increasingly, what your stakeholders expect.
Use AI Responsibly and Effectively
One Hundred Nights helps nonprofits build AI strategies that are mission-aligned, cost-effective, and ethically grounded. Let's talk about how to get the most from AI while staying true to your values.
