
Accommodating the Increasing Environmental Impact of AI Adoption

Live (Online)

A Talk by Eric Zie (Chief Research Officer, Charsfield Research and Advisory)

About this Talk

Artificial Intelligence (AI) is often heralded as a force for efficiency, innovation, and business transformation. From optimising operations to enhancing customer experiences, AI adoption continues to accelerate across industries. However, an often-overlooked consequence of this rapid expansion is AI’s growing environmental impact—an issue that executives must address as part of their deployment decisions and sustainability commitments.

The Sustainability Blind Spot: AI’s Unmeasured Carbon Cost

AI’s sustainability narrative has largely focused on its potential to enable carbon reductions—through smarter energy management, supply chain efficiencies, and predictive analytics. Yet, few organisations systematically measure the direct impact of AI itself, particularly the emissions linked to training and deploying large-scale models. The absence of reliable data means that organisations may be underestimating—or entirely missing—the carbon cost of their AI ambitions.

Take, for example, the AI-powered workloads increasingly embedded in everyday business functions. Training a single large language model (LLM) can consume substantial amounts of electricity. For instance, training GPT-3 consumed approximately 1,287 MWh of electricity, resulting in carbon emissions of 502 metric tonnes—equivalent to the annual emissions of 112 gasoline-powered cars (Columbia Climate School, 2023). Inferencing—running these models in production—continually draws energy, often from high-carbon grid sources.

Despite AI being a significant consumer of cloud resources, most enterprises lack the data to quantify its share of their total carbon footprint. This means sustainability strategies may be incomplete or misaligned, with AI’s energy consumption eroding decarbonisation gains made elsewhere in the organisation.

The Missing Metrics: Why Executives Struggle to Quantify AI’s Footprint

A key challenge for executives is the lack of standardised measurement for AI emissions. Unlike traditional IT infrastructure, where power consumption is more predictable, AI workloads fluctuate based on demand, compute intensity, and deployment models. Several data gaps hinder accurate assessment:

Cloud and Data Centre Transparency Issues
Many organisations rely on cloud providers for AI infrastructure, yet granular energy usage data is rarely disclosed at the workload level. Without visibility into the energy source mix (e.g., renewable vs. fossil fuel-based power), organisations struggle to factor AI into their carbon accounting frameworks.

AI Model Efficiency vs. Carbon Intensity
While model optimisation techniques (e.g., quantisation, pruning) can reduce power consumption, there’s often no direct correlation between AI efficiency and sustainability impact. A highly optimised model may still consume excessive energy if it’s repeatedly run on high-carbon grids or provisioned inefficiently.

Scope 3 Emissions from AI Supply Chains
The embodied carbon in AI hardware (GPUs, TPUs, and other accelerators) is rarely considered in IT sustainability reporting. From raw material extraction to manufacturing and transport, the infrastructure supporting AI has a significant carbon footprint that remains largely unmeasured and unaccounted for.

AI’s Carbon Reality: Consuming Decarbonisation Gains

A recent industry report further revealed that nearly half (48%) of executives believe their use of generative AI has led to increased emissions, with 42% needing to reassess their climate goals (The Engineer, 2024).
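The arithmetic behind figures like the GPT-3 estimate above is simple: operational emissions are roughly the energy a workload draws multiplied by the carbon intensity of the grid supplying it. The short Python sketch below illustrates that calculation; the function name is hypothetical, and the intensity factor is back-calculated from the quoted 1,287 MWh and 502 tonnes rather than taken from any measured grid data.

```python
# Illustrative estimate of operational emissions for an AI workload.
# Formula: emissions (kg CO2e) = energy used (kWh) x grid carbon intensity (kg CO2e/kWh).
# The GPT-3 figures are the ones quoted in this abstract (Columbia Climate School, 2023);
# the intensity value is implied by those figures, not a measured number.

def operational_emissions_kg(energy_kwh: float, grid_intensity_kg_per_kwh: float) -> float:
    """Return estimated operational CO2e in kilograms for a given energy draw."""
    return energy_kwh * grid_intensity_kg_per_kwh

if __name__ == "__main__":
    training_energy_kwh = 1_287_000                   # ~1,287 MWh quoted for training GPT-3
    implied_intensity = 502_000 / 1_287_000           # ~0.39 kg CO2e per kWh, implied by the quoted 502 t

    emissions_kg = operational_emissions_kg(training_energy_kwh, implied_intensity)
    print(f"Estimated training emissions: {emissions_kg / 1000:.0f} tonnes CO2e")
    # -> roughly 502 tonnes, matching the figure cited above
```

In practice the intensity factor varies by region and by hour, which is precisely why workload-level visibility into energy use and grid mix matters for carbon accounting.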
This presents a stark challenge for sustainability-focused enterprises: how do you scale AI innovation without compromising environmental commitments? More importantly, how do you measure and manage AI’s sustainability impact when the data is incomplete?

For some organisations, the uncontrolled expansion of AI could consume a substantial portion of their carbon reduction efforts. Analysis of early-stage enterprise AI deployments shows that AI-related emissions can already account for up to 8% of total ICT-related decarbonisation benefits—meaning that gains made from renewable energy adoption, process efficiencies, and hardware lifecycle improvements are being eroded by AI’s energy consumption (GoCodeGreen, 2025).

A Smarter Approach: Bringing AI into Decarbonisation Strategy

Addressing AI’s carbon footprint requires a shift in mindset—AI should be treated as a measurable component of an organisation’s environmental impact, not an afterthought. Executives looking to align AI growth with net-zero goals should consider the following steps:

1. Integrate AI into Carbon Accounting Frameworks
AI workloads should be explicitly measured within Scope 2 emissions for cloud-based compute and Scope 3 emissions for supply chain impact. Organisations should push for greater transparency from cloud and AI solution providers, ensuring emissions reporting includes AI workloads separately from general compute.

2. Optimise AI Compute for Carbon Efficiency
AI workloads should be deployed based on grid carbon intensity, ensuring energy-intensive training runs occur during periods of lower grid emissions. Dynamic workload scheduling—adjusting when and where AI runs—can significantly reduce associated emissions without compromising performance (a minimal scheduling sketch follows at the end of this description).

3. Prioritise Energy-Efficient AI Design
AI engineering teams should adopt sustainability-aware model architectures, selecting smaller, more efficient models where possible. Model distillation, quantisation, and efficient algorithm design can reduce computation requirements without sacrificing accuracy.

4. Extend AI Lifecycle Assessment to Hardware
Organisations should factor in the full lifecycle emissions of AI hardware, considering circular economy approaches for GPUs, TPUs, and accelerators, and prioritising energy-efficient hardware in procurement policies.

5. Embed Environmental Impact as a Core AI KPI
AI teams should track environmental impact alongside accuracy and speed, ensuring emissions reduction is a core part of AI governance and decision-making.

From Awareness to Action: The Need for Executive-Led Change

AI is here to stay, but its unchecked expansion risks becoming a blind spot in enterprise sustainability strategies. Without accurate data and measurement, executives may be unintentionally undermining their own decarbonisation goals. To lead responsibly in the AI era, business leaders must integrate AI emissions tracking, optimise deployment strategies, and ensure that AI serves sustainability instead of working against it.

What should be the next step? Organisations must move beyond theoretical sustainability commitments and start measuring the real-world impact of AI. By bringing AI into the decarbonisation conversation today, enterprises can build a more sustainable, responsible AI future—one that balances innovation with planetary boundaries.
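To make step 2 above concrete, here is a minimal sketch of carbon-aware scheduling, assuming hourly grid-intensity forecasts are available for each candidate region. The regions, hours, and intensity figures are invented for illustration, and Window and pick_greenest_window are hypothetical helpers rather than part of any particular scheduler or cloud provider API.

```python
# A minimal sketch of carbon-aware workload scheduling, assuming forecast grid
# carbon intensities are available per region and time window. All values below
# are made up for illustration; real forecasts would come from a grid-data
# provider or the cloud vendor.

from dataclasses import dataclass

@dataclass
class Window:
    region: str
    start_hour: int               # hour of day the window begins
    intensity_g_per_kwh: float    # forecast grid intensity, g CO2e per kWh

def pick_greenest_window(windows: list[Window], energy_kwh: float) -> tuple[Window, float]:
    """Pick the lowest-intensity window and estimate the job's emissions if run there."""
    best = min(windows, key=lambda w: w.intensity_g_per_kwh)
    emissions_kg = energy_kwh * best.intensity_g_per_kwh / 1000
    return best, emissions_kg

if __name__ == "__main__":
    forecasts = [
        Window("eu-north", 2, 45.0),   # hypothetical overnight window on a low-carbon grid
        Window("eu-west", 14, 210.0),
        Window("us-east", 9, 380.0),
    ]
    window, kg = pick_greenest_window(forecasts, energy_kwh=5_000)  # e.g. a 5 MWh fine-tuning run
    worst_kg = 5_000 * max(f.intensity_g_per_kwh for f in forecasts) / 1000
    print(f"Schedule in {window.region} from {window.start_hour}:00 "
          f"-> ~{kg:.0f} kg CO2e instead of up to ~{worst_kg:.0f} kg")
```

The same comparison generalises to inference: routing requests towards lower-carbon regions or off-peak windows reduces emissions without changing the model or compromising performance.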

20 October 2025, 10:00 AM

10:00 AM - 10:30 AM


About The Speaker

Eric Zie

Chief Research Officer, Charsfield Research and Advisory