Generative AI's environmental footprint is vast and growing. Training GPT-4 produced an estimated 21,660 metric tons of CO₂ equivalent, and a single ChatGPT query uses roughly 10x the energy of a Google search. Global AI electricity consumption reached 415 TWh in 2024 and could exceed 1,000 TWh by 2030.

Water consumption is equally alarming: training GPT-3 evaporated roughly 700,000 litres of freshwater, and generating a 100-word email consumes about 519 ml of water.

Key mitigation strategies include model compression (quantisation, pruning, distillation), Mixture of Experts architectures, liquid and immersion cooling (up to 99% water reduction), renewable energy procurement with additionality, and task-appropriate model selection.

The DeepSeek paradox illustrates that training-efficiency gains do not guarantee lower inference costs, and the Jevons paradox remains a critical challenge: efficiency gains tend to be consumed by larger models and expanded use cases rather than reducing total consumption. Policy frameworks such as the EU AI Act and the Energy Efficiency Directive are beginning to mandate transparency.

Three 2030 scenarios range from 800 TWh (optimistic) to 1,200+ TWh (pessimistic), with outcomes hinging on policy and procurement decisions made in the next 2–3 years.
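To make one of the mitigation techniques concrete, here is a minimal sketch of post-training int8 quantisation, the compression approach mentioned above. It is a NumPy illustration of the core idea (symmetric per-tensor quantisation), not the API of any particular framework; the function names are hypothetical.

```python
import numpy as np

def quantise_int8(w: np.ndarray):
    # Symmetric per-tensor quantisation: map floats to int8
    # using a single scale factor derived from the max magnitude.
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantise(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float weights for use at inference time.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)  # stand-in weight tensor
q, scale = quantise_int8(w)

# int8 storage is 4x smaller than float32 for the same tensor,
# which cuts memory traffic and, with int8 kernels, inference energy.
print(w.nbytes // q.nbytes)  # 4
# Rounding error per element is bounded by half a quantisation step.
print(float(np.abs(w - dequantise(q, scale)).max()) <= scale / 2 + 1e-6)  # True
```

The energy saving comes not from the arithmetic above but from what it enables: smaller weights mean less DRAM traffic, and int8 matrix kernels draw substantially less power per operation than float32 ones.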