What direct risks does AI pose to the climate and environment?

Artificial intelligence (AI) is increasingly being embedded into the daily functioning of sectors from healthcare to finance, agriculture and environmental management. While it has the potential to support climate action and biodiversity conservation, unmitigated growth in AI use poses significant ethical and direct climate and environmental risks. These direct risks primarily stem from the significant infrastructure needed to build and operate AI systems, including energy- and water-intensive data centres and critical mineral extraction. Importantly, the use of AI can also directly cause environmental harm.
Environmental impacts from AI infrastructure
Climate and air pollution impacts from AI’s energy consumption
Training AI models requires vast computational resources, with computing power usage increasing tenfold between 2018 and 2022. With fossil fuels still providing over 60% of total global electricity generation, the rising demand for AI computing risks increasing greenhouse gas emissions.
The process by which AI models analyse new data and produce outputs, commonly referred to as inference, is now responsible for about 80–90% of AI computing resources. Most users rely on commercial models for which energy consumption data remains undisclosed. This lack of transparency from major providers poses a significant challenge for accurately assessing AI’s environmental impacts, including its carbon footprint, as researchers cannot independently evaluate the energy usage of these proprietary models (as they can for open-source models). As AI usage expands rapidly across society – likely driving substantial increases in total energy consumption from inference – improving transparency will become increasingly critical for managing this footprint.
In 2024, data centres accounted for about 1.5% of the world’s electricity consumption. But demand is projected to double to 945 TWh by 2030, potentially exceeding Japan’s current total electricity consumption. The United States accounts for the largest share of global data centre electricity consumption (45%), followed by China (25%) and Europe (15%). In the US, data centres already represent 4.4% of electricity consumption. The need to finance the unprecedented expansion of power grid infrastructure could lead to public funding being diverted away from social and environmental projects and delay the retirement of existing fossil fuel-intensive infrastructure.
Data centres currently contribute about 1% of global energy-related greenhouse gas emissions and are among the fastest-growing sources of emissions. By 2035, increased data centre energy use could lead to an additional 0.4–1.6 gigatonnes of CO2 equivalent (GtCO2e) emissions. Some companies, such as xAI, are building new gas-powered turbines to power their data centres. This expansion of fossil fuel infrastructure is also causing local air pollution to increase. For instance, xAI’s data centre in Memphis, Tennessee emits an estimated 1,200 to 2,000 tons of nitrogen oxides (NOx) a year, making it one of the area’s largest emitters. Besides affecting human health, high concentrations of NOx are also harmful to natural ecosystems. NOx emissions are generally strictly regulated, but it appears some companies are sidestepping regulations to meet their electricity needs.
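The scale of these emissions figures follows from simple arithmetic on electricity demand and grid carbon intensity. The sketch below illustrates this for the 945 TWh demand projection cited above; the grid carbon intensities used are illustrative assumptions (not figures from this Explainer), chosen to show how strongly the outcome depends on how that demand is met.

```python
# Back-of-envelope: convert data centre electricity demand to CO2 emissions.
# The 945 TWh figure is the projected 2030 demand cited above; the grid
# carbon intensities below are illustrative assumptions (gCO2 per kWh),
# not figures from this Explainer.

def emissions_mt(energy_twh: float, intensity_g_per_kwh: float) -> float:
    """Annual emissions in megatonnes of CO2.

    1 TWh = 1e9 kWh and 1 Mt = 1e12 g, so MtCO2 = TWh * (g/kWh) / 1000.
    """
    return energy_twh * intensity_g_per_kwh / 1000

projected_demand_twh = 945  # projected global data centre demand, 2030

# Hypothetical grid mixes, from mostly renewable to gas/coal heavy:
scenarios = [
    ("low-carbon grid", 50),
    ("approx. global average grid", 480),
    ("fossil-heavy grid", 700),
]

for label, intensity in scenarios:
    mt = emissions_mt(projected_demand_twh, intensity)
    print(f"{label}: ~{mt:.0f} MtCO2 per year")
```

On these assumptions the same demand implies anywhere from roughly 50 to over 650 MtCO2 a year, which is why the choice of energy sources powering data centres, not the demand growth alone, determines where within (or below) ranges like the 0.4–1.6 GtCO2e estimate the outcome lands.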
Pressure on water resources from cooling data centres
Data centres can require enormous volumes of water for cooling, and tend to be highly concentrated in certain locations, threatening local water supplies. By 2027, global AI training and use are projected to account for 4.2–6.6 billion cubic metres of water withdrawal. This raises concerns around priorities for water use, especially in water-stressed areas and during times of drought, and places pressure on nature and people. For instance, in 2021, the Taiwanese government implemented water rationing measures that prioritised the semiconductor sector, which produces essential hardware inputs for AI data centres, over other sectors such as agriculture.
Impacts from mining critical minerals for AI infrastructure
The construction of data centres and AI technologies requires significant amounts of minerals and metals, including those essential for semiconductors and microelectronics (e.g. boron and silicon), data storage components (e.g. lithium, silicon and gallium) and power generation and storage (e.g. lithium and graphite). Mining and processing these minerals significantly affect the environment, through high energy demand and associated emissions, groundwater depletion, water and soil contamination, deforestation and soil erosion, in turn contributing to biodiversity loss and land degradation and harming human health. Furthermore, demand for these minerals is rising. As well as increasing these direct environmental impacts, the growing demand places AI infrastructure in direct competition with other critical societal and environmental needs, such as the expansion of renewable energy infrastructure needed to mitigate climate change.
Marine ecosystem risks from underwater data centres
Underwater data centres, such as Microsoft’s research project Natick, have gained attention for their potential to be more sustainable than land-based alternatives, as they could reduce cooling costs and associated electricity use, while improving hardware reliability. However, the heat generated by these submerged facilities can raise local water temperatures, further intensifying existing warming due to climate change. Elevated water temperatures reduce oxygen availability, threatening the healthy functioning of marine species. Additionally, warmer surface waters mix less effectively with deeper, nutrient-rich layers, disrupting nutrient cycles and potentially harming biodiversity and food webs.
Environmental risks from AI-driven applications
In addition to the environmental risks from the rapid build-out of AI infrastructure, serious environmental harm may also arise from the use of AI itself.
Risks from AI-powered biological design
AI-driven biodesign, which uses biological principles and living organisms to create products, systems and services, can operate as a ‘black box’, with limited traceability, accountability or biosafety oversight. This poses enormous risks, especially when new lifeforms are introduced into ecosystems, such as in the use of self-spreading viruses as insecticides. When used to intentionally cause harm, AI may also lower the barriers to large-scale biological attacks or be used for military purposes, with the potential to create lasting human and ecological damage.
Accelerating environmentally harmful industries
General-purpose AI technologies can boost efficiency and innovation in environmentally harmful industries. For instance, the oil and gas sector, an early adopter of AI, has used these technologies to optimise exploration, production and maintenance and to facilitate the discovery of new reserves. These innovations could decrease the cost of fossil fuels and incentivise greater consumption. Similarly, AI-powered autonomous vehicles could encourage private car use at the expense of more sustainable public transport options.
AI’s harm to ecosystems will depend on human policy choices
AI regulations largely focus on ethics and privacy (e.g. the EU’s AI Act), giving little explicit attention to environmental impacts. Regulatory oversight remains fragmented or inadequate in many locations, including the US, exemplified by cases where companies operate energy infrastructure to power data centres without environmental permits. This lack of effective oversight leaves environmental risks largely unchecked.
The environmental impacts of AI on communities, ecosystems, greenhouse gas emissions and biosecurity are not predetermined. Rather, they depend on choices about data centre locations, resource extraction for hardware, energy sources powering AI infrastructure, and oversight of AI application use. Ultimately, the future of AI is a political decision: the choice could be made to build a future in which AI benefits society without causing irreparable damage to the environment. Democratic deliberation and government regulation will be critical to promoting this choice.
This Explainer was written by Lea Reitmeier and Sylvan Lutz, and reviewed and edited by Georgina Kyriacou and Sarah King. The authors thank Roberta Pierfederici, Gustavo Pinilla, Franka Huhn, Elena Almeida and Laudine Goumet for their helpful comments on an earlier draft. Lea Reitmeier authored this commentary when employed as a Policy Fellow at the Grantham Research Institute.
Listen to the LSE iQ podcast episode: Is AI destroying our planet? and the recording of the LSE public lecture on: Harnessing AI: safeguarding high-integrity data for climate action