Addressing the Environmental Impact of AI Data Centers: Power, Heat, and Water Challenges
As artificial intelligence (AI) technologies advance, the infrastructure supporting them, particularly data centers, faces increasing scrutiny for its environmental footprint. The evolution of computing hardware, especially CPUs and GPUs, plays a pivotal role in this dynamic, influencing energy consumption, heat generation, and water usage. Understanding these factors is essential for developing sustainable solutions in the AI era.
Power Efficiency: CPUs vs. GPUs in Modern AI Workloads
Historically, central processing units (CPUs) have improved in efficiency by integrating more transistors without significantly increasing power consumption. This is partly due to specialized processing units that accelerate specific tasks, allowing the chip to return to low-power states more quickly. In contrast, graphics processing units (GPUs), which are fundamental to AI computations, have grown larger and more complex with each generation, resulting in substantial increases in power demand. Unlike CPUs, GPUs tend to activate all processing cores simultaneously, leading to higher energy consumption.
Recent data from the Lawrence Berkeley National Laboratory highlights this trend. Between 2014 and 2016, data center electricity usage remained steady at approximately 60 terawatt-hours (TWh). However, as GPUs became more prevalent, consumption surged to 76 TWh in 2018 and skyrocketed to 176 TWh by 2023. This represents a more than twofold increase in just five years, with data centers now accounting for nearly 4.4% of total U.S. electricity use, a figure projected to reach up to 12% by the early 2030s.
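A quick back-of-the-envelope check makes the scale of this shift concrete. The sketch below uses only the LBNL figures quoted above; the compound-growth calculation is a standard derivation, not a number from the report.

```python
# Sanity-check the LBNL electricity figures quoted above (in TWh).
usage_twh = {2016: 60, 2018: 76, 2023: 176}

# 2018 -> 2023: the "more than twofold increase in just five years"
growth = usage_twh[2023] / usage_twh[2018]
print(f"2018-2023 growth: {growth:.1f}x")  # ~2.3x

# Implied compound annual growth rate over those five years
cagr = (usage_twh[2023] / usage_twh[2018]) ** (1 / 5) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 18% per year
```

At that compound rate, consumption doubles in under five years, which is why projections for the early 2030s climb so steeply.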
Heat Generation and Cooling: The Hidden Costs of AI Processing
Electric current flowing through silicon chips encounters resistance, producing heat much like the filament in a traditional lightbulb. CPUs resemble modern LEDs in their relative efficiency, whereas GPUs are akin to incandescent bulbs, dissipating a significant portion of power as heat. AI data centers, often packed with racks of GPUs, generate enormous amounts of thermal energy that must be managed carefully to maintain performance and hardware longevity.
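Because nearly all electrical power drawn by a chip ends up as heat, a facility's thermal load can be estimated directly from its power draw. The figures below (per-GPU draw, rack density, facility size) are illustrative assumptions, not numbers from the article.

```python
# Nearly all electrical power drawn by a chip is dissipated as heat,
# so thermal load ~= electrical load. All figures are hypothetical.
gpu_power_w = 700      # assumed draw of one high-end training GPU
gpus_per_rack = 32     # assumed rack density
racks = 100            # assumed facility size

heat_load_kw = gpu_power_w * gpus_per_rack * racks / 1000
print(f"Thermal load from GPUs alone: {heat_load_kw:,.0f} kW")  # 2,240 kW
```

Every one of those kilowatts must be carried away by the cooling system, before even counting CPUs, storage, networking, and power-conversion losses.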
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommends maintaining data center temperatures between 18°C and 27°C (64.4°F to 80.6°F). Achieving this requires sophisticated cooling strategies and substantial energy input. One common method is hot and cold aisle containment, which directs cold air through server racks while channeling hot exhaust air away for cooling and recirculation.
Evaporative cooling techniques are widely used, especially in U.S. data centers. Direct evaporative cooling involves passing hot air through a moist medium to cool it via evaporation before reintroducing it into the server environment. Indirect evaporative cooling adds a heat exchanger to prevent humidity increases inside the facility. Despite their effectiveness, these methods contribute to a significant water footprint.
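The water cost of evaporative cooling follows from basic physics: each kilogram of water evaporated absorbs the latent heat of vaporization. The sketch below uses standard physical constants; the facility heat load is a hypothetical illustration, not a figure from the article.

```python
# Rough water cost of rejecting heat by evaporation.
# Constants are standard physics; the heat load is hypothetical.
LATENT_HEAT_MJ_PER_KG = 2.45   # latent heat of vaporization near ambient temp
KWH_TO_MJ = 3.6

litres_per_kwh = KWH_TO_MJ / LATENT_HEAT_MJ_PER_KG  # 1 kg of water ~ 1 litre
print(f"~{litres_per_kwh:.1f} L evaporated per kWh of heat rejected")  # ~1.5 L

# A hypothetical 10 MW facility rejecting heat around the clock:
daily_litres = 10_000 * 24 * litres_per_kwh
print(f"~{daily_litres:,.0f} L per day")
```

Real systems consume somewhat more than this lower bound, since a portion of the circulating water must also be drained and replaced ("blowdown") to control mineral buildup.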
Water Consumption: A Growing Concern for Data Centers
Water usage by data centers has escalated dramatically. In 2014, U.S. data centers consumed approximately 21.2 billion liters of water. By 2018, this figure tripled to 66 billion liters, largely driven by hyperscale facilities focused on AI workloads. In 2023, traditional data centers used about 10.56 billion liters, while AI-centric centers consumed roughly 55.4 billion liters. Projections suggest AI data centers could require up to 124 billion liters annually by 2028.
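The figures above can be cross-checked with simple arithmetic; the sketch below uses only the numbers quoted in this section.

```python
# Sanity-check the water-use figures quoted above (billions of litres).
water_bl = {
    "2014 total": 21.2,
    "2018 total": 66.0,
    "2023 traditional": 10.56,
    "2023 AI-centric": 55.4,
    "2028 AI projection": 124.0,
}

# 2014 -> 2018: roughly a threefold rise
print(water_bl["2018 total"] / water_bl["2014 total"])         # ~3.1

# 2023 combined use, and the AI-centric share of it
total_2023 = water_bl["2023 traditional"] + water_bl["2023 AI-centric"]
print(total_2023)                                              # ~66.0
print(water_bl["2023 AI-centric"] / total_2023)                # ~0.84
```

Notably, total 2023 consumption lands back near the 2018 figure, but with AI-centric facilities now accounting for roughly 84% of it, and the 2028 projection would nearly double that AI share again.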
Data centers rank among the top ten industrial water consumers in the U.S., with around 20% sourcing water from stressed watersheds, regions where demand exceeds natural supply. Most of this water evaporates during cooling, removing it from the local water cycle. Additionally, cooling water is treated with chemicals to prevent corrosion and microbial growth, producing wastewater that cannot be reused for drinking or agriculture.
Beyond direct cooling, a significant portion of data centers’ water footprint stems from electricity generation. Power plants use vast amounts of water for cooling and steam production to drive turbines. On average, every megawatt-hour of electricity consumed by U.S. data centers requires 7.1 cubic meters of water. This interdependence means data centers indirectly rely on water resources across the entire contiguous United States.
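Combining two figures cited in this article, 7.1 cubic meters of water per megawatt-hour and 176 TWh of 2023 consumption, gives a rough sense of the indirect footprint. This multiplication is my own illustration, not a figure from the article.

```python
# Indirect water footprint implied by two figures cited above:
# 7.1 m^3 of water per MWh, and 176 TWh of data-center use in 2023.
WATER_M3_PER_MWH = 7.1
CONSUMPTION_TWH = 176

mwh = CONSUMPTION_TWH * 1_000_000             # 1 TWh = 1,000,000 MWh
indirect_water_m3 = mwh * WATER_M3_PER_MWH
indirect_billion_litres = indirect_water_m3 / 1_000_000  # 1e6 m^3 = 1 billion L
print(f"~{indirect_billion_litres:,.0f} billion litres")  # ~1,250 billion litres
```

By this rough estimate, the water embedded in data centers' electricity dwarfs their direct cooling consumption by more than an order of magnitude.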
Innovative Cooling Solutions to Reduce Environmental Impact
To mitigate water consumption and improve cooling efficiency, many AI data centers are adopting closed-loop liquid cooling systems. These systems circulate coolant through heat exchangers attached to high-heat components like CPUs and GPUs, transferring heat away more effectively than traditional air cooling. Unlike evaporative methods, closed-loop systems minimize water loss, making them more sustainable.
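Sizing such a loop follows from the basic heat-transfer relation Q = ṁ·c·ΔT: the flow rate needed scales with the heat load and inversely with the allowed coolant temperature rise. All figures below are illustrative assumptions, not vendor specifications.

```python
# Coolant flow needed for a closed loop: Q = m_dot * c_p * delta_T.
# Heat load and temperature rise are hypothetical assumptions.
heat_load_w = 100_000   # assumed 100 kW of racks on one loop
c_p_water = 4186        # specific heat of water, J/(kg*K)
delta_t = 10            # assumed coolant temperature rise, K

m_dot = heat_load_w / (c_p_water * delta_t)   # kg/s
print(f"Required flow: {m_dot:.2f} kg/s (~{m_dot * 60:.0f} L/min for water)")
```

Because the loop is closed, this water circulates indefinitely; the heat it carries is ultimately rejected through a dry cooler or heat exchanger rather than by evaporating the coolant itself.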
Industry leaders such as Google, NVIDIA, and Microsoft are pioneering liquid cooling technologies. Microsoft, for example, is experimenting with microfluidic cooling, where coolant flows through microscopic channels etched directly into the chip's surface. Laboratory tests show this method can remove heat up to three times more efficiently than conventional cold plates, reducing the maximum temperature rise of the silicon inside a GPU by 65%.
Another promising approach is “free cooling,” which leverages natural environmental conditions. Air-based free cooling uses cold outdoor air in cooler climates, while water-based free cooling taps into cold water sources like seawater. Some data centers integrate rainwater harvesting to supplement water needs for humidification and other processes.
Geothermal energy is emerging as a viable power and cooling source. The Rhodium Group estimates geothermal could supply up to 64% of the projected increase in U.S. data center electricity demand by the early 2030s. Iron Mountain Data Centers operates a geothermal-cooled facility in Pennsylvania, utilizing an underground reservoir for year-round cooling. Meta has also partnered with Sage Geosystems to provide up to 150 megawatts of geothermal power for its data centers starting in 2027.
Beyond Infrastructure: Transparency and Efficiency in AI Operations
While technological innovations in cooling and power management are crucial, experts emphasize the importance of transparency regarding AI data centers’ environmental impacts. Vijay Gadepally, a senior scientist at MIT’s Lincoln Laboratory Supercomputing Center, advocates for AI companies to openly disclose their emissions and resource consumption to foster accountability.
Improving chip design to enhance performance per watt is another key strategy. Many data centers currently operate below capacity, leading to inefficient power use. Optimizing existing infrastructure before expanding new facilities can significantly reduce environmental strain.
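One reason under-utilization is so costly: servers draw a substantial baseline of power even when idle, so the energy spent per unit of useful work rises sharply at low load. The idle and peak figures below are illustrative assumptions under a simple linear power model.

```python
# Why under-utilized servers waste energy: a large idle baseline makes
# energy-per-unit-of-work much worse at low load. Figures are
# illustrative assumptions under a linear power model.
idle_w, peak_w = 200, 500   # assumed idle and full-load draw

def energy_per_work(utilization: float) -> float:
    """Relative energy cost per unit of work, normalized to full load."""
    power = idle_w + (peak_w - idle_w) * utilization
    return power / (peak_w * utilization)

print(f"{energy_per_work(1.0):.2f}x at 100% load")  # 1.00x
print(f"{energy_per_work(0.2):.2f}x at 20% load")   # 2.60x
```

Under these assumptions, a server at 20% utilization burns over two and a half times the energy per unit of work, which is why consolidating load onto fewer, busier machines can cut consumption before any new hardware is built.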
Moreover, AI models themselves often consume excessive resources relative to their tasks. Gadepally likens this to "cutting a hamburger with a chainsaw": effective, but unnecessarily wasteful. Developing more tailored, efficient AI models can help curb the growing energy and water demands of AI computing.