How datacenters use water, and why it is almost impossible to kick the habit

Feature: Since ChatGPT's launch in late 2022, the explosive growth of datacenters has brought to light the environmental impact of these power-hungry installations.

It’s not only power that we need to worry about. These facilities can consume enormous amounts of water.

Speaking at SC24 in Atlanta this fall, Austin Shelnutt of Texas-based Strategic Thermal Labs explained that datacenters in the US can consume between 300,000 and four million gallons of water a day to keep the compute within them cool.

In some regions, datacenters consume up to 25 percent of a municipality’s water supply. We’ll explain why in a moment.

This high level of water use has, understandably, led to concerns about water scarcity and desertification. These issues, already aggravated by climate change, have been further exacerbated by generative AI. The datacenters used to train these models require tens of thousands of GPUs, each capable of drawing up to 1,200 watts of power, nearly all of which ends up as heat.

Hyperscalers, cloud service providers, and model builders will deploy millions of GPUs over the next few years, requiring gigawatts of energy and driving water consumption even higher.

According to researchers at UC Riverside and the University of Texas at Arlington, by 2027 global AI demand will account for the withdrawal of 4.2 to 6.6 billion cubic meters of fresh water per year, roughly equivalent to half of the UK's annual water withdrawal.

Mitigating datacenter water consumption doesn’t mean simply switching to waterless cooling towers.

Datacenter water cycle

Datacenters use water in two ways; we'll focus on direct consumption first. This water is sourced locally, often from municipal water and wastewater treatment facilities.

The water is pumped to cooling towers where it evaporates and transfers heat into the air. Cooling towers are similar to swamp coolers, which you may have used in your home or apartment.

Datacenter operators are increasingly using evaporative cooling because it is effective at removing heat and doesn’t use a lot of electricity.

Shelnutt says that evaporating ten gallons of water per minute is enough to dissipate roughly 1.5 megawatts of heat.
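As a sanity check, the latent heat of vaporization of water lets you reproduce that figure. The sketch below assumes roughly 2,450 kJ/kg at ambient temperature; the constants are textbook values, not numbers from Shelnutt's talk.

```python
# Back-of-the-envelope check: heat removed by fully evaporating a
# given flow of water. Assumes ~2,450 kJ/kg latent heat near 25C.

GALLON_LITERS = 3.785         # liters per US gallon
LATENT_HEAT_KJ_PER_KG = 2450  # latent heat of vaporization at ambient temps

def evaporative_cooling_mw(gallons_per_minute: float) -> float:
    """Heat rejected (MW) when this water flow evaporates completely."""
    kg_per_second = gallons_per_minute * GALLON_LITERS / 60  # ~1 kg per liter
    kilowatts = kg_per_second * LATENT_HEAT_KJ_PER_KG        # kJ/s == kW
    return kilowatts / 1000

print(f"{evaporative_cooling_mw(10):.2f} MW")  # ~1.55 MW, close to the quoted 1.5
```

The agreement with the quoted figure suggests the talk's number assumes essentially all of the pumped water evaporates.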

When we say "consumption," it's not so much that the water is used up as that it's carried out of the local watershed by the wind. This can be problematic, as evaporative cooling works best in arid regions, where water scarcity is already a common problem. Researchers estimate that 70-80 percent of the water entering a cooling tower is consumed [PDF]; the remainder is used to flush away mineral deposits, similar to those found in humidifiers. This brine is recycled until it reaches a certain concentration, at which point it's flushed to a holding pond.

For this to work, the local wastewater treatment plant must be sized appropriately to handle the volume of brine generated in the area. When it isn't, things can get complicated, as they did for Microsoft's campus in Goodyear, Arizona.

Datacenters’ drinking habits are hard to kick

Evaporative coolers are cheaper to run than other technologies, which is one of the reasons datacenter operators gravitate toward them, Shelnutt said. He explained that the coefficient of performance (COP), a measure of the amount of heat removed per unit of power consumed, is about 1,230 for evaporative coolers, while dry coolers and chillers manage COPs of roughly 12 and 4, respectively.

An evaporatively-cooled datacenter is more energy-efficient than one that does not use water. This translates into lower operating costs.
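Those COP numbers translate directly into electricity per megawatt of heat rejected. A quick sketch, using only the figures quoted above:

```python
# Electricity needed to reject heat at a given COP
# (COP = heat removed per unit of power consumed).

def cooling_power_kw(heat_mw: float, cop: float) -> float:
    """Electric power (kW) required to reject `heat_mw` megawatts of heat."""
    return heat_mw * 1000 / cop

for name, cop in [("evaporative", 1230), ("dry cooler", 12), ("chiller", 4)]:
    print(f"{name:11s}: {cooling_power_kw(1.0, cop):6.1f} kW per MW of heat")
```

At COP 1,230 an evaporative tower needs less than a kilowatt to shed a megawatt of heat, while a chiller at COP 4 needs 250 kW, which is where the operating-cost gap comes from.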

The problem is that evaporative cooling isn't suitable for every location or climate. Where water is scarce or humidity is high, it may not be effective, and chillers, which work much like your home AC unit, are used instead. In cooler climates like the Nordics, datacenters can rely on dry coolers and free cooling to shed heat to the air without using any water.

Digital Realty CTO Chris Sharp told The Register that whether evaporative cooling makes sense depends on both location and climate. "You have to be good stewards of that resource just to ensure that you're utilizing it effectively," he said.

The colocation giant operates over 300 bit barns around the world and uses different designs depending on predicted capacity requirements and environmental conditions. Sharp says its standard datacenter design uses no water, relying instead on chillers to draw heat from the facility. In some locations, however, evaporative and dry coolers are used instead.

The majority of datacenter water is not consumed on site

Dry coolers and chillers are not without compromise. These technologies consume considerably more power, which can translate into higher indirect water usage.

The US Energy Information Administration states that the US gets 89 percent of its electricity from coal, natural gas, and nuclear plants. Many of these plants generate electricity using steam turbines, a process that consumes a large amount of water. Ironically, the same evaporative cooling behind datacenters' on-site thirst is also widely used by power plants to condense that steam back into water. And the amount of water used in energy generation far exceeds what modern datacenters consume on site.

According to a 2016 study [PDF] by Lawrence Berkeley National Lab, approximately 83 percent (or 1.3 billion gallons) of the water consumed by datacenters can be attributed to energy generation. In other words, reducing on-site water consumption at the expense of increased power draw could actually increase total water consumption.
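The trade-off can be sketched as a simple water budget. The per-kWh water intensity below is a placeholder, not a figure from the study; the point is only that a cooling design with zero direct water use can still carry a larger total footprint if it burns more electricity.

```python
# Toy direct-vs-indirect water budget. WATER_L_PER_KWH is an assumed
# placeholder for the water intensity of the local grid, not a figure
# from the Berkeley Lab study.

WATER_L_PER_KWH = 1.8  # assumed liters of water per kWh generated

def total_water_liters(direct_liters: float, energy_kwh: float) -> float:
    """Direct on-site use plus the water embedded in electricity."""
    return direct_liters + energy_kwh * WATER_L_PER_KWH

# Hypothetical hour of cooling: a thirsty-but-frugal evaporative tower
# versus a waterless-but-power-hungry chiller.
evaporative = total_water_liters(direct_liters=900, energy_kwh=50)
chiller = total_water_liters(direct_liters=0, energy_kwh=600)
print(evaporative, chiller)  # 990.0 vs 1080.0: the "dry" option uses more
```

With these made-up inputs, the waterless chiller ends up consuming more water overall than the evaporative tower, purely through its power draw.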

Shaolei Ren, an associate professor of electrical engineering and computer science at UC Riverside, told The Register that just because a datacenter uses more water on site doesn't necessarily mean the power plants feeding it do. Ren and his team study the environmental impact of datacenters on air quality and water consumption.

This, too, is highly dependent on location and grid mix. Datacenters in regions with plenty of solar, wind, or hydroelectric power will have lower indirect water use than those powered by fossil fuels.

How can we reduce datacenter water consumption?

Even though datacenters will, with few exceptions, always use some water, there are many ways to reduce both direct and indirect consumption.

One way to reduce water consumption is to match the flow rate of water to the facility's load. Another is to use free cooling whenever possible. Sharp says Digital Realty has cut water consumption by 15 percent using a combination of sensors and software automation to monitor pumps and filters.

"That equates to about 126 million gallons of avoided withdrawal from the system because we're just running it more efficiently," he said.

Datacenters are also being built in colder climates, where free air cooling is available most of the time. Many Nordic countries also have abundant hydroelectric power, so indirect water consumption matters less even when dry coolers or chillers are needed.

The heat generated by datacenters has also been put to work warming local offices, supporting district-heating grids, and even heating greenhouses for year-round production.

In places where free cooling and heat reuse aren't practical, switching AI clusters to direct-to-chip or immersion liquid cooling (which, by the way, runs in a closed loop that doesn't consume water) can make dry coolers viable. Dry coolers are more energy-intensive, but liquid cooling's power usage effectiveness (PUE) is much better.

For those unfamiliar, PUE measures how much of a datacenter's power goes to computing, storage, and networking equipment, the things that make money, versus facility overhead like cooling, which doesn't. The more efficient a facility, the closer its PUE is to 1.0.

This is possible because a sizable chunk of energy, up to 20 percent, is consumed by chassis fans in air-cooled AI systems, and because water is a far better conductor of heat than air. DLC, which Nvidia is already using for its top-specced Blackwell parts, could cut PUE from between 1.69 and 1.44 down to 1.1 or even lower.
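PUE itself is just a ratio of total facility power to IT power. A minimal sketch, with made-up overhead numbers chosen to land in the ranges mentioned above:

```python
# PUE = total facility power / IT equipment power.
# Overhead figures below are illustrative, not measured values.

def pue(it_kw: float, overhead_kw: float) -> float:
    """Power usage effectiveness for a facility."""
    return (it_kw + overhead_kw) / it_kw

# Hypothetical 1,000 kW IT load, air cooling vs direct liquid cooling:
air_cooled = pue(it_kw=1000, overhead_kw=690)     # -> 1.69
liquid_cooled = pue(it_kw=1000, overhead_kw=100)  # -> 1.10
print(air_cooled, liquid_cooled)
```

Every kilowatt of cooling overhead eliminated pulls the ratio closer to the ideal 1.0.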

As Shelnutt pointed out in his SC24 talk, this balancing act depends heavily on the power savings from DLC not simply being spent on additional computation.

Water-aware computing

While many of these approaches require infrastructure changes, another option is to change how workloads are distributed between datacenters. Ren explained that the idea is not too different from carbon-aware computing, where workloads are routed to different datacenters based on time of day and the carbon intensity of the grid. He notes that cloud providers and hyperscalers are best placed to pull this off, as they have a tighter grip on their infrastructure. "Colocation providers have more challenges due to limited control over the servers and workloads."

Similarly, this approach may not be suitable for latency-sensitive workloads such as AI inferencing, where proximity to users is essential for real-time processing of data. AI training workloads, however, don't suffer from these limitations. Imagine an AI training job that could run for weeks or even months in a datacenter in the polar region, where cooling is free.

Fine-tuning workloads, which adjust the behavior of an already-trained model, are far less computationally intensive. Depending on the size of the base model and dataset, a fine-tuning job may take only a few hours, so it could be scheduled for nighttime, when temperatures are lower and less water is lost to evaporation.
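A water-aware scheduler along these lines simply picks the site and time slot with the lowest water intensity for a deferrable job. The sites and liters-per-kWh numbers below are invented for illustration:

```python
# Toy water-aware scheduling: route a deferrable job (e.g. fine-tuning)
# to the slot with the lowest water intensity. All values are made up.

slots = [
    # (site, hour-of-day, assumed liters of water per kWh at that site/hour)
    ("arizona", 14, 9.0),  # hot afternoon: heavy evaporative losses
    ("arizona", 2, 4.0),   # cool night: less evaporation
    ("norway", 14, 0.3),   # free cooling plus a hydro-heavy grid
]

def best_slot(candidates):
    """Return the (site, hour, intensity) tuple with the least water use."""
    return min(candidates, key=lambda slot: slot[2])

print(best_slot(slots))  # -> ('norway', 14, 0.3)
```

A real scheduler would also have to weigh latency, data locality, and carbon intensity, but the core routing decision is this simple.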

Water, the new oil?

Shelnutt says that while datacenter water consumption is a concern, especially in drought-prone regions, the real issue is where that water comes from. He argues that datacenter operators need to invest in desalination facilities, water distribution networks, and on-premises wastewater treatment, along with storage for non-potable water.

Although the idea of desalinating water and then shipping it by pipeline or train may sound prohibitively expensive, consider that many hyperscalers are already spending hundreds of millions of dollars to secure on-site nuclear power over the next few years. Against that backdrop, water desalination may not seem so outlandish.

Shelnutt says that even desalinating water and shipping it from the coasts is still more efficient than using refrigerant-based or dry coolers. Ship it 1,000 miles by pipeline and the COP drops to 132; move it by train and it falls to 38. That's still far better than dry cooling, though less efficient than evaporating water from a municipal treatment plant on site. ®


