> Liquid cooling is a familiar concept to PC enthusiasts, and has a long history in enterprise compute as well.
And for a while the trend in data centers was to move toward more passive cooling at the individual servers and hotter operating temperatures. This is interesting because it reverses that trend quite a bit, possibly because of the per-row cooling.
Google wants you to know it recycles its water. It's free points.
Edit: to clarify, normal social media is being flooded with stories about AI energy and water usage. Google isn't greenwashing, they're simply showing how things work and getting good press for something they already do.
That style of job worked well, but as Google has realized it has more high-performance computing with unique, mission-critical workload characteristics (https://cloud.google.com/blog/topics/systems/the-fifth-epoch...), its infrastructure has had to undergo a lot of evolution to adapt.
Google PR has always been full of "look, we discovered something important and new and everybody should do it," often for things that were effectively solved using that approach a long time ago. MapReduce is a great example of that: Google certainly didn't invent the concepts of map or reduce, or even the idea of using them for high-throughput computing (and the shuffle phase of MapReduce is more "interesting" from a high-performance-computing perspective than mapping or reducing anyway).
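For anyone who hasn't seen the pattern written out, here's a minimal single-process Python sketch of the classic word-count example (purely illustrative, not Google's code); the shuffle step, which groups mapped values by key across machines, is where the distributed-systems difficulty actually lives:

```python
from collections import defaultdict

# Minimal single-process sketch of the MapReduce pattern (word count).
# Purely illustrative: in the real system, map tasks, the shuffle, and
# reduce tasks run across many machines, and the shuffle is the hard part.

def map_phase(documents):
    """Emit (key, value) pairs from each input record."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle_phase(pairs):
    """Group values by key -- the distributed sort/transfer step."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Combine the values for each key into the final result."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["the quick brown fox", "the lazy dog"]
print(reduce_phase(shuffle_phase(map_phase(docs))))
# {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```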
https://blog.codinghorror.com/building-a-computer-the-google...
Last year, U.S. data centers consumed 17 billion gallons of water. That sounds like a lot, but the US as a whole uses 300 billion gallons of water every day. Water is not a scarce resource in much of the country.
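Taking both figures at face value, the back-of-the-envelope math (a quick sketch, not an official statistic):

```python
# Back-of-the-envelope comparison using the two figures quoted above
# (taken at face value, not independently verified).
dc_gallons_per_year = 17e9      # US data centers, per year
us_gallons_per_day = 300e9      # US total, per day

share = dc_gallons_per_year / (us_gallons_per_day * 365)
print(f"{share:.4%}")           # ~0.0155% of annual US water use
```

Put differently, by these numbers US data centers' annual water consumption is roughly what the country as a whole goes through in a bit under an hour and a half.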
Liquid cooling at Google scale is different from mainframes as well. Mainframes needed to move heat from the core out to the edges of the server, where traditional data center cooling would transfer it away to be conditioned. Google's liquid cooling moves the heat completely outside of the building while it's still in the liquid. That's never been done before as far as I am aware, not at this scale at least.
I posted this further down in a reply-to-a-reply, but I should call it out a little closer to the top: the innovation here is not "we are using water for cooling". The innovation is that they are directly cooling the servers with chillers that are outside of the facility. Most mainframes use water cooling to get the heat from the core out to the edges, where it can be picked up by traditional heatsinks and cooling fans. Even home PCs do this by moving the heat to a reservoir that can be more effectively cooled.
What Google is doing is using the huge chillers that would normally be cooling the air in the facility to cool water that is pumped directly into every server. The return water is then cooled in the chiller tower. This eliminates ANY air-based transfer besides the chiller tower. And this isn't being done one server or one rack at a time; it's being done across the whole data center at once.
I am super curious how they handle things like chiller maintenance or pump failures. I am sure they have redundancy but the system for that has to be super impressive because it can’t be offline long before you experience hardware failure!
[Edit: It was pointed out in another comment that AWS is doing this as well and honestly their pictures make it way clearer what is happening: https://www.aboutamazon.com/news/aws/aws-liquid-cooling-data...]
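To get a feel for why carrying the heat in liquid all the way out of the building is attractive, here's a hypothetical back-of-the-envelope sketch of the basic heat-balance arithmetic; the rack power and temperature rise below are invented illustrative numbers, not anything Google has published:

```python
# Hypothetical heat-balance sketch: roughly how much water a liquid-cooled
# rack needs to carry its heat away, for a given power and temperature rise.
# Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT)

SPECIFIC_HEAT_WATER = 4186.0   # J/(kg*K)

def required_flow_lpm(rack_power_w: float, delta_t_k: float) -> float:
    """Litres/minute of water to absorb rack_power_w with a delta_t_k rise."""
    kg_per_s = rack_power_w / (SPECIFIC_HEAT_WATER * delta_t_k)
    return kg_per_s * 60.0     # ~1 kg of water per litre

# Invented example: a 100 kW rack with a 10 K rise in coolant temperature.
print(f"{required_flow_lpm(100_000, 10):.0f} L/min")   # ~143 L/min
```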
There are also all the fun experiments with dunking the whole server into oil, but I'll grant that, again, I've only seen setups described with secondary cooling loops, probably because of corrosion and wanting to avoid contaminants.
Yes. A supply and return line along with power. Though if I had to guess how it's set up, this would be done with some super-slick "it just works" kind of mount that lets them just slide the case in and lock it in place. When I was there, almost all hardware replacement was made downright trivial, so it could just be more or less slide in place and walk away.
We have been doing this for decades; it's how refrigerants work.
The part that is new is not having an air-interface in the middle of the cycle.
Water isn't the only coolant being looked at, mostly because high-pressure PtC (push-to-connect) fittings and monitoring/sensor hardware have evolved. If a coolant is more expensive but leaks don't destroy equipment and can be quickly isolated, then it becomes a cost/accounting question.
Unless Google has discovered a way to directly transfer heat to the aethereal plane, nothing they’re doing is new. Mainframes were moving chip and module heat entirely outside the building decades ago. Immersion cooling? Chip, module, board, rack, line, and facility-level work? Rear-door and hybrid strategies? Integrated thermal management sensors and controls? Done. Done. Done. Done. Richard Chu, Roger Schmidt, and company were executing all these strategies at scale long before Google even existed.
It does sound like connections do involve water lines though. As they are isolating different water circuits, in theory they could have a dry connection between heat exchanger plates, or one made through thermal paste. It doesn't sound like they're doing that though.
I wasn’t clear when I was writing but this was the point I was trying to make. Heat from the chip is transferred in the same medium all the way from the chip to the exterior chiller without intermediate transfers to a new medium.
The “Map” in MapReduce does not originally stand for the map operation, it comes from the concept of “a map” (or, I guess, a multimap). MapReduce descends from “the ripper”, an older system that mostly did per-element processing, but wasn't very robust or flexible. I believe the map operation was called “Filter()” at the time, and reduce also was called something else. Eventually things were cleaned up and renamed into Map() and Reduce() (and much more complexity was added, such as combiners), in a sort of backnaming.
It may be tangential, but it's not like the MapReduce authors started with “aha, we can use functional programming here”; it's more like the concept fell out. The fundamental contribution of MapReduce is not to invent lambda calculus, but to show that with enough violence (and you should know there was a lot of violence in there!), you can actually make a robust distributed system that appears simple to the users.
Dean, Ghemawat, and Google at large deserve credit not for inventing map and reduce—those were already canonical in programming languages and parallel algorithm theory—but for reframing them in the early 2000s against the reality of extraordinarily large, scale-out distributed networks.
Earlier takes on these primitives had been about generalizing symbolic computation or squeezing algorithms into environments of extreme resource scarcity. The 2004 MapReduce paper was also about scarcity—but scarcity redefined, at the scale of global workloads and thousands of commodity machines. That reframing was the true innovation.
https://www.opwglobal.com/products/us/retail-fueling-product...
But the point of this kind of paper is typically not what is new, it's what combination of previously known and novel techniques have been found to work well at massive scale over a timespan of years.
And it's consistent with the internal usage of the program (I only started in 2008, but spoke to Jeff extensively about the history of MR as part of Google's early infra), where the map function can be fed with recordio (list containers) or sstable (map containers).
As for the ripper, if you have any links to that (rather than internal google lore), I'd love to hear about it. Jeff described the early infrastructure as being very brittle.
IMO, it's not a big difference. There are probably many details more noteworthy than this. And yeah, mainframes are that way because the vendor only designs them up to the rack level, while Google has the "vendor" design the entire datacenter. Supercomputers have had single-vendor datacenters for decades, and have been using large pipes for a while too.
So you can get a single, blind-mating connector combining power, data and water - but you might not want to :)
In my day we had software that would “drain” a machine and release it to hardware ops to swap the hardware on. This could be a drive, memory, CPU or a motherboard. If it was even slightly complicated they would ship it to Mountain View for diagnostic and repair. But every machine was expected to be cycled to get it working as fast as possible.
We did a disk upgrade on a whole datacenter that involved switching from 1TB to 2TB disks or something like that (I am dating myself) and total downtime was so important they hired temporary workers to work nights to get the swap done as quickly as possible. If I remember correctly that was part of the “holy cow gmail is out of space!” chaos though, so added urgency.
https://substackcdn.com/image/fetch/$s_!8aMm!,f_auto,q_auto:...
Looks like the power connector is in the centre. I'm not sure if backplane connectors are covered up by orange plugs?
> What Google is doing is using the huge chillers that would normally be cooling the air in the facility to cool water which is directly pumped into every server.
From the article:
> CDUs exchange heat between coolant liquid and the facility-level water supply.
Also, I know from attaching them at some point that plenty of mainframes used this exact same approach (water to water exchange with facility water), not water to air to water like you describe in this comment and others, so I think you may have just not had experience there? https://www.electronics-cooling.com/2005/08/liquid-cooling-i... contains a diagram in Figure 1 of this exact CDU architecture, which it claims was in use in mainframes dating back to 1965 (!).
I also don't think "This eliminates ANY air based transfer besides the chiller tower." is strictly true; looking at the photo of the sled in the article, there are fans. The TPUs are cooled by the liquid loop but the ancillaries are still air cooled. This is typical for water cooling systems in my experience; while I wouldn't be surprised to be wrong (it sure would be more efficient, I'd think!), I've never seen a water cooling system which successfully works without forced air, because there are just too many ancillary components of varying shapes to successfully design a PCB-waterblock combination which does not also demand forced air cooling.
Non-spill fluid quick disconnects are established tech in industries like medical, chemical processing, beverage dispensing, and hydraulic power, so there are plenty of design concepts to draw on.
I worked on the MapReduce team for a while (coincidentally, around 2008), together with Marián Dvorský, who wrote up a great little history of this. I don't think it was ever made public, though.
> As for the ripper, if you have any links to that (rather than internal google lore), I'd love to hear about it. Jeff described the early infrastructure as being very brittle.
I believe it's all internal, unfortunately.
Oh interesting, I missed that when I went through on the first pass. (I think I space-barred past the image and managed to skip the entire paragraph in between the two images, so that's on me.)
I was running off an informal discussion I had with a hardware ops person several years ago where he mentioned a push to unify cooling and eliminate thermal transfer points since they were one of the major elements of inefficiency in modern cooling solutions. By missing that as I browsed through it I think I leaned too heavily on my assumptions without realizing it!
Also, not all chips can be liquid cooled, so there will always be an element of air cooling; the fans and such are still there for the "everything else" cases, and I doubt anybody will really eliminate that effectively. The comment you quoted was mostly directed at the idea that the Cray-1 had liquid cooling: it did, but it transferred to air outside of the server, which was an extremely common model for most older mainframe setups. It was rare for the heat to be kept in liquid along the whole path.
I do think Google must be doing something right, as their quoted PUE numbers are very strong, but nothing about what's in the linked chipsandcheese article seems groundbreaking at all architecturally, just strong micro-optimization. The article talks a lot about good plate/thermal interface design, good water flow management, use of active flow control valves, and a ton of iteration at scale to find the optimal CDU-to-hardware ratio, but at the end of the day it's the same exact thing in the diagram from 1965.
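For anyone not familiar with the metric, PUE is just total facility power divided by IT equipment power; a quick illustration with made-up numbers (not Google's figures):

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# Numbers here are made up for illustration, not Google's figures.
it_power_kw = 1000.0            # power delivered to servers, TPUs, etc.
overhead_kw = 100.0             # cooling, power conversion, lighting, ...
pue = (it_power_kw + overhead_kw) / it_power_kw
print(pue)                      # 1.1 -> 10% overhead on top of the IT load
```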
I don't know why it surprises me so much, but these rack-sized CDU heat exchangers were quite novel to me. Having a relatively small closed loop versus one big loop that has to go outside seems like a very big tradeoff, with a somewhat material- and space-intensive demand (a rack with 6x CDUs), but the fine-grained control does seem obviously sweet to have. I wish there were a little more justification for the use of heat exchangers!
The way water is distributed within the server is also pretty amazing, with each server having its own "bus bar" of water, and each chip having its own active electro-mechanical valve to control its specific water flow. The TPUv3 design, where cooling happens serially and each chip in sequence gets hotter and hotter water, seems common-ish, whereas TPUv4 has a fully parallel and controllable design.
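To make the serial-vs-parallel difference concrete, here's a hypothetical sketch with invented numbers (the chip power, flow rate, and temperatures are illustrative, not TPU specs):

```python
# Hypothetical sketch (invented numbers) of the coolant temperature each chip
# sees when chips are cooled in series (one shared stream, TPUv3-style as
# described above) vs. in parallel branches with per-chip valves (TPUv4-style).

C_P = 4186.0           # J/(kg*K), specific heat of water
SUPPLY_C = 25.0        # coolant supply temperature, illustrative
CHIP_POWER_W = 350.0   # per-chip heat load, illustrative
CHIPS = 4

def series_inlet_temps(total_flow_kg_s: float) -> list[float]:
    """One shared stream: each downstream chip sees hotter coolant."""
    temps, t = [], SUPPLY_C
    for _ in range(CHIPS):
        temps.append(round(t, 2))
        t += CHIP_POWER_W / (C_P * total_flow_kg_s)  # stream heats up per chip
    return temps

def parallel_inlet_temps() -> list[float]:
    """Flow split into per-chip branches: every chip sees the supply temp."""
    return [SUPPLY_C] * CHIPS

print("series:  ", series_inlet_temps(0.05))   # [25.0, 26.67, 28.34, 30.02]
print("parallel:", parallel_inlet_temps())     # [25.0, 25.0, 25.0, 25.0]
```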
Also, the switch from lidded chips to bare chips, with a cold plate that comes down to just above the die to channel the water, is one of those very detailed, fine-grained optimizations that is just so sweet.
The correct metric is something like: what's the probability that launching a data center in a location causes nearby communities to drop significantly on these water metrics?
Google isn't claiming to have invented water cooling. This article recaps their talk at Hot Chips where they showed some details of their implementation.
Data center cooling is also a different beast from supercomputer cooling. It operates at a larger scale and has different needs for operational concerns like maintenance cycles.
There are also some interesting notes in there about new things Google is doing, like the direct-die cooling.
Water cooling is a big field. Data center operations is a big field. There is interesting content in the article.
Building out an idea for a bespoke supercomputer and making an iteration of that idea that applies to globally scaled workloads are different things. They are building computing factories, as are Amazon, MS, Facebook, etc.
That said, PR is PR, and companies always crow about something for their own reasons.
Running directly on facility water would make day-to-day operations and maintenance a total pain.
Starting with the S/390 G4, they did a weird thing where the internals were cooled by refrigeration, but the standard SKUs actually had the condenser in the bottom of the cabinet and required raised-floor cooling.
They brought water-to-air back with the later zSeries, but the standard SKUs mimicked the S/390 strategy with a raised floor. I guess you could buy a z196 or an ec12 with a water-to-water cabinet, but I too have never seen one.
Hasn't this just been for things like rack doors and such?
In the last ~two generations of servers it seems like there's finally DLC (direct liquid cooling) into the actual servers themselves (similar to the article). Intel kind of forced that one on us with their high-end SKUs. This has been a pain because it doesn't fit into legacy datacenters as easily as the door/rack-based systems.
I won't say which server vendor it is, but I've put in more than one service ticket for leaking coolant bags (they hang them on the racks).
The problem is often exacerbated on PCBs designed for air cooling, where the clearance between water-cooled and air-cooled components is not high enough to fit a water block. Usually the solution, when the design allows, is to segment these components into a separate air-cooled portion of the design, which is what Google looks to have done on these TPU sleds (the last ~third of the assembly looks like it's actively air cooled by the usual array of rackmount fans).
The next step is probably evaporative cooling, with liquid coolant ("freon") pumped to individual racks.
> part of the “holy cow gmail is out of space!” chaos
This sounds like an interesting story. Can you share more details?
Messy.
You would have a liquid block on the CPU but you'd also have a heat sink on top that transfers heat from the air to the coolant block, working in reverse compared to normal air cooling heatsinks. The temperature difference would cause passive air circulation and the liquid cooling would now cool both the CPU and the air in the box, without fans.
Seems like something someone would have thought about and tested already though.
edit: https://www.teslarati.com/tesla-liquid-cooled-supercharger-c...
Take that airflow away and you have to be a good deal more careful with your connector selection, quality control and usability or you'll risk melted connectors.
Water-cooling connectors and cables isn't common, outside of things like 250kW EV chargers.
It's a fascinating industry, but only in my head as the only info you get about it is carefully polished articles and the occasional anecdote on HN, which is also carefully polished due to NDAs.
At least, if you look at them on streetview a lot of them seem to be in the middle of nowhere, surrounded by miles of undeveloped scrubland. If Google's Henderson, NV data centre [1] needed more space they could simply buy out the adjacent car wrecking yard or pet crematorium, or reconsider the gigantic gatehouse and vast expanses of beige gravel.
Even in Belgium, with its higher population density [2] the car park is bigger than the data centre.
The layout makes me think they were told they could only have a certain amount of power, but essentially as much land as they needed. So they're concerned about power and thermals, but maybe not about power and thermal density so much.
[1] https://www.google.com/maps/place/Google+Data+Center+-+Hende... [2] https://www.google.com/maps/place/Google+Data+Center/@55.557...
[I am still annoyed at how many people are dismissive of Google’s datacenter work simply because “servers have been water cooled before”, which completely misses the point of datacenter-level cooling. I also learned that AWS is doing this already, along with some elements of OVH] =)
It doesn't have to do that much, but maybe you're right. I'm sure they'd be doing this if it were practical; being able to omit thousands of fans would probably save a pretty penny both on hardware and electricity.