> Liquid cooling is a familiar concept to PC enthusiasts, and has a long history in enterprise compute as well.
For a while, the trend in data centers was to move toward more passive cooling at the individual servers and hotter operating temperatures. This is interesting because it reverses that trend quite a bit, possibly because of the per-row cooling.
That style of job worked well, but as Google has realized it has more high-performance computing with unique, mission-critical workload characteristics (https://cloud.google.com/blog/topics/systems/the-fifth-epoch...), its infrastructure has had to undergo a lot of evolution to adapt.
Google PR has always been full of "look, we discovered something important and new and everybody should do it", often for things that were effectively solved using that approach a long time ago. MapReduce is a great example: Google certainly didn't invent the concepts of Map or Reduce, or even the idea of using them for high-throughput computing (and the shuffle phase of MapReduce is more "interesting" from a high-performance computing perspective than mapping or reducing anyway).
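To make the map/shuffle/reduce distinction concrete, here's a minimal word-count sketch in Python (my own toy illustration, not Google's implementation); the shuffle step in the middle, grouping values by key, is the part that becomes the genuinely hard, network-heavy problem once you distribute it across machines:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit (key, value) pairs; here, (word, 1) for every word.
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def shuffle_phase(pairs):
    # Shuffle: group all values by key. In a real distributed run this is
    # the step that moves data between machines over the network.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: collapse each key's values; here, sum the counts.
    return {key: sum(values) for key, values in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
print(reduce_phase(shuffle_phase(map_phase(docs))))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```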
Liquid cooling at Google scale is different from mainframe cooling as well. Mainframes needed to move heat from the core out to the edges of the server, where traditional data center cooling would transfer it away to be conditioned. Google's liquid cooling moves the heat completely outside of the building while it's still in the liquid. That's never been done before as far as I am aware, at least not at this scale.
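To put rough numbers on what "moving the heat out of the building as liquid" involves, here's a back-of-the-envelope sketch using the sensible-heat relation Q = ṁ·c_p·ΔT; the rack power and temperature rise are assumed example values, not anything Google has published:

```python
# Back-of-the-envelope: water flow needed to carry a rack's heat away as
# sensible heat (Q = m_dot * c_p * delta_T). Rack power and delta_T below
# are assumed example values, not Google's actual figures.
rack_power_w = 100_000        # assumed 100 kW rack
cp_water = 4186               # J/(kg*K), specific heat of water
delta_t = 10                  # K, assumed supply/return temperature rise

mass_flow = rack_power_w / (cp_water * delta_t)   # kg/s
volume_flow_lpm = mass_flow * 60                  # ~1 kg of water per litre

print(f"{mass_flow:.2f} kg/s  (~{volume_flow_lpm:.0f} L/min) of water per rack")
# ~2.39 kg/s, i.e. roughly 143 L/min for these assumed numbers
```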
There are also all the fun experiments with dunking whole servers into oil, but I'll grant that, again, I've only seen those setups described with secondary cooling loops, probably because of corrosion and wanting to avoid contaminants.
We have been doing this for decades; it's how refrigerants work.
The part that is new is not having an air interface in the middle of the cycle.
Water isn't the only coolant being looked at, mostly because high-pressure PtC (push-to-connect) fittings and monitoring/sensor hardware have evolved. If a coolant is more expensive but leaks don't destroy equipment and can be quickly isolated, then it becomes a cost/accounting question.
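As a toy illustration of that accounting (every number below is made up), the comparison is just expected annual cost, i.e. coolant fill cost plus leak rate times damage per leak; flip the leak rate or the damage figure and the answer flips with it:

```python
# Toy expected-cost comparison between water and a dielectric coolant.
# Every number here is an assumed placeholder, purely to show the accounting.
def expected_annual_cost(fill_cost, leaks_per_year, damage_per_leak):
    return fill_cost + leaks_per_year * damage_per_leak

water      = expected_annual_cost(fill_cost=1_000,  leaks_per_year=0.05, damage_per_leak=500_000)
dielectric = expected_annual_cost(fill_cost=50_000, leaks_per_year=0.05, damage_per_leak=5_000)

print(f"water: ${water:,.0f}/yr vs dielectric: ${dielectric:,.0f}/yr")
# With these made-up numbers: water $26,000/yr vs dielectric $50,250/yr
```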
Unless Google has discovered a way to directly transfer heat to the aethereal plane, nothing they’re doing is new. Mainframes were moving chip and module heat entirely outside the building decades ago. Immersion cooling? Chip, module, board, rack, line, and facility-level work? Rear-door and hybrid strategies? Integrated thermal management sensors and controls? Done. Done. Done. Done. Richard Chu, Roger Schmidt, and company were executing all these strategies at scale long before Google even existed.
I wasn’t clear when I was writing, but this was the point I was trying to make: heat is carried in the same medium all the way from the chip to the exterior chiller, without intermediate transfers to a new medium.
IMO, it's not a big difference; there are probably many details more noteworthy than this. And yeah, mainframes are that way because the vendor only designs them up to the rack level, while Google has the "vendor" design the entire datacenter. Supercomputers have had single-vendor datacenters for decades, and have been using large pipes for a while too.
I do think Google must be doing something right, as their quoted PUE numbers are very strong, but nothing in the linked chipsandcheese article seems architecturally groundbreaking, just strong micro-optimization. The article talks a lot about good plate/thermal interface design, good water flow management, use of active flow control valves, and a ton of iteration at scale to find the optimal CDU-to-hardware ratio, but at the end of the day it's the exact same thing as in the diagram from 1965.
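For reference, PUE is just total facility power divided by IT power, so the arithmetic looks like this (illustrative numbers only; Google's published fleet figures are separate from this sketch):

```python
# PUE = total facility power / IT equipment power. Numbers below are
# illustrative placeholders, not Google's reported values.
it_power_mw = 10.0          # power delivered to servers/TPUs/network gear
cooling_mw = 0.8            # chillers, CDUs, pumps, fans
power_overhead_mw = 0.3     # UPS losses, distribution, lighting, etc.

pue = (it_power_mw + cooling_mw + power_overhead_mw) / it_power_mw
print(f"PUE = {pue:.2f}")   # 1.11 for these example numbers
```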
The next step is probably evaporative cooling, with liquid coolant ("freon") pumped to individual racks.
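The appeal of that would be latent heat: boiling a refrigerant absorbs far more energy per kilogram than warming a liquid loop by a few degrees. A rough, order-of-magnitude comparison with approximate property values:

```python
# Rough comparison of heat absorbed per kg of coolant:
# single-phase water (sensible heat) vs a boiling refrigerant (latent heat).
# Property values are approximate, order-of-magnitude figures.
cp_water = 4.186            # kJ/(kg*K)
delta_t = 10                # K, assumed temperature rise in a water loop
sensible_kj_per_kg = cp_water * delta_t          # ~42 kJ/kg

latent_heat_r134a = 180     # kJ/kg, approximate heat of vaporization near room temp
print(f"water, 10 K rise: ~{sensible_kj_per_kg:.0f} kJ/kg")
print(f"R-134a, boiling:  ~{latent_heat_r134a} kJ/kg")
# The phase change moves several times more heat per kg of coolant pumped.
```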
[I am still annoyed at how many people are dismissive of Google’s datacenter work simply because “servers have been water cooled before”, which completely misses the point of datacenter-level cooling. I also learned that AWS is doing this already, along with some elements of OVH] =)