I posted this further down in a reply-to-a-reply, but I should call it out a little closer to the top: the innovation here is not “we are using water for cooling”. The innovation is that they are directly cooling the servers with chillers that sit outside the facility. Most mainframes use water cooling to get the heat from the core out to the edges, where it can be picked up by the typical heatsink/cooling fans. Even home PCs do this by moving the heat to a reservoir that can be more effectively cooled.
What Google is doing is using the huge chillers that would normally cool the air in the facility to cool water that is pumped directly into every server. The return water is then cooled in the chiller tower. This eliminates ANY air-based transfer besides the chiller tower. This isn’t being done on a server or a rack... it’s being done on the whole data center at once.
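For a rough sense of scale, here’s a back-of-envelope sketch; every number in it is my own illustrative guess, nothing from the article:

```python
# Back-of-envelope: how much water flow a direct-liquid loop needs to carry a
# given heat load. All numbers are illustrative assumptions, nothing from Google.

CP_WATER = 4186.0  # J/(kg*K), specific heat of water

def flow_lpm(heat_w: float, delta_t_k: float) -> float:
    """Water flow in liters/min needed to remove heat_w watts at a delta_t_k rise."""
    kg_per_s = heat_w / (CP_WATER * delta_t_k)
    return kg_per_s * 60.0  # ~1 kg of water per liter

# Hypothetical 1 kW sled with a 10 K supply/return split:
print(f"{flow_lpm(1_000, 10):.2f} L/min per sled")                      # ~1.43 L/min
# Scale to a hypothetical 10 MW hall at the same delta-T:
print(f"{flow_lpm(10_000_000, 10) / 1_000:.1f} m^3/min facility-wide")  # ~14.3 m^3/min
```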
I am super curious how they handle things like chiller maintenance or pump failures. I am sure they have redundancy, but the system for that has to be super impressive because the loop can’t be offline for long before you start seeing hardware failures!
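To see why that window is so short, a crude stalled-loop estimate (again, assumed numbers only):

```python
# Crude estimate of how fast a stalled coolant loop heats up if the pumps stop.
# All figures are assumptions for illustration only.

CP_WATER = 4186.0  # J/(kg*K)

def heatup_rate_k_per_min(load_w: float, coolant_kg: float) -> float:
    """Temperature rise rate of the loop if flow (and heat rejection) stops."""
    return load_w / (coolant_kg * CP_WATER) * 60.0

# Hypothetical: 500 kW of IT load dumping into 1000 kg of stranded coolant
print(f"~{heatup_rate_k_per_min(500_000, 1_000):.1f} K/min")  # ~7 K/min -> minutes of headroom
```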
[Edit: It was pointed out in another comment that AWS is doing this as well and honestly their pictures make it way clearer what is happening: https://www.aboutamazon.com/news/aws/aws-liquid-cooling-data...]
> What Google is doing is using the huge chillers that would normally be cooling the air in the facility to cool water which is directly pumped into every server.
From the article:
> CDUs exchange heat between coolant liquid and the facility-level water supply.
Also, I know from having attached them at some point that plenty of mainframes used this exact same approach (water-to-water exchange with facility water), not water-to-air-to-water like you describe in this comment and others, so I think you may just not have had experience there? https://www.electronics-cooling.com/2005/08/liquid-cooling-i... contains a diagram in Figure 1 of this exact CDU architecture, which it claims was in use in mainframes dating back to 1965 (!).
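If it helps, here’s a minimal sketch of the water-to-water exchange a CDU performs, using the standard effectiveness method; the effectiveness, flows, and temperatures are just values I picked for illustration:

```python
# Minimal sketch of the liquid-to-liquid exchange a CDU performs between the
# server coolant loop and facility water. Effectiveness, flows, and temperatures
# are made-up illustrative values, not vendor specs.

CP_WATER = 4186.0  # J/(kg*K)

def cdu_heat_transfer_w(m_server_kg_s: float, m_facility_kg_s: float,
                        t_server_return_c: float, t_facility_supply_c: float,
                        effectiveness: float = 0.7) -> float:
    """Heat moved across the exchanger via the effectiveness method:
    q = eps * C_min * (T_hot_in - T_cold_in)."""
    c_min = CP_WATER * min(m_server_kg_s, m_facility_kg_s)
    return effectiveness * c_min * (t_server_return_c - t_facility_supply_c)

# Hypothetical rack loop: 0.5 kg/s returning at 45 C against 25 C facility water
print(f"{cdu_heat_transfer_w(0.5, 1.0, 45, 25) / 1000:.1f} kW")  # ~29.3 kW rejected to facility water
```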
I also don't think "This eliminates ANY air-based transfer besides the chiller tower." is strictly true; looking at the photo of the sled in the article, there are fans. The TPUs are cooled by the liquid loop, but the ancillaries are still air cooled. That's typical of water cooling systems in my experience; while I wouldn't be surprised to be wrong (it sure would be more efficient, I'd think!), I've never seen a water cooling system that works without forced air, because there are just too many ancillary components of varying shapes to design a PCB/waterblock combination that doesn't also demand forced air cooling.
Oh interesting, I missed that when I went through on the first pass. (I think I space-barred past the image and managed to skip the entire paragraph between the two images, so that’s on me.)
I was running off an informal discussion I had with a hardware ops person several years ago, where he mentioned a push to unify cooling and eliminate thermal transfer points, since they were one of the major sources of inefficiency in modern cooling solutions. By missing that paragraph as I browsed through, I think I leaned too heavily on my assumptions without realizing it!
Also, not all chips can be liquid cooled, so there will always be an element of air cooling; the fans and such are still there for the “everything else” cases, and I doubt anybody will really eliminate that effectively. The comment you quoted was mostly directed at the idea that the Cray-1 had liquid cooling. It did, but it transferred the heat to air outside of the machine, which was an extremely common model for older mainframe setups. It was rare for the heat to stay in liquid along the whole path.
The problem is often exacerbated on PCBs designed for air cooling, where the clearance between water cooled and air cooled components isn’t high enough to fit a water block. When the design allows, the usual solution is to segment those components into a separate air cooled portion of the board, which is what Google looks to have done on these TPU sleds (the last ~third of the assembly looks like it’s actively air cooled by the usual array of rackmount fans).
Messy.
You would have a liquid block on the CPU, but you'd also have a heat sink on top that transfers heat from the air into the coolant block, working in reverse compared to a normal air-cooling heatsink. The temperature difference would drive passive air circulation, and the liquid loop would then cool both the CPU and the air in the box, without fans.
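Rough order-of-magnitude numbers for how much heat that passive path could move (textbook-ish convection coefficients and a guessed surface area, not measurements):

```python
# Order-of-magnitude check on a fanless "reverse heatsink" inside the chassis:
# natural convection moves far less heat per unit area than forced air.
# The h values and area are textbook-ish assumptions, not measurements.

def convective_heat_w(h_w_m2k: float, area_m2: float, delta_t_k: float) -> float:
    """Newton's law of cooling: q = h * A * dT."""
    return h_w_m2k * area_m2 * delta_t_k

AREA = 0.1      # m^2 of finned surface on a hypothetical air-to-coolant block
DELTA_T = 20.0  # K between chassis air and coolant

print(f"natural convection: ~{convective_heat_w(7, AREA, DELTA_T):.0f} W")   # ~14 W
print(f"forced air:         ~{convective_heat_w(75, AREA, DELTA_T):.0f} W")  # ~150 W
# Whether ~14 W is enough depends on how much the ancillaries actually dissipate.
```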
Seems like something someone would have thought about and tested already though.
It doesn't have to do that much, but maybe you're right. I'm sure they'd be doing this if it were practical; being able to omit thousands of fans would probably save a pretty penny, both on hardware and electricity.
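Very rough guess at the electricity side alone (every number here is an assumption I made up):

```python
# Quick guess at what "thousands of fans" cost to run. Every figure here is an
# assumption for illustration, not a real fleet number.

FAN_COUNT = 100_000        # fans across a large fleet (assumed)
WATTS_PER_FAN = 5.0        # small rackmount fan at moderate speed (assumed)
HOURS_PER_YEAR = 8760
USD_PER_KWH = 0.07         # assumed industrial electricity rate

kwh = FAN_COUNT * WATTS_PER_FAN * HOURS_PER_YEAR / 1000.0
print(f"~{kwh:,.0f} kWh/year, ~${kwh * USD_PER_KWH:,.0f}/year in electricity alone")
# ~4,380,000 kWh/year, ~$306,600/year -- before counting fan hardware and replacements
```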