> Liquid cooling is a familiar concept to PC enthusiasts, and has a long history in enterprise compute as well.
And the trend in data centers had been toward more passive cooling at the individual servers and hotter operating temperatures for a while. What's interesting here is that it largely reverses that trend, possibly because of the per-row cooling.
That style of job worked well, but as Google has realized it has more high-performance computing with unique, mission-critical workload characteristics (https://cloud.google.com/blog/topics/systems/the-fifth-epoch...), its infrastructure has had to undergo a lot of evolution to adapt.
Google PR has always been full of "look, we discovered something important and new and everybody should do it," often for things that were effectively solved using that approach a long time ago. MapReduce is a great example: Google certainly didn't invent the concepts of Map or Reduce, or even the idea of using them for high-throughput computing (and the shuffle phase of MapReduce is more "interesting" from a high-performance-computing perspective than mapping or reducing anyway).
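To make the shuffle point concrete, here's a minimal single-process word-count sketch (my own illustration, not Google's implementation) with the three phases made explicit. On one machine the shuffle is just a dictionary group-by; in a distributed MapReduce it becomes an all-to-all network exchange across the cluster, which is where most of the hard engineering lives.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit (key, value) pairs -- here, (word, 1) for each word."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def shuffle_phase(pairs):
    """Shuffle: group all values by key. Locally this is trivial; in a
    cluster it's an all-to-all data movement, the genuinely hard part."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: fold each key's values into a single result."""
    return {key: sum(values) for key, values in groups.items()}

if __name__ == "__main__":
    docs = ["the quick brown fox", "the lazy dog", "the fox"]
    print(reduce_phase(shuffle_phase(map_phase(docs))))
    # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```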
Building out an idea as a bespoke supercomputer, and then iterating on that idea until it applies to globally scaled workloads, is a different thing. They are building computing factories, as are Amazon, MS, Facebook, etc.
That said, PR is PR, and companies always crow about something for their own reasons.