> Liquid cooling is a familiar concept to PC enthusiasts, and has a long history in enterprise compute as well.
For a while, the trend in data centers was toward more passive cooling at the individual servers and hotter operating temperatures. This is interesting because it largely reverses that trend, possibly enabled by the per-row cooling.
That style of job worked well, but as Google has realized it has more mission-critical high-performance computing with unique workload characteristics (https://cloud.google.com/blog/topics/systems/the-fifth-epoch...), its infrastructure has had to evolve considerably to adapt.
Google PR has always been full of "look, we discovered something important and new, and everybody should do it," often for things that were effectively solved with that approach long ago. MapReduce is a great example: Google certainly didn't invent the concepts of map or reduce, or even the idea of using them for high-throughput computing (and the shuffle phase of MapReduce is more "interesting" from a high-performance-computing perspective than mapping or reducing anyway).
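To make the three phases concrete, here's a minimal single-process word-count sketch; the function names are illustrative, not Google's API, and in a real cluster the shuffle is a distributed all-to-all exchange rather than an in-memory dict:

```python
from collections import defaultdict

def map_phase(docs):
    # Map: emit (key, value) pairs from each input record.
    for doc in docs:
        for word in doc.split():
            yield word, 1

def shuffle_phase(pairs):
    # Shuffle: group all values by key. On a cluster this is the
    # expensive network exchange, which is why it's the interesting part.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine each key's values into a final result.
    return {key: sum(values) for key, values in groups.items()}

docs = ["the quick fox", "the lazy dog"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
print(counts)  # "the" maps to 2, every other word to 1
```

The map and reduce steps are embarrassingly parallel; it's the grouping in the middle that forces data movement between machines.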
Google isn't claiming to have invented water cooling. This article recaps their talk at Hot Chips where they showed some details of their implementation.
Data center cooling is also a different beast than supercomputer cooling: it operates at a larger scale and has different needs for maintenance operations and service cycles.
There are also some interesting notes in there about new things Google is doing, like the direct-die cooling.
Water cooling is a big field. Data center operations is a big field. There is interesting content in the article.