
Google's Liquid Cooling (chipsandcheese.com)

399 points by giuliomagnifico
jonathaneunice
It's very odd to hear Google-scale data center design compared to PC hobbyist rigs when mainframes (S/3x0, Cray, yadda yadda) have been extensively water-cooled for over 50 years and super-dense HPC data centers have used liquid cooling for at least 20. Selective amnesia + a laughably off-target point of comparison.
spankalee
From the article:

> Liquid cooling is a familiar concept to PC enthusiasts, and has a long history in enterprise compute as well.

And for a while the trend in data centers was toward more passive cooling at the individual servers and hotter operating temperatures. This is interesting because it sharply reverses that trend, possibly enabled by the per-row cooling.

dekhn
We've basically been watching Google gradually re-discover all the tricks of supercomputing (and other high performance areas) over the past 10+ years. For a long time, websearch and ads were the two main drivers of Google's datacenter architecture, along with services like storage and jobs like mapreduce. I would describe the approach as "horizontal scaling with statistical multiplexing for load balancing".
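
A classic way to get that kind of statistical multiplexing is a "power of two choices" policy. A minimal sketch, with invented server names and queue depths (an illustration of the general idea, not Google's actual scheme):

    import random

    # Illustrative sketch of statistical multiplexing for load balancing,
    # power-of-two-choices style. Names and queue depths are invented.
    queue_depth = {"task-0": 3, "task-1": 7, "task-2": 1, "task-3": 4}

    def pick_backend(depths):
        # Sample two backends at random and send the request to the less
        # loaded one; many independent clients doing this keep load close
        # to even without any central coordinator.
        a, b = random.sample(list(depths), 2)
        return a if depths[a] <= depths[b] else b

    backend = pick_backend(queue_depth)
    queue_depth[backend] += 1  # the chosen task absorbs the request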

That style of job worked well, but as Google has realized it has more mission-critical high performance computing with unique workload characteristics (https://cloud.google.com/blog/topics/systems/the-fifth-epoch...), its infrastructure has had to undergo a lot of evolution to adapt.

Google PR has always been full of "look, we discovered something important and new and everybody should do it", often for things that were effectively solved using that approach a long time ago. MapReduce is a great example of that: Google certainly didn't invent the concepts of Map or Reduce, or even the idea of using those for high-throughput computing (and the shuffle phase of MapReduce is more "interesting" from a high-performance computing perspective than mapping or reducing anyway).
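
To make the shuffle point concrete, here is a toy single-process sketch of the pattern (word count), with illustrative function names rather than anyone's real API: the map phase emits key/value pairs, the shuffle groups values by key, and the reduce phase combines each group. In a real distributed run the shuffle is the part that moves data across machines, which is why it's the interesting bit:

    from collections import defaultdict

    def map_fn(record):
        # Map phase: emit (key, value) pairs for one input record.
        for word in record.split():
            yield (word, 1)

    def reduce_fn(key, values):
        # Reduce phase: combine all values that share a key.
        return (key, sum(values))

    def mapreduce(records):
        groups = defaultdict(list)
        for record in records:
            for key, value in map_fn(record):  # map
                groups[key].append(value)      # shuffle: group by key
        return [reduce_fn(k, vs) for k, vs in groups.items()]  # reduce

    print(mapreduce(["the cat", "the dog"]))
    # [('the', 2), ('cat', 1), ('dog', 1)]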

Sesse__
> MapReduce is a great example of that: Google certainly didn't invent the concepts of Map or Reduce, or even the idea of using those for high-throughput computing (and the shuffle phase of MapReduce is more "interesting" from a high-performance computing perspective than mapping or reducing anyway).

The “Map” in MapReduce does not originally stand for the map operation; it comes from the concept of “a map” (or, I guess, a multimap). MapReduce descends from “the ripper”, an older system that mostly did per-element processing, but wasn't very robust or flexible. I believe the map operation was called “Filter()” at the time, and reduce also went by another name. Eventually things were cleaned up and renamed into Map() and Reduce() (and much more complexity was added, such as combiners), in a sort of backnaming.

It may be tangential, but it's not like the MapReduce authors started with “aha, we can use functional programming here”; it's more like the concept fell out. The fundamental contribution of MapReduce is not to invent lambda calculus, but to show that with enough violence (and you should know there was a lot of violence in there!), you can actually make a robust distributed system that appears simple to the users.

dekhn
I believe Map in MapReduce stood for the "map" function, not a multimap; I've never heard or read otherwise (map functions can operate over lists of items, not just map types). That's consistent both with the original MapReduce paper: "Our abstraction is inspired by the map and reduce primitives present in Lisp and many other functional languages. We realized that most of our computations involved applying a map operation to each logical “record” in our input in order to compute a set of intermediate key/value pairs, and then applying a reduce operation to all the values that shared the same key, in order to combine the derived data appropriately"

and with the internal usage of the program (I only started in 2008, but spoke to Jeff extensively about the history of MR as part of Google's early infra) where the map function can be fed with recordio (list containers) or sstable (map containers).
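
For reference, a minimal Python rendering of the functional primitives the paper cites (purely illustrative, nothing Google-specific):

    from functools import reduce

    # The Lisp-style primitives the paper cites: map applies a function to
    # each element of a list; reduce folds the results into a single value.
    lengths = list(map(len, ["the", "quick", "brown", "fox"]))  # [3, 5, 5, 3]
    total = reduce(lambda acc, n: acc + n, lengths, 0)          # 16
    print(total)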

As for the ripper, if you have any links to that (rather than internal Google lore), I'd love to hear about it. Jeff described the early infrastructure as being very brittle.

Sesse__
> and with the internal usage of the program (I only started in 2008, but spoke to Jeff extensively about the history of MR as part of Google's early infra) where the map function can be fed with recordio (list containers) or sstable (map containers).

I worked on the MapReduce team for a while (coincidentally, around 2008), together with Marián Dvorský, who wrote up a great little history of this. I don't think it was ever made public, though.

> As for the ripper, if you have any links to that (rather than internal Google lore), I'd love to hear about it. Jeff described the early infrastructure as being very brittle.

I believe it's all internal, unfortunately.