
Google's Liquid Cooling

(chipsandcheese.com)
399 points by giuliomagnifico | 1 comment
m463 No.45018271
I wonder what the economics of water cooling really is.

Is it because chips are getting more expensive, so it is more economical to run them faster by liquid cooling them?

Or is it that data center footprint is more expensive, so denser liquid cooling makes more sense?

Or is it that wiring distances (1ft = 1nanosecond) make dense computing faster and more efficient?

replies(6): >>45018323 #>>45018352 #>>45018353 #>>45020042 #>>45022675 #>>45025262 #
1. jabl No.45022675
> Or is it that wiring distances (1ft = 1nanosecond) make dense computing faster and more efficient?

Contrary to other posters, I'd argue this effect is relatively small. A really good interconnect fabric might give you ping-pong times on the order of 1 microsecond, which is still 1000 times larger than a nanosecond. Most of the delay will be in the switches and the end nodes, not in the signal traveling over the wire or fiber.

Take a large-ish cluster with a diameter of roughly 100 feet (something like 7 rows of racks, each row 100 feet long, give or take). If liquid cooling allows you to double the density, you could condense it to a diameter of 100/sqrt(2) ≈ 70 ft (about 5 rows of 70 ft each). Since a ping-pong involves a signal going both ways, the worst-case increase in signal delay is (100-70)*2 = 60 ft, or about 60 nanoseconds (in reality somewhat more, since cables have to be routed). That's roughly a 6% increase on a 1 microsecond baseline: measurable, yes, but likely a very small effect on application performance compared to a ping-pong microbenchmark.
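
For concreteness, here is the same back-of-envelope arithmetic as a small Python sketch. The 1 ft ≈ 1 ns rule of thumb, the 1 microsecond baseline, and the 100 ft / doubled-density numbers are the assumptions from the paragraph above, not measured values:

    # Back-of-envelope sketch of the density-vs-latency argument above.
    # Assumptions (from the discussion, not measurements): signals travel
    # ~1 ns per foot, and a good fabric's ping-pong is ~1 microsecond,
    # dominated by switches and end nodes rather than wire length.
    import math

    NS_PER_FOOT = 1.0            # rule of thumb: 1 ft of cable ~ 1 ns
    BASELINE_PINGPONG_NS = 1000  # ~1 us through switches and NICs

    diameter_ft = 100.0          # original cluster "diameter"
    density_factor = 2.0         # liquid cooling doubles rack density

    # Floor area scales with density, so linear dimensions shrink by sqrt(density).
    dense_diameter_ft = diameter_ft / math.sqrt(density_factor)  # ~70.7 ft

    # A ping-pong crosses the extra distance twice (there and back).
    extra_ft = (diameter_ft - dense_diameter_ft) * 2             # ~58.6 ft
    extra_ns = extra_ft * NS_PER_FOOT

    print(f"condensed diameter: {dense_diameter_ft:.0f} ft")
    print(f"worst-case round-trip wire delay difference: {extra_ns:.0f} ns "
          f"(~{extra_ns / BASELINE_PINGPONG_NS:.0%} of a {BASELINE_PINGPONG_NS:.0f} ns ping-pong)")

Running it prints roughly a 6% difference, matching the estimate above.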

Now where it can matter is that by packing the components more closely together, you can connect more chips via backplane and/or copper connectors vs. having to use optics.