> Or is it that wiring distances (1ft = 1nanosecond) make dense computing faster and more efficient?
Contrary to other posters, I'd argue this effect is relatively small. A really good interconnect fabric might give you ping-pong times on the order of 1 microsecond, which is still 1000 times larger than a nanosecond. Most of the delay will be in the switches and the end nodes, not in the signal traveling over the wire or fiber. Take a large-ish cluster with a diameter of roughly 100 feet (something like 7 rows of racks, each row 100 feet long, give or take). If liquid cooling allows you to double the density, the floor area halves, so you could condense it to a diameter of 100/sqrt(2) ≈ 70 ft (about 5 rows of 70 ft each). Since a ping-pong involves a signal going both ways, the worst-case difference in signal delay would be (100 - 70) * 2 = 60 ft, or about 60 nanoseconds (somewhat more in reality, since cables have to be routed rather than run point-to-point). That's about a 6% saving if we assume the 1 microsecond baseline. Measurable, yes, but the effect on application performance will likely be far smaller than on a ping-pong microbenchmark.
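If you want to check the arithmetic, here's the back-of-envelope as a quick Python sketch. Note the 1 ns/ft figure from the quote is free-space light speed; signals in real copper or fiber propagate at roughly 2/3 of that, so the absolute numbers would be a bit larger:

```python
import math

# Back-of-envelope: latency saved by doubling rack density via liquid cooling.
# Assumes ~1 ns/ft propagation (free-space c); real cables are closer to ~1.5 ns/ft.
baseline_diameter_ft = 100.0      # large-ish cluster, ~7 rows x 100 ft
density_factor = 2.0              # liquid cooling doubles density
ns_per_ft = 1.0                   # hedged: vacuum light speed, not cable speed

# Doubling density halves floor area, so linear dimensions shrink by sqrt(2).
dense_diameter_ft = baseline_diameter_ft / math.sqrt(density_factor)

# A ping-pong crosses the fabric in both directions, so count the saving twice.
round_trip_saving_ns = 2 * (baseline_diameter_ft - dense_diameter_ft) * ns_per_ft

pingpong_baseline_ns = 1000.0     # ~1 us ping-pong on a good fabric
print(f"dense diameter: {dense_diameter_ft:.0f} ft")
print(f"round-trip saving: {round_trip_saving_ns:.0f} ns "
      f"({round_trip_saving_ns / pingpong_baseline_ns:.0%} of the ping-pong time)")
# -> dense diameter: 71 ft
# -> round-trip saving: 59 ns (6% of the ping-pong time)
```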
Now, where density can matter is that by packing the components more closely together, you can connect more chips via the backplane and/or copper cabling rather than having to use optics.