
195 points | tosh | 1 comment
shivak ◴[] No.42208324[source]
> > The power shelf distributes DC power up and down the rack via a bus bar. This eliminates the 70 total AC power supplies found in an equivalent legacy server rack: 32 servers, two top-of-rack switches, and one out-of-band switch, each with two AC power supplies

This creates a single point of failure, trading robustness for efficiency. There's nothing wrong with that, but software and ops might have to accommodate it by making the opposite tradeoff. In general, the cost savings advertised by cloud infrastructure should be evaluated more holistically.
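
As a concrete (if simplified) picture of that opposite tradeoff: a scheduler can treat each rack's bus bar as a failure domain and spread replicas across racks, so losing one rack's power costs at most one copy. This is only a sketch; the (node_id, rack_id) inventory and function names are hypothetical, not from the thread.

    from collections import defaultdict

    def place_replicas(nodes, replication_factor):
        """Pick one node per rack so a rack-wide power loss
        (e.g. a failed bus bar) takes out at most one replica.
        `nodes` is a list of (node_id, rack_id) tuples -- hypothetical schema."""
        by_rack = defaultdict(list)
        for node_id, rack_id in nodes:
            by_rack[rack_id].append(node_id)
        if len(by_rack) < replication_factor:
            raise ValueError("not enough independent racks for the requested replication")
        # One replica per rack, spreading across distinct power domains.
        racks = list(by_rack)[:replication_factor]
        return [by_rack[rack][0] for rack in racks]

    # Example: three replicas, each on a different rack/bus bar.
    nodes = [("n1", "rackA"), ("n2", "rackA"), ("n3", "rackB"),
             ("n4", "rackC"), ("n5", "rackD")]
    print(place_replicas(nodes, 3))  # ['n1', 'n3', 'n4']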

replies(6): >>42208347 #>>42208722 #>>42208748 #>>42208751 #>>42208787 #>>42208961 #
walrus01 ◴[] No.42208347[source]
Eliminating 70 discrete 1U-server-sized AC-to-DC power supplies is nothing new. It's the same general concept as the power distribution unit in the center of an Open Compute Project rack design from 10+ years ago.

Everyone doing serious datacenter work at scale knows that one of the least efficient, most labor-intensive, and most cabling-heavy ways to power things is a 42U cabinet with 36 servers in it, each with dual power supplies, and power leads running back to a pair of 208V 30A vertical PDUs in the rear of the cabinet. It gets ugly fast, both in cable management and in conversion efficiency. Rough numbers below.
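
To put rough numbers on that layout (figures taken or assumed from the description above, not measured):

    # Cabling in the legacy layout described above (all figures are illustrative).
    servers = 36
    psus_per_server = 2
    legacy_power_leads = servers * psus_per_server          # 72 discrete AC leads
    pdu_whips = 2                                           # two 208V 30A vertical PDUs upstream

    # Bus-bar layout: each sled taps the DC bus directly; the power shelf's
    # rectifiers take a handful of AC feeds for the whole rack (feed count assumed).
    bus_bar_taps = servers                                  # blind-mate taps, no discrete cords
    power_shelf_ac_feeds = 2

    print(f"legacy rack: {legacy_power_leads} AC leads (+{pdu_whips} PDU whips)")
    print(f"bus-bar rack: {bus_bar_taps} blind-mate taps, {power_shelf_ac_feeds} AC feeds total")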

The single point of failure isn't really a problem as long as the software is architected to tolerate the disappearance of an entire node (i.e. a single motherboard in a single- or dual-CPU-socket config with a ton of DDR4 on it).
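
In practice, "tolerant of the disappearance of an entire node" usually means heartbeat-driven failover of that node's work. A minimal sketch, with hypothetical data structures and an assumed timeout:

    import time

    HEARTBEAT_TIMEOUT_S = 10  # illustrative; real systems tune this carefully

    def reconcile(nodes, assignments, now=None):
        """Return a new work assignment with dead nodes' shards moved to live ones.

        `nodes` maps node_id -> last heartbeat timestamp; `assignments` maps
        shard_id -> node_id. Both are hypothetical structures for this sketch."""
        now = time.time() if now is None else now
        live = {n for n, hb in nodes.items() if now - hb < HEARTBEAT_TIMEOUT_S}
        if not live:
            raise RuntimeError("no live nodes to fail over to")
        live_sorted = sorted(live)
        new_assignments = {}
        for i, (shard, node) in enumerate(sorted(assignments.items())):
            # Keep shards on live nodes; round-robin the rest onto survivors.
            new_assignments[shard] = node if node in live else live_sorted[i % len(live_sorted)]
        return new_assignments

    # Example: node "n2" stops heartbeating; its shard moves to a survivor.
    now = 1000.0
    nodes = {"n1": now - 1, "n2": now - 60, "n3": now - 2}
    assignments = {"shard-a": "n1", "shard-b": "n2", "shard-c": "n3"}
    print(reconcile(nodes, assignments, now=now))
    # {'shard-a': 'n1', 'shard-b': 'n3', 'shard-c': 'n3'}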

replies(2): >>42208552 #>>42209470 #
1. formerly_proven ◴[] No.42208552[source]
That’s one reason why 2U4N systems are kinda popular: roughly 1/4 the power cabling of giving each node its own chassis in legacy infrastructure.
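
The 1/4 figure follows from the sharing: four nodes sit behind one chassis-level pair of power supplies instead of each node bringing its own pair. Illustrative arithmetic, reusing the 36-node count from upthread:

    # 2U4N = 4 nodes per chassis sharing one pair of PSUs (counts are illustrative).
    nodes = 36
    leads_per_psu_pair = 2

    standalone_leads = nodes * leads_per_psu_pair             # 72: every node has its own PSU pair
    shared_chassis = nodes // 4                                # 9 chassis of 4 nodes each
    two_u_four_n_leads = shared_chassis * leads_per_psu_pair   # 18: one shared pair per chassis

    print(standalone_leads, two_u_four_n_leads, standalone_leads // two_u_four_n_leads)  # 72 18 4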