GlenTheMachine:
Space roboticist here.

As with a lot of things, it isn't the initial outlay, it's the maintenance costs. Terrestrial datacenters have parts fail and get replaced all the time. The mass analysis given here -- which appears quite good, at first glance -- doesn't include any mass, energy, or thermal system numbers for the infrastructure you would need in order to replace failed components.

As a first cut, this would require:

- an autonomous rendezvous and docking system

- a fully railed robotic system, e.g. some sort of robotic manipulator that can move along rails and reach every card in every server in the system, which usually means a system of relatively stiff rails running throughout the interior of the plant

- CPU, power, comms, and cooling to support the above

- importantly, the ability of the robotic servicing system to replace itself. In other words, it would need to be at least two-fault tolerant -- which usually means dual-wound motors, redundant gears, redundant harness, redundant power, comms, and compute. Alternatively, two or more independent robotic systems that are capable not only of replacing cards but also of replacing each other.

- regular launches containing replacement hardware (a rough resupply-mass sketch follows this list)

- ongoing ground support staff to deal with failures
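
To put a rough number on the resupply burden, here's a minimal back-of-envelope sketch in Python. Every figure in it (fleet size, unit mass, failure rate, packaging overhead) is a placeholder assumption of mine, not something from the analysis:

    SERVERS = 5_000              # assumed fleet size
    SERVER_MASS_KG = 20.0        # assumed mass of one replaceable server unit
    ANNUAL_FAILURE_RATE = 0.03   # assumed 3%/yr failure rate after burn-in
    PACKAGING_OVERHEAD = 1.5     # assumed factor for spares, dunnage, dispensers

    failed_per_year = SERVERS * ANNUAL_FAILURE_RATE
    resupply_mass_kg = failed_per_year * SERVER_MASS_KG * PACKAGING_OVERHEAD
    print(f"~{failed_per_year:.0f} failed units/yr -> ~{resupply_mass_kg:,.0f} kg of resupply/yr")

With these made-up numbers that's thousands of kilograms a year, before you count the robotic servicing infrastructure itself.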

The mass analysis also doesn't appear to include the massive number of heat pipes you would need to transfer the heat from the chips to the radiators. For an orbiting datacenter, that would probably be the single biggest mass allocation.
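
For a sense of scale, the radiator can be sized from the Stefan-Boltzmann law, P = e*sigma*A*T^4. This sketch assumes 1 MW of waste heat, a two-sided panel at 300 K, emissivity 0.9, and ignores the environmental sink temperature; all of those are illustrative assumptions:

    SIGMA = 5.670e-8       # W/m^2/K^4, Stefan-Boltzmann constant
    EMISSIVITY = 0.9       # assumed radiator surface emissivity
    T_RADIATOR = 300.0     # K, assumed radiating temperature
    P_WASTE = 1_000_000.0  # W, assumed 1 MW of waste heat to reject

    # Factor of 2 because both faces of the panel radiate.
    area_m2 = P_WASTE / (2 * EMISSIVITY * SIGMA * T_RADIATOR**4)
    print(f"~{area_m2:,.0f} m^2 of radiator panel")  # ~1,200 m^2 here

Over a thousand square metres of panel, plus the heat pipes or pumped loops to feed it, is why thermal hardware dominates the mass budget.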

vidarh:
I've had actual, real-life deployments in datacentres where we just left dead hardware in the racks until we needed the space, and we rarely did. Typically we'd visit a couple of times a year, because it was cheap to do so, but it'd have been totally viable to let failures accumulate over a much longer time horizon.

Failure rates tend to follow a bathtub curve, so if you burn in the hardware before launch, you'd expect low failure rates for a long period. It's quite likely it'd be cheaper not to replace components at all: build in enough redundancy for the key systems (power, cooling, networking) that you can shut down and disable any dead servers, then replace the whole unit once enough parts have failed.
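
A minimal sketch of that trade, assuming burn-in leaves a constant hazard rate (the flat bottom of the bathtub curve); the rate, fleet size, and capacity margin are made-up numbers:

    import math

    SERVERS = 5_000
    HAZARD_PER_YEAR = 0.02   # assumed constant post-burn-in failure rate
    MISSION_YEARS = 5
    CAPACITY_MARGIN = 0.15   # assumed 15% extra servers provisioned up front

    for year in range(1, MISSION_YEARS + 1):
        alive = SERVERS * math.exp(-HAZARD_PER_YEAR * year)
        dead_frac = 1 - alive / SERVERS
        ok = "within margin" if dead_frac <= CAPACITY_MARGIN else "OVER margin"
        print(f"year {year}: {dead_frac:.1%} dead ({ok})")

At a 2%/yr hazard you lose under 10% of the fleet in five years, so a modest over-provision covers the whole mission with no servicing at all.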

TheOtherHobbes:
The analysis has zero redundancy for either servers or support systems.

Redundancy is a small issue on Earth, but it completely changes the calculation in space, because you need more of everything, which makes the already-unfavourable volume and mass requirements even harder to justify.

Without backup cooling and power, one small failure could take the entire facility offline.
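
A toy availability calculation shows why: subsystems in series multiply, so any single-string element caps the whole facility, while duplicating a subsystem takes its availability from A to 1-(1-A)^2. The availability figures here are assumed for illustration:

    A_POWER, A_COOLING, A_COMMS = 0.99, 0.98, 0.995  # assumed per-subsystem availability

    def dual(a: float) -> float:
        # Two independent strings: down only if both are down.
        return 1 - (1 - a) ** 2

    single_string = A_POWER * A_COOLING * A_COMMS
    redundant = dual(A_POWER) * dual(A_COOLING) * dual(A_COMMS)

    print(f"single-string availability: {single_string:.4f}")   # ~0.965
    print(f"dual-redundant availability: {redundant:.4f}")      # ~0.999

The catch, of course, is that the redundant version roughly doubles the mass of every supporting subsystem.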

And active cooling - which is a given at these power densities - requires complex pumps and plumbing which have to survive a launch.

The whole idea is bonkers.

IMO you'd be better off thinking about a swarm of cheaper, simpler, individual serversats or racksats connected by a radio or microwave comms mesh.

I have no idea if that's any more economic, but at least it solves the most obvious redundancy and deployment issues.

conradev:
Many small satellites also increase the total surface area available for cooling.
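
Roughly speaking: split one spacecraft into n self-similar smaller ones of the same total volume and the total radiating area grows by a factor of n^(1/3). A quick sketch, assuming cubes for simplicity:

    def total_area_ratio(n: int) -> float:
        # One cube of volume V has area 6*V^(2/3); n cubes of volume V/n
        # have total area n * 6 * (V/n)^(2/3) = 6*V^(2/3) * n^(1/3).
        return n ** (1 / 3)

    for n in (8, 64, 1000):
        print(f"{n} sats -> {total_area_ratio(n):.1f}x the radiating area")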
AtlasBarfed:
Like a neo-fractal surface? There's no atmosphere to wear it down.