Still, kudos for going this path in the cloud-centric time we live in.
One of the better ones was the dead possum in the drain during a thunderstorm.
>So do we throw the main switch before we get electrocuted? Or do we try to poke enough holes in it that it gets flushed out? And what about the half million in servers that are going to get ruined?
Sign up to my patreon to find out how the story ended.
They will vary by country, by state, or even by county; setting up a DC in the Bay Area versus, say, one in Ohio or Utah is a very different endeavor with different design considerations.
What point are you trying to make? It does not matter where you are in the world, what local laws exist, or what permits are required: racking up servers in a cage is much less difficult than physically building a data center (of which racking up servers is a part).
Also, people doing DC build-outs aren't likely to be keen on talking publicly about permits and confidential agreements in the industry.
Yes, the title is clickbaity, but that is par for the course these days.
I feel it's important to stress that the difficulty of colocating something, let alone actually building a data center, is exactly what makes cloud computing so enticing and popular.
Everyone focuses on trivia like OpEx vs. CapEx and dynamic scaling, but actually plugging in the hardware in a secure setting and getting it to work reliably is a massive undertaking.
Regarding data centers that cost 9 figures and up:
For the largest players, there’s not a ton of variation. A combination of evaporative cooling towers and chillers is used to reject heat. This is a consequence of evaporative open-loop cooling being 2-3x more efficient than a closed-loop system.
There will be multiple medium-voltage electrical services, usually from different utilities or substations, with backup generators and UPSes and paralleling switchgear to handle failover between normal, emergency, and critical power sources.
There’s not a lot of variation since the two main needs of a data center are reliable electricity and the ability to remove heat from the space, and those are well-solved problems in mature engineering disciplines (ME and EE). The huge players are plopping these all across the country and repeatability/reliability is more important than tailoring the build to the local climate.
FWIW my employer has done billions of dollars of data center construction work for some of the largest tech companies (members of Mag7) and I’ve reviewed construction plans for multiple data centers.
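To put rough numbers on the heat-rejection point above, here's a back-of-the-envelope sketch. The 20 MW IT load and 6 K condenser-water temperature rise are illustrative assumptions, not figures from the comment; it just shows how quickly the water flow through the towers scales with load (Q = m_dot * c_p * delta_T):

    # Back-of-the-envelope condenser-water flow for a given IT load.
    # The load and temperature rise below are assumed values for illustration.
    CP_WATER_J_PER_KG_K = 4186.0   # specific heat of water
    KG_S_TO_GPM = 15.85            # 1 kg/s of water is roughly 15.85 US gpm

    def condenser_flow_kg_s(heat_load_w: float, delta_t_k: float) -> float:
        """Mass flow of water needed to carry heat_load_w with a delta_t_k rise: Q = m*cp*dT."""
        return heat_load_w / (CP_WATER_J_PER_KG_K * delta_t_k)

    it_load_w = 20e6   # assume a 20 MW IT load
    delta_t_k = 6.0    # assume a 6 K rise across the condenser loop
    flow = condenser_flow_kg_s(it_load_w, delta_t_k)
    print(f"~{flow:.0f} kg/s (~{flow * KG_S_TO_GPM:,.0f} gpm) of condenser water")

That works out to roughly 800 kg/s of water in continuous circulation for a 20 MW load, which gives a sense of the scale of the mechanical plant involved.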
1 and 2 are independent of regulatory domain. 3 involves utilities, not governments, and is probably a clusterf*ck anywhere; 4 isn't as bad (anywhere in the US; not sure elsewhere) because it's not a monopoly, and you can probably find someone to say "yes" for a high enough price.
There are people everywhere who are experts in site acquisition, permits, etc. Not so many who know how to build the thermals and power, and who aren't employed by hyperscalers who don't let them moonlight. And depending on your geographic location, getting those megawatts from your utility may be flat out impossible.
This assumes a new build. Retrofitting an existing building probably ranges from difficult to impossible, unless you're really lucky in your choice of building.
[*] hmm, the one geographic issue I can think of is water availability. If you can't get enough water to run evaporative coolers, that might be a problem - e.g. dumping 10MW into the air requires boiling off I think somewhere around 100K gallons of water a day.
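That ballpark checks out; here's a quick sanity check using the latent heat of vaporization of water (the 10 MW figure is from the comment above, the constants are standard; real towers use somewhat more water once you add blowdown and drift):

    # Sanity check: water evaporated per day to reject 10 MW evaporatively.
    LATENT_HEAT_J_PER_KG = 2.26e6   # latent heat of vaporization of water, ~2.26 MJ/kg
    KG_PER_US_GALLON = 3.785        # 1 US gallon of water is ~3.785 kg
    SECONDS_PER_DAY = 86_400

    heat_load_w = 10e6                                 # 10 MW, as above
    energy_per_day_j = heat_load_w * SECONDS_PER_DAY   # ~864 GJ/day
    water_kg = energy_per_day_j / LATENT_HEAT_J_PER_KG
    print(f"~{water_kg / KG_PER_US_GALLON:,.0f} US gallons/day")   # prints roughly 100K

That lands right around 100,000 gallons a day, so the figure in the comment is about right for pure evaporation.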
I'll point out that some of the key thermal and power stuff in those plans you saw may have come from the hyperscalers themselves - our experience a dozen years or so ago was that we couldn't just put it out to bid, as the typical big construction players knew how to build old data centers, not new ones, and we had to hire a (very small) engineering team to design it ourselves.
Heat removal is well-solved in theory. Heat removal from a large office building is well-solved in practice - lots of people know exactly what equipment is needed, how to size, install, and control it, what building features are needed for it, etc. Take some expert MEs without prior experience at this, toss them a few product catalogs, and ask them to design a data center solution from first principles using the systems available, and it wouldn't be so easy.
There are people for whom data center heat removal is a solved problem in practice, although maybe not in the same way because the goalposts keep moving (e.g. watts per rack). Things may be different now, but a while back very few of those people were employed by companies who would be willing to work on datacenters they didn't own themselves.
Finally I'd add that "9 figures" seems excessive for building+power+cooling, unless you're talking crazy sizes (100MW?). If you're including the contents, then of course they're insanely expensive.