    596 points dban | 16 comments
    jonatron ◴[] No.42743915[source]
    Why would you call colocation "building your own data center"? You could call it "colocation" or "renting space in a data center". What are you building? You're racking. Can you say what you mean?
    replies(11): >>42743999 #>>42744262 #>>42744459 #>>42744487 #>>42744684 #>>42745045 #>>42745071 #>>42745397 #>>42745493 #>>42746491 #>>42748930 #
    1. xiconfjs ◴[] No.42744262[source]
    I have to second this. While it takes much effort and in-depth knowledge to build up from an “empty” cage, it's still far from dealing with everything involved in planning and building a data center to code, from building permits to redundant power feeds, AC, and fibre.

    Still, kudos for going this path in the cloud-centric time we live in.

    replies(4): >>42744417 #>>42744688 #>>42745193 #>>42745473 #
    2. j45 ◴[] No.42744417[source]
    Having been around and through both, setting up a cage or two is very different from setting up the entire facility.
    replies(1): >>42744708 #
    3. matt-p ◴[] No.42744688[source]
    Yes, the second is much more work, orders of magnitude at least.
    replies(1): >>42748050 #
    4. HaZeust ◴[] No.42744708[source]
    I think you and GP are in agreement.
    5. llm_trw ◴[] No.42745193[source]
    Do I have stories.

    One of the better ones was the dead possum in the drain during a thunderstorm.

    >So do we throw the main switch before we get electrocuted? Or do we try to poke enough holes in it that it gets flushed out? And what about the half million in servers that are going to get ruined?

    Sign up to my patreon to find out how the story ended.

    replies(1): >>42746989 #
    6. manquer ◴[] No.42745473[source]
    While it is more complex to actually build out the center, a lot of that is specific to the region you are doing it in.

    They will vary by country, by state, or even county; setting up a DC in the Bay Area versus, say, one in Ohio or Utah is a very different endeavor with different design considerations.

    replies(3): >>42747241 #>>42748833 #>>42752384 #
    7. pinoy420 ◴[] No.42746989[source]
    Give me a link to your patreon
    replies(1): >>42747974 #
    8. itsoktocry ◴[] No.42747241[source]
    >They will vary by country, by state, or even county; setting up a DC in the Bay Area versus, say, one in Ohio or Utah is a very different endeavor with different design considerations.

    What point are you trying to make? It does not matter where you are in the world, what local laws exist, or what permits are required: racking up servers in a cage is much less difficult than physically building a data center (of which racking up servers is a part).

    replies(1): >>42747306 #
    9. manquer ◴[] No.42747306{3}[source]
    I meant that the learnings from doing actual build-outs aren't going to translate to other geographies and regulatory climates, not that the work is less difficult or that it isn't interesting and important.

    Also, people doing the build-outs of a DC aren't likely to be keen on talking about permits and confidential agreements in the industry quite so publicly.

    Yes, the title is clickbaity, but that is par for the course these days.

    replies(1): >>42747586 #
    10. xiconfjs ◴[] No.42747586{4}[source]
    Sure, every business has confidential agreements which are usually kept secret, but there are, even on YouTube, a few people/companies who have given deep insights into the bits and bytes of building a data center from the ground up, across multiple hours of documentation. And the confidential business agreements in the data center world are, up to a certain level, the same as in any other business.
    11. Imustaskforhelp ◴[] No.42747974{3}[source]
    Pay for the man's patreon and then tell me the story, please!
    12. motorest ◴[] No.42748050[source]
    > Yes, the second is much more work, orders of magnitude at least.

    I feel it's important to stress that the difficulty level of colocating something, let alone actually building a data center, is exactly what makes cloud computing so enticing and popular.

    Everyone focuses on trivia like OpEx vs CapEx and dynamic scaling, but actually plugging in the hardware in a secure setting and getting it to work reliably is a massive undertaking.

    replies(1): >>42749967 #
    13. quickthrowman ◴[] No.42748833[source]
    > They will vary by country, by state, or even county; setting up a DC in the Bay Area versus, say, one in Ohio or Utah is a very different endeavor with different design considerations.

    Regarding data centers that cost 9 figures and up:

    For the largest players, there's not a ton of variation. A combination of evaporative cooling towers and chillers is used to reject heat. This is a consequence of evaporative open-loop cooling being 2-3x more efficient than a closed-loop system.
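
    As a rough, back-of-the-envelope sketch of that efficiency gap, here is some arithmetic with assumed, illustrative coefficient-of-performance values (roughly COP 6 for a water-cooled chiller plant fed by open-loop evaporative towers, roughly COP 3 for a closed-loop/air-cooled plant; actual figures vary with climate and equipment, and none of these numbers come from the comment above):

        # Electricity needed to reject the same heat load with the two cooling approaches.
        heat_load_mw = 10.0           # IT heat to reject (illustrative)
        cop_open_loop = 6.0           # assumed: water-cooled chillers + evaporative cooling towers
        cop_closed_loop = 3.0         # assumed: air-cooled / dry-cooler closed loop

        power_open_mw = heat_load_mw / cop_open_loop      # ~1.7 MW of fan/pump/compressor power
        power_closed_mw = heat_load_mw / cop_closed_loop  # ~3.3 MW of fan/pump/compressor power

        print(power_closed_mw / power_open_mw)            # ~2x, consistent with the 2-3x claim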

    There will be multiple medium-voltage electrical services, usually from different utilities or substations, with backup generators and UPSes and paralleling switchgear to handle failover between normal, emergency, and critical power sources.

    There’s not a lot of variation since the two main needs of a data center are reliable electricity and the ability to remove heat from the space, and those are well-solved problems in mature engineering disciplines (ME and EE). The huge players are plopping these all across the country and repeatability/reliability is more important than tailoring the build to the local climate.

    FWIW my employer has done billions of dollars of data center construction work for some of the largest tech companies (members of Mag7) and I’ve reviewed construction plans for multiple data centers.

    replies(1): >>42757998 #
    14. matt-p ◴[] No.42749967{3}[source]
    I just honestly don't agree with that at all. That's the easy bit; the bit I don't enjoy is organising backups and storage in general. But it's not 'hard'.
    15. pjdesno ◴[] No.42752384[source]
    Issues in building your own physical data center (based on a 15MW location some people I know built):

    1 - Thermal. To get your PUE down below, say, 1.2 you need to do things like hot aisle containment or, better yet, water cooling - the hotter your heat, the cheaper it is to get rid of.[*] (A rough PUE sketch follows this list.)
    2 - Power distribution. How much power do you waste getting it to your machines? Can you run them on 220V, so their power supplies are more efficient?
    3 - Power. You don't just call your utility company and ask them to run 10+MW from the street to your building.
    4 - Networking. You'll probably need redundant dark fiber running somewhere.
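
    For readers not familiar with the metric: PUE is total facility power divided by IT equipment power, so 1.2 means 0.2 W of overhead for every watt the servers draw. A minimal sketch with made-up example numbers (none of them from the comment above):

        it_load_kw = 1000.0          # power drawn by the servers themselves (example value)
        cooling_kw = 150.0           # chillers, fans, pumps (example value)
        losses_misc_kw = 50.0        # UPS/transformer losses, lighting, etc. (example value)

        pue = (it_load_kw + cooling_kw + losses_misc_kw) / it_load_kw
        print(pue)                   # 1.2 -> containment and water cooling push this overhead down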

    1 and 2 are independent of regulatory domain. 3 involves utilities, not governments, and is probably a clusterfck anywhere; 4 isn't as bad (anywhere in the US; not sure elsewhere) because it's not a monopoly, and you can probably find someone to say "yes" for a high enough price.

    There are people everywhere who are experts in site acquisition, permits, etc. Not so many who know how to build the thermals and power, and who aren't employed by hyperscalers who don't let them moonlight. And depending on your geographic location, getting those megawatts from your utility may be flat out impossible.

    This assumes a new build. Retrofitting an existing building probably ranges from difficult to impossible, unless you're really lucky in your choice of building.

    [*] hmm, the one geographic issue I can think of is water availability. If you can't get enough water to run evaporative coolers, that might be a problem - e.g. dumping 10MW into the air requires boiling off I think somewhere around 100K gallons of water a day.
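
    A quick sanity check of that estimate, assuming water's latent heat of vaporization of about 2.26 MJ/kg (real cooling towers evaporate below boiling, which changes the number a little but not the order of magnitude):

        heat_mw = 10.0                      # heat being dumped into the air
        latent_heat_j_per_kg = 2.26e6       # latent heat of vaporization of water
        seconds_per_day = 86_400
        litres_per_gallon = 3.785           # 1 kg of water is roughly 1 litre

        kg_per_day = heat_mw * 1e6 / latent_heat_j_per_kg * seconds_per_day  # ~380,000 kg
        gallons_per_day = kg_per_day / litres_per_gallon                     # ~100,000 gallons
        print(round(gallons_per_day))       # matches the ~100K gallons/day estimate above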

    16. pjdesno ◴[] No.42757998{3}[source]
    You've got more experience there than me, and I've only seen the plans for a single center.

    I'll point out that some of the key thermal and power stuff in those plans you saw may have come from the hyperscalers themselves - our experience a dozen years or so ago was that we couldn't just put it out to bid, as the typical big construction players knew how to build old data centers, not new ones, and we had to hire a (very small) engineering team to design it ourselves.

    Heat removal is well-solved in theory. Heat removal from a large office building is well-solved in practice - lots of people know exactly what equipment is needed, how to size, install, and control it, what building features are needed for it, etc. Take some expert MEs without prior experience at this, toss them a few product catalogs, and ask them to design a solution from first principles using the systems available and it wouldn't be so easy.

    There are people for whom data center heat removal is a solved problem in practice, although maybe not in the same way because the goalposts keep moving (e.g. watts per rack). Things may be different now, but a while back very few of those people were employed by companies who would be willing to work on datacenters they didn't own themselves.

    Finally I'd add that "9 figures" seems excessive for building+power+cooling, unless you're talking crazy sizes (100MW?). If you're including the contents, then of course they're insanely expensive.