
261 points by dban | 37 comments
1. jonatron ◴[] No.42743915[source]
Why would you call colocation "building your own data center"? You could call it "colocation" or "renting space in a data center". What are you building? You're racking. Can you say what you mean?
replies(9): >>42743999 #>>42744262 #>>42744459 #>>42744487 #>>42744684 #>>42745045 #>>42745071 #>>42745397 #>>42745493 #
2. macintux ◴[] No.42743999[source]
Dealing with power at that scale, and arranging your own ISPs, seems a bit beyond your normal colocation project, but I haven’t been in the data center space in a very long time.
replies(2): >>42744227 #>>42745372 #
3. redeux ◴[] No.42744227[source]
I worked for a colo provider for a long time. Many tenants arranged for their own ISPs, especially the ones large enough to use a cage.
4. xiconfjs ◴[] No.42744262[source]
I have to second this. While it takes much effort and in-depth knowledge to build up from an “empty” cage, it’s still far from dealing with everything involved in planning and building a data center to code, from building permits to redundant power feeds, AC and fibre.

Still, kudos for going this path in the cloud-centric times we live in.

replies(4): >>42744417 #>>42744688 #>>42745193 #>>42745473 #
5. j45 ◴[] No.42744417[source]
Having been around and through both, setting up a cage or two is very different than the entire facility.
replies(1): >>42744708 #
6. walrus01 ◴[] No.42744459[source]
> Why would you call colocation "building your own data center"?

The cynic in me says this was written by sales/marketing people, targeted specifically at a whole new generation of people who've never laid hands on the bare metal or racked a piece of equipment or done low-voltage cabling, fiber cabling, and "plug this into A and B power" AC power cabling.

By this, I mean people who've never done anything that isn't GCP, Azure, AWS, etc. Many terms related to bare-metal infrastructure are misused by people who haven't been around the industry long enough to have been required to DIY all their own infrastructure on their own bare metal.

I really don't mean any insult to people reading this who've only ever touched the software side, but if a document is describing the general concept of hot aisles and cold aisles to an audience in a way that assumes they don't know what those are, it's written at a very introductory/beginner level of understanding of OSI layer 1 infrastructure.

replies(2): >>42744604 #>>42745241 #
7. TacticalCoder ◴[] No.42744487[source]
> You could call it "colocation" or "renting space in a data center". What are you building? You're racking. Can you say what you mean?

TFA explains what they're doing; they literally write this:

"In general you have three main choices: Greenfield buildout (...), Cage Colocation (getting a private space inside a provider's datacenter enclosed by mesh walls), or Rack colocation...

We chose the second option"

I don't know how much clearer they can be.

8. justjake ◴[] No.42744604[source]
I think that's my fault BTW (Railway Founder here). I asked Charith to cut down a bit on the details to make sure it was approachable to a wider audience (And most people have only done Cloud)

I wanted to start off with the 101 content to see if people found it approachable/interesting. He's got like reams and reams of 201, 301, 401

Next time I'll stay out of the writing room!

replies(1): >>42744922 #
9. chatmasta ◴[] No.42744684[source]
It seems a bit disingenuous but it’s common practice. Even the hyperscalers, who do have their own datacenters, include their colocation servers in the term “datacenter.” Good luck finding the actual, physical location of a server in GCP europe-west2-a (“London”). Maybe it’s in a real Google datacenter in London! Or it could be in an Equinix datacenter in Slough, one room away from AWS eu-west-1.

Cloudflare has also historically used “datacenter” to refer to their rack deployments.

All that said, for the purpose of the blog post, “building your own datacenter” is misleading.

replies(3): >>42744705 #>>42745298 #>>42745377 #
10. matt-p ◴[] No.42744688[source]
Yes, the second is much more work; orders of magnitude more, at least.
11. matt-p ◴[] No.42744705[source]
The hyperscalers are absolutely not colo-ing their general purpose compute at Equinix! A cage for routers and direct connect, maybe some limited Edge CDN/compute at most.

Even where they do lease wholesale space, you'd be hard pushed to find examples of more than one in a single building. If you count them as Microsoft, Google, and AWS, then I'm not sure I can think of a single example off the top of my head. It's only really possible if you start including players like IBM or Oracle in that list.

replies(4): >>42744718 #>>42744738 #>>42745016 #>>42745300 #
12. HaZeust ◴[] No.42744708{3}[source]
I think you and GP are in agreement.
13. chatmasta ◴[] No.42744718{3}[source]
Maybe leasing wholesale space shouldn’t be considered colocation, but GCP absolutely does this and the Slough datacenter was a real example.

I can’t dig up the source atm but IIRC some Equinix website was bragging about it (and it wasn’t just about direct connect to GCP).

replies(1): >>42744749 #
14. deelowe ◴[] No.42744738{3}[source]
Hyperscalers use colos all the time for edge presence.
15. matt-p ◴[] No.42744749{4}[source]
Google doesn't put GCP compute inside Equinix Slough. I could perhaps believe they have a cage of routers and perhaps even CDN boxes/Edge, but no general cloud compute.

Google and AWS will put routers inside Equinix Slough, sure, but that's literally written on the tin, and it's the only way a carrier hotel could work.

replies(1): >>42744779 #
16. chatmasta ◴[] No.42744779{5}[source]
Then why do they obfuscate the location of their servers? If they were all in Google datacenters, why not let me see where my VM is?
replies(1): >>42744859 #
17. achierius ◴[] No.42744859{6}[source]
Security reasons, I presume? Otherwise it would be trivial for an adversary to map out their resources by sampling VM rentals over a moderate time-period.
replies(1): >>42744894 #
18. lostlogin ◴[] No.42744894{7}[source]
I’m very naive on the subject here - what advantage would this give someone?
replies(1): >>42745209 #
19. haneefmubarak ◴[] No.42744922{3}[source]
Bro let him at the 401 and higher hahaha!
replies(1): >>42745008 #
20. justjake ◴[] No.42745008{4}[source]
"Booo who let this guy cook?"

Fair tbh

We will indeed write more on this so this is great feedback for next time!

21. fragmede ◴[] No.42745016{3}[source]
The best part about adamantly making such a claim is that anybody who knows better also knows better than to break NDA and pull a Warthunder to prove that the CSPs do use colo facilities, so you're not going to get anyone who knows better to disagree with you and say AWS S3 or GCP compute is colo-ed at a specific colo provider.
replies(1): >>42745213 #
22. ◴[] No.42745045[source]
23. ThatGuyRaion ◴[] No.42745071[source]
Not saying I don't agree with you, but most tech businesses that have their own "data center" usually have a private cage in a colo.
replies(1): >>42745166 #
24. cortesoft ◴[] No.42745166[source]
They usually don’t say they are building their own datacenter, though. It is different to say something like, “our website runs in our datacenter” than saying you built a datacenter. You would still say, “at our office buildings”, even if you are only renting a few offices in an office park.
25. llm_trw ◴[] No.42745193[source]
Do I have stories.

One of the better ones was the dead possum in the drain during a thunderstorm.

>So do we throw the main switch before we get electrocuted? Or do we try to poke enough holes in it that it gets flushed out? And what about the half million in servers that are going to get ruined?

Sign up to my patreon to find out how the story ended.

26. chupasaurus ◴[] No.42745209{8}[source]
The knowledge of blast radii.
replies(1): >>42745413 #
27. matt-p ◴[] No.42745213{4}[source]
They consume wholesale space, but not retail colo for general compute; that's all I'm saying.

Equinix is retail, with only a couple of exceptions, although I know they're trying to grow the wholesale side.

28. llm_trw ◴[] No.42745241[source]
I mean, the more people realize that the cloud is now a bad deal, the better.

When the original AWS instances came out, it would take you about two years of on-demand usage to pay for the same hardware on prem. Now it's anywhere from two weeks for ML-heavy instances to six months for medium CPU instances.

It just doesn't make sense to use the cloud for anything past prototyping, unless you want Bezos to have a bigger yacht.
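For anyone who wants to sanity-check that, the break-even arithmetic is just hardware cost divided by (on-demand hourly rate × hours of use). A rough sketch in Python, with purely illustrative placeholder prices (not actual AWS or vendor quotes):

    # Months of 24/7 on-demand usage needed to match the up-front hardware cost.
    def breakeven_months(hardware_cost_usd: float, on_demand_usd_per_hour: float) -> float:
        hours_per_month = 730  # roughly 24 * 365 / 12
        return hardware_cost_usd / (on_demand_usd_per_hour * hours_per_month)

    # Hypothetical numbers, for illustration only; substitute real quotes to check the claim.
    print(f"GPU box vs. GPU instance:    {breakeven_months(200_000, 30.00):.1f} months")
    print(f"Mid server vs. CPU instance: {breakeven_months(4_000, 0.40):.1f} months")

The answer obviously depends entirely on the prices you plug in; the point is only that the payback window has been shrinking.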

29. boulos ◴[] No.42745298[source]
You're correct, there are multiple flavors of Google Cloud Locations. The "Google concrete" ones are listed at google.com/datacenters and London isn't on that list, today.

cloud.google.com/about/locations lists all the locations where GCE offers service, which is a superset of the large facilities that someone would call a "Google Datacenter". I liked to mostly refer to the distinction as Google concrete (we built the building) or not. Ultimately, even in locations that are shared colo spaces, or rented, it's still Google putting custom racks there, integrating into the network and services, etc. So from a customer perspective, you should pick the right location for you. If that happens to be in a facility where Google poured the concrete, great! If not, it's not the end of the world.

P.S., I swear the certification PDFs used to include this information (e.g., https://cloud.google.com/security/compliance/iso-27018?hl=en) but now these are all behind "Contact Sales" and some new Certification Manager page in the console.

Edit: Yes! https://cloud.google.com/docs/geography-and-regions still says:

> These data centers might be owned by Google and listed on the Google Cloud locations page, or they might be leased from third-party data center providers. For the full list of data center locations for Google Cloud, see our ISO/IEC 27001 certificate. Regardless of whether the data center is owned or leased, Google Cloud selects data centers and designs its infrastructure to provide a uniform level of performance, security, and reliability.

So someone can probably use web.archive.org to get the ISO-27001 certificate PDF from whenever the last time it was still up.
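If someone does go digging, here's a minimal sketch of that lookup using the Wayback Machine's public availability API (the URL below is just the compliance page linked above, standing in for the actual certificate PDF link):

    import json
    import urllib.parse
    import urllib.request

    # Ask the Wayback Machine availability API for the most recent capture of a URL.
    def latest_snapshot(url: str):
        api = "https://archive.org/wayback/available?" + urllib.parse.urlencode({"url": url})
        with urllib.request.urlopen(api) as resp:
            data = json.load(resp)
        snap = data.get("archived_snapshots", {}).get("closest")
        return snap["url"] if snap and snap.get("available") else None

    # Placeholder target: the compliance page mentioned above; swap in the certificate PDF URL.
    print(latest_snapshot("https://cloud.google.com/security/compliance/iso-27018"))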

30. boulos ◴[] No.42745300{3}[source]
See my sibling comment :).
31. latchkey ◴[] No.42745372[source]
One of the many reasons we went with Switch for our DC is that they have a service to handle all of that for you. Having stumbled through doing this ourselves before, I can say it can be pretty tricky to negotiate everything.

We had one provider give us a great price and then bait-and-switch at the last moment, telling us there was some other massive installation charge they hadn't realized we had to pay.

Switch Connect/Core is based off the old Enron business that Rob (CEO) bought...

https://www.switch.com/switch-connect/ https://www.switch.com/the-core-cooperative/

32. Over2Chars ◴[] No.42745377[source]
Indeed, I've seen "data center" maps and was surprised to find they were just a tenant in some other guy's data center.
33. vel0city ◴[] No.42745397[source]
How to build a house:

Step 1: sign a lease at an apartment

34. jazzyjackson ◴[] No.42745413{9}[source]
Gives whole new meaning to “reverse engineering”
replies(1): >>42745541 #
35. manquer ◴[] No.42745473[source]
While it is more complex to actually build out the center, a lot of that is specific to the region you are doing it in.

They will vary by country, by state, or even by county; setting up a DC in the Bay Area versus, say, one in Ohio or Utah is a very different endeavor with different design considerations.

36. ◴[] No.42745493[source]
37. chupasaurus ◴[] No.42745541{10}[source]
Well, the alternative name for it is "backwards engineering" for a reason.