Still, kudos for going down this path in the cloud-centric time we live in.
The cynic in me says this was written by sales/marketing people, targeted specifically at a whole new generation of people who've never laid hands on bare metal, racked a piece of equipment, or done low-voltage cabling, fiber cabling, and "plug this into A and B AC power" cabling.
By this, I mean people who've never done anything that isn't GCP, Azure, AWS, etc. Much of the terminology around bare-metal infrastructure gets misused by people who haven't been in the industry long enough to have had to DIY all their own infrastructure on their own bare metal.
I really don't mean any insult to people reading this who've only ever touched the software side, but if a document describes the general concept of hot aisles and cold aisles in a way that assumes the audience doesn't know what those are, it's pitched at a very introductory/beginner level of understanding the OSI layer 1 infrastructure.
TFA explains what they're doing; they literally write this:
"In general you have three main choices: Greenfield buildout (...), Cage Colocation (getting a private space inside a provider's datacenter enclosed by mesh walls), or Rack colocation...
We chose the second option"
I don't know how much clearer they can be.
I wanted to start off with the 101 content to see if people found it approachable/interesting. He's got like reams and reams of 201, 301, 401 material.
Next time I'll stay out of the writing room!
Cloudflare has also historically used “datacenter” to refer to their rack deployments.
All that said, for the purpose of the blog post, “building your own datacenter” is misleading.
Even where they do lease wholesale space, you'd be hard pressed to find examples of more than one of them in a single building. If you count "them" as Microsoft, Google, and AWS, then I'm not sure I can think of a single example off the top of my head. It's only really possible if you start including players like IBM or Oracle in that list.
I can’t dig up the source atm but IIRC some Equinix website was bragging about it (and it wasn’t just about direct connect to GCP).
Google and AWS will put routers inside Equinix Slough, sure, but that's literally written on the tin, and it's the only way a carrier hotel could work.
One of the better ones was the dead possum in the drain during a thunderstorm.
>So do we throw the main switch before we get electrocuted? Or do we try to poke enough holes in it that it gets flushed out? And what about the half million in servers that are going to get ruined?
Sign up to my patreon to find out how the story ended.
When the original AWS instances came out, it would take about two years of on-demand usage to pay for the same hardware on-prem. Now it's anywhere from two weeks for ML-heavy instances to six months for medium CPU instances.
It just doesn't make sense to use the cloud for anything past prototyping, unless you want Bezos to have a bigger yacht.
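For anyone who wants to sanity-check that break-even claim, here's a minimal back-of-the-envelope sketch. The hardware costs and hourly rates below are made-up placeholders, not actual AWS prices or numbers from the comment above; plug in a real hardware quote and real on-demand rates for your own workload.

    # Rough cloud-vs-on-prem break-even sketch. All figures are hypothetical
    # placeholders, not vendor quotes.

    def breakeven_months(onprem_hardware_cost: float, ondemand_hourly_rate: float) -> float:
        """Months of 24/7 on-demand usage that add up to the up-front hardware cost."""
        monthly_ondemand = ondemand_hourly_rate * 24 * 30
        return onprem_hardware_cost / monthly_ondemand

    # Hypothetical examples: a GPU box vs. an ML-heavy instance,
    # and a midrange server vs. a medium CPU instance.
    print(breakeven_months(onprem_hardware_cost=20_000, ondemand_hourly_rate=30.0))  # ~0.9 months
    print(breakeven_months(onprem_hardware_cost=5_000, ondemand_hourly_rate=1.0))    # ~6.9 months

Obviously this ignores power, space, staffing, and utilization, so treat it as a first-order comparison only.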
cloud.google.com/about/locations lists all the locations where GCE offers service, which is a superset of the large facilities that someone would call a "Google Datacenter". I liked to frame the distinction as Google concrete (we built the building) or not. Ultimately, even in locations that are shared colo spaces, or rented, it's still Google putting custom racks there, integrating them into the network and services, etc. So from a customer perspective, you should pick the right location for you. If that happens to be in a facility where Google poured the concrete, great! If not, it's not the end of the world.
P.S., I swear the certification PDFs used to include this information (e.g., https://cloud.google.com/security/compliance/iso-27018?hl=en) but now these are all behind "Contact Sales" and some new Certification Manager page in the console.
Edit: Yes! https://cloud.google.com/docs/geography-and-regions still says:
> These data centers might be owned by Google and listed on the Google Cloud locations page, or they might be leased from third-party data center providers. For the full list of data center locations for Google Cloud, see our ISO/IEC 27001 certificate. Regardless of whether the data center is owned or leased, Google Cloud selects data centers and designs its infrastructure to provide a uniform level of performance, security, and reliability.
So someone can probably use web.archive.org to get the ISO-27001 certificate PDF from whenever the last time it was still up.
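If anyone wants to try that, here's a quick sketch against the Wayback Machine availability API. The certificate URL passed in at the bottom is just a placeholder guess, not the actual path of Google's ISO/IEC 27001 PDF; point it at whatever the geography-and-regions page used to link to.

    # Minimal sketch: find the closest archived snapshot of a URL via the
    # Wayback Machine availability API (https://archive.org/wayback/available).
    import json
    import urllib.parse
    import urllib.request
    from typing import Optional

    def latest_snapshot(url: str) -> Optional[str]:
        """Return the closest archived snapshot URL for `url`, or None if none exists."""
        api = "https://archive.org/wayback/available?url=" + urllib.parse.quote(url, safe="")
        with urllib.request.urlopen(api) as resp:
            data = json.load(resp)
        closest = data.get("archived_snapshots", {}).get("closest")
        return closest["url"] if closest and closest.get("available") else None

    # Placeholder example, not the real certificate PDF location:
    print(latest_snapshot("https://cloud.google.com/security/compliance/iso-27001"))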
We had one provider give us a great price and then bait-and-switch at the last moment, telling us there was some other massive installation charge they hadn't realized we had to pay.
Switch Connect/Core is based off the old Enron business that Rob (CEO) bought...
https://www.switch.com/switch-connect/ https://www.switch.com/the-core-cooperative/
They will vary by country, by state, or even by county; setting up a DC in the Bay Area versus one in, say, Ohio or Utah is a very different endeavor with different design considerations.