    242 points panrobo | 11 comments
    1. 0xbadcafebee ◴[] No.42057644[source]
    GEICO is moving away from the cloud because their IT is a joke. They had a horrible on-prem infrastructure, so they moved to the cloud not knowing how, and they made the same mistakes in the cloud as on-prem, plus the usual mistakes every cloud migration runs into. They are moving away from the cloud because their new VP's entire career is focused on running her own hardware. What we know about their new setup is absolutely bonkers (like, K8s-on-OpenStack-on-K8s bonkers). Look to them for what not to do.

    37signals is like the poster child for NIH syndrome. They keep touting cost savings as the reason for the move, but from what I have gathered, they basically did nothing to save cost in the cloud. It is trivial to save 75% off AWS's list price. They will even walk you through it; they literally want you to save money. That, plus using specific tech in specific ways, lets you reap the major benefits of modern designs while cutting costs even further. 37signals didn't seem to want to go that route. But they do love to build their own things, so servers would be a natural thing for them to DIY.
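
    For what it's worth, the spot market alone shows how far below list price you can get. A minimal sketch, assuming boto3 with configured credentials in us-east-1; the on-demand price is hardcoded for illustration rather than pulled from the Pricing API:

        # Compare recent EC2 spot prices against an assumed on-demand list price.
        # The on-demand figure is illustrative, not fetched from the Pricing API.
        import boto3

        ON_DEMAND_USD_PER_HOUR = 0.192  # assumed list price for m5.xlarge, us-east-1

        ec2 = boto3.client("ec2", region_name="us-east-1")
        resp = ec2.describe_spot_price_history(
            InstanceTypes=["m5.xlarge"],
            ProductDescriptions=["Linux/UNIX"],
            MaxResults=20,
        )

        for entry in resp["SpotPriceHistory"]:
            spot = float(entry["SpotPrice"])
            discount = (1 - spot / ON_DEMAND_USD_PER_HOUR) * 100
            print(f'{entry["AvailabilityZone"]}: ${spot:.4f}/hr '
                  f'({discount:.0f}% below the assumed list price)')

    Spot is only one lever; Savings Plans and plain right-sizing are separate ones.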

    Almost every argument against the cloud - cost inefficiency, fear of vendor lock-in, etc. - has easy solutions that make the whole thing extremely cost competitive with, if not a far better value than, trying to become your own cloud hosting provider. It's very hard to estimate the real-world costs, both known and unknown, of DIY hosting (specifically the expertise, or lack of it, and the impact of doing it wrong, which is very likely to happen if cloud hosting isn't your core business). But it's a 100% guarantee that you will never do it better than AWS.

    AI is the only place where I could reasonably imagine somebody having an on-prem advantage. At the moment, we still live in a world where that hardware isn't a commodity the way every other server is, so you might be faster to deploy, or cheaper to buy, with AI gear. Storage is similar but not nearly as tight a market. That will change eventually, once either the hype bubble bursts or there's more gear, at lower prices, for the cloud providers to sell.

    replies(5): >>42057818 #>>42058164 #>>42058629 #>>42059150 #>>42081849 #
    2. cdchn ◴[] No.42057818[source]
    >K8s-on-OpenStack-on-K8s bonkers

    Do what now???

    replies(1): >>42058073 #
    3. p_l ◴[] No.42058073[source]
    It's actually quite reasonable, if for bad reasons.

    TL;DR: setting up OpenStack was so horrible that I think SAP started deploying it through k8s.

    So if you want a local "private cloud" kind of setup, it makes sense to set up OpenStack on k8s.

    If you then want to provide multiple clusters cloud-style to the rest of the organization... well, it's just layered again.

    In fact, to my knowledge at least one significantly sized European vendor in the on-prem k8s space did exactly that kind of sandwich.

    replies(1): >>42058422 #
    4. vbezhenar ◴[] No.42058164[source]
    The main problem with AWS is their outrageous pricing on some aspects, like traffic, and some very unexpected pricing nuances that can burn thousands of dollars in the blink of an eye.
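
    To put a rough number on the traffic point, a back-of-the-envelope sketch assuming the commonly cited ~$0.09/GB for internet egress (real pricing is tiered, varies by region, and a small amount each month is free):

        # Rough AWS internet-egress cost estimate; $0.09/GB is an assumed
        # headline rate, real pricing is tiered and region-dependent.
        EGRESS_USD_PER_GB = 0.09

        for tb_per_month in (1, 10, 50):
            gb = tb_per_month * 1000
            print(f"{tb_per_month:>3} TB/month -> ~${gb * EGRESS_USD_PER_GB:,.0f}/month")

    Many dedicated or colo providers include tens of TB, or a flat-rate unmetered port, in the base price, which is where the "outrageous" feeling comes from.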

    While AWS engineers are more competent, maybe you don't need that much competence to run a simple server or two. And the expense structure will be more predictable.

    replies(1): >>42107722 #
    5. viraptor ◴[] No.42058422{3}[source]
    OpenStack on k8s is basically giving up on OpenStack-on-OpenStack (https://wiki.openstack.org/wiki/TripleO). You need some kind of orchestration for the control plane: either you build it from scratch or you use something that already exists.
    replies(1): >>42075195 #
    6. darkwater ◴[] No.42058629[source]
    > Almost every argument against the cloud - cost inefficiency, fear of vendor lock-in, etc - has easy solutions that make the whole thing extremely cost competitive, if not a way better value, than trying to become your own cloud hosting provider. It's very hard to estimate the real world costs, both known and unknown, of DIY hosting (specifically the expertise, or lack of it, and the impacts from doing it wrong, which is very likely to happen if cloud hosting isn't your core business)

    Please define your concept of self-hosting here. Does it mean you need your very own DC? Renting a few racks that you fill yourself? Renting CPU, storage, and networking, with remote hands and all the bells and whistles? Depending on the scenario, the burden of ownership changes dramatically (at a monetary cost, obviously). And depending on the size of the company and the variability of the workload, it can (or cannot) make sense to be on-prem. But saying "cloud is perfect for everyone and everything, if you tune it well enough" seems a bit too black-and-white to me.

    replies(1): >>42072506 #
    7. vidarh ◴[] No.42059150[source]
    It's very easy to estimate the real-world cost of on-prem or dedicated hosting - there is a wide range of providers (including me) that will quote you fixed monthly prices to manage it for you, because we know what it costs us to manage various things.

    AI is the only place I don't currently see much on-prem advantage, because buying SOTA equipment is hard, and it gets outdated too quickly.

    For pretty much everything else, if you can't save 70%+ TCO, maintenance/devops included, over an optimized cloud setup, you're usually doing something very wrong - typically because the system was designed by someone who defaults to "cloud assumptions" (slow cores, too little RAM, too little fast storage), resulting in systems that are far more distributed than they need to be.
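
    The 70% figure is easy to sanity-check with a toy model. A sketch where every number is a placeholder, not a quote from anyone:

        # Toy TCO comparison; all inputs are placeholders to show the shape of
        # the calculation, not real quotes.
        def monthly_tco(hardware_capex, amortize_months, colo, bandwidth,
                        ops_hours, ops_rate):
            """Amortized hardware plus recurring colo/bandwidth plus ops time."""
            return (hardware_capex / amortize_months + colo + bandwidth
                    + ops_hours * ops_rate)

        dedicated = monthly_tco(
            hardware_capex=60_000,   # servers bought up front (placeholder)
            amortize_months=48,      # 4-year amortization (placeholder)
            colo=1_500, bandwidth=500,
            ops_hours=40, ops_rate=120,
        )
        cloud = 25_000               # optimized monthly cloud bill (placeholder)

        print(f"dedicated ~${dedicated:,.0f}/month vs cloud ~${cloud:,.0f}/month")
        print(f"savings ~{(1 - dedicated / cloud) * 100:.0f}%")

    The placeholders are only there to show the arithmetic; real inputs vary widely.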

    8. jiggawatts ◴[] No.42072506[source]
    The co-location costs are relatively minor. The bulk of the cost of self-hosting comes from quantization: needing to buy two of everything (for HA) and then not being able to use the full capacity - things like tape libraries, Internet routers, hardware firewalls, etc.
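
    As a toy illustration of that quantization effect (every number here is made up):

        # Capacity quantization: devices come in whole units, doubled for HA,
        # even when you only need a fraction of one. Numbers are made up.
        unit_price = 20_000      # one appliance, e.g. a hardware firewall
        needed_fraction = 0.3    # you only need ~30% of one appliance
        units_bought = 2         # but you buy two, for HA

        spend = units_bought * unit_price
        effective_rate = spend / needed_fraction
        print(f"spend ${spend:,}; effective ${effective_rate:,.0f} per fully "
              f"used appliance-worth of capacity")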

    The multi-vendor aspect can be expensive as well. There's a lot of different end-of-life issues to track, firmware to update, specialist skills to hire, and so on.

    Just backup alone can be shockingly expensive, especially if it has a decently short RTO/RPO and is geo-redundant. In Azure and AWS this is a checkbox.
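
    To illustrate the "checkbox" point on the AWS side, geo-redundant copies of an S3 bucket are roughly this much API surface. A sketch with hypothetical bucket names and role ARN; both buckets must already exist with versioning enabled, and the role needs replication permissions:

        # Enable cross-region replication on an S3 bucket (geo-redundancy).
        # Bucket names and the IAM role ARN are hypothetical.
        import boto3

        s3 = boto3.client("s3")
        s3.put_bucket_replication(
            Bucket="backups-primary-eu-west-1",
            ReplicationConfiguration={
                "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
                "Rules": [{
                    "ID": "replicate-everything",
                    "Priority": 1,
                    "Status": "Enabled",
                    "Filter": {},
                    "DeleteMarkerReplication": {"Status": "Disabled"},
                    "Destination": {"Bucket": "arn:aws:s3:::backups-replica-us-east-1"},
                }],
            },
        )

    Self-hosting the equivalent means a second site, replication software, and someone to test the restores.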

    9. p_l ◴[] No.42075195{4}[source]
    I am totally unsurprised people gave up on Triple-O.

    I haven't personally worked on OpenStack setups, but my coworkers at a few places did (including supporting, if not outright being contracted to build, some of the commercial offerings in that space), and upgrades especially were always such a huge project that it made more sense to tear the entire setup down and bring it up again.

    That was made easier with OpenStack packaged as Docker containers, but Ansible was arguably still more painful for setting up the cluster than just using k8s.

    10. keernan ◴[] No.42081849[source]
    I know nothing about GEICO's IT, but I find your comments surprising. GEICO is one of the most profitable insurance companies in the world, which, of course, is the end goal of every company.
    11. lucidguppy ◴[] No.42107722[source]
    Right here - network is expensive. Another issue is cost estimation.

    Servers have prices, rack space has prices, engineers have salaries.

    You now literally have people hired to work out how to calculate infra costs on AWS (FinOps). Spot prices fluctuate. So what are the real savings in FTEs?
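
    That FinOps work typically starts with pulls like this one, a sketch using the Cost Explorer API (the date range is arbitrary, and note that Cost Explorer charges a small fee per API request):

        # Last month's AWS spend broken down by service, via Cost Explorer.
        import boto3

        ce = boto3.client("ce")
        resp = ce.get_cost_and_usage(
            TimePeriod={"Start": "2024-10-01", "End": "2024-11-01"},
            Granularity="MONTHLY",
            Metrics=["UnblendedCost"],
            GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
        )

        for group in resp["ResultsByTime"][0]["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            print(f"{service}: ${amount:,.2f}")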