242 points panrobo
WaitWaitWha ◴[] No.42055490[source]
This is partially the result of cloud providers and partially of business leadership. The providers, for whatever reason, insufficiently educated their clients on migration requirements. Lift & shift from on-premises to cloud only works as an emergency measure. The shifted resources must then be converted to the cloud stack, or the cost will be a multiple of on-prem costs. Business leadership was (is?) ignoring IT teams screaming about the problems with lift & shift.

Now businesses are shifting back to on-prem because they are still uneducated on how to make the cloud useful. They will simply shift all non-core activities to XaaS vendors, reducing their own cloud-managed solutions.

Source: dealing with multiple non-software tech firms that are doing just that, shifting their own workloads back to on-prem and non-core resources to XaaS.

replies(1): >>42057868 #
Agingcoder ◴[] No.42057868[source]
I keep reading "lift and shift is bad" on HN - what is the opposite of lift and shift? ("Cloud native" does not mean much to me.) Is it that instead of Oracle running on a rented VM you use whatever DB your cloud provider is selling you, you move your monolith to a service-oriented architecture running in k8s, etc.?
replies(3): >>42058034 #>>42061244 #>>42066588 #
JackSlateur ◴[] No.42061244[source]
lift & shift = move your instances as-is, which is expensive and not at all competitive

cloud native = use more serverless products (pay as you go, with no base price)

For instance, one could implement internal DNS using instances that run BIND, connect everything through some VPC, and put a load balancer in front of the instances. One could also rework one's DNS architecture and use Route 53, with private hosted zones associated with all the appropriate VPCs.
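As a rough sketch of what the Route 53 side of that could look like in Terraform (zone name, CIDR, and record values here are purely illustrative, not from the thread):

```hcl
# Hypothetical VPC; in practice you would reference your existing VPCs.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

# Private hosted zone, resolvable only from the associated VPC(s),
# replacing self-managed BIND instances behind a load balancer.
resource "aws_route53_zone" "internal" {
  name = "internal.example.com"

  vpc {
    vpc_id = aws_vpc.main.id
  }
}

# A record served by Route 53 instead of BIND.
resource "aws_route53_record" "db" {
  zone_id = aws_route53_zone.internal.zone_id
  name    = "db.internal.example.com"
  type    = "A"
  ttl     = 300
  records = ["10.0.1.20"]
}
```

With this setup there are no DNS servers to patch, monitor, or load-balance, and you pay per hosted zone and per query rather than for always-on instances.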

Another real example: one could have hundreds of instances running GitLab runners all day, waiting for some job to do. One could instead put those GitLab runners into an autoscaled Kubernetes cluster, where nodes are added when there are lots of jobs and deleted afterwards. One could even run the GitLab runners on Fargate, where a pod is created for each job, runs that job, and then exits. No job = no pod = no money spent.
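A minimal sketch of the Kubernetes-executor variant, assuming a GitLab runner registered against a self-hosted GitLab (the URL, token, namespace, and resource requests below are made-up placeholders):

```toml
# /etc/gitlab-runner/config.toml (illustrative values)
concurrent = 50

[[runners]]
  name     = "autoscaled-k8s-runner"
  url      = "https://gitlab.example.com/"
  token    = "REDACTED"
  executor = "kubernetes"

  [runners.kubernetes]
    namespace      = "ci"
    cpu_request    = "1"
    memory_request = "2Gi"
    # Each CI job runs in its own short-lived pod. When the job queue
    # is empty there are no pods, so the cluster autoscaler can drain
    # and remove the now-idle nodes instead of billing for them 24/7.
```

The same per-job pod model is what the Fargate approach gives you without managing nodes at all, at the cost of slower pod start-up.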

Of course, some work is required to extract the best value from cloud providers. If you use only instances, well, surprise: it costs a lot, and you still have to manage lots of stuff.

replies(2): >>42061999 #>>42062041 #
1. maccard ◴[] No.42061999[source]
I ran our CI on Fargate at my last job. It was a mess. The time from the API request for an instance to it being ready to handle a job was minutes. It was about 50x slower than running on a mid-range laptop that I occasionally used for development, and to work around that we kept hot EBS volumes of caches around (which cost $$$$ and a decent amount of developer time).

Just before I left I was investigating using Hetzner instead - our compute bill would have been about 15% cheaper, we would have had no cache storage costs (which were about 5x our compute costs), and the builds would have finished before Fargate had even processed the request.

Our numbers were small fry, but we spent more on that CI system than we did on every other part of our infra combined.