
461 points thunderbong | 2 comments
dkersten No.42134266
I know it’s minor in comparison, but I will never use AWS again after running up a $100 bill trying to get an app deployed to ECS. There was an error (on my side) preventing the service from starting up, but CloudWatch only had logs about 20% of the time, so I had to redeploy five times just to get some logs, make changes, redeploy five more times, and so on. They charged me for every single failed deploy.
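For anyone hitting the same wall: the failure reason is sometimes recoverable from the CLI even when the console shows nothing. A rough sketch, assuming AWS CLI v2 and made-up cluster, task, and log-group names:

    # ECS records a stoppedReason on the task even when no logs made it out
    aws ecs describe-tasks --cluster my-cluster --tasks <task-arn> \
        --query 'tasks[].stoppedReason'

    # tail the task's log group directly instead of refreshing the console
    aws logs tail /ecs/my-app --follow --since 15m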

After about two days of struggling and a $100 bill, I said fuck it, deleted my account and deployed to DigitalOcean’s App Platform instead, where it also failed to deploy (the error was with my app), but I had logs, every time. I fixed it and had it running in under ten minutes, total bill was a few cents.

I swore that day that I would never again use AWS for anything when given a choice, and would never recommend it.

replies(2): >>42134378 #>>42134527 #
te_chris No.42134527
I gave up on AWS when I realised you can’t deploy a container straight to EC2 like you can on GCP. For bigger things, yeah, the support’s better, but for anything small to mid, GCP all day. Primitives that actually make sense for how we use containers these days. And BigQuery.
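For reference, this is roughly what "straight to a VM" looks like on GCP; the project, image, and zone names here are made up:

    # boot a Container-Optimized OS VM that runs the image on startup
    gcloud compute instances create-with-container my-vm \
        --container-image=gcr.io/my-project/my-app:latest \
        --zone=europe-west1-b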
replies(3): >>42134752 #>>42134771 #>>42135704 #
1. antonhag No.42134771
For AWS the solution for container deployments (without dealing with VMs) is Fargate, which imo works reasonably well.
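Roughly, once a task definition exists, a Fargate service boils down to a single call; the cluster, subnet, and security-group IDs below are placeholders:

    aws ecs create-service \
        --cluster my-cluster \
        --service-name my-app \
        --task-definition my-app:1 \
        --launch-type FARGATE \
        --desired-count 1 \
        --network-configuration 'awsvpcConfiguration={subnets=[subnet-abc123],securityGroups=[sg-abc123],assignPublicIp=ENABLED}'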
replies(1): >>42138068 #
2. dkersten No.42138068
I believe I was actually trying to use that. It’s been a few years so my memory is hazy, but isn’t Fargate just a special case of ECS where they handle the host machines for you?

In any case, the problem wasn’t so much ECS or Fargate, beyond the complexity of their UI and config, but rather that CloudWatch was flaky. The problem that blocked the deployment was on my end, some issue preventing the health check from succeeding, so the container never came up healthy when deployed (it worked locally). The issue is that AWS didn’t help me figure out what the problem was: CloudWatch didn’t show any logs about 80% of the time. I literally clicked deploy, waited for the deploy to fail, refreshed CloudWatch, saw no logs, clicked deploy again, and repeated until logs appeared. It took about five attempts to see logs for every single change I made, and since it wasn’t clear the error was on my end, it was quite a frustrating process.
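For what it’s worth, both failure points live in the task definition: the container health check and the log driver. A sketch of the relevant fragment, with made-up names and a hypothetical /health endpoint (awslogs-create-group also needs the logs:CreateLogGroup permission):

    {
      "containerDefinitions": [{
        "name": "app",
        "image": "my-app:latest",
        "healthCheck": {
          "command": ["CMD-SHELL", "curl -f http://localhost:8080/health || exit 1"],
          "interval": 30,
          "timeout": 5,
          "retries": 3,
          "startPeriod": 60
        },
        "logConfiguration": {
          "logDriver": "awslogs",
          "options": {
            "awslogs-group": "/ecs/my-app",
            "awslogs-region": "eu-west-1",
            "awslogs-stream-prefix": "app",
            "awslogs-create-group": "true"
          }
        }
      }]
    }

If the awslogs options are missing, or point at a log group the task can’t write to, the container runs but nothing ever reaches CloudWatch, which from the console can look a lot like flaky logging.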

On DigitalOcean, the logs showed up correctly every single time, so I was able to determine the problem was on my end after a few attempts, add the extra logging needed to track it down, fix it, and get a working deployment in under ten minutes.
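App Platform also exposes the same logs over the CLI if the web console ever lags; the app ID below is a placeholder:

    doctl apps logs <app-id> --type deploy      # build/rollout logs
    doctl apps logs <app-id> --type run -f      # live container output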