> It’s so difficult to be paranoid about every single technology you use.
I would be paranoid about anything related to AWS; I don't want to risk bankruptcy (or a near-bankruptcy experience) on small mistakes or on the goodwill of AWS support.
I do not know how others feel, but with this kind of frictionless setup, plus the unintuitive UX/UI of those services, people aren't concerned about putting a credit card on file, and billing that bundles services together (e.g. AWS Batch + Lambda + EC2) is part of the business model.
I do not know quite how to articulate it, but it's more or less like those modern amusement parks where you pay to enter the facility and then pay again for every attraction, even the toilet.
I was able to get the charges reversed, but definitely learned not to trust their guides.
- Choose the lowest-cost resources (it's a tutorial!)
- Clean up resources when the `delete` script is run
I don't think it's fair to expect developers to do paranoid sweeps of their entire AWS account looking for rogue resources after running something like this.
If a startup had this behavior would you shrug and say "this happens, you just have to be paranoid"? Why is AWS held to a different standard by some?
Instead they have some pencil pushers calculating that they can milk thousands here and there from "user mistakes" that can't be easily disputed, if at all. I'm sure I'm not the only person who's been deterred from their environment due to the rational fear of waking up to massive charges.
+1
Last year I moved to self-hosting and I felt the same. I paid less than USD 2,000 for a small laptop that I use as a server, plus a home NAS, and at my current utilization it paid for itself within 3 months, with ownership and flexibility on top.
Perhaps that does not excuse the behaviour but AWS reversed a $600 charge I incurred using AWS Textract where the charges were completely legitimate and I was working for a billion dollar enterprise.
User mistakes of this type must be a drop in the bucket for AWS, and in my experience they seem keen to avoid issues like these, which can cost more in damaged reputation.
AWS is not cheap, and in some cases it's incredibly expensive (egress fees), but tricking their customers into accidentally spending a couple of hundred extra is not part of their playbook.
When the product was starting out (2017/2018), the whole setup was quite straightforward: notebook instances, inference endpoints, REST APIs for serving. Some EFS on top, and it was clear that the service centered around S3. And of course, a fixed price without any surprises.
The whole experience had a kind of DigitalOcean vibe, and a data scientist with rudimentary knowledge of, and curiosity about, infrastructure could set up something affordable, predictable, and simple.
Today we have Wrangler, Feature Store, and RStudio; the notebook console has an awful UX; and several services under the hood are moving data around (and billing for it).
On the one hand I get that if your business depends on such a service you don't want it to suddenly go down. But on the other hand there is almost never a hard mechanism to limit your risk. Or if there is, it is opt-in. The conspiracist in me says this is working exactly as planned for AWS as they have no financial incentive to limit customer risk.
It's very clever - either people pay the overages, or they contact you and you get to look good by giving them company scrip to spend on others of your services.
Building a business on blank cheques and accidental spends is shady. It's also a large barrier to adoption. The more times devs see reports like, "I tried [random 20-minute tutorial] and woke up to a bill for my life's savings and luckily support waived the fee this one time but next time they're coming for my house", the less they'll want to explore your offerings.
Using AWS for smaller personal projects will always be more expensive and probably less fun.
On the other hand I recently had to run an ML model over hundreds of thousands of media files. I used AWS to launch 100s of GPUs using spot instances and complete the job in a few hours, then just turned it all off and moved on. It cost a few hundred dollars total.
In my mind it's at this kind of scale AWS really makes sense.
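For the curious, here is the burst pattern described above, sketched with boto3. The AMI, instance type, and fleet size are placeholders, and a real job would also handle Spot capacity errors and retries:

```python
# Sketch of the burst pattern: request Spot capacity, run the job, then
# tear it all down. AMI ID, instance type, and count are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical GPU AMI
    InstanceType="g4dn.xlarge",
    MinCount=100,
    MaxCount=100,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"SpotInstanceType": "one-time"},
    },
)
ids = [i["InstanceId"] for i in resp["Instances"]]

# ... dispatch work to the fleet ...

# The crucial step for the parent's point: turn it all off when done.
ec2.terminate_instances(InstanceIds=ids)
```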
That's the thing that annoys me the most about AWS. There's no easy way to find out all the resources I'm currently paying for (or if there's a way, I couldn't find it).
Without an easy to understand overview, it feels like I don't have full control of my own account.
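The closest first approximation I know of covers spend rather than resources: the Cost Explorer API can break recent days down by service. A minimal sketch with boto3 (note that AWS charges a small fee per Cost Explorer API request):

```python
# Break down recent spend by service via the Cost Explorer API.
import boto3
from datetime import date, timedelta

ce = boto3.client("ce", region_name="us-east-1")
end = date.today()
start = end - timedelta(days=7)

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="DAILY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)
for day in resp["ResultsByTime"]:
    for group in day["Groups"]:
        cost = float(group["Metrics"]["UnblendedCost"]["Amount"])
        if cost > 0:
            print(day["TimePeriod"]["Start"], group["Keys"][0], f"${cost:.2f}")
```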
I joke, but this persona is very real, and it leads you to this nickel and dime billing model.
needed to send "raw" http requests instead of using their bloated sdk for reasons, and requests failed with "content-type: application/json" header, but succeeded with "content-type: application/x-amz-json-1.0". get out of here with that nonsense.
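For anyone who hits this: a minimal sketch of such a raw request, using botocore only for SigV4 signing, against DynamoDB's ListTables (one of the services that insists on the x-amz-json content type); the region and service here are my choice of example:

```python
# Raw AWS request: many services reject "application/json" and require
# "application/x-amz-json-1.0" plus an X-Amz-Target header.
import json
import urllib.request

import boto3
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

region = "us-east-1"
creds = boto3.Session().get_credentials()

req = AWSRequest(
    method="POST",
    url=f"https://dynamodb.{region}.amazonaws.com/",
    data=json.dumps({}),
    headers={
        "Content-Type": "application/x-amz-json-1.0",  # "application/json" fails
        "X-Amz-Target": "DynamoDB_20120810.ListTables",
    },
)
SigV4Auth(creds, "dynamodb", region).add_auth(req)

out = urllib.request.urlopen(urllib.request.Request(
    req.url,
    data=req.body.encode(),
    headers={k: v for k, v in req.headers.items()},
    method="POST",
))
print(out.read().decode())
```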
...when asked to. But what percentage of mistakes like this end up just being "eaten" by the end-user, not realizing that they can ask for a refund? What percentage don't even get noticed?
How about postgres with postgis? https://postgis.net/docs/using_postgis_query.html
There's a reason there are very well paid positions in companies to guide colleagues on how to use AWS cost-effectively and with lower risk.
I think it's all hiding the fact that people don't want to take the time to design (and maintain) scalable infrastructure, and instead rely on fake abstractions that pretend to be infinite, always-available, magic, or w/e. I'm sure there is some open source software that helps here.
They lead young devs into their framework and make them believe that the only way to serve their sites is through them, and to pay their extortionate prices…
People are not educated to self host. Everything is run in a “droplet” and just a click away.
I suspect a lot of the huge AWS customers just eat this because it's so hard to mitigate.
If a business has hundreds of AWS accounts it becomes very hard to track, and if each account can only shave a few hundred dollars a month off its individual bill then there's very little impetus for the individual teams to actually do the work, despite that possibly adding up to substantial savings for the overall business.
It clearly does, it's just different skills/time/energy requirements compared to colocation.
After about two days of struggling and a $100 bill, I said fuck it, deleted my account, and deployed to DigitalOcean's app platform instead, where it also failed to deploy (the error was with my app), but I had logs, every time. I fixed it and had it running in under ten minutes; the total bill was a few cents.
I swore that day that I would never again use AWS for anything when given a choice, and would never recommend it.
E.g., me. I never dared to get my feet wet with AWS, despite my interest. Better safe with a cheap, flat-rate VPS than sorry.
Honestly? All the better. There are obviously use cases where AWS is the right tool for the job but it's extremely rare. It's coasting on hype and somehow attaining "no one was ever fired for buying IBM" status.
Neither is what I want. I wish there was a provider with clear and documented limits to allow proper capacity planning while at the same time shifting all the availability risk to the customer but taking on the financial risk. I'd be willing to pay a higher fixed price for that, as long as it is not excessive.
because internally most apps use the Coral framework, which is kind of old; it uses this JSON format as it has a well-defined shape for inputs, outputs, and errors.
I just don't get it.
The story of "it's easier" is fake.
The story of "you won't need highly paid technical experts to maintain things" is fake.
The story of "it's cheaper" is fake.
The story of "you can't run your own computers it's too complex for ordinary companies to work out" is fake.
It's all fake, and people are still diving headlong into the clouds, falling through, and hitting the earth hard.
There's enough discussion in the community about the risks and hazards of major clouds - you only have yourself to blame when that huge bill hits because you did something that would not have cost an extra cent on self-hosted systems or virtual servers.
Go learn Linux. Go buy virtual servers from IONOS where they charge zero for traffic.
Ended up saving at least $4,000 a month. And this was mostly sandbox environments that people had forgotten about.
Charging per deployment sounds crazy though.
I keep raising this, but it's never prioritized, as it means taking my time away from developing our products.
Essentially my manager then loses dev time to reduce a bill that due to institutional accounting practices they never actually see!
It's always been. They are always pushing boundaries and checking what they can get away with. The response "we've processed a billing adjustment for the unexpected charges as a one time courtesy" is already telling, given that it looks like a bug and it hasn't been fixed since.
That said: fuck, that's expensive and poorly explained! Not doing anything cloud without hard limits!
Asking Amazon to do something makes little sense. Create laws that force Amazon, and all the rest, to respect their users money. By default, corporations will do what makes them money, not what is ethical or good for the economy.
One of the problems highlighted was that the documented teardown procedure did not properly delete the OpenSearch domain. Would AWS Nuke (https://github.com/ekristen/aws-nuke) correctly destroy everything that the tutorial sets up?
It's better business to have people beg for mercy and then magnanimously waive the fee than to have any discussion about actual hard limits (which would be used by big corps too, not just students).
Yes, it can be done technically: Azure already has a not-loudly-advertised account type that is hard-capped. And no, billing alerts aren't a solution. Hell, you could even do opt-in "yes I understand my data will be deleted" hard caps.
This is a fixable problem - they just don't want to because a fix would be bad for earnings.
A good rule when working with any sort of cloud service: Everything that can be charged for, will be.
There are plenty of stories of people getting charged massively, and one may wonder whether this has any negative effects on them getting new customers. Unfortunately it's usually not the ones working with it who are the ones making the decision to use AWS or other cloud services, and the ones who are have their minds fully clouded by the propaganda --- I mean marketing.
I've deployed multiple Lambdas over many years and I have yet to pay anything for them, given how _generous_ their free tier is.
Nowadays I must be at around ~100 Lambda executions per day and my billing for Lambda is still $0/month.
To achieve something similar with self-hosting it would require me to have a server running 24/7 just to allow my code running when needed.
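Rough free-tier math for that workload, sketched in Python (free-tier numbers as of this writing; function size and duration are my assumptions):

```python
# Back-of-the-envelope check of ~100 executions/day against the Lambda
# free tier (1M requests and 400,000 GB-seconds per month, as of this
# writing -- verify against the current pricing page).
requests_per_day = 100
memory_gb = 0.128        # a 128 MB function (assumption)
avg_duration_s = 0.2     # 200 ms per invocation (assumption)

monthly_requests = requests_per_day * 30
monthly_gb_seconds = monthly_requests * memory_gb * avg_duration_s

print(f"{monthly_requests:,} requests vs 1,000,000 free")   # 3,000
print(f"{monthly_gb_seconds:,.0f} GB-s vs 400,000 free")     # ~77
# Both are orders of magnitude under the free tier, hence the $0 bill.
```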
So, almost as with everything else in tech (and life in general), the idea is to not see AWS or self-hosting as the best tools for everything. Sometimes AWS is better, sometimes self-hosting is.
Having the freedom to pick the best one in each situation is quite nice!
In my opinion people end up in these billing situations because they don't actually "dig in" to AWS. They make their pricing easily accessible, and while it's not always easy to understand, it is relatively easy to test as most costs scale nearly linearly.
> the rational fear of waking up to massive charges.
Stay away from the "wrapper" services: AWS Amplify, CloudFormation, or any of their Stack-type offerings. Use the core services directly yourself. All services have an API. Getting an API key tied to an IAM user is as simple as clicking a button.
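(Or one API call, if you prefer; a sketch, with a placeholder user name:)

```python
# Create an access key for an existing IAM user (user name is a placeholder).
import boto3

iam = boto3.client("iam")
key = iam.create_access_key(UserName="my-service-user")["AccessKey"]
print(key["AccessKeyId"])
# key["SecretAccessKey"] is shown only once -- store it securely.
```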
Everything else is manageable with reasonable caching and ensuring that your cost model is matched to your revenue model so the services that auto scale cost a nearly fixed percentage of your revenue regardless of current demand. We take seasonal loads without even noticing most years.
Bandwidth is the only real nightmare on AWS, but they offer automatic long term discounts through the console, and slightly better contract discounts through a sales rep. Avoid EC2 for this reason and because internal bandwidth is more expensive from EC2 and favor direct use of Lambda + S3 + CloudFront.
After about 3 months it became pretty easy to predict what combination of services would be the most cost effective to use in the implementation of new user facing functionality.
https://docs.github.com/en/code-security/secret-scanning/int...
https://docs.gitlab.com/ee/user/application_security/secret_...
I think technically I was just being charged for the container host machine, but while each individual deploy only lasted a minute or so, I was being charged the minimum each time. And each new deploy started a new host machine. Something like that anyway, it was a few years ago, so I don't remember the specifics.
So I can understand why, but it doesn't change that if their logging hadn't been so flaky, I should have been able to fix the issue in minutes with minimal cost, like I did on Digital Ocean. Besides, the $100 they charged me doesn't include the much more expensive two days I wasted on it.
Didn't know there was a verb for it! I "homelab" too and so far am very happy. With a (free) CDN in front of it it can handle spikes in traffic (that are rare anyways), and everything is simple and mostly free (since the machines are already there).
You have reached your Configured Maximum Monthly Spend Limit.
As per your settings we have removed all objects from S3, all RDS databases, all Route 53 domains, all EBS volumes, all Elastic IPs, all EC2 instances, and all snapshots.
Please update your spend limit before you recreate the above.
Yours, AWS
It wouldn’t solve the problem for usage-based billing, but it would have solved the problem here.
There's always a dunning period and multiple alerts
Even having the option of a hard spend limit would be hazardous, because accounting teams might push the use of such tools, and thereby risk data loss incidents when problems happen.
Hard spend limits might make sense for indie / SME focused cloud vendors though.
Governments have a systematic pressure, at least in sane countries, to be at least partially responsible towards customers - their citizens and voters.
Corporations do not, especially in businesses with high barriers to entry and where they can vendor-lock you.
Do I absolutely trust each government in every democracy to make the right decisions for any problem? Of course not!
But I still trust them way more than corporations or the "invisible hand of the market"
What a ridiculous point. AWS achieves non-trivial things at scale all the time, and brags about it too.
So many smart engineers with high salaries and they can't figure out a solution like "shut down instances so costs don't continue to grow, but keep the data so nothing critical is lost, at least for a limited time"?
Disingenuous is what you are writing - oh no, it's a hard problem, they can't be expected to even try to solve it.
"Hazardous" feels like the wrong word here - if your customer decides to enact a spend limit it should not be up to you to decide whether that's good for them or not.
Great, so they don’t have to use the feature?
That excuse was a great excuse when AWS was an MVP for someone. 20+ years later… there is no excuse.
Businesses with lawyers and stuff can afford to negotiate with AWS etc. when things go wrong. Individuals who want to upskill on AWS to improve their job prospects have to roll the dice on AWS maybe bankrupting them. AWS actively encourages developers to put themselves in this position.
I don't know if AWS should be regulated into providing spending controls. But if they don't choose to provide spending controls of their own accord, I'll continue to call them out for being grossly irresponsible, because they are.
There isn't a boxed product like BigQuery, but the pieces are all there - DynamoDB, Athena, QuickSight...
BTW I'm a supporter of spending caps, not saying this should be the only way.
Also you have variable costs (like S3 traffic) that could put you over your limit halfway through the month. Then how does AWS stop you breaching your limit?
On a more practical level, I don't think AWS keeps track of bills on a minute-by-minute basis.
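That lag is visible in the one mechanism AWS does offer, which is an alert rather than a cap: a CloudWatch alarm on the EstimatedCharges metric, which lives only in us-east-1 and updates just a few times a day. A sketch (the SNS topic ARN and threshold are placeholders):

```python
# An alert, not a cap: CloudWatch alarm on estimated charges. The metric
# updates only every few hours, which illustrates the lack of
# minute-by-minute billing data.
import boto3

cw = boto3.client("cloudwatch", region_name="us-east-1")
cw.put_metric_alarm(
    AlarmName="monthly-bill-over-100-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,                    # 6 hours -- billing data is slow
    EvaluationPeriods=1,
    Threshold=100.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
)
```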
The second time, I configured a virtual machine with some fancy disk. It was supposed to work as a CI build server, so I chose the fastest disk. Apparently that fastest disk was billed by IOPS or something like that, so it ate a few thousand dollars in a month. I couldn't even imagine a disk could cost that much.
Basically these pricing nuances contradicted everything I had ever encountered with the multiple hosting providers I'd worked with, and they felt like malicious traps designed specifically for people to fall into.
I'd recommend either learning the basic building blocks (these skills also transfer well to other clouds and to self-hosting) or using a higher-level service provider than AWS (Vercel etc.) - they do it better than AWS.
Sort of related, another wishlist feature I have is a way to start an EC2 instance with a deadline up front, and have the machine automatically suspended or terminated if it exceeds the deadline. I have some programs that start an EC2 instance, do some work, and shut it down (e.g. AMI building), and I would sleep a tiny bit better at night if AWS could deadline the instance as a backstop in case my script unexpectedly died before it could.
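In the absence of a native deadline, one best-effort workaround (a sketch; the AMI and timing are placeholders): have the instance schedule its own shutdown via user data, and make instance-initiated shutdown terminate rather than stop, so the box goes away even if the controlling script dies. It's still not a true AWS-side backstop, since a hung OS defeats it.

```python
# Poor man's EC2 deadline: the instance shuts itself down after N minutes,
# and instance-initiated shutdown is set to terminate (not stop).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

user_data = """#!/bin/bash
# Self-destruct in 120 minutes, whatever happens to the controller script.
shutdown -h +120
"""

resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    InstanceInitiatedShutdownBehavior="terminate",
    UserData=user_data,
)
print(resp["Instances"][0]["InstanceId"])
```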
> Also you have variable costs (like S3 traffic)
Yeah, that's what I mean by it wouldn't solve the problem of usage-based billing. There they could just cut you off, and I think that's the bargain that people who want hard caps are asking for (there is always a spend cap at which I'd assume something had gone horribly wrong and would rather not keep spending), but I agree that the lack of real-time billing data is probably what stops them there.
The details don't matter, really. For those who decide to set up a hard cap and agree to its terms, there could be a grace period or not. In the end, all instances would be shut down and all data lost, just like in traditional services when you haven't paid your bill so you are no longer entitled to them, pure and simple.
They haven't implemented it and never will, because Amazon is a company that is obsessed with optimization. There is negative motivation to implement anything related to that.
That feels like a bit of a red herring: if that were their ethos, then you'd _have_ to choose a burstable/autoscaling config on every service. If I can configure a service to fall over at a hard limit rather than scale, that points to them understanding their different use cases (prod vs dev) and customer types (start-up vs enterprise).
Additionally, any time I've worked for an enterprise customer, they've had a master service agreement set up with AWS professional services rather than entering credit card info, so they could use that as a simple way to offer choices.
In the olden days, if we spotted a customer ringing up a colossal bill, we would tell them. These huge Amazon bills are fast, but they still build over multiple days. They can trivially use rolling-projection windows to know when an account is having a massive spike.
They could use this foresight to call the customer, ensure they're informed, give them the choice about how to continue. This isn't atomic rocket surgery.
"Oh but profit" isn't an argument. They are thousands of dollars up before a problem occurs. The only money they lose is money gained through customer accident. Much of it forgiven to customers who cannot afford it. It's not happily spent. They can do better business.
I find it funny people bring this pseudo-argument up whenever this issue is discussed. Customers: "We want A, it's crucial for us". People on the Internet: "Do you have any idea how difficult A is to implement? How would it work?" And the discussion diverges into technical details, obscuring the main point: AWS is bent on never implementing this feature, even though in the past (more than a decade ago) they promised they would.
Many of these new AWS-provided stacks, however, seem to create stuff all over your account.
The moral of the story? Don't ever use AWS tools like the one the OP describes, ones which create a bunch of different resources for you automatically.
Yes, why not? I don't see the problem here? If you didn't want that, you could set a higher spending limit.
If they want a little more user-friendly approach they could give you X hours grace.
> You've been above your spending limit for 4 hrs (200%), in 4 hrs your services will go into suspended state. Increase your spending limit to resume.
(Office Space)
I once ran up a bill of $60 accidentally, didn't get a refund. I've had three friends with bills, one got a refund.
It might depend on who you know, if you look like someone who is likely to spend more money in future, how stupid your mistake was, I don't know.
The problem is the electorate, and the lack of actual regulation.
Why can we not have a "billable items" dashboard which simply shows, globally, a list of all items in your account which are billable, and how much they will cost if left running for 1 more hour/month? (A partial workaround is sketched below.)
Or that my card will expire and AWS will send that $0.03 bill to collections and slap court fees on and send a bailiff.
Their whole setup seems intended to cause expensive mistakes.
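On the "billable items" question above, the closest thing I know of is the Resource Groups Tagging API, and it rather proves the point: it only returns resources that are tagged (or were tagged at some point), so never-tagged leftovers slip through. A sketch sweeping a couple of regions (the region list is an assumption):

```python
# Enumerate tagged (or previously tagged) resources per region. Never-tagged
# resources are invisible here, which is exactly the complaint above.
import boto3

for region in ["us-east-1", "eu-west-1"]:   # regions to sweep (assumption)
    client = boto3.client("resourcegroupstaggingapi", region_name=region)
    for page in client.get_paginator("get_resources").paginate():
        for res in page["ResourceTagMappingList"]:
            print(region, res["ResourceARN"])
```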
AWS's growth doesn't come from courting small random devs working on side projects.
Instead of interesting technical challenges I now get to worry about the minutia of Amazon's billing system. Neat! Where do I sign?
Tell me you're Shadow IT without telling me you're Shadow IT.
I know legitimizing shadow IT is still the value proposition of AWS to a lot of organizations. But it sucks if that's the reason the rest of us can't get an optional feature.
They used to be better about refunding accidental or misunderstood charges. I had a couple winners a long time ago like a $600 bill for a giant EC2 instance I meant to stop. They refunded it quickly, no questions. The last time I needed to refund some accidental charges though, there was a lot more stalling and forms.
You know what's insane? RDS (database) instances can be stopped, but automatically restart themselves after 7 days. Didn't read the fine print and thought you could spin up a giant DB for as-needed usage? There's a thousand bucks a month.
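A common workaround (a sketch, not an official feature; the tag name is my own convention): a scheduled Lambda that re-stops any RDS instance you've marked as keep-stopped, since AWS restarts stopped instances after 7 days.

```python
# Scheduled Lambda: re-stop RDS instances tagged keep-stopped=true, since
# stopped instances auto-restart after 7 days. Tag name is an assumption.
import boto3

rds = boto3.client("rds")

def handler(event, context):
    for db in rds.describe_db_instances()["DBInstances"]:
        tags = rds.list_tags_for_resource(ResourceName=db["DBInstanceArn"])["TagList"]
        keep_stopped = any(
            t["Key"] == "keep-stopped" and t["Value"] == "true" for t in tags
        )
        if keep_stopped and db["DBInstanceStatus"] == "available":
            rds.stop_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])
```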
I'd fire a box up at home instead but at ~£35/mo I can never quite find the motivation compared to spending the time hacking on one of my actual projects instead.
(I do suspect if I ever -did- find the motivation I'd wonder why I hadn't done so sooner; so it goes)
Having one AWS account where you actually run stuff, and one that follows the rule of "if it can't be paved and recreated from github, don't put it there" is exactly how a lot of people do it anyway.
This is an interesting point and something I can totally imagine happening! I guess if you have fixed spending limits in a large enough organisation, you lose some of the benefit of cloud infrastructure. Convincing a (metaphorically) remote Finance department to increase your fixed spending limit is probably a tougher task than ordering a load of new hardware!
Yada yada yada, that's the same old excuse the cloud providers trot out.
Now, forgive me for my clearly Nobel Prize winning levels of intellect when I point out the following...
Number one: You would not have to turn on the hard spend limit if such functionality were to be provided.
Number two: You could enable customers to set up hard limits IN CONJUNCTION WITH alerts and soft limits, i.e. hitting the hard limit would be the last resort. A bit like trains hitting the buffers at a station ... that is preferable to killing people at the end of the platform. The same with hard spend limits, hitting the limit is better than waking up in the morning to a $1m cloud bill.
I would bet that the reason they don't implement it is not that they're being "shady" but because they don't care about the hobbyists and personal projects and implementing hard spending limits would be a huge, complicated feature to implement. And even if they did put in the huge effort to do it, individuals would still manage to not use it and the steady trickle of viral "I accidentally spent X thousands bucks on AWS" stories would continue as usual.
That's been one of the more interesting inside baseball facts I've learned here.
Actually, there is a third category: those who care. I will grant you it is a rare category, but it is there.
One example name: Exoscale[1]
Swiss cloud provider, they offer:
(a) hard spend limits via account pre-pay balances (or you can also have post-pay if you want the "usual" cloud "surprises included" payment model).
(b) good customer service that doesn't hide behind "community forums"
Sure, they don't offer the full bells-and-whistles range of services of the big names, but what they do do, they do well. No, I am not an Exoscale shill, and no, I don't work for Exoscale. I just know some of their happy customers. :)
Also, in my opinion, billing is the new perf test, but post factum and obscure: it is super easy to miss some key point during development and then wake up with the costs falling down the responsibility sink (https://news.ycombinator.com/item?id=41891694).
I am just amazed that people are able to navigate the services and configure them properly.
What would you suggest instead?
I see Hetzner and maybe OVH could cost significantly less.
Actually, it looks like with Hetzner I would get better specs at 115 EUR per month: 16 vCPUs, 64 GB RAM, and a 360 GB SSD.
That's crazy, I didn't realize there's such a huge difference.
I think your comment is actually impressively valuable to me...
I'm going to bring it all over during the weekend. It's really exciting to find out about much cheaper prices, because I've had disk space constantly running out on my DO box. I do all my Docker builds, files, databases, and everything else there as well. And for side projects I usually value all the cache and development speed, as opposed to trying to optimize everything to use minimal performance and storage.
Also I spend way too much on Supabase. I think also $300+/month and increasing, I will bring that over and self host with a large SSD. I think at this point I prefer self hosting a postgres anyway to Supabase. I just tried it for a while, but the costs started going up.
Thank you so much.
Edit: there are some limits on Hetzner initially, so it might take a few months before I can actually migrate.
i use them to create a small test stack to look at it for a day or two.
then go through, delete all of the resources, put what i need into terraform etc.
has worked well for me in the past.
but yeah, i would never blindly use aws tools to magically put something into production.
I also have an account with L̵i̵n̵u̵x̵A̵c̵a̵d̵e̵m̵y̵ A̵C̵l̵o̵u̵d̵G̵u̵r̵u̵ PluralSight: and while the courses are very variable (and mostly exam cramming focused) it has their Cloud Playground as a super nice feature.
I get four hours to play around with setting things up and then it will all get automatically torn down for me. There's no cloud bill, just my subscription (I think that's about $400pa at the moment - can't check right now as annoyingly their stuff is blocked by our corporate network!) It has a few limitations, but none that have been problems for me with exploring stuff so far.
To make my case: just ponder the opposite: "What would an honest version of AWS do?". They would address the concerns publicly, document their progress towards fixing the issue, and even try to determine who was overcharged due to their faulty code, and offer them some compensation.
"We're too big to fix our own code" is, sadly, taken from the MS playbook (IIRC, something like that was made public after the breach of MS manager mailboxes after the whole Azure breach fiasco that was discovered by, IIRC, the DOJ that paid to have access to logs).
Most people are forced to use AWS because CTO at their company was pushing a "wE mUsT shIp tO cLoUd" initiative. It's not out of choice, but out of survival.
There are parts of AWS that feel like magic and parts that cause me to bang my head against the wall, overall I like it more than it annoys me so I use AWS but it’s not a silver bullet and not all workloads make sense on AWS.
AWS prices are ridiculous. I pay OVH $18/mo for a 4-core, 32 GB RAM, 1 TB SSD dedicated server. The cheapest on AWS would be r6g.xlarge, which costs $145/mo. Almost 10x.
Yes, AWS hardware is usually better, but they give me 4 "vCPUs" while OVH gives me 4 "real" CPU cores. There's a LOT of difference. Even if my processor is worse than AWS's, I still prefer 4 real CPUs to virtual ones, which are overbooked by AWS and rarely give me 100% of their power.
OVH gives me 300 Mbit, while r6g.xlarge gives "up to" 10 Gbit. But still, 10x the price? 300 Mbit gives me ~37 MB/s, and I use a CDN for the large stuff: HTML, images, JS, anyway...
There are certainly cases where AWS is the go-to option, but I think it's a small minority where it actually makes sense.
There are some AWS resources, for example Route 53 hosted zones, that bill only once at the end of the month, so a daily or hourly bill won't tell you anything about leaked resources there.
There's at least one resource that only bills once a year, so yet again you won't catch those even with monthly usage reports.
> The spending limit in Azure prevents spending over your credit amount. [1]
Once it's your money miraculously the spending limit is no longer available...
> The spending limit isn’t available for subscriptions with commitment plans or with pay-as-you-go pricing. [1]
[1]: https://learn.microsoft.com/en-us/azure/cost-management-bill...
There are stories online of folks getting their accounts deactivated or not even getting approved.
But not all applications are critical, and the company deploying those applications should be able to differentiate between what's critical and what's not. If they're unable to, that's their fault. If there's no option to set hard limits, that's AWS' fault.
I ran it for 1 minute, expecting to pay the $5 or whatever it was per minute, and was charged around $100 for it to "boot up". Cancelled it. Never trusted Amazon billing again =(
Bezos keeps waxing lyrical in all his interviews about how he "tests" his company's services by calling them on the phone to make sure the SLAs or whatever are accurate. But they aren't. TBH I was kind of confused by how proud he was that it took 10 minutes to get through to someone on the phone instead of 1 minute or something, and how they noticed it and had to "rearrange" things. Like wtf, I would have fired ALL of my executives below me if such egregious false advertising existed. It can't be that bloody hard, as one of the richest people on the planet, to just pay some dude $5 an hour to make sure services are billed as expected and run as expected.
I am sorry to complain, I know they have all done great jobs, but it makes me wonder whether I would be "out of touch" if I were ever in a C-suite role. From what I see around me, I definitely would be. But maybe those margins don't matter?
I honestly am confused, after decades in IT, why management is never held responsible. If I ran a company, management would be the FIRST to be fired if there were any issues. I swear I once read a comment on HN from a manager asking why they should be held responsible if there is a fuck-up lower down the chain, and I was like wtf, the whole point of being a manager is to be RESPONSIBLE. Management isn't a luxury where you "earn the big bucks" because you're better than everyone else and thus should be protected.
The easiest way to diagnose this as a CEO is to see how often management has been let go at different tiers, and if there haven't been any departures, well, there must be some form of corruption/nepotism occurring.
I've been burnt as a "small" entrepreneur by all of the greats. Google shut down my instance when it went viral, because I decided to upgrade the hosting, which for some unknown reason meant it had to be shut down WITHOUT warning for 24 hours, possibly to transfer it or something, god knows. AWS, etc.
I know it might seem like a small gripe, but as a millionaire now I remember how I was treated by these companies.
Maybe I should just be grateful I could use them at all.
I think I'm just saying it's crazy how the "low" B2B customer is treated, when it would be so cheap to just make sure these colossal fuck-ups don't happen.
Again, Amazon isn't stepping up to protect users from themselves. They could do a lot more.
Given that all your usage and traffic, other than the request right at the limit, should not be gated or limited, why would you want someone else injecting additional complexity and bottleneck risk inline?
Determine your own graceful envelope, implement accordingly.
My point was, if that's a reason to have unbounded spending, why allow me to spin up a service that can get CPU or RAM bound?
> Determine your own graceful envelope, implement accordingly.
Which most people do have, but we then also want an "ungraceful" backstop — trains have both a brake and a dead man's switch.
Then apply the hard limit in the billing code. If it took a minute or two to shut off all the instances, maybe the customer's bill should have been $1.001M instead of $1M, but cap the bill to $1M anyway. Given their profit margins of x,000% I think they can afford the lost pennies.
Many companies achieve non-trivial things at scale. Pretty much every good engineer I speak to will list out all the incredibly challenging things they did. And follow it up with "however, this component in Billing is 100x more difficult than that!"
I've worked in Billing and I'd say a huge number of issues come from the business logic. When you add a feature after-the-fact, you'll find a lot of technical and business blockers that prevent you doing the most obvious path. I strongly suspect AWS realised they passed this point of no return some time ago and now the effort to implement it vastly outweighs any return they'd ever hope to see.
And, let's be honest, there will be no possible implementation of this that will satisfy even a significant minority of the people demanding this feature. Everyone thinks they're saying the same thing, but the second you dig into the details and the use cases, everyone will expect something slightly (but critically) different.
Simply claiming this does not make it true. Anyway, the original claim was simply that it is not trivial. This is what is known as moving the goalposts, look it up.
> let's be honest, there will be no possible implementation of this
Prefixing some assertion with "let's be honest" does not prove it or even support it in any way. If you don't have any actual supporting arguments, there's nothing to discuss, to be honest.
Excuse me for a minute, I have to go reset my password that expires every 10 days and can not match any previous password and enter my enterprise mandated sms 2fa because authenticator scary -- woops my SharePoint session expired after 120 seconds of inactivity let me just redo that -- oh what's that my corporate vpn disconnected because I don't have the mandatory patches applied, let me just restart -- Woah would you look at that my network interface doesn't work anymore after that update -- yes yes I'm sorry I know my activity light on MS Teams turned yellow for 5 minutes I'm working on it, just gotta do these other 12 steps so I can reset my password to -- oh look it's time to fill out the monthly company values survey, what's that, it's due by end of day?
The people "claiming" this actually worked on it. I read a post from HN just yesterday talking about the complexities of billing. Look it up.
> If you don't have any actual supporting arguments
You can read other responses in this post. Look it up.
In any case, the problem wasn't so much ECS or Fargate, beyond the complexity of their UI and config, but rather that CloudWatch was flaky. The problem that prevented the deployment was on my end, some issue preventing the health check from succeeding or something like that, so the container never came up healthy when deployed (it worked locally). The issue is that AWS didn't help me figure out what the problem was, and CloudWatch didn't show any logs about 80% of the time. I literally clicked deploy, waited for the deploy to fail, refreshed CloudWatch, saw no logs, clicked deploy, and repeated until logs appeared. It took about five attempts to see logs, every single time I made a change (it wasn't clear the error was on my end, so it was quite a frustrating process).
On DigitalOcean, the logs were shown correctly every single time, and I was able to determine the problem was on my end after a few attempts, add the required extra logging to track it down, fix it, and get a working deployment in under ten minutes.
I don't believe that for a minute.
You know why ?
Let's turn it on its head. What happens if the credit card on your AWS account expires or is frozen?
You think AWS are going to continue letting you rack up money with no card to charge against?
I betcha they'll freeze your AWS assets the nanosecond they can't get a charge against the card.
The mechanism for customer-defined hard limits is basically no different.
Comcast (mostly) disagrees, you have a 1.2 TB data cap and "After that, blocks of 50 GB will automatically be added to your account for an additional fee of $10 each plus tax." They do have a limit of $100 on these charges per month at least.
That was 18(!) years ago. It's still nowhere to be found.
There's like 17 ways to do analysis, some of them paid, but none address the actual problem of capping a bill. It's pure malice.
Like many times more than you've ever even spent with them.
I mean it's like 6 months of that before you even get your first non-standard form email.
https://github.com/aws/aws-cdk/issues/12563#issuecomment-771...
Another implicit surprise under the hood of CDK
OTOH, for example, the default quota of 55 PB (yes, go check it) for your daily limit of data extraction from GBQ is funny, until you make a costly mistake or some forked process turns zombie.
It's a predatory practice that I can't set up MONEY limits for cloud services.