DigitalOcean App Platform

(pages.news.digitalocean.com)
646 points by digianarchist | 14 comments
user5994461 No.24700185
I am so glad to see this. I was looking to deploy an app, and the choice is either Heroku or managing your own server, which I don't want to do.

Heroku gives instant deployment for the most common types of apps (python/java/ruby). It's PaaS done right, it's fantastic. You should really have a look if you're not aware of it, it's only $7 for a starter app.

Problem is, scaling up is about $50 per gigabyte of memory, which makes it a dead end for anything non-trivial. You're forced to go to Digital Ocean / Linode / OVH instead to have something affordable.

That leaves Digital Ocean as the only alternative (don't trust Linode), and it sucks because it only gives me a server to manage. I don't want to manage a server; I want to run a (Python) application. It's 2020: this sort of thing should auto-deploy from GitHub without bothering me to manage an operating system.

replies(19): >>24700693 #>>24700794 #>>24701039 #>>24702228 #>>24702633 #>>24702880 #>>24703398 #>>24703543 #>>24703620 #>>24704410 #>>24704873 #>>24705031 #>>24705668 #>>24706188 #>>24706382 #>>24707003 #>>24709134 #>>24716137 #>>24727185 #
076ae80a-3c97-4 No.24701039
It's probably worth looking into the big cloud providers rather than the little guys. In Azure you can have an App Service (a deployed app in any one of loads of languages, without looking after the machine it sits on) with 1.75GB RAM for about $12 a month. Obviously your usage may vary and that will affect the price. But I get the feeling that the big players are cheaper than people think they are for small projects.
replies(2): >>24701157 #>>24702367 #
user5994461 No.24701157
The big players have separate charges for bandwidth, disk, and other hidden stuff. They are way more expensive than Digital Ocean / OVH all-inclusive. Worse, the costs are unpredictable, which makes them a no-go for a side project; I can't risk accidentally getting a $1000 bill.

As a real-world example, I run a personal blog. If it were running on S3, my personal finances would have been obliterated when it got featured on HN and served 1+ TB of traffic.
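Back-of-the-envelope (assuming S3's first-tier egress rate of roughly $0.09/GB at the time; prices change, so treat this as a sketch):

    # Rough S3 egress cost for serving a traffic spike.
    PRICE_PER_GB = 0.09  # assumed first-tier S3 egress rate, USD

    def egress_cost_usd(terabytes_served):
        return terabytes_served * 1024 * PRICE_PER_GB

    print(egress_cost_usd(1))  # ~92 USD for a single 1 TB spike
    print(egress_cost_usd(5))  # ~460 USD if it keeps going viral

And there's no ceiling: the bill simply scales with whatever traffic shows up.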

replies(6): >>24701406 #>>24702400 #>>24702759 #>>24705226 #>>24705619 #>>24718090 #
earthboundkid No.24701406
Can HN really deliver enough traffic to a static site to cost a significant amount? I've had mildly popular posts on HN for my Netlify blog (John Carmack tweeted about it!) and not had to pay for bandwidth.
replies(3): >>24702460 #>>24703261 #>>24703969 #
1. donmcronald No.24702460
No. I don't think so.

The concern for me is a lack of hard limit on spending on GCP, Azure, and AWS. If I screw up and allocate a bunch of resources unintentionally, I'm left holding the bill. That's a terrible setup for PaaS because all programming involves mistakes eventually, especially for new users learning the system.

Granted, there are likely limits on accounts, but those are there to protect the services from fraud, not to protect the user from overspending. The limits aren't well defined, and they're not something you can rely on, because MS might consider $10k/month a small account while it's a ton of money for me.

Azure customers have been asking for hard limits on spending for eight years [1], with radio silence for the last five.

There's a difference in goals, I guess. If I spend more than expected, I WANT things to break. Microsoft, Google, and Amazon want me to spend unlimited amounts of money, even if I don't have it. At least AWS can be set up with a prepaid credit card, so if I screw up they have to call me to collect their money and I can negotiate.

1. https://feedback.azure.com/forums/170030-signup-and-billing/...

replies(4): >>24703268 #>>24703320 #>>24705648 #>>24708057 #
2. ev1 No.24703268
It's a difference in goals.

- Hobby kid doesn't want to overpay: shut everything down.

- Business absolutely doesn't care about spend: if some marketing effort produces a traffic spike, they just want the site to stay up even if it blows the average budget.

Guess which one they optimise for?

replies(2): >>24703514 #>>24703988 #
3. user5994461 No.24703320
Yes it can, if you consider hundreds of dollars a significant amount. I do.

A good article is around 50k visits. The most I've done was 300k over a few days of going viral on HN/reddit/twitter/other. I published some stats here: https://thehftguy.com/2017/09/26/hitting-hacker-news-front-p...

4. FpUser No.24703514
>"Business absolutely doesn't care about spend: if some marketing effort produces a traffic spike, they just want the site to stay up even if it blows the average budget"

While this statement can be true in some cases, I vividly remember the bosses of a largish (budget-wise) company running around like headless chickens, yelling to kill every running instance of the service, just because they were hit by way more "success" than they'd planned for.

5. bleepblorp No.24703988
Very large businesses might not care about spend, but pretty much everyone else does.

Almost everyone will be unhappy if they're stuck with a six-figure bill for non-converting visits because their site went viral. Everyone will be unhappy if they're stuck with a six-figure bill because their site was used in a DDoS reflection attack, or got pwned and used in a DDoS attack directly.

Everything I run on nickel-and-dime-to-death cloud services, such as AWS, won't even respond to unauthenticated requests (Nginx returns 444, or the box is reachable only via WireGuard), precisely to mitigate this risk. To do anything else is just financially irresponsible.
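The catch-all itself is tiny; something like this on the Nginx side (a sketch, port 80 only):

    # Default vhost: any request that doesn't match a real server_name
    # gets the connection closed with no response (444 is Nginx-specific).
    server {
        listen 80 default_server;
        server_name _;
        return 444;
    }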

I've even considered coding a kill switch that will shut down AWS instances if they exceed billing limits, but the fact that AWS charges a fee to check your spend via an API makes this awkward and speaks volumes about Amazon's motivations.
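A sketch of what I have in mind (boto3; the $50 threshold is an arbitrary example, and note that each Cost Explorer call is itself billed at $0.01):

    # Month-to-date spend check; stops (not terminates) running EC2 if over.
    # Caveats: Cost Explorer data lags by several hours, and this only
    # covers EC2 -- other services keep billing until handled similarly.
    import datetime
    import boto3

    SPEND_LIMIT_USD = 50.0  # arbitrary example threshold

    ce = boto3.client("ce", region_name="us-east-1")
    ec2 = boto3.client("ec2")

    today = datetime.date.today()
    cost = ce.get_cost_and_usage(
        TimePeriod={"Start": today.replace(day=1).isoformat(),
                    "End": today.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
    )
    spend = float(cost["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"])

    if spend > SPEND_LIMIT_USD:
        running = ec2.describe_instances(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )["Reservations"]
        ids = [i["InstanceId"] for r in running for i in r["Instances"]]
        if ids:
            ec2.stop_instances(InstanceIds=ids)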

Amazon's refusal to offer spending caps on AWS benefits Amazon and only Amazon.

replies(1): >>24704332 #
6. mwarkentin No.24704332{3}
They have free anomaly detection on spending now (not sure how useful yet).
7. manigandham No.24705648
Hard spend limits are not an easy problem in the cloud. There are too many things that incur costs. Every time this comes up, I ask the same question: what do you expect to happen when the quota is hit?

Shut down your servers? Wipe your SSDs and storage buckets? Remove your DNS records? Should it be permanent? If not, then they're just subsidizing the costs. If it's a soft limit, then it's just a warning, and if you just want a warning, then billing alarms already exist in every cloud.
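On AWS, for example, a billing alarm is one CloudWatch call (a sketch; it assumes billing alerts are enabled on the account, and the SNS topic ARN is a placeholder):

    # Classic AWS billing alarm on the EstimatedCharges metric.
    # Billing metrics only exist in us-east-1 and require "Receive
    # Billing Alerts" to be enabled in the account preferences.
    import boto3

    cw = boto3.client("cloudwatch", region_name="us-east-1")
    cw.put_metric_alarm(
        AlarmName="monthly-spend-over-100-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,  # the metric only updates a few times per day
        EvaluationPeriods=1,
        Threshold=100.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
    )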

Also, for most customers, the data and service are far more important than the cost. Bills can be negotiated or forgiven afterwards. Lost data and customers can't.

replies(3): >>24706257 #>>24706715 #>>24707275 #
8. fauigerzigerk No.24706257
I want all services to be rate limited. What I don't want is for some runaway process (whatever the cause) to bankrupt me before I can respond to any alerts (i.e. within hours).

In other words, I don't necessarily need to set a hard spending limit, but I want to set a hard spending growth limit (allowing for short bursts), either directly in monetary terms or indirectly through rate limits on individual services.
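Conceptually that's just a token bucket applied to dollars instead of requests. A toy sketch (the numbers are invented):

    # Toy "spend growth limiter": a token bucket where tokens are dollars.
    # Sustained spend is capped at dollars_per_hour, but short bursts
    # up to burst_dollars are allowed before charges get refused.
    import time

    class SpendLimiter:
        def __init__(self, dollars_per_hour: float, burst_dollars: float):
            self.rate = dollars_per_hour / 3600.0  # refill per second
            self.capacity = burst_dollars
            self.tokens = burst_dollars
            self.last = time.monotonic()

        def allow(self, cost: float) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if cost <= self.tokens:
                self.tokens -= cost
                return True
            return False  # refuse (or queue) the charge instead of billing it

    # e.g. at most ~$2/hour sustained, with $5 bursts allowed
    limiter = SpendLimiter(dollars_per_hour=2.0, burst_dollars=5.0)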

replies(1): >>24706840 #
9. donmcronald No.24706715
> Shut down your servers? Wipe your SSDs and storage buckets? Remove your DNS records? Should it be permanent?

I'd be absolutely fine with that in a sub-account or resource group as long as I had to enable it.

A while back I wanted to try out an Azure Resource Manager template as part of learning something. Since I was _learning_ it, I wasn't 100% positive what it was going to do, but I knew that it should cost about $1 to deploy it.

With a hard limit on spending I would have set it to $10, run the thing and been ok with the account being wiped if I hit $10. Even $100 I could tolerate. Unlimited $$ was too risky for me, so I chickened out.

The worst part is I can't even delete my CC because it's tied to an expired trial that I can't update billing for.

> Also, for most customers, the data and service are far more important than the cost.

So don't enable the hard limit.

replies(1): >>24730405 #
10. ngcc_hk No.24706840{3}
I avoid those for the same reason. I don't mind paying a few dollars for side projects, but not an unlimited bill.
11. imtringued No.24707275
>Shut down your servers? Wipe your SSDs and storage buckets? Remove your DNS records? Should it be permanent? If not, then they're just subsidizing the costs. If it's a soft limit, then it's just a warning, and if you just want a warning, then billing alarms already exist in every cloud.

You know, when I hit the storage limit of my SSD, it doesn't wipe my data; it just stops storing more. When I rent a server for a fixed price and my service is under a DDoS attack, it simply stops working for the duration of the attack. If a usage-based service like Lambda charges per execution, then Lambda can simply stop running my jobs.

You can neatly separate time-based and usage-based charges and set a limit for each separately. It doesn't even need to be a monetary limit; it could be a resource-based limit. Every service would be limited to 0GB storage, 0GB RAM, 0 nodes, 0 queries, and 0 API calls by default, and you set each limit to whatever you want. AWS or Google Cloud could then calculate the maximum possible bill for the limits you have chosen. People can then set their limits so that a surprise bill won't be significantly above their usual bill.
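Computing the maximum possible bill from those limits is trivial. A sketch with made-up unit prices (not real cloud rates):

    # Worst-case monthly bill implied by a set of resource quotas.
    # The unit prices here are invented placeholders, not real rates.
    QUOTAS = {"storage_gb": 20, "egress_gb": 100, "api_calls": 1_000_000}
    PRICE = {"storage_gb": 0.02, "egress_gb": 0.09, "api_calls": 0.0000004}

    max_bill = sum(qty * PRICE[name] for name, qty in QUOTAS.items())
    print(f"Maximum possible bill: ${max_bill:.2f}")  # spend can't exceed this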

Your comment is lazy and not very creative. You're just throwing your hands up and pretending there is no other way even though cloud providers have created this situation for their own benefit.

replies(1): >>24726622 #
12. raphaelj No.24708057
I can't agree more with that.

I run a small side business, and these unlimited cloud plans are just a no-go. A medium to large company could totally absorb a five-figure bill, but it would be a death sentence for my side project. Also, considering the variable bandwidth costs of AWS, Azure, or Cloudflare, a competitor could simply rent an OVH server and inflict insane costs on my business while spending only a tenth of the money.

Right now I'm using Heroku (with a limited number of dynos and a single PgSQL database) together with BunnyCDN (which lets me prepay for usage). If I ever get DDoSed, my app will most probably become inaccessible, or at least significantly slower, but I'll receive an email alert, and I can then decide for myself whether to allocate more resources.

13. manigandham No.24726622{3}
The vast majority of overages are due to user error. Those errors would simply shift to include quota mistakes, which can incur data or service loss. Usage limits may be softer than monetary limits, which are bounded by the time dimension, but they can still cause problems, since they don't discriminate between good and bad traffic.

Before you go around calling people lazy, I suggest you put more thought into why creating more options for people who are already overwhelmed by options is generally not productive, can cause unintended consequences, and exposes liability. With some more thought, you'll also realize that AWS is optimized for businesses and, as stated, losing customers or data is much worse than paying a higher bill, which can always be negotiated after the fact.

14. manigandham No.24730405{3}
Since the vast majority of errors are user error, this is just another potential disaster waiting to happen.