
DigitalOcean App Platform

(pages.news.digitalocean.com)
646 points by digianarchist
user5994461 ◴[] No.24700185[source]
I am so glad to see this. I was looking to deploy an app, and the choice is either Heroku or managing your own server, which I don't want to do.

Heroku gives instant deployment for the most common types of apps (Python/Java/Ruby). It's PaaS done right; it's fantastic. You should really have a look if you're not aware of it; it's only $7 for a starter app.

The problem is, scaling up costs about $50 per gigabyte of memory, which makes it a dead end for anything non-trivial. You're forced to go to DigitalOcean / Linode / OVH instead to get something affordable.

That leaves DigitalOcean as the only alternative (I don't trust Linode), and it sucks because it only gives me a server to manage. I don't want to manage a server; I want to run a (Python) application. It's 2020; this sort of thing should auto-deploy from GitHub without bothering me to manage an operating system.

replies(19): >>24700693 #>>24700794 #>>24701039 #>>24702228 #>>24702633 #>>24702880 #>>24703398 #>>24703543 #>>24703620 #>>24704410 #>>24704873 #>>24705031 #>>24705668 #>>24706188 #>>24706382 #>>24707003 #>>24709134 #>>24716137 #>>24727185 #
dvcrn ◴[] No.24705668[source]
Why not take the initial complexity cost and learn k8s and containerization? That's what I've been doing as a step-up from Heroku and have been very happy with it.

My project currently runs on DigitalOcean managed k8s, and setting it up really wasn't hard. I had everything already in containers for dev/prod anyway, and having those run on k8s just meant I had to write the deployment manifests that pull the containers and set up the pods.
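
For anyone wondering what that actually looks like, here's a rough sketch of that kind of Deployment, expressed with the official kubernetes Python client instead of the usual YAML manifest. The image name, namespace, labels, and resource numbers are made up for illustration:

    # Sketch only: an equivalent of a small Deployment manifest, built with the
    # official `kubernetes` Python client. All names/values here are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # use the kubeconfig downloaded for the managed cluster

    container = client.V1Container(
        name="myapp",
        image="registry.digitalocean.com/my-registry/myapp:latest",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=8000)],
        resources=client.V1ResourceRequirements(
            # small requests so many pods can pack onto one cheap droplet
            requests={"cpu": "100m", "memory": "128Mi"},
        ),
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="myapp"),
        spec=client.V1DeploymentSpec(
            replicas=1,
            selector=client.V1LabelSelector(match_labels={"app": "myapp"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "myapp"}),
                spec=client.V1PodSpec(containers=[container]),
            ),
        ),
    )

    # create the Deployment in the cluster; the scheduler places the pod on a node
    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

In practice you'd usually keep this as a YAML file and `kubectl apply` it, but the structure (container image, ports, resource requests, pod template) is the same either way.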

What I love about managed k8s (and have also shared a couple of times in comments on HN) is that it's separated from the servers below. I can have 20 containers (that can be separate things altogether) running on the cheapest Droplet and only pay whatever that Droplet costs, so under $20. Then when I need more power, I just scale the Droplets used for the k8s cluster and my pods/containers get shoveled around the available resources automatically.

I liked this approach so much that I now have a private 'personal projects cluster' running on DigitalOcean with the cheapest/weakest Droplet available, and whenever I have a small hobby project that needs to be hosted somewhere, I just add that container to the k8s cluster and I'm done with it.

replies(4): >>24705748 #>>24705749 #>>24706558 #>>24706813 #
nojvek ◴[] No.24705748[source]
I'm waiting for DigitalOcean to have something like Google Cloud Run.

Google Cloud Run is essentially: here's a Docker image that listens on the $PORT env variable; spin it up when you get requests. It will handle X queries per second (you can set the limit). If traffic exceeds X, it scales up to this many replicas.
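
For a sense of what that contract looks like, here's a minimal sketch of a container entrypoint: a plain-stdlib Python server that binds to whatever $PORT it's given. The handler body is just a placeholder:

    # Sketch of a Cloud Run-style entrypoint: listen on 0.0.0.0:$PORT, serve HTTP.
    import os
    from http.server import HTTPServer, BaseHTTPRequestHandler

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello from cloud run\n")

    if __name__ == "__main__":
        # Cloud Run injects PORT into the container; 8080 is the usual default
        port = int(os.environ.get("PORT", 8080))
        HTTPServer(("0.0.0.0", port), Handler).serve_forever()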

I pay about 10 cents a month for my site. Zero maintenance. I push code to GitHub, GitHub builds an image, pushes it to GCR, and tells Cloud Run to use the new image.
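
That pipeline boils down to three steps. Here's a rough sketch of the same thing as a local Python script rather than a CI job; the project ID, service name, and region are placeholders:

    # Sketch only: build, push to GCR, and point Cloud Run at the new image.
    import subprocess

    PROJECT = "my-gcp-project"   # placeholder GCP project ID
    SERVICE = "my-site"          # placeholder Cloud Run service name
    IMAGE = f"gcr.io/{PROJECT}/{SERVICE}:latest"

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["docker", "build", "-t", IMAGE, "."])   # build the image from the Dockerfile
    run(["docker", "push", IMAGE])               # push it to GCR
    run([                                        # deploy the new image to Cloud Run
        "gcloud", "run", "deploy", SERVICE,
        "--image", IMAGE,
        "--region", "us-central1",
        "--platform", "managed",
        "--allow-unauthenticated",
    ])

In the GitHub setup, a workflow just runs these same commands on every push.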

This is how things ought to work for simple web-server-like functionality. "Here's a Dockerfile and source tree: build it, run it, and auto-scale it with this HTTPS domain." Boom!

replies(1): >>24707316 #
mcintyre1994 ◴[] No.24707316[source]
What sort of cold start time do you get with that, out of interest?
replies(2): >>24708148 #>>24715311 #
nojvek ◴[] No.24715311{3}[source]
Even with no cron, it was a <500ms cold start. With a cron that hit it every 5 mins, I saw <100ms to hit US Central and back (from Seattle).

I pay 10 cents a month to Google. They have no shame charging me 3 cents on my credit card.