How many times per second is the DB actually accessed? As far as I can tell from the metrics, they're doing ~1.7 requests/minute; you'll have a hard time finding a DB that couldn't handle that.
In fact, I'd bet you'd be able to host that website (the database) in a text file on disk without any performance issues whatsoever.
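To put a rough number on that, here's a minimal sketch (illustrative schema and sizes, not the actual site's data) that times point lookups against a single-file SQLite database:

```python
import os
import sqlite3
import tempfile
import time

# Hypothetical sketch: how many reads/second can a single-file
# SQLite database serve? Schema and row counts are made up.
path = os.path.join(tempfile.mkdtemp(), "site.db")
db = sqlite3.connect(path)
db.execute("CREATE TABLE pages (slug TEXT PRIMARY KEY, body TEXT)")
db.executemany(
    "INSERT INTO pages VALUES (?, ?)",
    [(f"page-{i}", "x" * 1000) for i in range(1000)],
)
db.commit()

n = 10_000
start = time.perf_counter()
for i in range(n):
    db.execute(
        "SELECT body FROM pages WHERE slug = ?", (f"page-{i % 1000}",)
    ).fetchone()
elapsed = time.perf_counter() - start
reads_per_min = n / elapsed * 60
print(f"{reads_per_min:,.0f} reads/minute")  # vs. the ~1.7 requests/minute above
```

Even on modest hardware this lands orders of magnitude above the observed load, which is the point: the bottleneck at this scale is never the storage engine.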
Modern computers are mind-bogglingly powerful. An old laptop off eBay can probably handle the load for business needs for all but the very largest corporations.
That said, I completely agree: a $4/month DO VPS can run MySQL and should easily handle this load; in fact, I've handled far bigger loads in practice.
On a tangent: any recommendations for good US-based bare metal providers (with a convenience factor comparable to OVH, etc)?
As someone who is literally using old laptops to host things from my basement on my consumer line (personal, non-commercial) and a business line (commercial)...
I can host this for under 50 bucks a year, including the domain and power costs, and accounting for offsite backup of the data.
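For what it's worth, the under-$50/year claim checks out under some assumed numbers (my guesses, not the commenter's actual bills):

```python
# Back-of-envelope annual cost of basement hosting. Every number
# here is an assumption, not a quote from the comment above.
watts = 15                 # old laptop at mostly-idle load (assumption)
kwh_price = 0.15           # $/kWh electricity rate (assumption)
power = watts / 1000 * 24 * 365 * kwh_price

domain = 12.0              # typical .com renewal per year (assumption)
backup = 12.0              # ~$1/month of offsite object storage (assumption)

total = power + domain + backup
print(f"power ${power:.2f} + domain ${domain:.2f} + backup ${backup:.2f}"
      f" = ${total:.2f}/yr")
```

With those assumptions the total comes in around $44/year, comfortably under the $50 figure.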
I wish people understood just how much the "cloud" is making in pure profit. If you're already a software dev... you can absolutely manage the complexity of hosting things yourself for FAR cheaper. You won't get five 9s of reliability (not that you're getting that from any major cloud vendor anyways without paying through the nose and a real SLA) but a small UPS will easily get you to 99% uptime - which is absolutely fine for something like this.
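The 99% figure is easier to reason about when converted into allowed downtime per year; a quick sketch:

```python
# Downtime budget per year at various availability levels.
minutes_per_year = 365 * 24 * 60

for nines in ["99%", "99.9%", "99.99%", "99.999%"]:
    avail = float(nines.rstrip("%")) / 100
    down = minutes_per_year * (1 - avail)
    print(f"{nines:>7} uptime -> {down:,.1f} min/year of allowed downtime")
```

99% allows roughly 3.6 days of downtime a year, which a small UPS plus the occasional reboot easily stays within; each extra nine cuts the budget by 10x, and that's where the cloud bills start.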
* Usage won't be uniformly distributed and you may need to deal with burst traffic for example when a new version is released and all your users are pulling new config data.
* Your application data may be very important to your users and keeping it on a single server is a significant risk.
* Your users may be geographically distributed, such that a user on the other side of the world gets a severely degraded experience.
* Not all traffic is created equal: especially paired with burst traffic, one expensive operation (say, a heavy analytical query from one user) could cause timeouts for another user.
Vercel does not solve all of these problems, but they are problems that may be exacerbated by a $4 droplet.
All said, I still highly encourage developers not to sell their soul to a SaaS product that couldn't care less about them and their use case, and to keep infrastructure and complexity minimal so they have more success with their projects.
It seems like computers are getting more capable, but developers are becoming less capable at roughly the same pace.
I was surprised by the cost of Vercel in that blog post too, which is why I dislike all kinds of serverless/lambda/managed services. For me, having a dozen people subscribing to $1-$2/month sponsorship on GitHub Sponsors is enough to cover all the costs. Even if no one donates, I’d still have no trouble keeping the project running on my own.
And that makes perfect sense. Why should humans inconvenience themselves to please the machine? If anyone’s at fault, it’s the database for not being smart enough to optimize the query on its own.
* Okay, I guess that means we should use 2? So that's $8 now.
* Vercel really doesn't help you there beyond serving static files from a CDN. That hardly matters at this scale; keep in mind that you "only" add about 100ms of latency by serving from the other side of the globe. While that has an impact, it's not really that much. And you can always add another CDN; they're very often free for HTML/JS/CSS.
* Burst traffic is an issue, especially trolls that just randomly DoS your public servers for shits and giggles. That's pretty much the only one Vercel actually helps you with. But so would others; they're not the only ones providing that service, and most do it for free.
Frankly, the only real and valid reason is the one previously mentioned: they've likely got the money and don't mind spending it on the ecosystem. And if they like it... who are we to interfere? Aside from pointing out how massively they're overpaying, but they've got to be able to handle that if they're willing to publish an article like this.
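The ~100ms cross-globe figure is consistent with a back-of-envelope speed-of-light estimate (both constants below are assumptions, and real routes add hops and detours on top):

```python
# Physical lower bound on cross-globe latency.
fiber_speed_km_s = 200_000   # light in fiber travels at roughly 2/3 c (assumption)
antipodal_km = 20_000        # half of Earth's ~40,000 km circumference

one_way_ms = antipodal_km / fiber_speed_km_s * 1000
print(f"one-way ~{one_way_ms:.0f} ms, round trip ~{2 * one_way_ms:.0f} ms")
```

So ~100ms one-way to the far side of the planet is close to the physical floor; no amount of edge infrastructure beats it for dynamic requests that still hit a single origin.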
I heard we had a 7 figure annual compute spend, and IIRC we only had a few hundred requests per second peak plus some batch jobs for a few million accounts. A single $160 N100 minipc could probably handle the workload with better reliability than we had if we hadn't gone down that particular road to insanity.
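Treating "7 figures" as $1M/year and assuming an average of 100 req/s (both assumptions; the comment only gives a few-hundred-rps peak), the per-request cost works out to:

```python
# Illustrative cost-per-request math; both inputs are assumptions.
annual_spend = 1_000_000          # "7 figure" lower bound (assumption)
avg_rps = 100                     # well under the few-hundred-rps peak (assumption)

requests_per_year = avg_rps * 365 * 24 * 3600
per_request = annual_spend / requests_per_year
print(f"${per_request:.5f} per request")
```

That's roughly a thirtieth of a cent per request, for workloads where a commodity box serves requests at a marginal cost near zero.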
You can read a little bit more about my analytics setup here:
https://joeldare.com/private-analtyics-and-my-raspberry-pi-4...
Heh, reminds me of a discussion I had with a coworker roughly six months ago. I tried to explain to them that the ability to scale each microservice separately almost never improves the actual performance of the platform as a whole: after all, you still need network calls between each service, and you could've just started the monolith twice. That would most likely have needed less RAM too, even if each instance consumes more, since you now need fewer applications running to serve the same request.
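The RAM argument can be made concrete with made-up but plausible numbers (all assumptions, not measurements from that platform):

```python
# Total RAM: N small services vs. two monolith instances.
# Every figure here is an illustrative assumption.
svc_count = 8
svc_ram_mb = 300    # per-service baseline: runtime + framework + pools (assumption)
mono_ram_mb = 900   # one monolith instance; bigger, but overhead is shared (assumption)

print("microservices:", svc_count * svc_ram_mb, "MB total")
print("monolith x2:  ", 2 * mono_ram_mb, "MB total")
```

Each microservice pays the fixed runtime overhead again, so even though a single monolith instance is three times heavier than any one service, two of them still come out ahead in total.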
This discussion took place in the context of a B2E SaaS platform with very moderate usage, almost everything being plain CRUD: think 10-15k simultaneous users making data entries and the like.
I'm always unsure how I should feel after such discussions. On the one hand, I'm pretty sure he probably thinks that I'm dumb for not getting microservices. On the other hand... Well... ( ꈍ ᴗ ꈍ )
"Andy giveth, and Bill taketh away."
Computers keep getting faster (personified as Andy Grove, from Intel), and software keeps getting slower (Bill Gates, from Microsoft).
If you can understand programming, you can understand Linux. Might take a while to be really confident, but do you need incredible confidence when you have backups? :)
Especially with that meme he showed about Vercel basically being AWS plus a 500% markup, lmaoo
Don't be afraid of computers, don't be the pink elephant!
Is the author even getting paid for their services though? If they aren't then why would they care? I don't mean that as rude as it sounds, but why would they pay that much money so people can use their product for free?
It’s a systemic problem. You’re going to lose the battle against human nature: ever noticed how, after moving from a college dorm into a house, people suddenly manage to fill all the space with things? It’s not like the dorm held everything they ever needed, but they had to accommodate themselves to it. That constraint is artificial, exhausting to keep up, and, once it's gone, no longer adhered to.
If a computer suddenly becomes more powerful, developers aren’t going to keep up their good performance-optimisation habits, because they had those only out of necessity in the first place.
Okay, am I crazy, or can you not really solve this without going full multi-region with everything? Maybe your web server is closer to them, but database requests still go back to the "main" region, which adds latency.
It does feel like the tech community as a whole has forgotten how simple and low-resource it is to host most things, maybe due to the proliferation of stuff like AWS trying to convince us that we need all this crazy infrastructure to do it.
I agree with this statement for normal people. Not for software developers. You're just begging for stagnation. Your job is literally dealing with computers and making them do neat stuff. When you refuse to do that because "computers should be making my life easier" you should really find another line of employment where you're a consumer of software, not a producer.