I'm 99% sure I'm reading something wrong, as that's incredibly expensive unless this is hosting LLMs or something similar, but it seems like it's a website for sharing expenses?
A more charitable reading is that they pick the technologies that the jobs they want are hiring for, even if those technologies don’t make sense for this simple application.
I'm not sure. I'm also part of the "new generation of devs", I suppose; cloud had just entered the beginning of the hype cycle when I started out professionally. Most companies and individuals at that point were pushing for "everything cloud", but after experiencing how expensive it really is, you start to look around for alternatives.
I feel like that's just trying to have an "engineering mindset" rather than a matter of what generation you belong to.
One would think that would be the common-sense case... but in corporate America - at least at the last handful of companies I worked at - some companies are *only now getting workloads up to the cloud*, so they have not yet felt the cost pain. Or, in other cases, firms are living in the cloud and have seen the exorbitant costs, but move waaaaay toooo sloooow to migrate workloads off the cloud (or hybridize them in smart ways for their business). Or, in still other cases I have seen, instead of properly analyzing the function and costs of cloud usage - and truly applying an engineering mindset to the matter - some of these so-called IT leaders (who are too busy with PowerPoint slides) will simply lay off people and "achieve savings" that way.
Welcome to being a technologist employed at one of several/many American corporations in 2024!
It's certainly possible to spin up your own db backup scripts, monitor that, make sure it gets offsite to an s3 bucket or something, set yourself a calendar reminder to test that all once a month, etc... but if I had to write out a list of things that I enjoy doing and a list of things that I don't, that work would feature heavily on the "yeah, but no" list.
I unfortunately sorta put myself in this category where my PaaS of choice is Firebase. For this cost-splitting app however, what would you personally recommend if not Vercel? Would you recommend something like a Digital Ocean Droplet or something else? What are the best alternatives in your opinion?
You'd have a Linux machine (the VPS) running at least 3 programs, either directly or inside Docker containers:
- Node.js
- the database (likely MySQL or PostgreSQL)
- Nginx or Apache
You'd set up a DNS record pointing your domain at the VPS's IP address. When someone visits your website, their HTTP requests will be routed to port 80 or 443 on the VPS. Nginx will be listening on those ports, and forward (aka proxy) the requests to Node, which will respond back to Nginx, which will then send the response back to the user.
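Sketched as an Nginx config, that request-forwarding piece might look like this (the domain, port 3000, and certificate paths are assumptions for a typical Node app with Let's Encrypt certificates):

```nginx
# Hypothetical /etc/nginx/sites-available/myapp
server {
    listen 80;
    server_name example.com;
    # Redirect plain HTTP to HTTPS (assumes TLS was set up, e.g. via certbot).
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    location / {
        # Forward (proxy) requests to the Node process on localhost.
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```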
There are of course security and availability concerns that are now your responsibility to handle and configure correctly in order to reach the same level of security and availability provided by a good PaaS. That's what you're paying the PaaS for. However, it is not too difficult to reach a level of security and availability that is more than sufficient for a small, free web app such as this one.
DHH (creator of Rails) thinks you should dare to connect a server to the internet: https://world.hey.com/dhh/dare-to-connect-a-server-to-the-in...
(I already submitted this once, but given the discussion here, I think it's worth posting again, if my rate limit allows it)
Awesome first sentence! I know I'm going to agree with the article just from that. This applies to so many things in life, too. We've been taught that so many things people routinely did in the past are now scary and impossible.
Also, a sidenote: for small stuff you can just deploy from home. I've done it before. It's really not that scary, and odds are you have a computer lying around. The only "spooky" part is relying on my ISP's router. I don't trust that thing, but that can be fixed.
That can backfire and give an employer the idea you want to do that work though. I not only hate it, but nobody gives a damn until stuff breaks and then everyone is mad. You rarely get rewarded for stuff silently sitting there and working.
edit: to be clear, I think doing it yourself once is great experience. And I've run small web apps on a single server, all the way from supervisord -> nginx -> passenger -> rails with pg and redis. I'd rather build features or work on marketing.
I’ve commented here before that on AWS (which I’m fairly familiar with) I could set up ECS with a load balancer and have a simple web app with RDS running in about 30 minutes, and literally never have to touch the infra again.
This is of course a rather pessimistic view of platforms. Perhaps the sweet spot, which the parent commenter is probably referring to, is something where you have more control over the actual applications running, exposed network services, etc., such as a virtual machine or even dedicated hardware. This does require more in-depth knowledge of the systems involved (a good guideline, though I'm unsure where I picked it up, is to know one abstraction layer above and one below the system you're working on). It also means you'll need to invest a lot of time in your own platform.
If you're looking for a gentle intro to security and availability, have a look at the OWASP Top Ten[0], which covers ten web application security subjects with prevention measures and example attacks. A deeper dive into security concepts can be found on the Arch Linux wiki[1], which focuses on hardening computer systems; for a start, look at 1. Concepts, 2. Passwords, 5. Storage, 6. User setup, and 11. Networks and Firewall. From 14. See Also, perhaps look into [2], not necessarily for the exact steps involved (it's from 2012), but for the overall thought process.
As for availability in an internet-accessible service, look into offering your services from multiple, distinct providers that are geographically separate. Automate the setup of your systems and data distribution, such that you can easily add or switch providers should you need to scale up. Have at least one external service regularly monitor your publicly-accessible infrastructure. Look into fail-over setups using round robin DNS, or multiple CDNs.
But I suppose that's just the tip of the iceberg.
[0] https://owasp.org/Top10/ [1] https://wiki.archlinux.org/title/Security [2] https://www.debian.org/doc/manuals/securing-debian-manual/in...
> If the platform cut corners regarding customer data separation (you know, because you can be cheaper than the competition), your users' passwords may be next on HIBP (haveibeenpwned.com).
This all applies to running on a VPS in the cloud too. You have to own much more of the stack to avoid this than is usually realistic for one person running a free web app.
What I mean about the security and availability being provided for you is that you don't have to worry about configuring a firewall, configuring SSH and Nginx, patching the OS, etc.
Security looks like:
- Ensure SSH (the method by which you'll access the server) is secured. Here is a good article on steps to take to secure SSH on a new server (though you don't have to make your username 16 random characters like the article says): https://hiandrewquinn.github.io/til-site/posts/common-sense-...
- Have a firewall running, which will block incoming network connections until you explicitly open ports. This prevents a lack of knowledge about, or misconfiguration of, other programs on the server from burning you. The easiest firewall is ufw ("uncomplicated firewall"). Here is a DigitalOcean article that goes into more depth than you probably need at first, or ask Claude/ChatGPT some questions about ufw: https://www.digitalocean.com/community/tutorials/how-to-set-...
- Keep the OS and programs (esp. Nginx/Apache and Node) up to date.
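As a concrete sketch of those three bullets (not meant to be pasted blindly: this assumes you run it as root on a fresh server, and Debian/Ubuntu package and service names are assumptions):

```shell
# 1. SSH: key-based login only, no direct root login.
cat > /etc/ssh/sshd_config.d/hardening.conf <<'EOF'
PasswordAuthentication no
PermitRootLogin no
EOF
systemctl reload ssh   # keep your current session open until a fresh login works

# 2. Firewall: deny inbound by default, open only what this stack needs.
ufw default deny incoming
ufw default allow outgoing
ufw allow OpenSSH      # do this BEFORE 'ufw enable', or you lock yourself out
ufw allow 80/tcp       # HTTP  (Nginx)
ufw allow 443/tcp      # HTTPS (Nginx)
ufw enable

# 3. Updates: automatic security patches for OS packages.
apt-get update && apt-get install -y unattended-upgrades
```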
Availability looks like:
- Have a backup of important data (the database). You can set up a cron job that runs a shell script on a schedule, dumping the database to a file (e.g. with mysqldump) and copying that file to your backup destination, which could be cloud storage or another VPS. If you can, backing up to 2 separate destinations is better than one, keeping a history of backups is good, and doing "health checks" of the backup system is good (meaning periodically verify that backups are being taken as intended and that you could actually restore from one if needed).
- Be able to respond to outages or failure of the host (the server/VPS). This means either having another machine that can be failed over to (probably overkill if you don't have paying customers and an SLA), or being able to spin up a new server and deploy the app quickly if the server gets borked somehow and goes down. For that you have some options: keep a clear list of instructions that you can perform manually relatively quickly (slowest and most painful), or automate the deployment process. That's what something like Ansible is for, or you can just use shell scripts. Using Docker can speed up and simplify deployment, since you're building an image that can then be deployed on a new server pretty simply. You will of course also need the backup of the data that you've hopefully been taking.
- Rate limiting may not be necessary depending on the popularity of your site, but it can be useful or necessary and the simplest way is to put your website behind Cloudflare: https://developers.cloudflare.com/learning-paths/get-started...
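A minimal sketch of the backup script described above, assuming MySQL (the database name app_db, the backup path, and the commented-out rclone remote are all placeholders):

```shell
#!/bin/sh
# Nightly database backup sketch. Assumptions: a MySQL database named
# app_db and a hypothetical rclone remote "offsite" for the second copy.
set -eu

backup_db() {
    backup_dir="${BACKUP_DIR:-/var/backups/db}"
    out="$backup_dir/app-$(date +%Y%m%d-%H%M%S).sql.gz"

    mkdir -p "$backup_dir"
    # Dump and compress; --single-transaction takes a consistent snapshot
    # of InnoDB tables without locking them.
    mysqldump --single-transaction app_db | gzip > "$out"

    # Keep a history: retain the 14 newest dumps, prune older ones.
    ls -1t "$backup_dir"/app-*.sql.gz | tail -n +15 | xargs -r rm -f

    # Second destination (remote and bucket names are placeholders):
    # rclone copy "$out" offsite:myapp-backups/
}
```

The real script would end with a call to backup_db, scheduled via a crontab entry such as `0 3 * * * /usr/local/bin/db-backup.sh`.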
There are "better" techniques to do all of those that require more know-how, which can prevent and handle more failure scenarios faster or more gracefully, and would be used in a professional context.
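For reference, the "Docker + shell script" redeploy option mentioned above might look like this (the image name, container name, and port are placeholders, not from the original app):

```shell
#!/bin/sh
# Hypothetical redeploy sketch: rebuild the app image and swap the
# running container. "myapp" and port 3000 are placeholder names.
set -eu

redeploy() {
    docker build -t myapp:latest .
    # Remove the old container if one exists; ignore "no such container".
    docker stop myapp 2>/dev/null || true
    docker rm myapp 2>/dev/null || true
    # Bind only to localhost; Nginx proxies public traffic to it.
    docker run -d --name myapp --restart unless-stopped \
        -p 127.0.0.1:3000:3000 myapp:latest
}
```

The real script would end with a call to redeploy; an Ansible playbook would express the same steps declaratively.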