
260 points scastiel | 4 comments
diggan ◴[] No.41880040[source]
Do I read something wrong, or do the stats amount to ~400 daily visitors with ~2500 page views per day? That's about 1.7 requests per minute... And they pay $115/month for this?

I'm 99% sure I'm reading something wrong, as that's incredibly expensive unless this is hosting LLM models or something similar, but it seems like it's a website for sharing expenses?

replies(4): >>41880046 #>>41880064 #>>41880217 #>>41880433 #
Vegenoid ◴[] No.41880433[source]
I think this is just the natural conclusion of the new generation of devs being raised in the cloud and picking a scalable serverless PaaS like Vercel as the default option for any web app.

A more charitable reading is that they pick the technologies that the jobs they want are hiring for, even if they don’t make sense for this simple application.

replies(3): >>41880565 #>>41881037 #>>41881889 #
joshdavham ◴[] No.41881889[source]
> new generation of devs being raised in the cloud

I unfortunately sorta put myself in this category where my PaaS of choice is Firebase. For this cost-splitting app however, what would you personally recommend if not Vercel? Would you recommend something like a Digital Ocean Droplet or something else? What are the best alternatives in your opinion?

replies(1): >>41882260 #
Vegenoid ◴[] No.41882260[source]
Yes, I believe a Droplet or VPS (virtual private server) from some other provider would be sufficient. Digital Ocean isn't the cheapest, but it's pretty frictionless, slick, and has a lot of good tutorial articles about setting up servers.

You'd have a Linux machine (the VPS) running at least three programs (or running Docker, with these programs inside containers; a rough Compose sketch follows the list):

- Node.js

- the database (likely MySQL or PostgreSQL)

- Nginx or Apache
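
To make the container option concrete, here's a rough Docker Compose sketch (not from this thread, just an illustration; the service names, images, ports, and placeholder credentials are all assumptions, not a production setup):

    # docker-compose.yml -- hypothetical minimal stack: Node app, database, Nginx
    services:
      app:
        build: .                          # your Node.js app, assumed to listen on port 3000
        environment:
          DATABASE_URL: postgres://app:change-me@db:5432/app
        depends_on:
          - db
      db:
        image: postgres:16
        environment:
          POSTGRES_USER: app
          POSTGRES_PASSWORD: change-me    # placeholder; use a real secret
          POSTGRES_DB: app
        volumes:
          - db-data:/var/lib/postgresql/data
      nginx:
        image: nginx:stable
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
        depends_on:
          - app
    volumes:
      db-data: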

You'd set up a DNS record pointing your domain at the VPS's IP address. When someone visits your website, their HTTP requests will be routed to port 80 or 443 on the VPS. Nginx will be listening on those ports, and forward (aka proxy) the requests to Node, which will respond back to Nginx, which will then send the response back to the user.
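
As a rough illustration of that proxying step, here is a minimal sketch of an Nginx server block, assuming the Node app listens on 127.0.0.1:3000 and the domain is example.com (both placeholders):

    # /etc/nginx/sites-available/example.com -- hypothetical reverse proxy config
    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://127.0.0.1:3000;             # forward requests to the Node app
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

In practice you'd also serve HTTPS on port 443; certbot (Let's Encrypt) can obtain a certificate and extend a config like this for you.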

There are of course security and availability concerns that are now your responsibility to handle and configure correctly in order to reach the same level of security and availability provided by a good PaaS. That's what you're paying the PaaS for. However, it is not too difficult to reach a level of security and availability that is more than sufficient for a small, free web app such as this one.

replies(2): >>41884057 #>>41884898 #
1. wonger_ ◴[] No.41884898[source]
Could you continue on about security and availability? This is exactly the gentle intro I've been looking for.

I'm guessing rate limiting, backups, and monitoring are important, but I'm not sure how to go about it.

replies(2): >>41887683 #>>41891538 #
2. mrngm ◴[] No.41887683[source]
I'm not entirely on the same page as the parent comment regarding "[t]hat's what you're paying a good PaaS for" in terms of security and availability. If the platform is down, having a service level agreement (SLA) is nice, but worthless because your application is also unavailable. Depending on how integrated your application is with said platform, migrating to another platform is difficult. If the platform cut corners regarding customer data separation (you know, because you can be cheaper than the competition), your users' passwords may be next on HIBP (haveibeenpwned.com).

This is of course a rather pessimistic view of platforms. Perhaps the sweet spot, which the parent commenter is probably referring to, is something where you have more control over the actual applications running, exposed network services, etc., such as a virtual machine or even dedicated hardware. This does require more in-depth knowledge of the systems involved (a good guideline, though I'm unsure where I picked it up, is to understand one abstraction layer above and one below the system you're working on). This also means you'll need to invest a lot of time in your own platform.

If you're looking for a gentle intro to security and availability, have a look at the OWASP Top Ten[0], which covers ten web application security subjects with prevention measures and example attacks. A deeper dive into security concepts can be found on the Arch Linux wiki[1]; it focuses on hardening computer systems in general, but for a start look at 1. Concepts, 2. Passwords, 5. Storage, 6. User setup, 11. Networks and Firewall. From 14. See Also, perhaps look into [2], not necessarily for the exact steps involved (it's from 2012), but for the overall thought process.

As for availability in an internet-accessible service, look into offering your services from multiple, distinct providers that are geographically separate. Automate the setup of your systems and data distribution, such that you can easily add or switch providers should you need to scale up. Have at least one external service regularly monitor your publicly-accessible infrastructure. Look into fail-over setups using round robin DNS, or multiple CDNs.
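
To illustrate the round-robin idea, here is a hypothetical zone file snippet (example.com and the documentation IP addresses are placeholders): publishing two A records for the same name makes resolvers hand out both addresses in rotation.

    ; hypothetical round-robin DNS across two providers
    app.example.com.   300   IN   A   203.0.113.10     ; VPS at provider A
    app.example.com.   300   IN   A   198.51.100.20    ; VPS at provider B

Note that plain round-robin DNS doesn't detect a dead host on its own; clients may still try the failed address, so it's usually combined with health-checked DNS or a separate failover mechanism.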

But I suppose that's just the tip of the iceberg.

[0] https://owasp.org/Top10/ [1] https://wiki.archlinux.org/title/Security [2] https://www.debian.org/doc/manuals/securing-debian-manual/in...

replies(1): >>41889509 #
3. Vegenoid ◴[] No.41889509[source]
> I'm not entirely on the same page as the parent comment regarding "[t]hat's what you're paying a good PaaS for" in terms of security and availability. If the platform is down, having a service level agreement (SLA) is nice, but worthless because your application is also unavailable.

> If the platform cut corners regarding customer data separation (you know, because you can be cheaper than the competition), your users' passwords may be next on HIBP (haveibeenpwned.com).

This all applies to running on a VPS in the cloud too. You have to own much more of the stack to avoid this than is usually realistic for one person running a free web app.

What I mean about the security and availability being provided for you is that you don't have to worry about configuring a firewall, configuring SSH and Nginx, patching the OS, etc.

4. Vegenoid ◴[] No.41891538[source]
TBH there's more that goes into it than I really want to type out here. LLMs are a good resource for this kind of thing; they generally give correct advice. A quick overview:

Security looks like:

- Ensure SSH (the method by which you'll access the server) is secured; a minimal command sketch follows this list. Here is a good article on steps to take to secure SSH on a new server (you don't have to make your username 16 random characters like the article says): https://hiandrewquinn.github.io/til-site/posts/common-sense-...

- Have a firewall running, which will block incoming network connections until you explicitly open ports. This keeps a misconfigured or forgotten program on the server from burning you. The easiest firewall is ufw ("uncomplicated firewall"). Here is a DigitalOcean article that goes into more depth than you probably need at first, or ask Claude/ChatGPT some questions about ufw: https://www.digitalocean.com/community/tutorials/how-to-set-...

- Keep the OS and programs (esp. Nginx/Apache and Node) up to date.
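
A minimal sketch of those three bullets on a fresh Debian/Ubuntu-style VPS (the commands and package names are assumptions; adapt them to your distro and cross-check with the SSH article linked above):

    # SSH: key-based auth only, no root login (edits /etc/ssh/sshd_config)
    sudo sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
    sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
    sudo systemctl restart ssh      # confirm key login works in a second session first!

    # Firewall: deny inbound by default, allow SSH and web traffic
    sudo ufw default deny incoming
    sudo ufw default allow outgoing
    sudo ufw allow OpenSSH
    sudo ufw allow 80/tcp
    sudo ufw allow 443/tcp
    sudo ufw enable

    # Updates: patch now and enable automatic security updates
    sudo apt update && sudo apt upgrade -y
    sudo apt install -y unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades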

Availability looks like:

- Have a backup of important data (the database); a minimal cron sketch follows this list. You can set up a 'cron job' that runs a shell script on a schedule, dumping the database to a file (e.g. with mysqldump or pg_dump) and copying that file to your backup destination, which could be some cloud storage or another VPS. If you can, backing up to two separate destinations is better than one, keeping a history of backups is good, and doing "health checks" of the backup system is good (periodically verify that backups are being taken as intended and that you could actually restore from one if needed).

- Ability to respond to outages, or failure of the host (the server/VPS). This means either having another machine you can fail over to (probably overkill if you don't have paying customers and an SLA), or being able to spin up a new server and deploy the app quickly if the server gets borked somehow and goes down. For that you have some options: keep a clear list of instructions you can perform manually (slowest and most painful), or automate the deployment process. This is what something like Ansible is for, or you can just use shell scripts. Using Docker can speed up and simplify deployment, since you're building an image that can then be deployed on a new server fairly simply. You will of course also need the backup of the data that you've hopefully been taking.

- Rate limiting may not be necessary depending on the popularity of your site, but it can be useful or necessary and the simplest way is to put your website behind Cloudflare: https://developers.cloudflare.com/learning-paths/get-started...
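
A minimal sketch of the backup idea from the first bullet, assuming PostgreSQL, a database named "app", and an rclone remote named "backups" pointing at some cloud storage (all of these names are assumptions):

    #!/usr/bin/env bash
    # /usr/local/bin/backup-db.sh -- hypothetical nightly database backup
    set -euo pipefail

    STAMP=$(date +%F)
    DUMP="/var/backups/app-$STAMP.sql.gz"

    # Dump and compress the database (use mysqldump instead for MySQL)
    pg_dump --no-owner app | gzip > "$DUMP"

    # Copy the dump off-site ("backups" is an assumed rclone remote)
    rclone copy "$DUMP" backups:app-db/

    # Keep only the 14 most recent local dumps
    ls -1t /var/backups/app-*.sql.gz | tail -n +15 | xargs -r rm --

Scheduled with a crontab entry such as:

    # run every night at 03:15, as a user that can read the database
    15 3 * * * /usr/local/bin/backup-db.sh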

There are "better" techniques to do all of those that require more know-how, which can prevent and handle more failure scenarios faster or more gracefully, and would be used in a professional context.