
556 points campuscodi | 9 comments
1. rcarmo ◴[] No.41867081[source]
Ironically, the site seems to currently be hugged to death, so maybe they should consider using Cloudflare to deal with HN traffic?
replies(2): >>41867409 #>>41867964 #
2. timeon ◴[] No.41867409[source]
If it is unintentional DDoS, we can wait. Not everything needs to be on demand.
replies(1): >>41867495 #
3. dewey ◴[] No.41867495[source]
The website is built to get attention, the attention is here right now. Nobody will remember to go back tomorrow and read the site again when it’s available.
replies(1): >>41869655 #
4. sofixa ◴[] No.41867964[source]
It doesn't have to be CloudFlare; any static web host that can scale to infinity will do (CloudFlare is one with Pages, but there's also Google with Firebase Hosting, AWS with Amplify, Microsoft with something in Azure with a verbose name, Netlify, Vercel, GitHub Pages, etc.).
replies(1): >>41869132 #
5. kawsper ◴[] No.41869132[source]
Or just add Varnish or Nginx configured with a cache in front.
replies(2): >>41869241 #>>41870150 #
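The caching approach suggested above can be sketched as a minimal Nginx micro-cache in front of an origin server. This is illustrative, not from the thread: the cache path, zone name, and upstream address are assumptions.

```nginx
# Hypothetical cache zone; paths and sizes are placeholders.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=static_cache:10m
                 max_size=1g inactive=10m use_temp_path=off;

server {
    listen 80;

    location / {
        proxy_cache static_cache;
        proxy_cache_valid 200 301 10m;                 # cache good responses briefly
        proxy_cache_use_stale error timeout updating;  # serve stale copies during spikes
        proxy_cache_lock on;                           # collapse concurrent misses into one origin fetch
        add_header X-Cache-Status $upstream_cache_status;
        proxy_pass http://127.0.0.1:8080;              # assumed origin application
    }
}
```

With `proxy_cache_lock` and `proxy_cache_use_stale`, a traffic spike mostly hits the cache rather than the origin, which is the point of the suggestion.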
6. sofixa ◴[] No.41869241{3}[source]
That can still exhaust system resources on the box it's running on (file descriptors, inodes, ports, CPU, memory, bandwidth, etc.) if the traffic spike is big enough.

For entirely static content, it's so much easier (and cheaper; all of the static hosting providers have an extremely generous free tier) to use static hosting.

And I say this as an SRE at heart who runs Kubernetes and Nomad for fun across a number of nodes at home and on various providers - my blog is on a static host. Use the appropriate solution for each task.

7. BlueTemplar ◴[] No.41869655{3}[source]
I'm not sure an open web can exist under this kind of assumption...

Once you start chasing views, it's going to come at the detriment of everything else.

replies(1): >>41869739 #
8. dewey ◴[] No.41869739{4}[source]
This happened at least 15 years ago and we are doing okay.
9. vundercind ◴[] No.41870150{3}[source]
I used to serve low-tens-of-MB .zip files - heavier than a web page and a few images or what have you - statically from Apache2 on a boring Linux server that'd qualify as potato-tier today, with traffic spikes into the hundreds of thousands of requests per minute. Other endpoints, gated by PHP setting a header to tell Apache2 to serve the file directly once the client authenticated correctly, handled tens of thousands per minute, and I think that path could have gone a lot higher; we never really gave it a workout. Neither workload really taxed the hardware that much.
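The PHP-sets-a-header pattern described here is commonly done with Apache's mod_xsendfile. A minimal sketch, assuming `XSendFile On` and an `XSendFilePath` covering the download directory in the vhost config; the path and token check below are illustrative, not from the comment:

```php
<?php
// Hypothetical auth gate: PHP only decides whether the client may have
// the file, then hands the actual transfer back to Apache.
$authenticated = isset($_GET['token'])
    && hash_equals('expected-token', $_GET['token']);  // placeholder check

if (!$authenticated) {
    http_response_code(403);
    exit;
}

// mod_xsendfile intercepts this header and streams the file itself,
// so the download costs about as much as serving it statically.
header('Content-Type: application/zip');
header('X-Sendfile: /srv/downloads/release.zip');  // assumed file path
```

Because PHP never pushes the file's bytes through itself, the expensive part of each request is handled by the same static-file path Apache uses anyway.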

Before that, it was on a mediocre-even-at-the-time dedicated-cores VM. That caused performance problems... because its Internet "pipe" was straw-sized, it turned out. The server itself was fine.

Web server performance has regressed amazingly badly in the world of the Cloud. Even "serious" sites have decided the performance equivalent of shitty shared-host Web hosting is a great idea and that introducing all the problems of distributed computing at the architecture level will help their moderate-traffic site work better (LOL; LMFAO), so now they need Cloudflare and such just so their "scalable" solution doesn't fall over in a light breeze.