
454 points | positiveblue
ctoth No.45068556
The web doesn't need attestation. It doesn't need signed agents. It doesn't need Cloudflare deciding who's a "real" user agent. It needs people to remember that "public" means PUBLIC and implement basic damn rate limiting if they can't handle the traffic.

The web doesn't need to know if you're a human, a bot, or a dog. It just needs to serve bytes to whoever asks, within reasonable resource constraints. That's it. That's the open web. You'll miss it when it's gone.

replies(9): >>45068690 >>45068959 >>45069370 >>45069779 >>45069921 >>45070226 >>45070359 >>45071126 >>45071216
johncolanduoni No.45068690
Basic damn rate limiting is pretty damn exploitable. Even ignoring botnets (which is impossible), usefully rate limiting IPv6 is anything but basic. If you just pick some prefix from /48 to /64 to key your rate limits on, you'll either be exploitable by IPs from providers that hand out /48s like candy or you'll bucket a ton of mobile users together for a single rate limit.
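
To make the tradeoff concrete, here is a minimal sketch of prefix-keyed rate limiting: a plain token bucket keyed on an assumed /56. The prefix length, rate and burst are placeholders, not recommendations.

    import ipaddress
    import time
    from collections import defaultdict

    # The prefix length is the contested knob: key on /64 and a provider that
    # hands out /48s gives each customer 65,536 separate buckets; key on /48
    # and a mobile carrier's unrelated users all land in one bucket.
    PREFIX_LEN = 56   # placeholder compromise
    RATE = 10         # tokens refilled per second
    BURST = 50        # bucket capacity

    _buckets = defaultdict(lambda: {"tokens": float(BURST), "ts": time.monotonic()})

    def bucket_key(addr: str) -> str:
        """Collapse an address to the key the rate limit is charged against."""
        ip = ipaddress.ip_address(addr)
        if ip.version == 6:
            # Mask the address down to its /PREFIX_LEN network.
            return str(ipaddress.ip_network(f"{addr}/{PREFIX_LEN}", strict=False))
        return addr   # IPv4: one bucket per address

    def allow(addr: str) -> bool:
        """Classic token bucket: refill from elapsed time, spend one token per request."""
        b = _buckets[bucket_key(addr)]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["ts"]) * RATE)
        b["ts"] = now
        if b["tokens"] >= 1:
            b["tokens"] -= 1
            return True
        return False
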
replies(1): >>45068822
ctoth No.45068822
You make unauthenticated requests cheap enough that you don't care about volume. Reserve rate limiting for authenticated users where you have real identity. The open web survives by being genuinely free to serve, not by trying to guess who's "real."

A basic Varnish setup should get you most of the way there, no agent signing required!
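
A rough sketch of that kind of setup, with a placeholder backend address and TTLs: anything without cookies or an Authorization header is served from cache, everything else goes to the origin.

    vcl 4.1;

    backend default {
        .host = "127.0.0.1";   # origin app server (placeholder)
        .port = "8080";
    }

    sub vcl_recv {
        # Anonymous traffic (no cookies, no auth header): look it up in the cache.
        if (!req.http.Authorization && !req.http.Cookie) {
            return (hash);
        }
        # Authenticated traffic goes to the backend; rate limit it there.
        return (pass);
    }

    sub vcl_backend_response {
        # Keep anonymous responses around even if the origin is conservative.
        if (!bereq.http.Authorization && !bereq.http.Cookie) {
            set beresp.ttl = 60s;
            set beresp.grace = 1h;   # serve stale while refetching
        }
    }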

replies(3): >>45068881 >>45069206 >>45070262
Lammy No.45069206
> You make unauthenticated requests cheap enough that you don't care about volume.

In the days before mandatory TLS it was so easy to set up a Squid proxy on the edge of my network and cache every plain-HTTP resource for as long as I wanted.

Like yeah, yeah, sure, it sucked that ISPs could inject trackers and stuff into page contents, but I'm starting to think the downsides of mandatory TLS outweigh the upsides. We made the web more Secure at the cost of making it less Private. We got Google Analytics and all the other spyware running over TLS and simultaneously made it that much harder for any normal person to host anything online.
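
The squid.conf for that kind of edge cache was only a handful of lines; the paths, sizes and refresh times below are illustrative placeholders, and of course it only works for plain HTTP.

    http_port 3128
    cache_dir ufs /var/spool/squid 20000 16 256
    maximum_object_size 512 MB

    # Keep plain-HTTP objects far longer than the origin asks for.
    refresh_pattern -i \.(jpg|png|gif|css|js|zip|iso)$  10080 90% 525600 override-expire ignore-reload
    refresh_pattern .                                    1440 80%  10080

    # Only the local network may use the proxy.
    acl localnet src 192.168.0.0/16
    http_access allow localnet
    http_access deny all
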

replies(1): >>45069596
AnthonyMouse No.45069596
You can still do that: have the caching reverse proxy at the edge of the network be the thing that terminates TLS.
replies(1): >>45069838
Lammy No.45069838
Not really. At minimum you will break all of these sites on the HSTS preload list: https://source.chromium.org/chromium/chromium/src/+/main:net...
replies(2): >>45069989 >>45079232
AnthonyMouse No.45079232
It isn't the client side that does this; it's the server side. Doing it on the client side has a nominal benefit in the typical case, but it's of very little value to you when the problem is some misbehaving third-party AI scraper taking down the server just when you need something from it that isn't already in the local cache.

If you have three local machines, you might be able to turn three requests into one, and that only if they all visit the same site rather than different people visiting different sites.

If you do this on the server, a request that requires executing PHP code and three SQL queries goes from happening on every request for the same resource to happening once; subsequent requests just shovel the cached response back out the pipe instead of processing it again. Instead of reducing the number of requests that reach the back end by 3:1, you reduce it by a million to one.

And that doesn't cause any HSTS problems because a reverse proxy operated by the site owner has the real certificate in it.
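
Concretely, that can be as simple as a caching reverse proxy in front of the app that terminates TLS with the site's own certificate. An nginx sketch, with placeholder hostnames, paths and TTLs:

    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge:10m
                     max_size=10g inactive=60m;

    server {
        listen 443 ssl;
        server_name example.org;                        # placeholder
        ssl_certificate     /etc/ssl/example.org.crt;   # the site's real certificate
        ssl_certificate_key /etc/ssl/example.org.key;

        location / {
            proxy_pass http://127.0.0.1:8080;           # the PHP backend
            proxy_cache edge;
            proxy_cache_valid 200 301 10m;
            proxy_cache_use_stale error timeout updating;
            proxy_cache_lock on;                        # collapse a stampede into one backend fetch
            add_header X-Cache-Status $upstream_cache_status;
        }
    }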