Your network authentication should not be a fun game or series of Rube Goldberg contraptions.
Use-cases:
1. helps auto-ban hosts running port scans or online vulnerability scanners
2. helps reduce further ingress for a few minutes as the hostile sees the site is "down". Generally, try to waste as much of a problem user's time as possible, as it changes the economics of breaking networked systems.
3. the firewall rule-trigger delay means hostiles have a harder time guessing which action triggered an IP ban (see the sketch after this list). If every login attempt costs 3 days, folks would have to be pretty committed to breaking into a simple website.
4. keeps failed-login log noise to a minimum, so spotting actual problems is easier
5. easier to forensically analyze the remote packet stream from a packet-dump tap, as only the key user traffic is present
6. buys time to patch vulnerable code when zero-day exploits hit other hosts' exposed services
7. most administrative password-less SSH key traffic should be tunneled over SSL web services, so attackers have a greater challenge figuring out whether dynamic service-switching is even active
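For what it's worth, the rule-trigger delay in item 3 can be as simple as a timer between the tripwire and the firewall call. A minimal sketch, assuming an nftables set created with a timeout flag; the command, delay, and ban period here are illustrative, not any particular distribution's defaults:

```python
# Minimal sketch of the "delayed ban" idea from the list above: the firewall
# rule is applied some minutes after the tripwire fires, so the remote host
# cannot easily tell which probe triggered it. The nft set name, the delay,
# and the ban period are illustrative assumptions.
import subprocess
import threading

BAN_DELAY_SECONDS = 300          # assumed delay between trigger and ban
BAN_PERIOD = "3d"                # assumed ban length ("3 days" from the text)

def schedule_ban(ip: str) -> None:
    """Apply the firewall ban only after a delay, on a background timer."""
    threading.Timer(BAN_DELAY_SECONDS, apply_ban, args=(ip,)).start()

def apply_ban(ip: str) -> None:
    # Hypothetical nftables set with a timeout; adjust to your own ruleset.
    subprocess.run(
        ["nft", "add", "element", "inet", "filter", "banned",
         f"{{ {ip} timeout {BAN_PERIOD} }}"],
        check=False,
    )

if __name__ == "__main__":
    schedule_ban("198.51.100.7")  # example address from TEST-NET-2
```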
People who say it isn't a "security policy" are somewhat correct, but they are also naive when it comes to the reality of dealing with nuisance web traffic.
Fail2ban is slightly different in that it sets up tripwires for failed email logins, known web-vulnerability scanners, etc., and then whispers that IP's ban period to the firewall (you must override the default config).
Finally, if the IP address for some application login session changes more than 5 times an hour, one should also whisper a ban to the firewalls. These IP ban rules are often automatically shared between groups to reduce forum spam, VoIP attacks, and problem users. Popular cloud-based VPN/proxies/Tor-exit-nodes run out of unique IPs faster than most assume.
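A rough sketch of that last rule, using the one-hour window and five-change threshold from the comment; the firewall hand-off is a placeholder for whatever ban mechanism you actually use:

```python
# Sketch of the "more than 5 IP changes per hour per session" rule above.
# notify_firewall is a stand-in for whispering the address to the firewall;
# the threshold and window come from the comment, the rest is an assumption.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 3600   # one hour
MAX_IP_CHANGES = 5

_last_ip: dict[str, str] = {}
_changes: dict[str, deque] = defaultdict(deque)

def notify_firewall(ip: str) -> None:
    print(f"ban {ip}")           # placeholder for the real firewall hook

def observe(session_id: str, ip: str, now: float | None = None) -> None:
    """Record the source IP seen for a session; ban on excessive churn."""
    now = time.time() if now is None else now
    if _last_ip.get(session_id) not in (None, ip):
        changes = _changes[session_id]
        changes.append(now)
        # Drop change events that fell out of the one-hour window.
        while changes and now - changes[0] > WINDOW_SECONDS:
            changes.popleft()
        if len(changes) > MAX_IP_CHANGES:
            notify_firewall(ip)
    _last_ip[session_id] = ip
```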
Have a nice day, =3
"Don’t waste resources putting lipstick on the pig."
I would never kink-shame someone who ignored the recent CVE-2025-48416, which proved exposing unprotected services is naive =3
Your services should simply be unreachable over anything but wireguard (or another secure VPN option).
I had some additional logic that gave me a really easy but unintuitive way to tell, with an incredibly high degree of confidence, the difference between a bot and a human-on-keyboard scenario. For what it's worth, I think that is the specific thing that makes it worth the effort.
If I have reasons to suspect it’s a bot I just drop the request and move on with my day. The signal to noise ratio isn’t worth it to me.
One may believe whatever they like, as both our intentions are clear, friend.
Have a wonderful day =3
At some point, the idealism of white-listed peers and VPN will fail due to maintenance and service costs. Two things may be true at the same time, friend. =3
https://www.poetry.com/poem/101535/the-blind-men-and-the-ele...
So we made coffee-money wasting spammers' time, and attacks stayed rudimentary. =3
However, even with all those choices, “port knocking” still wouldn’t be a solution for anything.
[edit]
Are you just searching for random WireGuard CVEs now?
CVE-2024-26950 was a *local-only* DoS and potential UaF requiring privileged access to wireguard netlink sockets.
- You should be using WireGuard.
- “Port knocking” is pointless theater.
99.98% of hostile traffic simply reuses already-published testing tools, or services like Shodan, to target hosts.
One shouldn't waste resources guessing the motives behind problem traffic. =3
IPSec is simply a luxury unavailable on some LANs =3
<edit>
Prioritizing administrative network-port traffic at the firewall is important for systems under abnormal stress.
Personally, I use fwknop for port knocking, as it doesn't suffer from replay attacks: the authorization is a single encrypted packet. But it still serves the same niche.
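For illustration only, here is the shape of that idea, not fwknop's actual wire format: a single authenticated packet carrying a timestamp and a one-time nonce, so a captured packet cannot simply be replayed. The key, fields, and freshness window below are assumptions, and real fwknop also encrypts the payload rather than only authenticating it:

```python
# Conceptual sketch of why a single authenticated packet resists replay, in
# the spirit of fwknop's SPA. NOT fwknop's format; fields are assumptions.
import hmac, hashlib, json, os, time, base64

KEY = b"shared-secret-key"        # assumed pre-shared key
FRESHNESS_SECONDS = 30
_seen_nonces: set[str] = set()    # server-side replay cache

def build_packet(allow_port: int) -> bytes:
    body = json.dumps({
        "port": allow_port,
        "ts": int(time.time()),
        "nonce": base64.b64encode(os.urandom(16)).decode(),
    }).encode()
    mac = hmac.new(KEY, body, hashlib.sha256).digest()
    return base64.b64encode(mac + body)

def verify_packet(packet: bytes) -> dict | None:
    raw = base64.b64decode(packet)
    mac, body = raw[:32], raw[32:]
    if not hmac.compare_digest(mac, hmac.new(KEY, body, hashlib.sha256).digest()):
        return None                               # forged or corrupted
    fields = json.loads(body)
    if abs(time.time() - fields["ts"]) > FRESHNESS_SECONDS:
        return None                               # stale: replay or clock skew
    if fields["nonce"] in _seen_nonces:
        return None                               # exact replay of an old packet
    _seen_nonces.add(fields["nonce"])
    return fields

if __name__ == "__main__":
    pkt = build_packet(22)
    assert verify_packet(pkt) is not None
    assert verify_packet(pkt) is None             # second use is rejected
```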
Also, by collecting data on the IP addresses that are triggering fail2ban, I can identify networks and/or ASes that disproportionately host malicious traffic and block them at a global level.
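A minimal sketch of that aggregation step, assuming /24 granularity and an arbitrary threshold; mapping prefixes to ASes would need an external data source such as a routing table dump:

```python
# Fold the addresses that tripped fail2ban into their enclosing prefixes and
# flag the networks that show up disproportionately. The /24 granularity and
# the threshold are assumptions.
import ipaddress
from collections import Counter

def noisy_networks(banned_ips: list[str], threshold: int = 5) -> list[str]:
    counts = Counter(
        str(ipaddress.ip_network(f"{ip}/24", strict=False)) for ip in banned_ips
    )
    return [net for net, hits in counts.most_common() if hits >= threshold]

if __name__ == "__main__":
    sample = ["203.0.113.1", "203.0.113.2", "203.0.113.9",
              "203.0.113.77", "203.0.113.200", "198.51.100.4"]
    print(noisy_networks(sample))   # ['203.0.113.0/24']
```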
It's possible that some compliance regimes exist that mandate keeping logs of all unsuccessful authentication attempts. There's surely a compliance regime out there that mandates every possible permutation of thing.
But the far more common permutation, like we see with NIST, is that the organization has to articulate which logs it keeps, why those logs are sufficient for conducting investigations into system activity, and how it supports those investigations.
You're back on prevention instead of detection, but also no: an attacker with valid creds isn't going to run other checks first before using them.
And yes: by volume, most attacks on the internet are just spam reusing published tools and IP lists. And that traffic is zero percent risky unless your auth is already busted.
Logging both successful and failed requests is important for troubleshooting my systems, especially the client-facing ones (a subset of which are the only ones that are accessible to the open internet), and failed authentication attempts are just one sort of request failure. Sometimes those failures are legitimate client systems where someone misconfigured something, and the logs allow me to troubleshoot that after the fact. That it can also be fed to fail2ban to block attackers is just another benefit.
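As a hedged sketch of that habit: the point is to keep both outcomes on single lines with the client address and a reason, so a human can troubleshoot a misconfigured client later and a fail2ban filter can still pull the IP out. The field names and format here are assumptions, not any particular system's log schema:

```python
# Record both auth outcomes with enough context (client address, account,
# reason) to troubleshoot a legitimate but misconfigured client after the
# fact, in a one-line format a fail2ban regex could also match.
import logging

logging.basicConfig(
    filename="auth.log",
    format="%(asctime)s %(levelname)s %(message)s",
    level=logging.INFO,
)

def record_auth(client_ip: str, account: str, ok: bool, reason: str = "") -> None:
    if ok:
        logging.info("auth success for %s from %s", account, client_ip)
    else:
        # Keep the failing address and the reason on one line so a regex can
        # pull the IP out, and humans can see why a real client is failing.
        logging.warning("auth failure for %s from %s: %s", account, client_ip, reason)

record_auth("192.0.2.10", "acme-batch-upload", False, "expired client certificate")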
> You can't meaningfully characterize attacker traffic this way. They'll come from any AS they want to.
Obviously, in a world full of botted computers, IoT devices, etc., it's true that an attacker can hypothetically come from anywhere, but in practice, at least from the perspective of a small service provider, I just don't see that happen. I'm aware that you are involved with much larger-scale operations than I'm likely to ever touch, so perhaps that's where our experiences differ. No one's targeting my services specifically; they're just scanning the internet for whatever's out there and occasionally happen to stumble upon one of my systems that needs to be accessible from wherever my clients happen to bring their devices.
Sure, I see random domestic residential ISP addresses get banned from individual servers from time to time, but the organized attacks I see usually come from small hosting providers halfway around the world from my clients. I have on multiple occasions seen fail2ban fire off on rapidly sequential IP addresses, like xxx.xxx.xxx.1 followed by xxx.xxx.xxx.2 and then xxx.xxx.xxx.3, or in other cases a series of semi-random addresses all in the same subnet, which then triggers my network block and magically they're stopped instead of just moving on to another address. If I were packet sniffing on the outside of the relevant firewall, I'm sure I'd see another address in the blocked network trying to do its thing, but I've never looked.
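The escalation that catches those rapid-fire sequences can be sketched roughly like this; the /24 grouping, the three-distinct-hosts threshold, and the ten-minute window are assumptions, and the block call is a placeholder for the real firewall hook:

```python
# When several distinct addresses from the same subnet get individually
# banned in a short window, block the whole network instead of playing
# whack-a-mole. Thresholds and the /24 grouping are assumptions.
import time
import ipaddress
from collections import defaultdict

WINDOW_SECONDS = 600
DISTINCT_HOSTS_BEFORE_NET_BAN = 3
_recent: dict[str, dict[str, float]] = defaultdict(dict)

def block_network(net: str) -> None:
    print(f"blocking {net}")       # placeholder for the real firewall hook

def on_host_banned(ip: str, now: float | None = None) -> None:
    now = time.time() if now is None else now
    net = str(ipaddress.ip_network(f"{ip}/24", strict=False))
    hosts = _recent[net]
    hosts[ip] = now
    # Forget individual bans that fell outside the window.
    for old_ip, ts in list(hosts.items()):
        if now - ts > WINDOW_SECONDS:
            del hosts[old_ip]
    if len(hosts) >= DISTINCT_HOSTS_BEFORE_NET_BAN:
        block_network(net)

for addr in ("203.0.113.1", "203.0.113.2", "203.0.113.3"):
    on_host_banned(addr)
```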
Open source tools are good at actually doing the job, as long as it's a programmer type of job. We've known how to do unbreakable encryption for decades now; even PGP still hasn't been broken. Wireguard is one of those solutions in the "so simple it obviously has no bugs" category, and that's actually what differentiates it from protocols like OpenVPN.
Think about the recent satellite-listening talk at DEF CON and how that massive data leak could have been prevented by even just running your traffic through AES on a Raspberry Pi, with a fixed key of the CEO's cat's name. But that's a non-corporate solution, and so not acceptable to a corporation, which will only ever consider enabling encryption if it comes with a six-figure-per-year license fee, which is what the satellite box makers charged for it. Corporations, as a rule, are only barely competent enough to make money and no more.
I don't like or trust OpenVPN. I'd sooner expose OpenSSH itself, which really has a pretty stunning security track record.
> The need to limit unsuccessful logon attempts and take subsequent action when the maximum number of attempts is exceeded applies regardless of whether the logon occurs via a local or network connection. Due to the potential for denial of service, automatic lockouts initiated by systems are usually temporary and automatically release after a predetermined, organization-defined time period.
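A minimal sketch of that control as the quote describes it: count unsuccessful logons, lock when the organization-defined maximum is exceeded, and release the lock automatically after a predetermined period rather than permanently, which limits the denial-of-service angle. The specific thresholds below are assumptions:

```python
# Temporary, auto-releasing account lockout after too many failed logons.
# MAX_FAILURES and LOCKOUT_SECONDS stand in for organization-defined values.
import time

MAX_FAILURES = 5
LOCKOUT_SECONDS = 900            # assumed organization-defined release period
_failures: dict[str, int] = {}
_locked_until: dict[str, float] = {}

def is_locked(account: str, now: float | None = None) -> bool:
    now = time.time() if now is None else now
    return _locked_until.get(account, 0.0) > now

def record_failed_logon(account: str, now: float | None = None) -> None:
    now = time.time() if now is None else now
    _failures[account] = _failures.get(account, 0) + 1
    if _failures[account] >= MAX_FAILURES:
        _locked_until[account] = now + LOCKOUT_SECONDS   # temporary, auto-releasing
        _failures[account] = 0
```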
A lot of VPN installations are simply done wrong, and it only takes 1 badly configured client or cloud side-channel to make it pointless. IPSec is not supported on a lot of LANs, and 5k users would prove rather expensive to administer.
Also, GnuPG Kyber will not be supported by VPN software anytime soon, but it would be super cool if it happens. =3
The biggest weakness in VPN is client-side cross-network leaks.
IPSec is simply a luxury if the LAN supports it, but also an administrative nightmare for >5k users. =3
Adding layers of complexity rarely improves security, and doesn't usually address the underlying issue of accountability. And I often ponder if a bastion host is even still meaningful in modern clouds. =3
https://www.cve.org/CVERecord/SearchResults?query=ipsec
https://www.cve.org/CVERecord/SearchResults?query=wireguard
https://www.cve.org/CVERecord/SearchResults?query=strongswan
Best of luck, and straw-man arguments are never taken seriously. =3
Almost; it is more that I don't care specifically why an IPSec option is often a liability, and would rather stick with something less silly.
Ad hominem attacks do not change the fact that new issues in IPSec/VPN approaches are found regularly. Pick any failure mode(s) on the list that apply to your specific use case and platform... or find new ones if you are still bored.
Have a great day =3
I'm not totally following what Fail2Ban has to do with Wireguard. Are we talking strictly about homelabs you don't expose to the internet?
Because I have a homelab I can connect to with Wireguard. That's great. But there are certain services I want to expose to everybody. So I have a VPS that can connect to my homelab via Wireguard and forward certain domain traffic to it.
That's a safe setup in that I don't expose my IP to the internet and don't have to open ports, but I could still be DDOS'd. So would it not make sense for me to use Fail2Ban (or some kind of rate limiting) even though I'm using Wireguard?
Well, it's a waste of our time and resources. I'm not just going to let people make 100 requests per second for no reason.
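A token bucket is one common way to enforce that kind of per-client ceiling, and the public-facing VPS could apply it before forwarding anything over the WireGuard tunnel; it is not a DDoS defense on its own. A minimal sketch, where the rate and burst values are assumptions rather than anyone's actual config:

```python
# Per-client token bucket: allow a sustained rate plus a small burst, and
# drop (or 429) anything beyond that before it reaches the tunnel.
import time

RATE_PER_SECOND = 10.0     # assumed sustained requests allowed per client
BURST = 20.0               # assumed short-burst allowance

_buckets: dict[str, tuple[float, float]] = {}   # ip -> (tokens, last_refill)

def allow_request(client_ip: str, now: float | None = None) -> bool:
    now = time.time() if now is None else now
    tokens, last = _buckets.get(client_ip, (BURST, now))
    tokens = min(BURST, tokens + (now - last) * RATE_PER_SECOND)
    if tokens < 1.0:
        _buckets[client_ip] = (tokens, now)
        return False                    # over the limit: reject this request
    _buckets[client_ip] = (tokens - 1.0, now)
    return True
```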