Your network authentication should not be a fun game or series of Rube Goldberg contraptions.
As a side note, I happen to be reading a book at the moment that contains a fairly detailed walkthrough of the procedure required to access the Russian SVR's headquarters in New York in 1995.
Think of this as an analogue version (in no way a perfect analogy), but it does include a step with more or less the same security properties as this. Anyway, here's a relevant quote:
“After an SVR officer passed through various checkpoints in the mission’s lower floors, he would take an elevator or stairs to an eighth-floor lobby that had two steel doors. Neither had any identifying signs.
One was used by the SVR, the other by the GRU. The SVR’s door had a brass plate and knob, but there was no keyhole. To open the door, the head of the screw in the lower right corner of the brass plate had to be touched with a metal object, such as a wedding ring or a coin.
The metal would connect the screw to the brass plate, completing an electrical circuit that would snap open the door’s bolt lock and sometimes shock the person holding the coin. The door opened into a small cloakroom. No jackets or suit coats were allowed inside the rezidentura because they could be used to conceal documents and hide miniature cameras.
SVR officers left their coats, cell phones, portable computers, and all other electronic devices in lockers. A camera videotaped everyone who entered the cloakroom. It was added after several officers discovered someone had stolen money from wallets left in jackets. Another solid steel door with a numeric lock that required a four-digit code to open led from the cloakroom into the rezidentura.
A male secretary sat near the door and kept track of who entered, exited, and at what times. A hallway to the left led to the main corridor, which was ninety feet long and had offices along either side.”
Excerpt from Comrade J by Pete Earley
As another funny side note… years ago I discovered that the North Koreans had a facility like this in Singapore, where I was at the time, which they used to run a bunch of intelligence financing operations involving drugs, and I thought it would be funny to go and visit. From memory it was in a business complex rather than a dedicated diplomatic facility. But as I recall it was a similar scenario: an unmarked door with no keyhole.
Every complex service running is a door someone can potentially break. Even with the most secure and battle-tested service, you never know where someone fucked up and introduced an exploit or backdoor. It's happened too often not to be a concern. The XZ Utils backdoor, for example, was just last year.
> Your network authentication should not be a fun game or series of Rube Goldberg contraptions.
If there is no harm, who cares...
Use-cases:
1. helps auto-ban hosts doing port scans or using online vulnerability scanners
2. helps reduce further ingress for a few minutes while the hostile sees the site as "down". Generally, try to waste as much of a problem user's time as possible, as it changes the economics of breaking networked systems.
3. the firewall rule-trigger delay means hostiles have a harder time guessing which action triggered an IP ban. If every login attempt costs 3 days, folks would have to be pretty committed to breaking into a simple website.
4. keeps failed-login log noise to a minimum, so spotting actual problems is easier
5. easier to forensically analyze the remote packet stream when running a packet-dump tap, as only the key user traffic is present
6. buys time to patch vulnerable code when zero-day exploits hit other hosts' exposed services
7. most administrative passwordless ssh-key traffic should be tunneled over SSL web services, so attackers have a greater challenge figuring out whether dynamic service-switching is even active
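For anyone who hasn't implemented it, the server side of classic port knocking is just a tiny per-IP state machine. Here's a minimal Python sketch; the knock sequence, timeout, and class name are all illustrative, not from any particular knockd implementation:

```python
import time

# Illustrative knock sequence and timeout; real deployments vary.
KNOCK_SEQUENCE = [7000, 8000, 9000]
KNOCK_TIMEOUT = 10.0  # seconds allowed between consecutive knocks


class KnockTracker:
    """Tracks each source IP's progress through the knock sequence."""

    def __init__(self, sequence=KNOCK_SEQUENCE, timeout=KNOCK_TIMEOUT):
        self.sequence = sequence
        self.timeout = timeout
        self.progress = {}  # ip -> (next_expected_index, last_knock_time)

    def hit(self, ip, port, now=None):
        """Record a SYN to `port` from `ip`; return True when the full
        sequence has been completed in order (caller would then open
        the firewall for that IP)."""
        now = time.monotonic() if now is None else now
        idx, last = self.progress.get(ip, (0, now))
        if now - last > self.timeout:
            idx = 0  # sequence expired; start over
        if port == self.sequence[idx]:
            idx += 1
        else:
            idx = 0  # wrong port resets the sequence
        if idx == len(self.sequence):
            del self.progress[ip]
            return True
        self.progress[ip] = (idx, now)
        return False
```

This also makes the weakness obvious: the "secret" is a short, replayable plaintext sequence visible to anyone sniffing the path.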
People who say it isn't a "security policy" are somewhat correct, but are also naive about the reality of dealing with nuisance web traffic.
Fail2ban is slightly different in that it sets up tripwires for failed email logins, known web-vulnerability scanners, etc., then whispers that IP's ban period to the firewall (the default config must be overridden).
Finally, if the IP address for some application login session changes more than 5 times an hour, one should also whisper a ban to the firewalls. These IP ban rules are often automatically shared between groups to reduce forum spam, VoIP attacks, and problem users. Popular cloud-based VPN/proxies/Tor-exit-nodes run out of unique IPs faster than most assume.
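The subnet-escalation part of this can be sketched in a few lines of Python. This is not fail2ban's own logic, just an illustration of the idea: once enough distinct banned IPs cluster in the same /24, the whole network becomes a ban candidate (the threshold is arbitrary here):

```python
from collections import Counter
from ipaddress import ip_network

SUBNET_BAN_THRESHOLD = 3  # illustrative: distinct banned hosts per /24


def subnets_to_ban(banned_ips, threshold=SUBNET_BAN_THRESHOLD):
    """Given individually banned IPv4 addresses, return the /24 networks
    with at least `threshold` distinct banned hosts, as candidates for a
    network-wide firewall ban."""
    counts = Counter(
        ip_network(f"{ip}/24", strict=False) for ip in set(banned_ips)
    )
    return sorted(str(net) for net, n in counts.items() if n >= threshold)
```

The output would then be whispered to the firewall (or shared with peer groups) rather than acted on per-IP.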
Have a nice day, =3
"Don’t waste resources putting lipstick on the pig."
I would never kink-shame someone that ignored the recent CVE-2025-48416, that proved exposing unprotected services is naive =3
But I see you’ve backpedaled to this being about log noise, not security.
Your services should simply be unreachable over anything but wireguard (or another secure VPN option).
I had some additional logic that gave me a really easy but unintuitive way to tell, with an incredibly high degree of confidence, the difference between a bot and a human-on-keyboard scenario, and for what it's worth I think that is the specific thing that makes it worth the effort.
If I have reasons to suspect it’s a bot I just drop the request and move on with my day. The signal to noise ratio isn’t worth it to me.
“Port knocking” et al. were most definitely not.
OpenVPN is basically 1000 configuration options and magic incantations wearing a trenchcoat, and if you get any of them wrong the whole thing crumbles (or worse, appears to work but is not secure).
One may believe whatever they like, as both our intentions are clear friend.
Have a wonderful day =3
At some point, the idealism of whitelisted peers and VPNs will fail due to maintenance and service costs. Two things may be true at the same time, friend. =3
https://www.poetry.com/poem/101535/the-blind-men-and-the-ele...
Is knocking incredibly weak security through obscurity? Sure, but part of what it does is cut down on log volume.
So for coffee money we wasted spammers' time, and attacks stayed rudimentary. =3
However, even with all those choices, “port knocking” still wouldn’t be a solution for anything.
[edit]
Are you just searching for random WireGuard CVEs now?
CVE-2024-26950 was a *local-only* DoS and potential UaF requiring privileged access to WireGuard netlink sockets.
- You should be using WireGuard.
- “Port knocking” is pointless theater.
99.98% of hostile traffic simply reuses already-published testing tools, or services like Shodan, to target hosts.
One shouldn't waste resources guessing the motives behind problem traffic. =3
Just skip the plaintext password (the sequence of ports transmitted) and use certificate based auth, as you note below.
IPSec is simply a luxury unavailable on some LANs =3
<edit>
Prioritizing administrative port traffic at the firewall is important for systems under abnormal stress.
Personally I use fwknop for port knocking, as its encrypted single-packet authorization doesn't suffer from replay attacks. But it still serves the same niche.
The most mundane setup is two peers with each other’s public keys that let each peer talk to the other via the WireGuard link.
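For reference, that mundane two-peer setup is only a handful of config lines. Keys, addresses, and the endpoint below are placeholders, not a working config:

```ini
# /etc/wireguard/wg0.conf on peer A (all values are placeholders)
[Interface]
PrivateKey = <peer-A-private-key>
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
# Peer B
PublicKey = <peer-B-public-key>
AllowedIPs = 10.0.0.2/32
```

Peer B's config mirrors this, with peer A's public key and an `Endpoint = <peer-A-host>:51820` line so it knows where to initiate the handshake.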
Also, by collecting data on the IP addresses that trigger fail2ban, I can identify networks and/or ASes that disproportionately host malicious traffic and block them at a global level.
It's possible that some compliance regimes exist that mandate keeping logs of all unsuccessful authentication attempts. There's surely a compliance regime out there that mandates every possible permutation of thing.
But the far more common permutation, like we see with NIST, is that the organization has to articulate which logs it keeps, why those logs are sufficient for conducting investigations into system activity, and how it supports those investigations.
You're back on prevention instead of detection, but also no: an attacker with valid creds isn't going to run other checks first before using them.
And yes: by volume, most attacks on the internet are just spam reusing published tools and IP lists. And that traffic is zero percent risky unless your auth is already busted.
Logging both successful and failed requests is important for troubleshooting my systems, especially the client-facing ones (a subset of which are the only ones that are accessible to the open internet), and failed authentication attempts are just one sort of request failure. Sometimes those failures are legitimate client systems where someone misconfigured something, and the logs allow me to troubleshoot that after the fact. That it can also be fed to fail2ban to block attackers is just another benefit.
> You can't meaningfully characterize attacker traffic this way. They'll come from any AS they want to.
Obviously in a world full of botted computers, IoT devices, etc. it's true that an attacker can hypothetically come from anywhere, but in practice at least from the perspective of a small service provider I just don't see that happen. I'm aware that you are involved with much larger scale operations than I'm likely to ever touch so perhaps that's where our experiences differ. No one's targeting my services specifically, they're just scanning the internet for whatever's out there and occasionally happen to stumble upon one of my systems that needs to be accessible to wherever my clients happen to bring their devices.
Sure, I see random domestic residential ISP addresses get banned from individual servers from time to time, but those are never the organized attacks, which usually come from small hosting providers halfway around the world from my clients. I have on multiple occasions seen fail2ban fire off on rapidly sequential IP addresses like xxx.xxx.xxx.1 followed by xxx.xxx.xxx.2 then xxx.xxx.xxx.3, or in other cases a series of semi-random addresses all in the same subnet, which then triggers my network block and magically they're stopped instead of just moving on to another network. If I were packet sniffing on the outside of the relevant firewall, I'm sure I'd see another address in the blocked network trying to do its thing, but I've never looked.
Open source tools are good at actually doing the job, as long as it's a programmer type of job. We've known how to do unbreakable encryption for decades now. Even PGP still hasn't been broken. Wireguard is one of those solutions in the "so simple it has obviously no bugs" category - that's actually what differentiates it from protocols like OpenVPN.
Think about the recent satellite listening talk at DEFCON. That massive data leak could have been prevented by even just running your traffic through AES with a fixed key of the CEO's cat's name on a Raspberry Pi, but that's a non-corporate solution and so not acceptable to a corporation, who will only ever consider enabling encryption if it comes with a six-figure-per-year license fee, which is what the satellite box makers charged for it. Corporations, as a rule, are only barely competent enough to make money and no more.
It's not extra security but it is a little extra efficiency.
Wireguard has something like this built in though, the PresharedKey (which is in addition to the public key crypto, and doesn't reduce your security to the level of a shared-key system). It's still more work to verify that than a port knock however.
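Concretely, that's a single extra line in each peer's `[Peer]` section (values below are placeholders). Both sides must set the same preshared key, generated with `wg genpsk`:

```ini
[Peer]
PublicKey = <peer-public-key>
PresharedKey = <output-of-wg-genpsk>
AllowedIPs = 10.0.0.2/32
```

The PSK gets mixed into the handshake as an additional symmetric secret on top of the Curve25519 exchange, so an attacker would need to break both.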
IMO, "only wireguard" is too restrictive of a policy - I also trust openssh and nginx to be open to the internet, if configured moderately carefully. Most FOSS servers that are widely deployed on the internet are safe to be deployed on the internet, or we'd know about it. I reviewed something that's not widely deployed on the internet though (Apache Zookeeper) and couldn't convince myself that every code path was properly checking authentication. That would have to go behind a VPN.
I don't like or trust OpenVPN. I'd sooner expose OpenSSH itself, which has really a pretty stunning security track record.
> The need to limit unsuccessful logon attempts and take subsequent action when the maximum number of attempts is exceeded applies regardless of whether the logon occurs via a local or network connection. Due to the potential for denial of service, automatic lockouts initiated by systems are usually temporary and automatically release after a predetermined, organization-defined time period.
A lot of VPN installations are simply done wrong, and it only takes one badly configured client or cloud side-channel to make them pointless. IPSec is not supported on a lot of LANs, and 5k users would prove rather expensive to administer.
Also, GnuPG Kyber will not be supported by VPN software anytime soon, but it would be super cool if it happens. =3
The biggest weakness in VPN is client-side cross-network leaks.
IPSec is simply a luxury if the LAN supports it, but also an administrative nightmare for >5k users. =3
Adding layers of complexity rarely improves security, and doesn't usually address the underlying issue of accountability. And I often ponder if a bastion host is even still meaningful in modern clouds. =3
https://www.cve.org/CVERecord/SearchResults?query=ipsec
https://www.cve.org/CVERecord/SearchResults?query=wireguard
https://www.cve.org/CVERecord/SearchResults?query=strongswan
Best of luck, and straw-man arguments are never taken seriously. =3
Almost; it is more that I don't care specifically why an IPSec option is often a liability, and would rather stick with something less silly.
Ad hominem attacks do not change the fact there are new issues in IPSec/VPN approaches found regularly. Pick any failure mode(s) on the list that applies to your specific use-case and platform.... or could find new ones if you are still bored.
Have a great day =3
I'm not totally following what Fail2Ban has to do with Wireguard. Are we talking strictly about homelabs you don't expose to the internet?
Because I have a homelab I can connect to with Wireguard. That's great. But there are certain services I want to expose to everybody. So I have a VPS that can connect to my homelab via Wireguard and forward certain domain traffic to it.
That's a safe setup in that I don't expose my IP to the internet and don't have to open ports, but I could still be DDOS'd. Would it not make sense for me to use Fail2Ban (or some kind of rate limiting) even if I'm using Wireguard? I can still be DDOS'd.
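Rate limiting at the VPS edge is orthogonal to the WireGuard tunnel and worth doing regardless. Assuming the VPS fronts the traffic with nginx, a minimal sketch looks like this (zone name, rate, and burst are illustrative):

```nginx
# Illustrative: cap each client IP at 10 req/s, allow short bursts of 20,
# reject the excess immediately instead of queueing it.
http {
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            limit_req zone=perip burst=20 nodelay;
        }
    }
}
```

This won't stop a real DDoS (that needs upstream filtering), but it does blunt single-source abuse before it reaches the tunnel.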
Well, it's a waste of our time and resources. Why would I let people make 100 requests per second for no reason?
I also find it hard to believe it is engineering malpractice to use one technology over another.
What happens if there is a vulnerability in WireGuard? Or if WireGuard traffic is not allowed in or out of a network due to a policy or security restriction?