
203 points mooreds | 1 comment | HN request time: 0.336s | source
thisnullptr ◴[] No.45960639[source]
It’s fascinating to me that people think their services are so important they can’t survive any downtime. Can we all admit that, while annoying, nothing really bad happened even when us-east-1 was down for almost half a working day?
replies(5): >>45960777 #>>45960806 #>>45961671 #>>45961769 #>>45966649 #
bostik ◴[] No.45961671[source]
As other posters have commented, an external auth service is a very special thing indeed. In modern and/or zero-trust systems, if auth doesn't work, then effectively nothing works.

My rule of thumb from past experience is that if you demand 99.9% uptime for your own systems and you run in-house auth, then that auth system must have 99.99% reliability. If you are serving auth for OTHERS, then you have a system that can absolutely never be down, and at that point five nines becomes the baseline requirement.
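For concreteness, those availability targets translate into yearly downtime budgets. A quick illustrative calculation (the arithmetic is mine, not from the comment):

```python
# Downtime budgets implied by common availability targets.
# Figures are per year of 365 days; purely illustrative.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability: float) -> float:
    """Minutes of allowed downtime per year at the given availability."""
    return MINUTES_PER_YEAR * (1 - availability)

for label, avail in [("three nines (99.9%)", 0.999),
                     ("four nines (99.99%)", 0.9999),
                     ("five nines (99.999%)", 0.99999)]:
    print(f"{label}: ~{downtime_minutes(avail):.1f} min/year")
```

Three nines leaves roughly 8.8 hours of downtime a year; five nines leaves about five minutes, which is why it reads as "can absolutely never be down".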

Auth is a critical path component. If your service is in the critical path in both reliability and latency[ß] for third parties, then every one of your failures is magnified by the number of customers getting hit by it.

ß: The current top-voted comment thread mentions that latency and response time should also be part of the SLA concern. I agree. For any hot-path system you must always be tracking the latency distribution, both from the service's own viewpoint AND from the point of view of the outside world. The typically useful metrics for that are p95, p99, p999 and max. Yes, max is essential to include: you always want to know the worst experience someone/something had during any given time window.
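Summarising a window of samples into those metrics can be sketched like this (a minimal nearest-rank implementation; the function and field names are illustrative, not from the thread):

```python
import math

def latency_summary(samples_ms):
    """Return p95/p99/p999/max over a window of latency samples (ms)."""
    ordered = sorted(samples_ms)
    n = len(ordered)

    def pct(p):
        # Nearest-rank percentile: smallest sample with at least a
        # fraction p of the window at or below it.
        return ordered[min(n - 1, max(0, math.ceil(p * n) - 1))]

    return {"p95": pct(0.95), "p99": pct(0.99),
            "p999": pct(0.999), "max": ordered[-1]}
```

Note that max falls straight out of the sorted window; unlike the percentiles, it is exact regardless of window size, which is part of why it is cheap to keep and essential to report.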

replies(1): >>45962931 #
wparad ◴[] No.45962931[source]
The sad truth of the world is that in many cases latency isn't the most critical aspect to track. We absolutely do track it, because we have the expectation that authentication requests complete. But there are many moving parts that make reliable tracking not entirely feasible:

* end location of the user
* end location of customer service
* third-party login components (login with Google, et al.)
* corporate identity providers
* webauthn
* customer-specific login mechanism workflows
* custom integrations for those login mechanisms
* the user's user agent
* internet connectivity

All of those significantly influence the observed response times in a way that makes tracking latency next to useless. Maybe there is something we could be doing, though. In more than a couple of scenarios we do have tracking, metrics, and alerting in place; it just doesn't end up in our SLA.

replies(2): >>45968351 #>>45970154 #
bostik ◴[] No.45968351[source]
While I agree with parts of the above, there are bits that I disagree with. It's true that you cannot control the network conditions for third parties, and therefore can never be in a position to guarantee an SLA for the round-trip experience. But I object to the notion that tracking end-to-end latency is useless. After all, the three Nielsen usability thresholds are all about latency(!)
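Those thresholds (roughly 0.1 s for a response that feels instantaneous, 1 s to preserve the user's flow of thought, 10 s to keep their attention at all) amount to a simple bucketing of measured latency. A sketch, with bucket names of my own choosing:

```python
def nielsen_bucket(latency_s: float) -> str:
    """Classify a response time against Nielsen's three usability limits."""
    if latency_s <= 0.1:
        return "instantaneous"   # feels like direct manipulation
    if latency_s <= 1.0:
        return "flow preserved"  # delay noticed, train of thought intact
    if latency_s <= 10.0:
        return "attention kept"  # user waits; show progress feedback
    return "attention lost"      # user likely task-switches away
```

Bucketing end-to-end latencies this way turns raw measurements into a user-experience signal even when no SLA applies to them.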

Funnily enough, looking through your itemisation I spot two groups that would each benefit from their own kinds of latency monitoring. End location and internet connectivity of the client go into the first. Third-party providers go into the second.

For the first, you'd need to have your own probes reporting from the most actively used networks and locations around the world - that would give you a view into the round-trip latency per major network path. For the second, you'd want to track the time spent between the steps that you control - which in turn would give you a good view into the latency-inducing behaviour of the different third-party providers. Neither is SLA material, but both would certainly be useful during enterprise contract negotiations. (Shooting down impossible demands by showing hard data tends to fend off even the most obstinate objections.)
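The second kind of tracking - timing only the steps you control on either side of a third-party hop - can be sketched with a small context manager. The step names and the commented-out flow are hypothetical, not from the thread:

```python
import time
from contextlib import contextmanager

step_durations = {}  # step name -> seconds spent, per request

@contextmanager
def timed_step(name):
    """Record wall-clock time spent in one step we control."""
    start = time.perf_counter()
    try:
        yield
    finally:
        step_durations[name] = time.perf_counter() - start

# Usage sketch: bracket our own work around a third-party login redirect.
with timed_step("build_redirect"):
    pass  # construct the authorization URL (our code)
# ... browser round-trips to the identity provider here (not ours) ...
with timed_step("validate_callback"):
    pass  # verify state/nonce, exchange the code (our code)
```

The gap between the end of one timed step and the start of the next approximates the time spent inside the third party, which is exactly the per-provider view described above.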

User-agent and bespoke integrations/workflows are entirely out of your hands, and I agree it's useless to try to measure latency for them specifically.

Disclaimer: I have worked with systems where the internal authX roundtrip has to complete within 1ms, and the corresponding client-facing side has to complete its response within 3ms.