596 points pimterry | 3 comments
1. lbriner No.36863124
The problem with most of these systems is that they can't cope with edge cases. They work fine for 99% of the population, but the other 1% can get stuffed.

It would be like having a robot deny you access to the office after work hours even though you only need to grab the car keys you forgot. The system is designed to be secure, so you can't talk your way past a robot. If it were a human, it would be much easier to reason with them (normally!) and find a solution that works.

Techies gonna tech though. "If there was a problem yo I'll solve it, check out my tech while the DJ revolves it."

replies(2): >>36863213 #>>36865255 #
2. lbriner No.36863213
Another problem: what is the actual root of the attestation? If it attested that "yes, this is a real person", it might be useful, but this is simply system attestation, so there's no real way of knowing whether it would stop bad actors from doing bad things, or whether it would be misunderstood and misused like many other systems (CORS, anyone?). See the sketch below for what such a check actually amounts to.

The logical conclusion of this system is "if you have a legit system, you are legit; if you don't, you aren't".
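
To make that concrete, here's a minimal sketch (in TypeScript) of what a purely system-level attestation boils down to. The payload shape, field names, and check are hypothetical, not the actual Web Environment Integrity API; the point is that everything the verifier learns describes the software stack, not the person operating it.

    // Hypothetical environment-attestation payload -- illustrative only,
    // not a real API. Every field describes the software stack; none
    // describe the person behind it.
    interface EnvironmentAttestation {
      attesterOrigin: string;    // which platform vouched for the environment
      browserIsGenuine: boolean; // "an unmodified, approved browser binary"
      osIntegrityOk: boolean;    // "the OS passed the platform's integrity checks"
      issuedAt: number;          // UNIX timestamp (seconds)
    }

    // A relying site can only conclude "this request came from an approved stack".
    // A bot farm running approved stacks passes; a legitimate user on a patched
    // or niche browser fails. Nothing here says "this is a real, honest person".
    function isApprovedStack(att: EnvironmentAttestation, maxAgeMs: number): boolean {
      const fresh = Date.now() - att.issuedAt * 1000 < maxAgeMs;
      return fresh && att.browserIsGenuine && att.osIntegrityOk;
    }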

3. kccqzy No.36865255
> If it was a human, it would be much easier to reason with them (normally!) and find a solution that works.

The Internet has shown that if you drive the cost of interacting with this human gatekeeper down to zero (the attacker can be anywhere in the world rather than at a specific place and time), social engineering attacks inevitably follow. That's how hackers get into bank accounts: simply by being eloquent and making a convincing case to the human gatekeeper.