This is "computer says no (not a citizen)". Which is horrifying
They've just created an app to justify what they were already doing right? And the argument will be "well it's a super complex app run by a very clever company so it can't be wrong"?
This was also one of the more advanced theories about the AI apps used for selecting and targeting people in Gaza. I've only heard one journalist spell it out, because many journalists believe that AI works.
But the dissenter said that the operators know it does not work and just use it so they can blame the AI for mistakes.
Perfect? Of course not, nothing we make ever is. A damn bit better than racist security cameras though.
> Why do we even have photos on licenses and passports
To protect against trivial theft-and-use, mostly. Your mention of licenses in particular is interesting, given how straightforward it is for a relatively dedicated actor to forge the photo on one. Forging the security features in the license is tougher; the photo is one of the weakest pieces of security protection in the document.
He didn't say he didn't want photos on licenses and passports; in fact, since his support is for standard IDs, it seems to me he would want those things, as they are part of the standard ID set.
He said he was against computer vision identifying people, and gave as a reason that he is a computer vision engineer, implying that he knows what he's talking about. Although that was only implied, without any technical discussion of why the distrust.
Then you say they trust a piece of paper you hand them, which they never claimed to do either. They discussed established processes, and a process may or may not be more involved than being handed a piece of paper, depending on context and security needs.
>You can't be serious.
I sort of feel you have difficulties with this as well.
We have photos on licenses and passports so that if you're an ethnic Russian in your 20s and you present an ID with a photo of a black man in his 70s, we can be confident that this is not you.
If you're an ethnic Russian in your 20s and there is another ethnic Russian in their 20s on some kind of list, that is very much not conclusive proof that you're them. Any number of people can look similar enough to each other to cause a false positive, both for a person looking at an ID and for a computer vision system.
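To put rough numbers on that intuition, here's a back-of-the-envelope sketch. The population size, watchlist size, and false match rate below are made-up assumptions for illustration, not figures from any real system:

```python
# Base-rate sketch: even a matcher that is "wrong only 0.1% of the time"
# produces overwhelmingly false hits when the watchlist is tiny
# relative to the population being scanned.
def expected_false_matches(population, watchlist, false_match_rate):
    # Each person not on the list gets compared against every
    # watchlist entry; any hit among those comparisons is a false positive.
    innocents = population - watchlist
    comparisons = innocents * watchlist
    return comparisons * false_match_rate

population = 1_000_000  # people scanned (assumed)
watchlist = 100         # people actually on the list (assumed)
fmr = 0.001             # 0.1% false match rate per comparison (assumed)

fp = expected_false_matches(population, watchlist, fmr)
print(fp)  # 99990.0 expected false hits vs. at most 100 real ones
```

Under these made-up numbers, roughly 99,990 innocent people trigger a match against at most 100 genuine hits, so almost every alert the system raises is wrong.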
Good point. Computer vision systems are very fickle wrt pixel changes, and from my experience trying to make them robust to changes in lighting, shadows, or adversarial inputs, very hard to deploy in production systems. Essentially, you need tight control over the environment so that you can minimize out-of-distribution images, and even then it's good to have a supervising human.
If you're interested in reading more about this, I recommend looking up: domain adaptation, open set recognition, adversarial machine learning.
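As a toy illustration of that lighting fragility (nothing like a real face pipeline; the six-value "patch" and the brightness shift are invented for the demo):

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors, a common matching score.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Toy "face template": a tiny grayscale patch flattened to a vector.
template = [0.2, 0.8, 0.5, 0.1, 0.9, 0.4]

# Same face, but the scene got brighter (an out-of-distribution shift).
brighter = [p + 0.5 for p in template]

print(cosine(template, template))  # 1.0 (identical)
print(cosine(template, brighter))  # < 1.0 even though it's the same face

def centered(v):
    # Crude normalization: subtract the mean intensity.
    m = sum(v) / len(v)
    return [x - m for x in v]

# Mean-subtraction removes this particular shift exactly...
print(cosine(centered(template), centered(brighter)))  # back to 1.0
# ...but does nothing for shadows, pose, occlusion, or adversarial
# perturbations, which is why the environment needs tight control.
```

Each normalization trick patches one specific distribution shift; the long tail of others is what makes production deployment hard.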
> allows users to regain access to their funds without a traditional seed phrase by leveraging trusted contacts (guardians) and a predefined recovery protocol. If a user loses access, they coordinate with a quorum of these guardians, who each provide a piece of the necessary information to restore
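The quoted guardian scheme is essentially k-of-n threshold secret sharing. A minimal sketch of the idea, assuming Shamir's scheme over a prime field rather than any particular wallet's implementation:

```python
# Sketch of a k-of-n "guardian quorum" via Shamir secret sharing.
# Real wallets use hardened variants (e.g. on-chain guardian contracts);
# this only illustrates the recovery mechanism and its failure mode:
# if fewer than k guardians survive, the secret is unrecoverable.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for a demo secret

def split(secret, n, k):
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc
    # Guardian i holds the point (i, f(i)).
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term,
    # provided at least k shares are supplied.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

seed_secret = 123456789  # stand-in for key material
shares = split(seed_secret, n=5, k=3)      # 5 guardians, any 3 recover
print(recover(shares[:3]) == seed_secret)  # True
print(recover(shares[2:]) == seed_secret)  # True: any quorum works
```

Any 2 shares reveal nothing about the secret, which is the security property; the flip side is exactly the edge case raised below, since losing 3 of the 5 guardians leaves no valid quorum.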
Hmmm, that sounds like it would fail outright in some severe edge cases.
For example, mass casualty events (fire, earthquake, war, etc.) that leave only a few survivors.
Those events require special government attention and cost anyway.
Getting Grandma's taxes paid? Not so much. Or: shouldn't!
(The idea is to remove as much user and support burden as possible, not to solve society's woes, haha)
Of course the technical solution isn't easy (or necessarily all good), but that doesn't make it any less likely, or any less intriguing to discuss the roadmap.
(You combine the scanned data from both of those scans, regardless of value, as your recovery mechanism, by the way. Accounting for abnormal anatomy in a defined, reproducible way is a challenge, not a barrier.)
So you don't trust the computer vision algorithm...
But you do trust the meatbags?
Reminds me of the whole discussion around self-driving cars: people wanted perfection, both in how the cars execute their movements and in their ethics, while they themselves drove around among humans every day just fine.
Sure, if an expert in self-driving cars came in and said self-driving cars are untrustworthy.
That's the magic of not setting mathematically verifiable acceptance criteria: you just fall back to that kind of horrible argument.
No, I don't think humans are trustworthy. I think the procedures discussed are more secure than the alternative on offer, which an expert in that technology described as untrustworthy, implying it was less trustworthy than the processes it was offered as an alternative to. He then gave technical reasons why, which basically boiled down to the reasons I expected that alternative would be untrustworthy.