This is "computer says no (not a citizen)". Which is horrifying
They've just created an app to justify what they were already doing right? And the argument will be "well it's a super complex app run by a very clever company so it can't be wrong"?
This was also one of the more advanced theories about the AI apps used for person selection and targeting in Gaza. I've only heard one journalist spell it out, because many journalists believe that AI works.
But the dissenter said the operators know it does not work, and just use it so they can blame the AI for mistakes.
I am happy, though, that we are starting to see more of this kind of content on HN. I understand that these political (?) posts can descend into finger-pointing and trolling. And that is too bad, since I think we should not have blinders on in these rather unsettling times.
I will say that I remember when posts like this one were very quickly flagged once they hit the front page. I am happy to see that more and more people are finding them (unfortunately) relevant.
HN would appreciate you not making low-quality comments in the first place, though. The broader view of your comments on this post is that they seem ideologically driven rather than curiosity-driven.
Perfect? Of course not, nothing we make ever is. A damn bit better than racist security cameras though.
> Why do we even have photos on licenses and passports
To protect against trivial theft-and-use, mostly. Your mention of licenses, in particular, was interesting given how straightforward it is for a relatively-dedicated actor to forge the photo on them (it's tougher to forge the security content in the license; the photo is one of the weakest pieces of security protection in the document).
They didn't say they didn't want photos on licenses and passports; it seems to me, given their support for standard IDs, that they would want those things, since they are part of the standard ID set.
They said they were against computer vision identifying people, and gave as a reason that they are a computer vision engineer, implying that they know what they are talking about. Though that was only implied, without any technical discussion of why the distrust.
Then you say they trust a piece of paper you hand them, which they never claimed either. They discussed established processes, and a process may or may not be more involved than being handed a piece of paper, depending on context and security needs.
>You can't be serious.
I sort of feel you have difficulties with this as well.
https://pages.nist.gov/frvt/html/frvt11.html?utm_source=chat...
https://www.ftc.gov/news-events/news/press-releases/2023/12/...
https://www.theguardian.com/technology/2020/jan/24/met-polic...
https://link.springer.com/article/10.1007/s00146-023-01634-z
https://www.mozillafoundation.org/en/blog/facial-recognition...
https://surface.syr.edu/cgi/viewcontent.cgi?article=2479&con...
Yeah it's pretty fucking shit, actually.
Here's the science.
We have photos on licenses and passports so that if you're an ethnic Russian in your 20s and you present an ID with a photo of a black man in his 70s, we can be confident that this is not you.
If you're an ethnic Russian in your 20s and there is another ethnic Russian in their 20s on some kind of list, that is very much not conclusive proof that you're them, because there could be any number of people who look similar enough to each other to cause a false positive for both a person looking at an ID and a computer vision system.
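The verification-vs-identification distinction above can be made concrete with a back-of-the-envelope calculation. The false match rate and watchlist size here are hypothetical placeholders; real rates vary widely by system and demographic (see the NIST FRVT links elsewhere in the thread):

```python
# Back-of-the-envelope: verification (1:1) vs identification (1:N).
# fmr is a HYPOTHETICAL per-comparison false match rate.
fmr = 1e-4

# 1:1 check (does this face match the photo on this ID?): one comparison,
# so an impostor slips through with probability fmr.
p_verification_error = fmr

# 1:N search (does this face match anyone on a 50,000-person list?):
# the chance of at least one false hit compounds across the whole list.
n = 50_000
p_any_false_match = 1 - (1 - fmr) ** n

print(p_verification_error)         # 0.0001
print(round(p_any_false_match, 3))  # 0.993: a false hit is near-certain
```

Same matcher, same error rate: checking a photo on an ID is a very different problem from searching a watchlist, which is exactly why "they look like someone on a list" proves so little.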
Anyways, I think it’s perfectly reasonable to nowadays take that philosophy and apply it universally. Just because it was done unfairly and hypocritically in the past is no excuse for us to also be hypocrites nowadays.
Good point. Computer vision systems are very fickle with respect to pixel changes, and in my experience trying to make them robust to changes in lighting, shadows, or adversarial inputs, they are very hard to deploy in production systems. Essentially, you need tight control over the environment so that you can minimize out-of-distribution images, and even then it's good to have a supervising human.
If you're interested in reading more about this, I recommend looking up: domain adaptation, open set recognition, adversarial machine learning.
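A toy sketch of the lighting problem (all identities and pixel values hypothetical): a naive nearest-template matcher over raw pixels flips its answer under nothing more than a uniform brightness shift. Production systems use learned embeddings rather than raw pixels, but they inherit the same sensitivity to out-of-distribution inputs.

```python
import math

# Hypothetical enrolled "images" as flat pixel vectors in [0, 1].
templates = {
    "person_a": [0.1] * 64,  # enrolled under dim lighting
    "person_b": [0.6] * 64,  # enrolled under bright lighting
}

def dist(a, b):
    # Euclidean distance between two pixel vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match(probe):
    # Return the enrolled identity closest to the probe image.
    return min(templates, key=lambda name: dist(probe, templates[name]))

probe = [0.1] * 64                 # person_a, photographed as enrolled
bright = [p + 0.4 for p in probe]  # same person, brighter scene

print(match(probe))   # person_a
print(match(bright))  # person_b: the brightness shift flips the match
```

This is exactly the "tight control over the environment" point: the input didn't change identity, only conditions, and the match changed anyway.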
> allows users to regain access to their funds without a traditional seed phrase by leveraging trusted contacts (guardians) and a predefined recovery protocol. If a user loses access, they coordinate with a quorum of these guardians, who each provide a piece of the necessary information to restore
Hmmm, that sounds like it would fail outright in some severe edge cases.
For example, mass casualty events (fire, earthquake, war, etc.) that leave only a few survivors.
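The quoted description reads like a k-of-n threshold scheme. A minimal Shamir-style sketch (not the actual product's protocol; field size and parameters are hypothetical) shows both the mechanism and why the edge case above is fatal:

```python
import random

# 2-of-3 Shamir-style secret sharing over a prime field.
P = 2**127 - 1  # Mersenne prime modulus (hypothetical parameter choice)

def split(secret, n=3, k=2):
    # Random polynomial of degree k-1 whose constant term is the secret;
    # each "guardian" x = 1..n gets the point (x, f(x)).
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term.
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = split(1234)
print(recover(shares[:2]))              # 1234: any 2 of 3 guardians suffice
print(recover([shares[0], shares[2]]))  # 1234: a different pair also works
```

The flip side is the failure mode raised above: with fewer than k surviving guardians, recovery isn't merely hard, it's information-theoretically impossible, since any k-1 shares reveal nothing about the secret.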
While doing so can be ok, you should probably do some checking via non-LLM means as well.
Otherwise you'll end up misunderstanding things that you _think_ you've learned about. :(
No. The model is, "Hey! this guy is being a pain in the ass. He even claimed that The President wasn't blessed with superintelligence and doesn't actually smell really good!
We need to get this terrorist off the streets! He sure looks a whole lot like that illegal on the FBI most wanted list, doesn't he? Off to CECOT with him!
What's that? He's a twelfth-generation citizen? No way! Look, the app I used says this guy matches an illegal who's also a child rapist!
Your papers are all fake (if, as a citizen, he's even carrying them). Onto the plane with you, Señor.
That's the model. Feel free to disagree, but come back and reread this comment in 18 months. I hope you read it then and think "what a paranoid guy! Nothing like that could ever happen here!" But I'm not holding my breath. :(
Those events require special government attention and cost anyway.
Getting Grandma's taxes paid? Not so much. Or: shouldn't!
(The idea is to remove as much user and support burden as possible, not solve society's woes, haha)
Of course the technical solution isn't easy (or necessarily all good),
but that doesn't make it any less likely, or any less intriguing to discuss the roadmap.
(You combine the scanned data from both of those scans, regardless of value, as your recovery mechanism, by the way. Accounting for abnormal anatomy in a defined, reproducible way is a challenge, not a barrier.)
So you don't trust the computer vision algorithm...
But you do trust the meatbags?
Reminds me of the whole discussion around self-driving cars: how people wanted perfection, both in how the cars move and in their ethics, while they drove around humans every day just fine.
Sure, if an expert in self-driving cars came in and said self-driving cars are untrustworthy.
“ICE officials have told us that an apparent biometric match by Mobile Fortify is a ‘definitive’ determination of a person’s status and that an ICE officer may ignore evidence of American citizenship—including a birth certificate—if the app says the person is an alien,”
"Trust the word of the black box" is pure technocratic dystopian nonsense.
That's the magic of not setting mathematically verifiable acceptance criteria. You just fall back on that kind of horrible argument.
No, I don't think humans are trustworthy. I think the procedures discussed are more secure than the alternative on offer, which an expert in that technology described as untrustworthy, implying it was less trustworthy than the processes it was offered as an alternative to. They then gave technical reasons why, which basically boiled down to the reasons I expected that alternative would be untrustworthy.