

574 points nh43215rgb | 19 comments
hexbin010 ◴[] No.45781498[source]
> “ICE officials have told us that an apparent biometric match by Mobile Fortify is a ‘definitive’ determination of a person’s status and that an ICE officer may ignore evidence of American citizenship—including a birth certificate—if the app says the person is an alien,”

This is "computer says no (not a citizen)". Which is horrifying

They've just created an app to justify what they were already doing right? And the argument will be "well it's a super complex app run by a very clever company so it can't be wrong"?

replies(13): >>45781606 #>>45781662 #>>45781821 #>>45782252 #>>45782541 #>>45782817 #>>45782848 #>>45782971 #>>45783123 #>>45783772 #>>45784468 #>>45784720 #>>45786967 #
rgsahTR ◴[] No.45781662[source]
> They've just created an app to justify what they were already doing right?

This was also one of the more advanced theories about the AI target-selection apps used in Gaza. I've only heard one journalist spell it out, because many journalists believe that AI works.

But the dissenter said that they know it does not work and just use it to blame the AI for mistakes.

replies(5): >>45782107 #>>45782130 #>>45782878 #>>45783028 #>>45783384 #
bko ◴[] No.45782878[source]
It's better than the alternative, which is humans. Unless you think enforcing laws or establishing identity should never take place.
replies(11): >>45782905 #>>45782914 #>>45782959 #>>45782980 #>>45783029 #>>45783156 #>>45783385 #>>45784431 #>>45787217 #>>45788483 #>>45792841 #
gessha ◴[] No.45783029[source]
As a computer vision engineer, I wouldn’t trust any vision system for important decisions. We have plenty of established processes for verification via personal documents such as IDs and birth certificates, and there’s no need to reinvent the wheel.
replies(2): >>45783373 #>>45785072 #
1. bko ◴[] No.45783373[source]
So I hand you a piece of paper saying I'm so-and-so and you just take it at face value? Why do we even have photos on licenses and passports?

You can't be serious.

replies(5): >>45783474 #>>45783516 #>>45783667 #>>45784466 #>>45786816 #
2. ToucanLoucan ◴[] No.45783474[source]
I love how you're contrasting the credibility of demonstrably-proven-to-be-unreliable face recognition tech against MERELY government-issued documents that have been the basis for establishing identity for more than a century.

Perfect? Of course not, nothing we make ever is. A damn bit better than racist security cameras though.

3. shadowgovt ◴[] No.45783516[source]
That is, generally, how it works in most contexts, yes.

> Why do we even have photos on licenses and passports

To protect against trivial theft-and-use, mostly. Your mention of licenses, in particular, was interesting given how straightforward it is for a relatively-dedicated actor to forge the photo on them (it's tougher to forge the security content in the license; the photo is one of the weakest pieces of security protection in the document).

4. bryanrasmussen ◴[] No.45783667[source]
(using he as gender neutral here)

he didn't say he didn't want photos on licenses and passports; indeed, since his support is for standard IDs, it seems to me he would want those things, as they are part of the standard ID set.

He said he was against computer vision identifying people, and gave as a reason that he is a computer vision engineer, implying he knows what he's talking about. Although that was only implied, without any technical discussion of why the distrust.

Then you say they would trust a piece of paper you hand them, which they never claimed to do either. They discussed established processes, and a process may or may not be more involved than being handed a piece of paper, depending on context and security needs.

>You can't be serious.

I sort of feel you have difficulties with this as well.

replies(1): >>45787234 #
5. AnthonyMouse ◴[] No.45784466[source]
> So I hand you a piece of paper saying I'm so and so and you just take it on face value? Why do we even have photos on licenses and passports?

We have photos on licenses and passports so that if you're an ethnic Russian in your 20s and you present an ID with a photo of a black man in his 70s, we can be confident that this is not you.

If you're an ethnic Russian in your 20s and there is another ethnic Russian in their 20s on some kind of list, that is very much not conclusive proof that you're them, because there could be any number of people who look similar enough to each other to cause a false positive for both a person looking at an ID and a computer vision system.
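The base-rate problem here can be made concrete with a quick calculation. All numbers below are invented for illustration (neither the per-comparison error rate nor the gallery size comes from any real system):

```python
# Illustrative base-rate arithmetic for a one-vs-many biometric search.
# Both numbers below are made-up assumptions, not real system specs.

fpr = 1e-6                # assumed per-comparison false-positive rate
gallery_size = 1_000_000  # assumed number of enrolled faces searched against

# Probability that an innocent person matches *at least one* gallery entry:
p_any_false_match = 1 - (1 - fpr) ** gallery_size

print(f"P(at least one false match) = {p_any_false_match:.3f}")  # ~0.632
```

Even a matcher that is wrong only one time in a million per comparison will, under these assumed numbers, falsely match most innocent people against *someone* in a million-entry gallery.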

6. DANmode ◴[] No.45786816[source]
It’s ALL security theater of varying degrees until we’re using public/private keypairs as identities.
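A minimal sketch of what keypair-based identity could mean in practice: a challenge-response protocol where only the private-key holder can answer. The RSA parameters here are textbook toy values (trivially breakable) chosen so the example is self-contained; no real credential system or library is implied:

```python
# Toy challenge-response identity check with a textbook RSA keypair.
# The tiny primes make this trivially breakable; it only illustrates
# the protocol shape, not a deployable credential system.
import hashlib
import secrets

# Textbook RSA keypair: p=61, q=53 -> n=3233, e=17, d=2753.
N, E, D = 3233, 17, 2753

def sign(message: bytes, d: int = D) -> int:
    """Holder of the private key signs a hash of the message."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(digest, d, N)

def verify(message: bytes, signature: int, e: int = E) -> bool:
    """Anyone holding the public key can check the signature."""
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, e, N) == digest

# Verifier issues a fresh random challenge; prover signs it to prove identity.
challenge = secrets.token_bytes(16)
sig = sign(challenge)
print(verify(challenge, sig))  # True
```

The fresh random challenge is what prevents replay: recording yesterday's signature doesn't let you answer today's challenge.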
replies(1): >>45787233 #
7. Terr_ ◴[] No.45787233[source]
We'll still need a layer for replacement and revocation though. It'd be nice if nobody ever had their private key lost/destroyed/stolen but it's going to happen.
replies(1): >>45787265 #
8. gessha ◴[] No.45787234[source]
> Although that was only implied without any technical discussion as to why the distrust.

Good point. Computer vision systems are very fickle with respect to pixel-level changes, and in my experience trying to make them robust to changes in lighting, shadows, or adversarial inputs, they are very hard to deploy in production. Essentially, you need tight control over the environment so that you can minimize out-of-distribution images, and even then it’s good to have a supervising human.

If you’re interested in reading more about this, I recommend looking up: domain adaptation, open set recognition, adversarial machine learning.
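The fickleness gessha describes can be shown with a toy version of how such systems decide: compare embedding vectors against a threshold. The vectors and the 0.95 threshold below are invented for the example (real face-recognition embeddings are high-dimensional model outputs), but the failure mode is the same: a small input shift flips the decision.

```python
# Toy illustration of a thresholded embedding match flipping on a small change.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

THRESHOLD = 0.95  # assumed decision threshold: "same person" if above

enrolled = [1.0, 0.0, 0.0]
probe    = [0.96, 0.28, 0.0]  # sits just above the threshold
shifted  = [0.94, 0.34, 0.0]  # small shift (lighting, angle) drops it below

print(cosine(enrolled, probe))    # ~0.96 -> "match"
print(cosine(enrolled, shifted))  # ~0.94 -> "no match"
```

Near the threshold, perturbations far smaller than human-perceptible differences can change the output, which is one reason production deployments need controlled capture conditions and human review.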

replies(2): >>45788698 #>>45792263 #
9. DANmode ◴[] No.45787265{3}[source]
DNA + iris, and/or whatever the next thing is.

Also: social recovery via trusted relatives.

Downvoters should know I’m not referring to SSO or social-media network auth.

replies(2): >>45787773 #>>45791668 #
10. DANmode ◴[] No.45787773{4}[source]
//

> allows users to regain access to their funds without a traditional seed phrase by leveraging trusted contacts (guardians) and a predefined recovery protocol. If a user loses access, they coordinate with a quorum of these guardians, who each provide a piece of the necessary information to restore
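The guardian-quorum mechanism the quote describes is usually built on threshold secret sharing. Here is a minimal Shamir secret-sharing sketch: split a secret into n shares so that any k of them reconstruct it and fewer reveal nothing. The field prime and the 3-of-5 split are arbitrary example choices:

```python
# Minimal Shamir secret sharing over a prime field (illustrative only).
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a toy secret

def split(secret: int, k: int, n: int):
    """Make n shares; any k reconstruct the secret (degree k-1 polynomial)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x=0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

secret = 123456789
shares = split(secret, k=3, n=5)      # 5 guardians, quorum of 3
print(recover(shares[:3]) == secret)  # any 3 shares suffice: True
```

Any two shares alone are consistent with every possible secret, which is what makes handing pieces to guardians safe.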

replies(1): >>45788008 #
11. justinclift ◴[] No.45788008{5}[source]
> they coordinate with a quorum of these guardians

Hmmm, that sounds like it would fail outright in some severe edge cases.

For example, mass-casualty events (fire, earthquake, war, etc.) that leave only a few survivors.

replies(1): >>45788897 #
12. bryanrasmussen ◴[] No.45788698{3}[source]
I assumed you knew what you were talking about, but yes it's not my domain. Thanks for the explanation.
13. DANmode ◴[] No.45788897{6}[source]
Definitely.

Those events require special government attention and cost anyway.

Getting Grandma's taxes paid? Not so much. Or: shouldn't!

(The idea is to remove as much user and support burden as possible, not to solve society's woes, haha)

14. elondaits ◴[] No.45791668{4}[source]
It’s possible to lose one’s irises. Most identical twins have almost identical DNA. Then there’s the “right to be forgotten”, people on witness protection, refugees and immigrants who enter the system as adults, etc. I don’t think there’s an easy technical solution.
replies(1): >>45792006 #
15. DANmode ◴[] No.45792006{5}[source]
Blind twins* will need to carry an alternative. /s

Of course the technical solution isn't easy (or necessarily all good), but that doesn't make it any less likely, or any less intriguing to discuss the roadmap.

(You combine the scanned data together from both of those scans, regardless of value, as your recovery mechanism, by the way - accounting for abnormal anatomy in a defined, reproducible way is a challenge, not a barrier)

16. TrololoTroll ◴[] No.45792263{3}[source]
The discussion is missing the point of the original snarky comment

So you don't trust the computer vision algorithm...

But you do trust the meatbags?

Reminds me of the whole discussion around self-driving cars: how people wanted perfection, both in how the cars move and in their ethics, while driving around humans every day just fine.

replies(1): >>45792605 #
17. bryanrasmussen ◴[] No.45792605{4}[source]
>Reminds me of the whole discussion around self driving cars. About how people wanted perfection,

sure, if an expert in self driving cars came in and said self driving cars are untrustworthy.

replies(1): >>45793242 #
18. TrololoTroll ◴[] No.45793242{5}[source]
As someone who has dealt with humans all your life, do you think humans are trustworthy?

That's the magic of not setting a mathematically verifiable acceptance criterion. You just fall back to that kind of horrible argument.

replies(1): >>45794288 #
19. bryanrasmussen ◴[] No.45794288{6}[source]
somehow it seems not as magic as setting a mathematically verifiable acceptance criterion that fails 99% of the time. (percentage chosen to show the absurdity of claiming that mathematically verifiable acceptance criteria are inherently superior)

no, I don't think humans are trustworthy. I think the procedures discussed are more secure than the alternative on offer, which an expert in that technology described as untrustworthy, implying it was less trustworthy than the processes it was offered as an alternative to; he then gave technical reasons why, which basically boiled down to the reasons I expected that alternative to be untrustworthy.