
579 points nh43215rgb | 4 comments
hexbin010 ◴[] No.45781498[source]
> “ICE officials have told us that an apparent biometric match by Mobile Fortify is a ‘definitive’ determination of a person’s status and that an ICE officer may ignore evidence of American citizenship—including a birth certificate—if the app says the person is an alien,”

This is "computer says no (not a citizen)", which is horrifying.

They've just created an app to justify what they were already doing, right? And the argument will be "well, it's a super complex app run by a very clever company, so it can't be wrong"?

replies(13): >>45781606 #>>45781662 #>>45781821 #>>45782252 #>>45782541 #>>45782817 #>>45782848 #>>45782971 #>>45783123 #>>45783772 #>>45784468 #>>45784720 #>>45786967 #
rgsahTR ◴[] No.45781662[source]
> They've just created an app to justify what they were already doing, right?

This was also one of the more advanced theories about the AI apps used for selecting and targeting people in Gaza. I've only heard one journalist spell it out, because many journalists believe that the AI works.

But the dissenter said that they know it does not work and just use it so that mistakes can be blamed on the AI.

replies(5): >>45782107 #>>45782130 #>>45782878 #>>45783028 #>>45783384 #
bko ◴[] No.45782878[source]
It's better than the alternative, which is humans. Unless you think enforcing laws, or ever needing to establish identity, should never take place.
replies(11): >>45782905 #>>45782914 #>>45782959 #>>45782980 #>>45783029 #>>45783156 #>>45783385 #>>45784431 #>>45787217 #>>45788483 #>>45792841 #
gessha ◴[] No.45783029[source]
As a computer vision engineer, I wouldn't trust any vision system for important decisions. We have plenty of established processes for verification via personal documents such as an ID, birth certificate, etc., and there's no need to reinvent the wheel.
replies(2): >>45783373 #>>45785072 #
bko ◴[] No.45783373[source]
So I hand you a piece of paper saying I'm so-and-so and you just take it at face value? Why do we even have photos on licenses and passports?

You can't be serious.

replies(5): >>45783474 #>>45783516 #>>45783667 #>>45784466 #>>45786816 #
bryanrasmussen ◴[] No.45783667[source]
(using "he" as gender-neutral here)

He didn't say he doesn't want photos on licenses and passports; in fact, since his support is for standard IDs, it seems to me he would want those things, as they are part of the standard ID set.

He said he was against computer vision identifying people, and gave as a reason that he is a computer vision engineer, implying that he knows what he is talking about, although that was only implied, without any technical discussion of why the distrust.

Then you say he trusts a piece of paper you hand him, which he never claimed to do either; he discussed established processes, and a process may or may not be more involved than being handed a piece of paper, depending on context and security needs.

> You can't be serious.

I sort of feel you have difficulties with this as well.

replies(1): >>45787234 #
gessha ◴[] No.45787234[source]
> Although that was only implied without any technical discussion as to why the distrust.

Good point. Computer vision systems are very fickle with respect to pixel-level changes, and in my experience trying to make them robust to changes in lighting, shadows, or adversarial inputs, they are very hard to deploy in production systems. Essentially, you need tight control over the environment so you can minimize out-of-distribution images, and even then it's good to have a supervising human.
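
To make that concrete, here is a minimal sketch, assuming torchvision's pretrained ResNet18 and a local photo saved as face.jpg (both placeholders for illustration, not whatever ICE's vendor actually ships), of how even a modest global brightness change can shift a classifier's prediction and confidence:

    import torch
    from PIL import Image, ImageEnhance
    from torchvision import models
    from torchvision.models import ResNet18_Weights

    # Illustrative only: a generic ImageNet classifier, not a face matcher.
    weights = ResNet18_Weights.DEFAULT
    model = models.resnet18(weights=weights).eval()
    preprocess = weights.transforms()

    img = Image.open("face.jpg").convert("RGB")          # hypothetical input photo
    darker = ImageEnhance.Brightness(img).enhance(0.7)   # simulate poor lighting

    with torch.no_grad():
        for name, im in [("original", img), ("darker", darker)]:
            probs = model(preprocess(im).unsqueeze(0)).softmax(dim=1)
            conf, idx = probs.max(dim=1)
            print(name, weights.meta["categories"][idx.item()], round(conf.item(), 3))

Often the label stays the same but the confidence swings; sometimes it flips outright, and a phone camera pointed at a face outdoors sees far worse variation than a single brightness tweak.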

If you're interested in reading more about this, I recommend looking up: domain adaptation, open set recognition, and adversarial machine learning.
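
As a taste of the last topic, here is a rough FGSM (fast gradient sign method) sketch, again assuming a pretrained ResNet18 and an already-preprocessed input tensor; the tensor names are placeholders, but the point is that a perturbation bounded by a tiny epsilon is often enough to change the prediction:

    import torch
    import torch.nn.functional as F
    from torchvision import models
    from torchvision.models import ResNet18_Weights

    model = models.resnet18(weights=ResNet18_Weights.DEFAULT).eval()

    def fgsm(model, x, label, eps=0.01):
        # One-step fast gradient sign attack on a single preprocessed image.
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), label)
        loss.backward()
        return (x + eps * x.grad.sign()).detach()

    # x: a 1x3x224x224 preprocessed image tensor; label: tensor([true_class_index])
    # x_adv = fgsm(model, x, label)
    # model(x_adv).argmax(dim=1) frequently differs from model(x).argmax(dim=1),
    # even though x_adv and x look identical to a human.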

replies(2): >>45788698 #>>45792263 #
1. TrololoTroll ◴[] No.45792263[source]
The discussion is missing the point of the original snarky comment

So you don't trust the computer vision algorithm...

But you do trust the meatbags?

Reminds me of the whole discussion around self-driving cars, about how people wanted perfection, both in how the cars drive and in their ethics, while they drove around humans every day just fine.

replies(1): >>45792605 #
2. bryanrasmussen ◴[] No.45792605[source]
> Reminds me of the whole discussion around self-driving cars, about how people wanted perfection,

Sure, if an expert in self-driving cars came in and said self-driving cars are untrustworthy.

replies(1): >>45793242 #
3. TrololoTroll ◴[] No.45793242[source]
As someone who has dealt with humans all your life, do you think humans are trustworthy?

That's the magic of not setting mathematically verifiable acceptance criteria: you just fall back on that kind of horrible argument.

replies(1): >>45794288 #
4. bryanrasmussen ◴[] No.45794288{3}[source]
Somehow it seems not as magic as setting mathematically verifiable acceptance criteria that fail 99% of the time. (The percentage is chosen to show the absurdity of claiming that mathematically verifiable acceptance criteria are inherently superior.)

No, I don't think humans are trustworthy. I think the procedures discussed are more secure than the alternative on offer, which an expert in that technology described as untrustworthy, implying it is less trustworthy than the processes it was offered as an alternative to. The expert then gave technical reasons why, which basically boiled down to the reasons I expected that alternative would be untrustworthy.