This is "computer says no (not a citizen)". Which is horrifying
They've just created an app to justify what they were already doing right? And the argument will be "well it's a super complex app run by a very clever company so it can't be wrong"?
This is "computer says no (not a citizen)". Which is horrifying
They've just created an app to justify what they were already doing right? And the argument will be "well it's a super complex app run by a very clever company so it can't be wrong"?
This was also one of the more advanced theories about the AI apps used in Gaza for selecting and targeting people. I've only heard one journalist spell it out, because many journalists believe that AI works.
But that dissenter said the people running it know it doesn't work and use it anyway, so that mistakes can be blamed on the AI.
https://pages.nist.gov/frvt/html/frvt11.html
https://www.ftc.gov/news-events/news/press-releases/2023/12/...
https://www.theguardian.com/technology/2020/jan/24/met-polic...
https://link.springer.com/article/10.1007/s00146-023-01634-z
https://www.mozillafoundation.org/en/blog/facial-recognition...
https://surface.syr.edu/cgi/viewcontent.cgi?article=2479&con...
Yeah, it's pretty fucking shit, actually.
Here's the science.