
111 points | rabinovich
greatgib ◴[] No.45813260[source]
We often make fun of stupid European regulations, like the AI ones, but it is precisely in cases like this that they are useful: to ensure that this cannot happen when companies with such a monopoly leave users with no power.
replies(2): >>45813358 #>>45813449 #
chao- ◴[] No.45813358[source]
Do those regulations really "ensure that [incidents like this] could not happen"?

I ask this in good faith, because my observation over the last few years is that the incidents still occur, with all the harm to individuals that they bring. Then, after N incidents, the company pays a fine* and does not necessarily make substantive changes — superficial ones, but not always meaningful changes that would prevent future harm to individuals.

*Do these fines tend to be used to compensate the affected individuals? I am not educated on that detail, and would appreciate info from someone who is.

replies(2): >>45813452 #>>45813572 #
gmueckl ◴[] No.45813572[source]
I don't recall the full stack of EU regulations in detail, but AFAIK a requirement that appeal to an actual human be possible after an automated decision is in there somewhere (GDPR Article 22, I believe).
replies(4): >>45813653 #>>45813687 #>>45813697 #>>45813768 #
sidewndr46 ◴[] No.45813768[source]
But what would it matter? Wouldn't the human be an employee of the company that already made the automated decision?
replies(1): >>45815158 #
gmueckl ◴[] No.45815158{3}[source]
A human can understand and process arguments outside the bounded input domain of automated classification systems.
replies(1): >>45815197 #
sidewndr46 ◴[] No.45815197{4}[source]
They can, but what incentive would they have to do so? They are probably measured by the number of cases they close, and the fastest way to close a case is to agree with the algorithm's conclusion.