
111 points by rabinovich | 1 comment
greatgib ◴[] No.45813260[source]
We often make fun of stupid European regulations, like the AI ones, but it is exactly in a case like this that they are useful: to ensure this kind of thing cannot happen when companies like that have such a monopoly that users have no power.
replies(2): >>45813358 #>>45813449 #
chao- ◴[] No.45813358[source]
Do those regulations really "ensure that [incidents like this] could not happen"?

I ask this in good faith, because my observation of the last few years is that the incidents still occur, with all of the harms to individuals also occurring. Then, after N incidents, the company pays a fine*, and does not necessarily make substantive changes. Superficial changes, yes, but not always meaningful ones that would prevent future harm to individuals.

*Do these fines tend to be used to compensate the affected individuals? I am not educated on that detail, and would appreciate info from someone who is.

replies(2): >>45813452 #>>45813572 #
gmueckl ◴[] No.45813572[source]
I don't recall the full stack of EU regulations in detail, but a requirement that an appeal to an actual human be possible after an automated decision is in there somewhere AFAIK (GDPR Article 22, I believe).
replies(4): >>45813653 #>>45813687 #>>45813697 #>>45813768 #
nalak ◴[] No.45813687[source]
If I were Google I would make it a point to have the human always confirm what the AI said.
replies(2): >>45813706 #>>45813788 #
badsectoracula ◴[] No.45813706[source]
Why?
replies(1): >>45813730 #
nalak ◴[] No.45813730[source]
Because humans cost a lot of money and I don’t want to train my users to think they can get a more favourable answer by asking to have a human review the decision.
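
A minimal sketch of the incentive nalak describes, in Python. All the names here (Decision, appeal(), the outcomes) are invented for illustration; this is not any real Google workflow or an API required by the regulation. The point is just that when the human reviewer's default action is to confirm the automated outcome, the appeal path exists on paper but never changes the result:

    from dataclasses import dataclass

    @dataclass
    class Decision:
        account_id: str
        outcome: str     # e.g. "suspended" or "reinstated"
        made_by: str     # "automated" or "human"

    def appeal(decision: Decision, reviewer_agrees: bool = True) -> Decision:
        # If the reviewer's default is to rubber-stamp the model's output,
        # the appeal re-labels the decision as human-made but keeps the outcome.
        if reviewer_agrees:
            return Decision(decision.account_id, decision.outcome, made_by="human")
        return Decision(decision.account_id, "reinstated", made_by="human")

    original = Decision("acct-123", "suspended", made_by="automated")
    reviewed = appeal(original)  # default path: human confirms the AI
    assert reviewed.outcome == original.outcome  # user gains nothing by appealing

Under that policy the overturn rate is zero by construction, which is exactly why users would stop bothering to appeal.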