
111 points | rabinovich | 1 comment
greatgib ◴[] No.45813260[source]
We often make fun of stupid European regulations, like the AI ones, but it is in exactly a case like this that they are useful: to ensure that this could not happen when companies like these have such a monopoly that users have no power.
replies(2): >>45813358 #>>45813449 #
chao- ◴[] No.45813358[source]
Do those regulations really "ensure that [incidents like this] could not happen"?

I ask this in good faith, because my observation of the last few years is that the incidents still occur, with all of the harms to individuals still occurring. Then, after some number of incidents, the company pays a fine*, and the company does not necessarily make substantive changes. Superficial changes, yes, but not always meaningful changes that would prevent future harms to individuals.

*Do these fines tend to be used to compensate the affected individuals? I am not educated on that detail, and would appreciate info from someone who is.

replies(2): >>45813452 #>>45813572 #
gmueckl ◴[] No.45813572[source]
I don't recall the full stack of EU regulations in detail, but AFAIK there is a requirement in there somewhere that an appeal to an actual human must be possible after an automated decision.
replies(4): >>45813653 #>>45813687 #>>45813697 #>>45813768 #
nalak ◴[] No.45813687[source]
If I were Google, I would make it a point to have the human always confirm what the AI said.
replies(2): >>45813706 #>>45813788 #
sidewndr46 ◴[] No.45813788{3}[source]
That'd likely be a violation of some kind of law, but you could probably work to have HR ensure that the various teams were aligned on the goals of the operational attributes the company finds necessary to produce an environment which maximizes the opportunities for individuals to contribute without fear of repression.