Accountability sinks

(aworkinglibrary.com)
493 points l0b0 | 5 comments
alilleybrinker ◴[] No.41892299[source]
Cathy O'Neil's "Weapons of Math Destruction" (2016, Penguin Random House) is a good companion to this concept, covering the "accountability sink" from the other side of those constructing or overseeing systems.

Cathy argues that the use of algorithms in some contexts permits a new scale of harmful and unaccountable systems that ought to be reined in.

https://www.penguinrandomhouse.com/books/241363/weapons-of-m...

replies(4): >>41892714 #>>41892736 #>>41892843 #>>41900231 #
spencerchubb ◴[] No.41892736[source]
It's much easier to hold an algorithm accountable than an organization of humans. You can reprogram an algorithm. But good luck influencing an organization to change.
replies(2): >>41892817 #>>41893364 #
conradolandia ◴[] No.41892817[source]
That is not accountability. Can the algorithm be sent to jail if it commits crimes?
replies(3): >>41892911 #>>41892927 #>>41893184 #
lucianbr ◴[] No.41892927[source]
Is the point revenge or fixing the problem? Fixing the algorithm to never do that again is easy. Or is the point to instill fear?
replies(3): >>41893170 #>>41893289 #>>41893485 #
lazide ◴[] No.41893289[source]
The point of accountability is to deter harmful activity by ensuring actions/decisions somewhere result in consequences for those responsible for them. Those consequences can be good or bad, though the term is usually used to refer to bad ones.

An algorithm has no concept of consequences (unless programmed to be aware of them), and the more plausibly whoever wrote it can deny knowledge of the resulting harms, the more easily they can avoid consequences/accountability themselves. After all, we can tell soldiers or clerks that ‘just following orders’ is no excuse. But computers don’t do anything but follow orders.

Most people/organizations/etc have strong incentives to be able to avoid negative consequences, regardless of their actions or the results of their actions.

Everyone around them has strong incentives to ensure negative consequences for actions with foreseeable negative outcomes are applied to them.

Sometimes, organizations and people will find a way for the consequences of their actions to be borne by others who have no actual control over, or ability to change, the actions being performed (a scapegoat). Accountability ideally should not refer to that situation, but the word is sometimes abused to mean exactly that.

That tends to result in particularly nasty outcomes.

replies(1): >>41893522 #
1. lucianbr ◴[] No.41893522[source]
> The point of accountability is to deter harmful activity by ensuring actions/decisions somewhere result in consequences

What I read is: yes, the point is revenge. If I can offer you a different way of preventing harmful activity, apparently you're not interested. There have to be some unpleasant consequences inflicted; you insist on it.

I think you should reconsider.

replies(3): >>41893612 #>>41893755 #>>41894551 #
2. lazide ◴[] No.41893612[source]
Every rule/boundary/structure needs both a carrot and a stick to continue to exist long term.

Ideally, the stick never gets used. We aren’t dealing with ideals, however, we have to deal with reality.

On any sufficiently large scale, an inability or unwillingness to use the stick results in wide-scale malfeasance, because constraints elsewhere create a wide-scale push to break those rules/boundaries/structures for competitive reasons.

No carrot magnifies the need to use the stick, eh? And turns it into nothing but beatings. Which is not sustainable either.

It has nothing to do with revenge. But if it makes you feel more comfortable, go ahead and call it that.

It’s ensuring cause and effect get coupled usefully. And it is necessary for proper conditioning and learning. One cannot learn properly if there is no ‘failure’ consequence, correct?

All you need to do to verify this is, literally, look around at the structures you see everywhere, and what happens when they are or are not enforced (aka accountability vs a lack of it).

3. KronisLV ◴[] No.41893755[source]
I think they’re just right in this case.

Suppose I’m a bad actor who creates an unfair algorithm that overcharges the clients of my company. Eventually it’s discovered. The algorithm could be fixed, the servers decommissioned, whatever, but I’ve already won. If the people who requested the algorithm be made that way, and the people who implemented or ran it, see no consequences, there’s absolutely nothing preventing me from doing the same thing another time, elsewhere.

Punishment for fraud seems sane, regardless of whether it’s enabled by code or me cooking some books by hand.

replies(1): >>41894125 #
4. lazide ◴[] No.41894125[source]
One could even argue (from a raw individual-utility perspective, aka selfishness) that if the people who did that suffered no negative consequences, they’d be fools not to do it again elsewhere.

The evolutionary function certainly encourages it, correct?

Ignoring that means that failing to apply consequences makes one actually culpable in the bad behavior occurring.

Especially if nothing changed re: rules or enforcement, etc.
