
Accountability sinks

(aworkinglibrary.com)
493 points by l0b0 | 17 comments
alilleybrinker ◴[] No.41892299[source]
Cathy O'Neil's "Weapons of Math Destruction" (2016, Penguin Random House) is a good companion to this concept, covering the "accountability sink" from the other side: that of the people constructing or overseeing such systems.

Cathy argues that the use of algorithms in some contexts permits a new scale of harmful and unaccountable systems that ought to be reined in.

https://www.penguinrandomhouse.com/books/241363/weapons-of-m...

replies(4): >>41892714 #>>41892736 #>>41892843 #>>41900231 #
spencerchubb ◴[] No.41892736[source]
It's much easier to hold an algorithm accountable than an organization of humans. You can reprogram an algorithm. But good luck influencing an organization to change.
replies(2): >>41892817 #>>41893364 #
1. conradolandia ◴[] No.41892817[source]
That is not accountability. Can the algorithm be sent to jail if it commits crimes?
replies(3): >>41892911 #>>41892927 #>>41893184 #
2. Timwi ◴[] No.41892911[source]
Yes. Not literally, of course, but it can be deleted/decommissioned, which is even more effective than temporary imprisonment (it's equivalent to the death penalty, but obviously without the moral component).
replies(1): >>41892953 #
3. lucianbr ◴[] No.41892927[source]
Is the point revenge or fixing the problem? Fixing the algorithm to never do that again is easy. Or is the point to instill fear?
replies(3): >>41893170 #>>41893289 #>>41893485 #
4. hammock ◴[] No.41892953[source]
Why should it be obvious that the moral component is absent? Removing an algorithm is like reducing the set of choices available to society… roughly equivalent to a law or regulation, or worse, a destructive act of coercion. There are moral implications of laws even though laws are not human.
5. TeMPOraL ◴[] No.41893170[source]
The point is that "accountability of an algorithm" is a category error.
replies(1): >>41893566 #
6. closeparen ◴[] No.41893184[source]
Interesting that you mention jail… the rule of law is kind of the ultimate accountability sink.
7. lazide ◴[] No.41893289[source]
The point of accountability is to deter harmful activity by ensuring actions/decisions somewhere result in consequences for those responsible for them. Those consequences can be good or bad, though the term is most often used to refer to the bad.

An algorithm has no concept of consequences (unless programmed to be aware of them), and the more plausibly whoever wrote it can deny knowledge of the resulting harm, the more easily they can avoid consequences/accountability themselves. After all, we can tell soldiers or clerks that ‘just following orders’ is no excuse. But computers don’t do anything but follow orders.
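
As a minimal sketch (hypothetical, in Python, not from the article) of what ‘just following orders’ looks like in code, consider a decision rule with no model of the harm it can cause:

    # Hypothetical sketch: a rule that only "follows orders".
    # It has no model of the harm a denial causes; any awareness of
    # consequences would have to be programmed in explicitly.
    def approve_claim(amount_cents: int, risk_score: float) -> bool:
        THRESHOLD = 0.7  # chosen by someone, somewhere, for some reason
        return risk_score < THRESHOLD and amount_cents < 500_000

    print(approve_claim(100_000, 0.71))  # False -- and the code cannot say why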

Most people/organizations/etc have strong incentives to be able to avoid negative consequences, regardless of their actions or the results of their actions.

Everyone around them, in turn, has strong incentives to ensure that negative consequences are applied to them for actions with foreseeable negative outcomes.

Sometimes, organizations and people will find a way for the consequences of their actions to be borne by others who have no actual control over, or ability to change, the actions being performed (a scapegoat). Accountability ideally should not refer to that situation, but the term is sometimes abused to mean exactly that.

That tends to result in particularly nasty outcomes.

replies(1): >>41893522 #
8. melagonster ◴[] No.41893485[source]
If an algorithm can do something wrong but nobody is held responsible for it, everyone will just hide their crimes behind an algorithm and replace it when someone finds a problem.
replies(1): >>41893561 #
9. lucianbr ◴[] No.41893522{3}[source]
> The point of accountability is to deter harmful activity by ensuring actions/decisions somewhere result in consequences

What I read is: yes, the point is revenge. If I can offer you a different way of preventing harmful activity, apparently you're not interested. There have to be some unpleasant consequences inflicted; you insist on it.

I think you should reconsider.

replies(3): >>41893612 #>>41893755 #>>41894551 #
10. lucianbr ◴[] No.41893561{3}[source]
If a mechanical device does something wrong, are we in the same conundrum?

I don't see what the problem is. There's malice, there's negligence, and there's accident. We can figure out which it was, and act accordingly. Must we collapse these to a single situation with a single solution?

replies(1): >>41900061 #
11. lucianbr ◴[] No.41893566{3}[source]
That's reasonable. Let's just call it root cause analysis in this case.

The original point seemed to me to be "we can't use computers because they're not accountable". I say, we can, because we can do fault analysis and fix what is wrong. I won't say "we can hold them accountable", to avoid the category error.

replies(1): >>41894522 #
12. lazide ◴[] No.41893612{4}[source]
Every rule/boundary/structure needs both a carrot, and a stick, to continue to exist long term.

Ideally, the stick never gets used. We aren’t dealing with ideals, however, we have to deal with reality.

On any sufficiently large scale, an inability or lack of will to use the stick results in wide-scale malfeasance, because other constraints elsewhere create a wide-scale push to break those rules/boundaries/structures for competitive reasons.

No carrot magnifies the need to use the stick, eh? And turns it into nothing but beatings, which is not sustainable either.

It has nothing to do with revenge. But if it makes you feel more comfortable, go ahead and call it that.

It’s ensuring cause and effect get coupled usefully. And it is necessary for proper conditioning and learning. One cannot learn properly if there is no ‘failure’ consequence, correct?

All you need to do to verify this is, literally, look around at the structures you see everywhere, and what happens when they are or are not enforced. (Aka accountability vs a lack of it).

13. KronisLV ◴[] No.41893755{4}[source]
I think they’re just right in this case.

Suppose I’m a bad actor who creates an unfair algorithm that overcharges the clients of my company. Eventually it’s discovered. The algorithm could be fixed, the servers decommissioned, whatever, but I’ve already won. If the people who requested the algorithm be made that way, and the people who implemented or ran it, see no consequences, there’s absolutely nothing preventing me from doing the same thing another time, elsewhere.

Punishment for fraud seems sane, regardless of whether it’s enabled by code or me cooking some books by hand.
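
To make that concrete, here’s a minimal sketch (hypothetical names, assuming a simple billing function; not from the thread) of how such an overcharge can hide inside innocuous-looking code:

    # Hypothetical sketch: the overcharge hides behind a vaguely named
    # "calibration factor", so no invoice line points back at whoever
    # requested it -- "the algorithm" takes the blame when it's found.
    def quote_price(base_cents: int) -> int:
        adjustment = 1.029  # a quiet 2.9% skim on every charge
        return round(base_cents * adjustment)

    print(quote_price(10_000))  # 10290 -- plausible on any single invoice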

replies(1): >>41894125 #
14. lazide ◴[] No.41894125{5}[source]
One could even argue (from a raw individual utility perspective - aka selfish) that if the person/people who did that suffered no negative consequences, they’d be fools to not do it again elsewhere.

The evolutionary function certainly encourages it, correct?

Ignoring that incentive means that not applying consequences makes one actually culpable in the bad behavior that occurs.

Especially if nothing changed re: rules or enforcement, etc.

15. sethammons ◴[] No.41894522{4}[source]
I think folks may have different interpretations of accountability.

If your algorithm kills someone, is the accountability an improvement to the algorithm? A fine and no change to the algorithm? Imprisonment for related humans? Dissolution of some legal entity?

16. ◴[] No.41894551{4}[source]
17. melagonster ◴[] No.41900061{4}[source]
>If a mechanical device does something wrong, are we in the same conundrum?

Sure! But the OP mentioned a phenomenon where some companies can hide behind algorithms and refuse to take responsibility. If a machine causes damage, people can easily find whose fault it is, but sometimes the same approach does not work for software.