Accountability sinks

(aworkinglibrary.com)
493 points by l0b0 | 31 comments
1. alilleybrinker ◴[] No.41892299[source]
Cathy O'Neil's "Weapons of Math Destruction" (2016, Penguin Random House) is a good companion to this concept, covering the "accountability sink" from the other side: that of those constructing or overseeing the systems.

Cathy argues that the use of algorithms in some contexts permits a new scale of harmful and unaccountable systems that ought to be reined in.

https://www.penguinrandomhouse.com/books/241363/weapons-of-m...

replies(4): >>41892714 #>>41892736 #>>41892843 #>>41900231 #
2. bigiain ◴[] No.41892714[source]
Brings to mind old wisdom:

"A computer can never be held accountable, therefore a computer must never make a Management Decision." IBM presentation, 1979

replies(4): >>41893321 #>>41893743 #>>41894623 #>>41895644 #
3. spencerchubb ◴[] No.41892736[source]
It's much easier to hold an algorithm accountable than an organization of humans. You can reprogram an algorithm. But good luck influencing an organization to change.
replies(2): >>41892817 #>>41893364 #
4. conradolandia ◴[] No.41892817[source]
That is not accountability. Can the algorithm be sent to jail if it commits crimes?
replies(3): >>41892911 #>>41892927 #>>41893184 #
5. dragonwriter ◴[] No.41892843[source]
"Cathy argues that the use of algorithm in some contexts permits a new scale of harmful and unaccountable systems that ought to be reigned in."

Algorithms are used by people. An algorithm only enables "harmful and unaccountable systems" if people, as the agents imposing accountability, choose not to hold the people acting by way of the algorithm accountable, on the basis that an algorithm was used. But that really has nothing to do with the algorithm. If you swapped in a specially-designated ritual sceptre for the algorithm in that sentence (or, perhaps more familiarly, allowed "status as a police officer" to confer both formal immunity from most civil liability and practical immunity from criminal prosecution for most harms done in that role), it would function exactly the same way: what enables harmful and unaccountable systems is humans choosing not to hold other humans accountable for harms, on whatever basis.

replies(1): >>41892866 #
6. alilleybrinker ◴[] No.41892866[source]
Yeah, I think you're conflating the arguments of "Weapons of Math Destruction" and "The Unaccountability Machine" here.

"The Unaccountability Machine," based on Mandy's summary in the OP, argues that organizations can become "accountability sinks" which make it impossible for anyone to be held accountable for problems those organizations cause. Put another way (from the perspective of their customers), they eliminate any recourse for problems arising from the organization which ought to in theory be able to address, but can't because of the form and function of the organization.

"Weapons of Math Destruction" argues that the scale of algorithmic systems often means that when harms arise, those harms happen to a lot of people. Cathy argues this scale itself necessitates treating these algorithmic systems differently because of their disproportionate possibility for harm.

Together, you can get big harmful algorithmic systems, able to operate at a scale that would be impossible without technology, which exist inside organizations that act as accountability sinks. So you get mass harm with no recourse to address it.

This is what I meant by the two pieces being complementary to each other.

7. Timwi ◴[] No.41892911{3}[source]
Yes. Not literally, of course, but it can be deleted/decommissioned, which is even more effective than temporary imprisonment (it's equivalent to the death penalty, but without the moral component, obviously).
replies(1): >>41892953 #
8. lucianbr ◴[] No.41892927{3}[source]
Is the point revenge or fixing the problem? Fixing the algorithm to never do that again is easy. Or is the point to instill fear?
replies(3): >>41893170 #>>41893289 #>>41893485 #
9. hammock ◴[] No.41892953{4}[source]
Why should it be obvious that the moral component is absent? Removing an algorithm is like reducing the set of choices available to society… roughly equivalent to a law or regulation, or worse, a destructive act of coercion. There are moral implications of laws even though laws are not human.
10. TeMPOraL ◴[] No.41893170{4}[source]
The point is that "accountability of an algorithm" is a category error.
replies(1): >>41893566 #
11. closeparen ◴[] No.41893184{3}[source]
Interesting that you mention jail… the rule of law is kind of the ultimate accountability sink.
12. lazide ◴[] No.41893289{4}[source]
The point of accountability is to deter harmful activity by ensuring actions/decisions somewhere result in consequences for those who are responsible for them. Those consequences can be good or bad, though the term is often used to refer to bad ones.

An algorithm has no concept of consequences (unless programmed to be aware of such), and the more plausibly whoever wrote it can deny knowledge of the resulting consequences, the more easily they can avoid consequences/accountability themselves. After all, we can tell soldiers or clerks that ‘just following orders’ is no excuse. But computers don’t do anything but follow orders.

Most people/organizations/etc have strong incentives to be able to avoid negative consequences, regardless of their actions or the results of their actions.

Everyone around them has strong incentives to ensure negative consequences for actions with foreseeable negative outcomes are applied to them.

Sometimes, organizations and people will find a way for the consequences of their actions to be borne by other people that have no actual control or ability to change actions being performed (scapegoat). Accountability ideally should not refer to that situation, but sometimes is abused to mean that.

That tends to result in particularly nasty outcomes.

replies(1): >>41893522 #
13. k1t ◴[] No.41893321[source]
"A computer can never be held accountable, therefore all Management Decisions shall be made by a computer." - Management, 2 seconds later.
replies(1): >>41893758 #
14. rini17 ◴[] No.41893364[source]
You now have to find someone who is not only responsible for the algorithm but also competent and permitted to change it. Isn't it clear that this is very hard?
15. melagonster ◴[] No.41893485{4}[source]
If an algorithm can do something wrong but nobody is held responsible for it, everyone will just hide their crimes behind an algorithm and replace it when someone finds the problem.
replies(1): >>41893561 #
16. lucianbr ◴[] No.41893522{5}[source]
> The point of accountability is to deter harmful activity by ensuring actions/decisions somewhere result in consequences

What I read is: yes, the point is revenge. If I can offer you a different way of preventing harmful activity, apparently you're not interested. There have to be some unpleasant consequences inflicted; you insist on it.

I think you should reconsider.

replies(3): >>41893612 #>>41893755 #>>41894551 #
17. lucianbr ◴[] No.41893561{5}[source]
If a mechanical device does something wrong, are we in the same conundrum?

I don't see what the problem is. There's malice, there's negligence, and there's accident. We can figure out which it was, and act accordingly. Must we collapse these to a single situation with a single solution?

replies(1): >>41900061 #
18. lucianbr ◴[] No.41893566{5}[source]
That's reasonable. Let's just call it root cause analysis in this case.

The original point seemed to me to be "we can't use computers because they're not accountable". I say, we can, because we can do fault analysis and fix what is wrong. I won't say "we can hold them accountable", to avoid the category error.

replies(1): >>41894522 #
19. lazide ◴[] No.41893612{6}[source]
Every rule/boundary/structure needs both a carrot, and a stick, to continue to exist long term.

Ideally, the stick never gets used. We aren’t dealing with ideals, however, we have to deal with reality.

On any sufficiently large scale, an inability or lack of will to use the stick results in wide-scale malfeasance, because other constraints elsewhere create a wide-scale push to break those rules/boundaries/structures for competitive reasons.

No carrot magnifies the need to use the stick, eh? And turns it into nothing but beatings, which is not sustainable either.

It has nothing to do with revenge. But if it makes you feel more comfortable, go ahead and call it that.

It’s ensuring cause and effect get coupled usefully, and that is necessary for proper conditioning and learning. One cannot learn properly if there is no ‘failure’ consequence, correct?

All you need to do to verify this is, literally, look around at the structures you see everywhere, and what happens when they are or are not enforced. (Aka accountability vs a lack of it).

20. lifeisstillgood ◴[] No.41893743[source]
Admittedly the context matters: “we are trying to sell to Management, therefore let’s butter them up and tell them they make great decisions and they won’t get automated away,” while the next page of the presentation says “we will automate away 50% of the people working for you, saving globs of money for your next bonus.”

IBM in 1979 was not doing anything different from IBM in 2024. They were just more relevant.

21. KronisLV ◴[] No.41893755{6}[source]
I think they’re just right in this case.

Suppose I’m a bad actor that creates an unfair algorithm that overcharges the clients of my company. Eventually it’s discovered. The algorithm could be fixed, the servers decommissioned, whatever, but I’ve already won. If the people who requested the algorithm be made that way, and the people who implemented it or ran it, see no consequences, there’s absolutely nothing preventing me from doing the same thing another time, elsewhere.

Punishment for fraud seems sane, regardless of whether it’s enabled by code or me cooking some books by hand.

replies(1): >>41894125 #
22. lifeisstillgood ◴[] No.41893758{3}[source]
Therefore all management decisions are made by the people writing the code.

Hence coders are the new managers; managers just funnel the money around, a job which can be automated.

replies(1): >>41895620 #
23. lazide ◴[] No.41894125{7}[source]
One could even argue (from a raw individual utility perspective - aka selfish) that if the person/people who did that suffered no negative consequences, they’d be fools to not do it again elsewhere.

The evolutionary function certainly encourages it, correct?

Ignoring that means that whoever chooses not to apply consequences becomes actually culpable in the bad behavior occurring.

Especially if nothing changed re: rules or enforcement, etc.

24. sethammons ◴[] No.41894522{6}[source]
I think folks may have different interpretations of accountability.

If your algorithm kills someone, is the accountability an improvement to the algorithm? A fine and no change to the algorithm? Imprisonment for related humans? Dissolution of some legal entity?

25. ◴[] No.41894551{6}[source]
26. heresie-dabord ◴[] No.41894623[source]
> presentation, 1979

= Presentation, 21st Century

A computer is not alive. A computer system is a tool that can do harm. It can be disconnected or unplugged like any tool in a machine shop that begins to do harm or damage. But a tool is not responsible. Only people are responsible. Accountability is anchored in reality by personal cost.

= Notes

Management calculates the cost of not unplugging the computer that is doing harm. Management often calculates that it is possible to pay the monetary cost for the harm done.

People in management will abdicate personal responsibility. People try to avoid paying personal cost.

We often hold people accountable by forcing them to give back (e.g. community service, monetary fines, return of property), by sacrificing their reputation in one or more domains, by putting them in jail (they pay with their time), or in some societies, by putting them to death ("pay" with their lives).

Accountability is anchored in reality by personal cost.

27. intelVISA ◴[] No.41895620{4}[source]
Soon(tm)
28. RobotToaster ◴[] No.41895644[source]
See also: “To err is human, but to really foul things up requires a computer.” —Paul Ehrlich
replies(1): >>41897792 #
29. chgs ◴[] No.41897792{3}[source]
To err requires a computer

To really foul things up requires scalability

30. melagonster ◴[] No.41900061{6}[source]
>If a mechanical device does something wrong, are we in the same conundrum?

Sure! But the OP mentioned a phenomenon where some companies can hide behind algorithms and refuse to take responsibility. If a machine causes damage, people can easily find out whose fault it is, but sometimes the same approach does not work for software.

31. stavros ◴[] No.41900231[source]
I want to note here that this is illegal in the EU. Any company that makes decisions algorithmically (EDIT: actually, by an AI, so maybe not entirely applicable here) must give people the ability to escalate to a human, and must be able to tell the user why that decision was made the way it was made.