
Death by AI

(davebarry.substack.com)
583 points by ano-ther | 1 comment
rf15 ◴[] No.44619135[source]
So many reports like this, it's not a question of working out the kinks. Are we getting close to our very own Stop the Slop campaign?
replies(3): >>44619360 #>>44619459 #>>44621943 #
trod1234 ◴[] No.44619459[source]
Regulation with active enforcement is the only civil way.

The whole point of regulation is for when the profit motive forces companies towards destructive ends for the majority of society. The companies are legally obligated to seek profit above all else, absent regulation.

replies(1): >>44619938 #
Aurornis ◴[] No.44619938[source]
> Regulation with active enforcement is the only civil way.

What regulation? What enforcement?

These terms are useless without details. Are we going to fine LLM providers every time their output is wrong? That’s the kind of proposition that sounds good as a passing angry comment but obviously has zero chance of becoming a real regulation.

Any country that instituted a regulation like that would see all of the LLM advancements and research instantly leave and move to other countries. People who use LLMs would sign up for VPNs and carry on with their lives.

replies(2): >>44620395 #>>44621394 #
trod1234 ◴[] No.44620395{3}[source]
Regulations exist to override profit motive when corporations are unable to police themselves.

Enforcement ensures accountability.

Fines don't do much in a fiat money-printing environment.

Enforcement is accountability, the kind that stakeholders pay attention to.

Something appropriate would be this: if AI is used in a safety-critical or life-sustaining environment and harm or loss results, those who chose to use it are guilty until they prove themselves innocent. I think that would be sufficient, not just civil but also criminal liability, with the responsible person and the decision documented ahead of time.

> Any country that instituted a regulation like that would see all of the LLM advancements and research instantly leave and move to other countries.

This is a fallacy. It's a spectrum: research would still occur, tempered by law and accountability, instead of the wild west where it's much more profitable to destroy everything through chaos. Chaos is quite profitable until it spreads systemically and ends everything.

AI integration at a point where it can impact the operation of nuclear power plants through interference (perceptual or otherwise) is just asking for a short path to extinction.

It's quite reasonable that the needs of national security trump private businesses making profit in destructive ways.

replies(1): >>44621072 #
Ukv ◴[] No.44621072{4}[source]
> Something appropriate would be where if AI was used in a safety-critical or life-sustaining environment and harm or loss was caused; those who chose to use it are guilty until they prove they are innocent I think would be sufficient, not just civil but also criminal

Would this guilty-until-proven-innocent rule apply also to non-ML code and manual decisions? If not, I feel it's kind of arbitrarily deterring certain approaches potentially at the cost of safety ("sure this CNN blows traditional methods out of the water in terms of accuracy, but the legal risk isn't worth it").

In most cases I think it'd make more sense to have fines and incentives for above-average and below-average incident rates (and liability for negligence in the worse cases), then let methods win/fail on their own merit.

replies(1): >>44621652 #
trod1234 ◴[] No.44621652{5}[source]
> Would this guilty-until-proven-innocent rule apply also to non-ML code and manual decisions?

I would say yes, because the person deciding must be the one making the entire decision; there are many examples where someone is paid to just rubber-stamp decisions already made, letting the person who decided to implement the solution off scot-free.

The mere presence of AI (anything based on the underlying work of perceptrons) accompanied by a loss should prompt a thorough review, which corporations are currently incapable of performing for themselves due to a lack of consequences and accountability. Lack of disclosure, and the limits of current standing, are further issues that call for this approach.

The problem with fines is that they don't provide the needed incentives to large entities, which can absorb them through money-printing via debt issuance or, indirectly, through government contracts. It's also far easier for these entities, as market leaders, to employ corruption to work around a fine later. We've seen this a number of times across markets and sectors, such as JPM and the 10+ year silver price-fixing scandal.

Merit based on subjective rates isn't something that can be enforced, because it is so easily manipulated. Gross negligence already exists and occurs frighteningly often, but it rarely makes it to court because proof often requires showing standing to get discovery, which isn't generally granted absent a smoking gun or the whim of a judge.

Bad things certainly happen where no one is at fault, but most business structures today are given far too much leeway and have promoted the three Ds: deny, defend, depose.

replies(1): >>44621969 #
Ukv ◴[] No.44621969{6}[source]
> > Would this guilty-until-proven-innocent rule apply also to non-ML code and manual decisions?

> I would say yes [...]

So if you're a doctor making manual decisions about how to treat a patient, and some harm/loss occurs, you'd be criminally guilty-until-proven-innocent? I feel it should require evidence of negligence (or malice), and be done under standard innocent-until-proven-guilty rules.

> The mere presence of AI (anything based on underlying work of perceptrons) [...]

Why single out based on underlying technology? If for instance we're choosing a tumor detector, I'd claim what's relevant is "Method A has been tested to achieve 95% AUROC, method B has been tested to achieve 90% AUROC" - there shouldn't be an extra burden in the way of choosing method A.

And it may well be that the perceptron-based method is the one with lower AUROC - just that it should then be discouraged because it's worse than the other methods, not because a special case puts it at a unique legal disadvantage even when safer.
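The comparison described above, preferring whichever detector measures better on held-out data, can be sketched in a few lines of Python. This is a toy illustration: the labels and scores are made up, and AUROC is computed here directly as the probability that a random positive example outscores a random negative one (ties count as half).

```python
def auroc(labels, scores):
    """AUROC as the probability that a randomly chosen positive
    example scores higher than a randomly chosen negative one
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical held-out test set: 1 = tumor present, 0 = absent.
labels   = [0, 0, 0, 1, 1, 1]
method_a = [0.1, 0.4, 0.2, 0.35, 0.8, 0.9]  # e.g. a CNN's scores (made up)
method_b = [0.3, 0.6, 0.1, 0.2, 0.7, 0.5]   # e.g. a traditional method (made up)

# Pick whichever method wins on measured merit.
best = max([("method A", method_a), ("method B", method_b)],
           key=lambda kv: auroc(labels, kv[1]))[0]
```

The point of the sketch is that the selection criterion is blind to the underlying technology: only the measured incident/accuracy rate decides, which is the regime the comment argues for.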

> The problem of fines is that they don't provide the needed incentives to large entities as a result of money-printing through debt-issuance, or indirectly through government contracts.

Large enough fines/rewards should provide large enough incentive (and there would still be liability for criminal negligence where there is sufficient evidence of criminal negligence). Those government contracts can also be conditioned on meeting certain safety standards.

> Merit of subjective rates isn't something that can be enforced

We can and do measure things like incident rates, and we have government agencies that perform or require safety testing and can block products from market. Not always perfect, but that seems better to me than the company just picking a scapegoat.

replies(2): >>44622804 #>>44622996 #