504 points puttycat | 12 comments
1. jqpabc123 ◴[] No.46181580[source]
The legal system has a word to describe AI "slop" --- it is called "negligence".

And as the remedy starts being applied (aka "liability"), the enthusiasm for AI will start to wane.

I wouldn't be surprised if some businesses ban the use of AI --- starting with law firms.

replies(2): >>46181948 #>>46183051 #
2. loloquwowndueo ◴[] No.46181948[source]
I applaud your use of triple dashes to avoid automatic conversion to em dashes and being labeled an AI. Kudos!
replies(1): >>46182247 #
3. ghaff ◴[] No.46182247[source]
This is a particular meme that I really don't like. I've used em-dashes routinely for years. Do I need to stop using them because various people assume they're an AI flag?
replies(1): >>46182716 #
4. TimedToasts ◴[] No.46182716{3}[source]
No, but you should be prepared to have people suspect you are using AI to create your responses.

C'est la vie.

The good news is that it will rectify itself and soon the output will lack even these signals.

replies(1): >>46183231 #
5. ls612 ◴[] No.46183051[source]
The legal system has a word to describe software bugs --- it is called "negligence".

And as the remedy starts being applied (aka "liability"), the enthusiasm for software will start to wane.

What, if anything, do you think is wrong with my analogy? I doubt most people here support strict liability for bugs in code.

replies(3): >>46183588 #>>46184397 #>>46186645 #
6. ghaff ◴[] No.46183231{4}[source]
Well, I work for myself and people can either judge my work on its own merits or not. Don't care all that much.
7. hnfong ◴[] No.46183588[source]
I don't even think GP knows what negligence is.

Generally the law allows people to make mistakes, as long as a reasonable level of care is taken to avoid them (and you can get away with carelessness if you don't owe a duty of care to the injured party). The law on what level of care is needed to verify genAI output is probably not well defined yet, but it definitely isn't going to be strict liability.

The emotionally driven hate for AI, even on a tech-centric forum, to the point that so many commenters seem off-balance in their reasoning, is kinda wild to me.

replies(1): >>46183730 #
8. ls612 ◴[] No.46183730{3}[source]
I don't get it --- tech people clearly have the most to gain from AI tools like Claude Code.
replies(1): >>46192481 #
9. senshan ◴[] No.46184397[source]
Very good analogy indeed. With one modification it makes perfect sense:

> And as the remedy starts being applied (aka "liability"), the enthusiasm for sloppy and poorly tested software will start to wane.

Many of us use AI to write code these days, but the burden is still on us to design and run all the tests.
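
As a minimal sketch of what that looks like (the parse_price helper is hypothetical, standing in for AI-generated code): the model writes the function, but the edge cases in the tests still come out of our heads.

    # Hypothetical AI-generated helper: plausible-looking, but unverified.
    def parse_price(text: str) -> float:
        """Parse a price string like '$1,299.99' into a float."""
        return float(text.replace("$", "").replace(",", "").strip())

    # Human-designed tests: deciding what "correct" means is still our job.
    assert parse_price("$1,299.99") == 1299.99
    assert parse_price("  $5 ") == 5.0
    assert parse_price("0.99") == 0.99
    print("all tests passed")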

10. jqpabc123 ◴[] No.46186645[source]
> What, if anything, do you think is wrong with my analogy?

I think what is clearly wrong with your analogy is assuming that AI applies mostly to software and code production. This is actually a minor use-case for AI.

Government and businesses of all types --- doctors, lawyers, airlines, delivery companies, etc. --- are attempting to apply AI to uses and situations that can't be tested in advance the same way "vibe" code can. And some of the adverse results have already been ruled on in court.

https://www.evidentlyai.com/blog/ai-failures-examples

11. jqpabc123 ◴[] No.46192481{4}[source]
Computer code is highly deterministic. This allows it to be tested fairly easily. Unfortunately, code production is not the only use-case for AI.

Most things in life are not as well defined --- a matter of judgment.

AI is being applied in lots of real-world cases where judgment is required to interpret results. For example: "Does this patient have cancer?" And it is fairly easy to show that AI's judgment can be highly suspect. There are often legal implications for poor judgment --- i.e. medical malpractice.
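
To make the contrast concrete, here is a toy sketch (the predict_cancer stub is hypothetical, standing in for a real model): the deterministic function admits an exact pass/fail assertion, while the model's judgment can only be scored statistically, after the fact, against labeled cases.

    import random

    # Deterministic code: same input, same output, so an exact assertion works.
    def sort_desc(xs):
        return sorted(xs, reverse=True)

    assert sort_desc([3, 1, 2]) == [3, 2, 1]  # unambiguous pass/fail

    # AI judgment (hypothetical stub): there is no exact expected output per
    # case, only an error rate over labeled data that arrives after the fact.
    def predict_cancer(scan: str) -> bool:
        return random.random() < 0.5  # stand-in for a model's output

    labeled = [("scan_a", True), ("scan_b", False), ("scan_c", True)]
    accuracy = sum(predict_cancer(s) == y for s, y in labeled) / len(labeled)
    print(f"accuracy: {accuracy:.0%}")  # "good enough?" is a liability question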

Maybe you can argue that this is a misapplication of AI --- and I don't necessarily disagree --- but the point is, once the legal system makes this abundantly clear, the practical business case for AI is going to be severely reduced if humans still have to vet the results in every case.

replies(1): >>46206240 #
12. hnfong ◴[] No.46206240{5}[source]
Why do you think AI is inherently worse than humans at judging whether a patient has cancer, assuming it is given the same information as the human doctor? Is there some fundamental limitation that makes AI worse, or are you simply projecting your personal trust in human doctors? (Note that given the speed of AI progress, and that we're talking about what the law ought to be, not what it was in the past, AI's past performance on cancer cases doesn't have much relevance unless a fundamental issue with AI is identified.)

Note that whether a person has cancer is generally well-defined, although it may not be obvious at first. If you just let the patient go untreated, you'll know the answer quite definitively within a couple of years.