And as the remedy starts being applied (aka "liability"), the enthusiasm for AI will start to wane.
I wouldn't be surprised if some businesses ban the use of AI --- starting with law firms.
And as the remedy starts being applied (aka "liability"), the enthusiasm for software will start to wane.
What, if anything, do you think is wrong with my analogy? I doubt most people here support strict liability for bugs in code.
Generally the law allows people to make mistakes, as long as a reasonable level of care is taken to avoid them (and you can even get away with carelessness if you owe no duty of care to the injured party). The law on what level of care is needed to verify genAI output is probably not well defined yet, but it definitely isn't going to be strict liability.
The emotionally driven hate for AI, even in a tech-centric forum, to the point that so many commenters seem thrown off balance in their rational thinking, is kinda wild to me.
Most things in life are not that well defined --- they're a matter of judgment.
AI is being applied in lots of real-world cases where judgment is required to interpret results. For example, "Does this patient have cancer?" And it is fairly easy to show that AI's judgment can be highly suspect. There are often legal implications for poor judgment --- e.g. medical malpractice.
Maybe you can argue that this is a misapplication of AI --- and I don't necessarily disagree --- but the point is, once the legal system makes this abundantly clear, the practical business case for AI is going to be severely reduced if humans still have to vet the results in every case.
Note that whether a person has cancer is generally well defined, even if it isn't obvious at first. If you just let the patient go untreated, you'll know the answer quite definitively within a couple of years.