
693 points jsheard | 2 comments
AnEro No.45093447
I really hope this stays up, despite the political angle. This situation is a perfect example of how AI hallucinations and lack of accuracy could significantly impact our lives going forward. A nuanced, serious topic with lots of back and forth gets distilled down to headlines by every source; that's a terrifying reality, especially if we can't communicate to the public how these tools work (if people even care to learn). At least when humans did this, they knew at some level that they had at least skimmed the information about the person or topic.
replies(8): >>45093755 #>>45093831 #>>45094062 #>>45094915 #>>45095210 #>>45095704 #>>45097171 #>>45097177 #
haswell No.45095704
One of the arguments used to justify the mass ingestion of copyrighted content to build these models is that the resulting model is a transformative work, and thus fair use.

If this is indeed true, it seems Google et al. must be liable for output like this by their own argument: if the model's output is a transformative work, it is their work, and they can't claim someone else is liable for it.

These companies can’t have their cake and eat it too. It’ll be interesting to see how this plays out.

replies(2): >>45100834 #>>45100992 #
1. pjc50 No.45100992
> These companies can’t have their cake and eat it too

I think you're underestimating the effect of billions of dollars on the legal system, and the likely impact of the Have Your Cake And Eat It Act 2026.

replies(1): >>45107727 #
2. fennecbutt No.45107727
Yup, corporations have owned us and the government for a long time. I don't know why people still act surprised about this.