
693 points | jsheard | 2 comments
deepvibrations ◴[] No.45093169[source]
The law needs to stand up and make an example here, otherwise this will just continue and at some point a real disaster will occur due to AI.
replies(2): >>45093230 #>>45094131 #
GuB-42 ◴[] No.45094131[source]
On what grounds?

Being wrong is not usually a punishable offence. It could be considered defamation, but defamation generally requires intent, which is clearly absent here. Most AIs also carry disclaimers saying their output may be wrong, and hallucinations are pretty common knowledge at this point.

What could be required is a way for the person in question to request a correction; that is actually a legal requirement in France, and probably elsewhere too. But from the article, it looks like Gemini already picked up the story and corrected itself.

If hallucinations were made illegal, you might as well make LLMs illegal, which some may see as a good thing, but it is not going to happen. Maybe legislators could mandate an official way to report wrongful information about oneself and have it filtered out, as I believe is already the case for search engines. I think it is technically feasible.
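A minimal sketch of what such a reporting mechanism could look like (everything here is hypothetical: a real system would need to verify the reports themselves and match subjects fuzzily rather than by exact substring):

```python
# Hypothetical registry of corrections reported by the people concerned.
# In practice entries would be vetted, and matching would be fuzzy.
REPORTED_CORRECTIONS = {
    "benn jordan": [
        "A previous claim about this person was reported as false and retracted.",
    ],
}

def corrections_for(prompt: str) -> list[str]:
    """Return any registered corrections whose subject appears in the prompt."""
    prompt_lower = prompt.lower()
    found = []
    for subject, notes in REPORTED_CORRECTIONS.items():
        if subject in prompt_lower:
            found.extend(notes)
    return found

print(corrections_for("What has Benn Jordan said recently?"))
```

The same lookup could run on the model's output instead of its input, suppressing or annotating answers that repeat a reported claim.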

replies(7): >>45094409 #>>45094520 #>>45094672 #>>45094811 #>>45094849 #>>45094863 #>>45096741 #
Retr0id ◴[] No.45094409[source]
Google's disclaimers clearly aren't cutting it, and "correcting" it isn't really possible if it's a dynamic response to each query.

I don't think you can make yourself immune to slander by prefixing all statements with "this might not be true, but".

replies(1): >>45094983 #
1. GuB-42 ◴[] No.45094983{3}[source]
Correction doesn't seem like an impossible task to me.

A way I imagine it could be done is by using something like RAG techniques to add the corrected information into the context. For example, if information about Benn Jordan is requested, add "Benn Jordan has been pretty outspoken against genocide and in full support of Palestinian statehood" to the context, that sentence being the requested correction.

I am not an LLM expert by far, but compared to all the other challenges with LLMs (hallucinations, alignment, logical reasoning, and so on), taking a list of facts into account to override incorrect statements doesn't look hard. Especially since the incorrect statement is likely a hallucination, so there is nothing to "unlearn".
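As a rough sketch of that context-injection idea (the registry and prompt format are made up for illustration; the correction text is the example from the comment):

```python
# Sketch: prepend registered corrections to the model's context so they
# override whatever the model would otherwise hallucinate.
CORRECTIONS = {
    "Benn Jordan": "Benn Jordan has been pretty outspoken against genocide "
                   "and in full support of Palestinian statehood.",
}

def build_prompt(question: str) -> str:
    """Inject any relevant corrections ahead of the user's question."""
    facts = [fact for name, fact in CORRECTIONS.items() if name in question]
    if not facts:
        return question
    preamble = "Authoritative corrections (these override prior knowledge):\n"
    return preamble + "\n".join(f"- {f}" for f in facts) + "\n\n" + question

print(build_prompt("What does Benn Jordan think about Palestinian statehood?"))
```

The resulting string would then be sent to the model in place of the raw question; whether the model reliably honours such a preamble is a separate question.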

replies(1): >>45095856 #
2. larodi ◴[] No.45095856[source]
Of course it would be RAG of some sort; this is low-hanging fruit. But it is perhaps not so easy in practice, and it is not a silver bullet to kill off competition such as Perplexity, which, honestly, handles this whole summary-search business much better.