504 points by puttycat | 3 comments

theoldgreybeard ◴[] No.46182214[source]
If a carpenter builds a crappy shelf “because” his power tools are not calibrated correctly - that’s a crappy carpenter, not a crappy tool.

If a scientist uses an LLM to write a paper with fabricated citations - that’s a crappy scientist.

AI is not the problem; laziness and negligence are. There need to be serious social consequences for this kind of thing, otherwise we are tacitly endorsing it.

SubiculumCode ◴[] No.46183490[source]
Yeah, seriously. Using an LLM to help find papers is fine. Then you read them. Then you use a tool like Zotero, or add the citations manually. I use Gemini Pro to identify useful papers that I might not have encountered before. But even when asked to restrict itself to PubMed sources, its citations are wonky: it cites three different versions of the same paper as separate sources, or cites papers that don't actually discuss what it claimed they would.
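A minimal sketch of that kind of check, using PubMed's NCBI E-utilities esearch endpoint to see whether an LLM-suggested title matches any record at all; the helper name and the example title are placeholders, and even a match only shows the paper exists, not that it says what the model claims.

    # Hypothetical helper (not Gemini's or Zotero's API): look a title up on
    # PubMed via the NCBI E-utilities esearch endpoint and count matches.
    import requests

    EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

    def pubmed_hits(title: str) -> int:
        """Return how many PubMed records match the title searched as a phrase."""
        params = {"db": "pubmed", "term": f'"{title}"[Title]', "retmode": "json"}
        resp = requests.get(EUTILS, params=params, timeout=10)
        resp.raise_for_status()
        return int(resp.json()["esearchresult"]["count"])

    # The title below is made up; zero hits means the citation goes back to a
    # human for manual checking.
    if pubmed_hits("An LLM-suggested paper title") == 0:
        print("no PubMed match - check this citation by hand")

Deduplicating on PMID or DOI rather than title would also catch the "three versions of the same paper" failure mode.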

That said, these tools have substantially reduced hallucinations over the last year and will keep getting better. It also helps if you can restrict them to referencing already-screened papers.
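One rough way to read "restrict them to already-screened papers": keep an allowlist of DOIs you have actually read, e.g. from a Zotero CSV export, and flag any DOI in the model's output that isn't in it. The file names and helpers below are illustrative, not a real Zotero or LLM API.

    # Rough sketch, assuming a Zotero CSV export with a "DOI" column;
    # file names and helper names are illustrative only.
    import csv
    import re

    DOI_RE = re.compile(r"10\.\d{4,9}/\S+")

    def screened_dois(csv_path: str) -> set[str]:
        """DOIs of papers that have already been read and screened."""
        with open(csv_path, newline="", encoding="utf-8") as fh:
            return {row["DOI"].strip().lower()
                    for row in csv.DictReader(fh) if row.get("DOI")}

    def unscreened(llm_text: str, allowlist: set[str]) -> list[str]:
        """DOIs the model cited that are not in the screened set."""
        cited = {d.rstrip(".,;)").lower() for d in DOI_RE.findall(llm_text)}
        return sorted(cited - allowlist)

    # Anything flagged here needs to be read by a person before it is cited.
    draft = open("draft_with_llm_citations.txt", encoding="utf-8").read()
    print(unscreened(draft, screened_dois("zotero_export.csv")))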

Finally, I'd like to say that if we want scientists to engage in good science, stop forcing them to spend a third of their time in a rat race for funding... it is ridiculously time-consuming and wasteful of expertise.

replies(1): >>46183805 #
bossyTeacher ◴[] No.46183805[source]
The problem isn't whether they have more or fewer hallucinations. The problem is that they have them at all. As long as they hallucinate, you have to deal with that. It doesn't really matter how you prompt; you can't prevent hallucinations from happening, and without manual checking, some will eventually slip under the radar, because the only difference between a real pattern and a hallucinated one is that one exists in the world and the other doesn't. This isn't something you can counter with more LLMs either, since the problem is intrinsic to LLMs.
replies(1): >>46194746 #
1. SubiculumCode ◴[] No.46194746{3}[source]
Humans also hallucinate. We have an error rate. Your argument makes little sense in absolutist terms.
replies(1): >>46201865 #
2. bossyTeacher ◴[] No.46201865[source]
> Humans also hallucinate

"LLM hallucinations" and hallucinations are essentially different. Human hallucinations are related to perceptual experiences not memory errors like in the case of LLMs. Humans with certain neurological conditions hallucinate. Humans with healthy brains don't.

This habit of misapplying terms needs to stop. Humans are not backpropagation algorithms, nor whatever other random concept you read about in a comp sci book.

replies(1): >>46206586 #
3. SubiculumCode ◴[] No.46206586[source]
The more appropriate term is "confabulate," and healthy humans do it all the time. I merely used the common but technically incorrect term for the phenomenon in LLMs. FYI, my PhD focused on human memory.