
65 points by appwiz | 1 comment
dzikibaz:
How are "LLM hallucinations" different from the effects of a low-quality training dataset, or from tokens picked at random because of overly loose sampling settings (e.g. a high temperature)?
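
(For reference, a minimal sketch of what "overly loose sampling settings" means mechanically: temperature rescales the logits before the softmax, so a high temperature flattens the distribution and lets low-probability tokens through. The vocabulary and logits below are made up for illustration, not from any real model.)

```python
import math
import random

def sample(logits, temperature):
    """Temperature-scaled softmax, then draw one token index."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    total = sum(weights)
    probs = [w / total for w in weights]
    idx = random.choices(range(len(probs)), weights=probs)[0]
    return idx, probs

# Hypothetical next-token logits: the model strongly prefers "Paris".
vocab = ["Paris", "Lyon", "banana", "1942"]
logits = [5.0, 2.0, -1.0, -1.5]

for t in (0.2, 1.0, 2.0):
    _, probs = sample(logits, t)
    print(f"T={t}: " + ", ".join(f"{v}={p:.2f}" for v, p in zip(vocab, probs)))
# At T=0.2 almost all mass is on "Paris"; at T=2.0 even "banana" gets
# non-trivial probability -- the "overly random sampling" failure mode.
```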
citrin_ru:
What I see even in good models is that when you ask for something hard or impossible (but routine-looking), instead of replying "I can't do that" they hallucinate an answer. A better dataset would only help with problems that are actually solvable from that dataset.