
66 points by appwiz | 2 comments
1. dzikibaz No.44384926
How are "LLM hallucinations" different from the effects of a low-quality training dataset, or from tokens picked near-randomly because the sampling settings are too loose?
replies(1): >>44385572
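
A toy sketch (plain Python, made-up logits; nothing here comes from the thread) of what "overly random sampling settings" means mechanically: drawing from softmax(logits / temperature). Raising the temperature flattens the distribution, so low-probability tokens come out more often regardless of dataset quality.

    import math
    import random

    def sample(logits, temperature):
        """Draw one index from softmax(logits / temperature)."""
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        weights = [math.exp(s - m) for s in scaled]
        # random.choices accepts unnormalized weights
        return random.choices(range(len(logits)), weights=weights)[0]

    # Toy next-token logits: index 0 is the "right" continuation.
    logits = [5.0, 2.0, 1.0, 0.5]

    for t in (0.2, 1.0, 3.0):
        picks = [sample(logits, t) for _ in range(10_000)]
        print(f"temperature={t}: P(top token) ~ {picks.count(0) / len(picks):.2f}")

With these toy logits the top token wins essentially always at temperature 0.2, about 93% of the time at 1.0, and only about 54% at 3.0, at which point the model is emitting near-arbitrary tokens. That is one way to get hallucination-like output that has nothing to do with the training set.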
2. citrin_ru No.44385572
What I see even in good models is that when you ask something hard or impossible (but routine-looking), instead of replying "I cannot" they hallucinate. A better dataset would help only with problems that can actually be solved based on that dataset.
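
A minimal illustration of that point (toy scores and hypothetical candidate strings, not any real model): when a question looks routine but is unanswerable, the fluent wrong answers can dominate the next-token distribution, so even greedy decoding, with no sampling randomness at all, returns a confident fabrication rather than a refusal.

    # Made-up next-token scores for an impossible-but-routine-looking question.
    # Refusals are comparatively rare in web-scale training text, so the
    # refusal continuation gets a low score here.
    candidates = {
        "plausible answer A": 4.0,  # fluent but fabricated
        "plausible answer B": 3.5,  # fluent but fabricated
        "I cannot answer that": 1.0,
    }

    # Greedy decoding: take the argmax. No randomness involved, yet the
    # output is still a hallucination, which is why tightening sampling
    # settings alone doesn't fix it.
    print(max(candidates, key=candidates.get))  # -> plausible answer A

On this picture a better dataset shifts which continuations rank highest, but only for questions the dataset can actually answer.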