LLM Hallucinations in Practical Code Generation (dl.acm.org)
65 points | appwiz | 1 comment | 23 Jun 25 07:14 UTC
dzikibaz | 26 Jun 25 07:03 UTC | No. 44384926 | >>44353241 (OP)
How are "LLM hallucinations" different from a low-quality training dataset or randomly picked tokens due to overly random sampling settings?
replies(1): >>44385572
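For the "overly random sampling settings" part of the question, a minimal sketch of temperature-scaled sampling (Python/NumPy assumed; the function name and logit values are illustrative, not from the linked paper) shows how raising the temperature flattens the next-token distribution so unlikely tokens get picked far more often:

    # Illustrative only: temperature-scaled softmax sampling of the next token.
    import numpy as np

    def sample_next_token(logits, temperature=1.0, rng=None):
        """Pick one token id from temperature-scaled softmax probabilities."""
        rng = rng or np.random.default_rng()
        scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-8)
        scaled -= scaled.max()          # subtract max for numerical stability
        probs = np.exp(scaled)
        probs /= probs.sum()
        return rng.choice(len(probs), p=probs)

    # Hypothetical logits for 4 candidate tokens.
    logits = [4.0, 2.0, 0.5, -1.0]
    # Low temperature: almost always picks the top token.
    print(sample_next_token(logits, temperature=0.2))
    # High temperature: low-probability tokens are picked much more often.
    print(sample_next_token(logits, temperature=5.0))

With temperature near 0 the sampler is effectively greedy and almost always returns the top-scoring token; large values spread probability mass onto the tail, which is what an overly random sampling setting looks like in practice.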
citrin_ru | 26 Jun 25 09:10 UTC | No. 44385572 | >>44384926
What I see even in good models is that when you ask for something hard or impossible (but routine-looking), instead of replying “I cannot” they hallucinate. A better dataset would only help with problems that can actually be solved from that dataset.