Why language models hallucinate (openai.com)
277 points by simianwords | 1 comment | 06 Sep 25 07:41 UTC
1. parentheses | 09 Sep 25 16:33 UTC | No. 45184478
>>45147385 (OP)
I think a large issue at play here is post-training. Pre-training models the original distribution of the input data; RL techniques then tweak the models to "behave". This step changes how the models "think" in a fundamental way.
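
A minimal sketch of the two objectives being contrasted (Python/PyTorch; the shapes, names, and the REINFORCE-with-KL-penalty form are illustrative assumptions, not the recipe from the linked paper). Pre-training fits the data distribution by maximum likelihood; RLHF-style post-training instead pushes probability mass toward high-reward outputs, held near the pre-trained model by a KL penalty:

    import torch
    import torch.nn.functional as F

    # Pre-training: maximum likelihood on the data distribution.
    # logits: (batch, seq, vocab); targets: (batch, seq)
    def pretraining_loss(logits, targets):
        return F.cross_entropy(logits.flatten(0, 1), targets.flatten())

    # Post-training (hypothetical RLHF-style form): maximize a learned
    # reward while a KL penalty keeps the policy close to the pre-trained
    # reference model. The KL term reshapes where the model puts
    # probability mass rather than fitting it to data, which is the
    # fundamental change the comment describes.
    # policy_logprobs, ref_logprobs, rewards: (batch, seq)
    def rlhf_loss(policy_logprobs, ref_logprobs, rewards, kl_coef=0.1):
        kl = policy_logprobs - ref_logprobs   # per-token KL estimate
        shaped = rewards - kl_coef * kl       # KL-penalized reward
        # REINFORCE-style score-function estimator; the shaped reward is
        # detached so gradients flow only through the policy's log-probs.
        return -(policy_logprobs * shaped.detach()).mean()

The structural difference is the point: pretraining_loss is anchored to data targets, while rlhf_loss sees only a scalar reward signal, so nothing in the second stage directly ties the model to the original distribution.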