
219 points by crazylogger | 1 comment
smusamashah No.42728826
On a similar note, has anyone found themselves absolutely not trusting non-code LLM output?

The code is at least testable and verifiable. For everything else I am left wondering whether it's the truth or a hallucination. It adds more of the mental burden that I was trying to avoid by using an LLM in the first place.

joshstrange No.42728915
Absolutely. LLM results almost always need to be verified. LLMs (for me) shine at pointing me in the right direction, producing a "first draft", or handling things like code where I can test the output.
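
To make the "code is testable" point concrete, here is a minimal sketch (my own hypothetical example, not from either commenter): if an LLM hands you a small helper function, a few assertions give you the kind of verification that prose output never gets.

    import re

    # Hypothetical helper an LLM might suggest: turn a title into a URL slug.
    def slugify(title: str) -> str:
        """Lowercase the title and collapse runs of non-alphanumerics into hyphens."""
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    # A handful of quick checks catches the common hallucination failure modes
    # (wrong edge-case handling, stray separators at the ends, empty input).
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Multiple   Spaces ") == "multiple-spaces"
    assert slugify("") == ""
    print("all checks passed")

There is no equivalent cheap check for a paragraph of prose the model asserts is true, which is the asymmetry the parent comment is describing.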