
219 points crazylogger | 1 comment
smusamashah No.42728826
On a similar note, has anyone found themselves absolutely not trusting non-code LLM output?

The code is at least testable and verifiable. For everything else I am left wondering whether it's the truth or a hallucination. It adds back the mental burden I was trying to avoid by using an LLM in the first place.
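To illustrate the point: a generated function can be pinned down with a unit test, while a prose answer has no equivalent check. A minimal sketch (the function and test here are hypothetical, not from the thread):

    # Hypothetical example: a function an LLM might generate,
    # pinned down by a unit test. If the model hallucinated the
    # logic, the assertion fails loudly; prose has no such check.
    def slugify(title: str) -> str:
        return "-".join(title.lower().split())

    def test_slugify():
        assert slugify("Hello World") == "hello-world"
        assert slugify("  extra   spaces ") == "extra-spaces"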

1. iamnotagenius No.42730292
Yes. It is good for summarizing existing text, explaining something, or coding; in short, any generative/transformative task. It is not good for information retrieval. Having said that, even tiny Qwen 3B/7B coding LLMs have turned out to be very useful in my experience.
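For reference, a small coder model like that can be run locally with Hugging Face transformers. A minimal sketch; the exact checkpoint name (Qwen2.5-Coder-7B-Instruct) is an assumption, since the comment only says "Qwen 3b/7b":

    # Minimal sketch of running a small local coding LLM with
    # Hugging Face transformers. The checkpoint name is an
    # assumption; the comment only says "Qwen 3b/7b coding llms".
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_name = "Qwen/Qwen2.5-Coder-7B-Instruct"  # assumed checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")

    messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

    # Generate, then decode only the newly produced tokens.
    output = model.generate(**inputs, max_new_tokens=256)
    print(tokenizer.decode(output[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))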