219 points crazylogger | 1 comments | | HN request time: 0.25s | source
smusamashah ◴[] No.42728826[source]
On a similar note, has anyone found themselves absolutely not trusting non-code LLM output?

Code is at least testable and verifiable. For everything else I am left wondering whether it's the truth or a hallucination. It incurs exactly the mental burden I was trying to avoid by using an LLM in the first place.

replies(7): >>42728915 #>>42729219 #>>42729640 #>>42729926 #>>42730263 #>>42730292 #>>42731632 #
redcobra762 ◴[] No.42730263[source]
You're going to fall behind eventually if you continue to treat LLMs with this level of skepticism. Others won't, and the output is accurate enough to improve the efficiency of work in a great many situations.

Rarely are day-to-day written documents (e.g. an email asking for clarification on an issue or to schedule an appointment) of such importance that the occasional error is unforgivable. In situations where a mistake is fatal, no, I would not trust GenAI. But how many of us really work in that kind of field?

Besides, AI shines when used for creative purposes. Coming up with new ideas or rewording a paragraph for clarity isn't something one does blindly. GenAI is a coworker, not an authority. It'll generate a draft; I may edit that draft or rewrite it significantly. But to preclude it because it could err will eventually slow you down in your field.

replies(1): >>42732131 #
thuuuomas ◴[] No.42732131[source]
You’re narrowly addressing LLM use cases & omitting the most problematic one - LLMs as search engine replacements.
replies(1): >>42740963 #
redcobra762 ◴[] No.42740963[source]
That's the opposite of problematic; that's where an LLM shines. And before you say "hallucination": when was the last time you clicked through to the link in a Google search result? It's user error if you don't follow up with additional validation, exactly as you would with Google. With GenAI it's simply easier to craft specific queries.