
277 points simianwords | 2 comments
roxolotl ◴[] No.45148981[source]
This seems inherently false to me, or at least partly false. It's reasonable to say LLMs hallucinate because they aren't trained to say when they don't have a statistically significant answer. But there is no knowledge of correct vs incorrect in these systems; it's all statistics. So what OpenAI is describing sounds like a reasonable way to reduce hallucinations, but not a way to eliminate them, and not a fix for the root cause.
replies(4): >>45149040 #>>45149166 #>>45149458 #>>45149946 #
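(Editor's aside, not part of the thread: a minimal sketch of the "reduce but not eliminate" point above, read as confidence-based abstention. The function name and threshold are hypothetical illustrations, not any real API; a model can be confidently wrong, which is why thresholding lowers but cannot remove the hallucination rate.)

    # Hedged sketch: abstain when the model's own top-token probability is low.
    # All names here (answer_or_abstain, threshold=0.8) are illustrative only.
    # High confidence does not imply correctness, so this reduces hallucinations
    # rather than eliminating them.
    def answer_or_abstain(token_probs: dict[str, float], threshold: float = 0.8):
        best = max(token_probs, key=token_probs.get)
        if token_probs[best] < threshold:
            return None  # "I don't know" instead of a guess
        return best      # may still be wrong despite high confidence

    # Peaked distribution: answer. Flat distribution: abstain.
    print(answer_or_abstain({"Paris": 0.92, "Lyon": 0.05, "Nice": 0.03}))  # Paris
    print(answer_or_abstain({"1912": 0.40, "1913": 0.35, "1911": 0.25}))   # None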
ACCount37 ◴[] No.45149166[source]
Is there any knowledge of "correct vs incorrect" inside you?

If "no", then clearly, you can hit general intelligence without that.

And if "yes", then I see no reason why an LLM can't have that knowledge crammed inside it too.

Would it be perfect? Hahahaha no. But I see no reason why "good enough" could not be attained.

replies(3): >>45149445 #>>45149581 #>>45155233 #
1. ninetyninenine ◴[] No.45155233[source]
I'm going to tell you straight up. I am a very intelligent man and I've been programming for a very long time. My identity is tied up with this concept that I am intelligent and I'm a great programmer, so I'm not going to let some AI do my job for me. Anything I can grasp at to criticize the LLM, I'm gonna do it, because this is paramount to maintaining my identity. So you and your rationality aren't going to make me budge. LLMs are stochastic parrots and EVERYONE on this thread agrees with me. They will never take over my job!

I will add they will never take over my job <in my lifetime> because it makes me sound more rational, and it's easier to swallow that than to swallow the possibility that they will make me irrelevant once the hallucination problem is solved.

replies(1): >>45157397 #
2. simianwords ◴[] No.45157397[source]
Ha. I could have written this post myself.