
215 points | optimalsolver | 1 comment
My_Name (No.45770715)
I find that they know what they know fairly well, but if you move beyond that, into what can be reasoned from what they know, they show a profound lack of ability. They are good at repeating their training data, not at thinking about it.

The problem, I find, is that they then don't stop, or say they don't know (unless explicitly prompted to do so); they just make stuff up and express it with just as much confidence.

ftalbot (No.45770777)
Every token in a response has an element of randomness to it, which means the output is non-deterministic. Even for something well within the training data, there is some chance you get a nonsensical, opposite, and/or dangerous result. That chance may be low, because systems are often set up to review their own output, but there is no way to make a non-deterministic process fully reliable at solving or reasoning through anything, given enough iterations. It is designed to be imperfect.
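
To illustrate the sampling step that produces this randomness, here is a rough Python sketch of temperature-based sampling. The function name, the example logits, and the default temperature are illustrative assumptions, not any particular model's implementation:

    import math, random

    def sample_next_token(logits, temperature=0.8):
        # Lower temperature sharpens the distribution; higher flattens it.
        scaled = [x / temperature for x in logits]
        # Softmax turns the scaled scores into probabilities.
        m = max(scaled)
        exps = [math.exp(x - m) for x in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Draw one token index at random according to those probabilities.
        return random.choices(range(len(probs)), weights=probs, k=1)[0]

    # Even with identical inputs, repeated calls can pick different tokens.
    logits = [2.0, 1.5, 0.3]
    print([sample_next_token(logits) for _ in range(5)])

At temperature 0 implementations typically take the single highest-scoring token instead, which removes most (though in practice not all) of this variation.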
galaxyLogic (No.45775980)
> Every token in a response has an element of randomness to it.

I haven't tried this, but if you ask the LLM the exact same question again, in a different process, will you get a different answer?

Wouldn't that mean we should, most of the time, ask the LLM each question multiple times, to see if we get a better answer?

A bit like asking the same question of multiple different LLMs, just to be sure.
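
For what it's worth, here is a minimal sketch of that idea: sample the same prompt several times and keep the most common answer (sometimes called self-consistency or majority voting). ask() is a simulated stand-in for a real model call, and the prompt is just for illustration:

    import random
    from collections import Counter

    def ask(prompt):
        # Stand-in for a real LLM call; simulated so the sketch runs.
        return random.choice(["yes", "yes", "no"])

    def ask_with_voting(prompt, n=5):
        # Ask the same question n times and keep the most common answer,
        # along with the fraction of samples that agreed on it.
        answers = [ask(prompt) for _ in range(n)]
        answer, count = Counter(answers).most_common(1)[0]
        return answer, count / n

    print(ask_with_voting("Is 7919 prime?", n=7))

This helps most when answers are short and comparable; for long free-form answers you would need some way to decide when two samples "agree".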