
214 points by optimalsolver | 1 comment
My_Name ◴[] No.45770715[source]
I find that they know what they know fairly well, but if you move beyond that, into what could be reasoned from what they know, they show a profound inability to do so. They are good at repeating their training data, not at thinking about it.

The problem, I find, is that they then don't stop, or say they don't know (unless explicitly prompted to do so); they just make stuff up and express it with just as much confidence.

replies(9): >>45770777 #>>45770879 #>>45771048 #>>45771093 #>>45771274 #>>45771331 #>>45771503 #>>45771840 #>>45778422 #
ftalbot ◴[] No.45770777[source]
Every token in a response is sampled from a probability distribution, so at any nonzero temperature the output is non-deterministic. Even for a prompt squarely within the training data there is some chance of a nonsensical, opposite, and/or dangerous result. That chance may be low, particularly when the system is set up to review its own output, but there is no way to make a non-deterministic answer reliably solve or reason about anything: given enough iterations, it will go wrong. It is designed to be imperfect.
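
As a minimal sketch of where that randomness enters, here is temperature sampling over a toy vocabulary with made-up logits (every name and number below is illustrative, not from any real model): greedy decoding always picks the same token, while sampling occasionally emits the unlikely one, and no temperature setting drives that probability all the way to zero.

    import math
    import random

    # Toy next-token distribution: made-up logits over a tiny vocabulary.
    # (Illustrative only; a real model scores tens of thousands of tokens.)
    logits = {"Paris": 4.0, "Lyon": 2.5, "pizza": 0.5}

    def sample_token(logits, temperature=1.0):
        """Softmax over temperature-scaled logits, then draw one token."""
        scaled = [l / temperature for l in logits.values()]
        z = sum(math.exp(s) for s in scaled)
        probs = [math.exp(s) / z for s in scaled]
        return random.choices(list(logits), weights=probs, k=1)[0]

    # Greedy decoding (the temperature -> 0 limit) is deterministic:
    print(max(logits, key=logits.get))   # 'Paris' every time

    # Sampled decoding is not; over enough draws the unlikely token shows up:
    print([sample_token(logits) for _ in range(10)])
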
replies(4): >>45770905 #>>45771745 #>>45774081 #>>45775980 #
1. squidproquo ◴[] No.45774081[source]
The non-determinism is part of the allure of these systems -- they operate like slot machines in a casino. The dopamine hit of getting an output that appears intelligent, plus the variable rewards, keeps us coming back. We down-weight and ignore the bad outputs. I'm not saying these systems aren't useful to a degree, but one should understand the statistical implications for how we are collectively perceiving their usefulness.