My_Name ◴[] No.45770715[source]
I find that they know what they know fairly well, but if you move beyond that, into what can be reasoned from what they know, they show a profound inability to do so. They are good at repeating their training data, not at thinking about it.

The problem, I find, is that they then don't stop or say they don't know (unless explicitly prompted to do so); they just make stuff up and express it with just as much confidence.

replies(9): >>45770777 #>>45770879 #>>45771048 #>>45771093 #>>45771274 #>>45771331 #>>45771503 #>>45771840 #>>45778422 #
usrbinbash ◴[] No.45771503[source]
> They are good at repeating their training data, not thinking about it.

Which shouldn't come as a surprise, considering that this is, at the core of things, what language models do: Generate sequences that are statistically likely according to their training data.
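For illustration, here is a deliberately tiny sketch of that idea: an autoregressive loop that repeatedly samples the next token from whatever conditional distribution the model assigns to the tokens so far. The helper names (next_token_logits, softmax, generate) are made up for this sketch, and next_token_logits is just a stand-in for the whole neural network; real LLMs differ enormously in how they compute it, but the outer sampling loop looks roughly like this.

    import math, random

    def softmax(logits):
        # Convert raw scores into a probability distribution.
        m = max(logits.values())
        exps = {tok: math.exp(x - m) for tok, x in logits.items()}
        total = sum(exps.values())
        return {tok: v / total for tok, v in exps.items()}

    def next_token_logits(context):
        # Placeholder for the model: in a real LLM this is the network
        # scoring every vocabulary token given the context. This toy
        # version ignores the context and returns random scores.
        vocab = ["the", "cat", "sat", "mat", "."]
        return {tok: random.uniform(-1, 1) for tok in vocab}

    def generate(prompt, steps=10):
        tokens = prompt[:]
        for _ in range(steps):
            probs = softmax(next_token_logits(tokens))
            # Sample the next token in proportion to its probability.
            choices, weights = zip(*probs.items())
            tokens.append(random.choices(choices, weights=weights)[0])
        return tokens

    print(" ".join(generate(["the"])))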

replies(1): >>45772607 #
dymk ◴[] No.45772607[source]
This is too large of an oversimplification of how an LLM works. I hope the meme that they are just next token predictors dies out soon, before it becomes a permanent fixture of incorrect but often stated “common sense”. They’re not Markov chains.
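To make the contrast concrete, this is what an actual Markov chain text model looks like: a frequency table keyed only by the previous token (the corpus and names here are made up for illustration). Its "state" is a fixed, tiny window and its probabilities are literally counts read from a table, whereas an LLM computes its next-token distribution from the entire context with a learned function.

    import random
    from collections import defaultdict, Counter

    corpus = "the cat sat on the mat . the cat ran .".split()

    # Bigram counts: distribution over the next word given ONLY the previous word.
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def markov_next(prev):
        # The entire "model" is this frequency-table lookup.
        words, weights = zip(*counts[prev].items())
        return random.choices(words, weights=weights)[0]

    word, out = "the", ["the"]
    for _ in range(8):
        word = markov_next(word)
        out.append(word)
    print(" ".join(out))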
replies(3): >>45772668 #>>45772674 #>>45780675 #
gpderetta ◴[] No.45772674[source]
Indeed, they are next-token predictors, but this is a vacuous statement because the predictor can be arbitrarily complex.
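One way to make that precise: by the chain rule of probability, any distribution over token sequences factorizes exactly into next-token conditionals,

    P(x_1, \dots, x_T) = \prod_{t=1}^{T} P(x_t \mid x_1, \dots, x_{t-1})

so "it predicts the next token" describes the interface, not a bound on how sophisticated the function computing each conditional can be.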
replies(1): >>45776178 #
HarHarVeryFunny ◴[] No.45776178[source]
Sure, but a complex predictor is still a predictor. It would be a BAD predictor if what it output were not based on "what would the training data say?".

If you ask it to innovate and come up with something not in its training data, what do you think it will do ... it'll "look at" its training data and regurgitate (predict) something labelled as innovative.

You can put a reasoning cap on a predictor, but it's still a predictor.

replies(1): >>45776459 #