
265 points ctoth | 4 comments
sejje ◴[] No.43744995[source]
In the last example (the riddle), I generally assume the AI isn't misreading; rather, it assumes you didn't give it the riddle correctly, because it has already seen the original.

I would do the same thing, I think. It's too well-known.

The variation doesn't read like a riddle at all, so it's confusing even to me as a human. I can't find the riddle part. Maybe the AI is confused, too. I think it makes an okay assumption.

I guess it would be nice if the AI asked a follow-up question like "are you sure you wrote down the riddle correctly?", and I think it could if instructed to, but right now they don't generally do that on their own.

replies(5): >>43745113 #>>43746264 #>>43747336 #>>43747621 #>>43751793 #
Jensson ◴[] No.43745113[source]
> generally assume the AI isn't misreading; rather, it assumes you didn't give it the riddle correctly, because it has already seen the original.

LLMs don't assume; an LLM is a text completer. It sees something that looks almost like a well-known problem and completes it as that well-known problem. That failure mode is specific to being a text completer, and it's hard to get around.

replies(6): >>43745166 #>>43745289 #>>43745300 #>>43745301 #>>43745340 #>>43754148 #
simonw ◴[] No.43745166[source]
These newer "reasoning" LLMs really don't feel like pure text completers any more.
replies(3): >>43745252 #>>43745253 #>>43745266 #
Borealid ◴[] No.43745266{3}[source]
What your parent poster said is nonetheless true, regardless of how it feels to you. Getting text from an LLM is a process of iteratively attempting to find a likely next token given the preceding ones.

If you give an LLM "The rain in Spain falls" the single most likely next token is "mainly", and you'll see that one proportionately more than any other.

If you give an LLM "Find an unorthodox completion for the sentence 'The rain in Spain falls'", the most likely next token is something other than "mainly" because the tokens in "unorthodox" are more likely to appear before text that otherwise bucks statistical trends.

If you give the LLM "blarghl unorthodox babble The rain in Spain" it's likely the results are similar to the second one but less likely to be coherent (because text obeying grammatical rules is more likely to follow other text also obeying those same rules).

In any of the three cases, the LLM is predicting text, not "parsing" or "understanding" a prompt. The fact that it responds similarly to a well-formed prompt and an ill-formed one is evidence of this.
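
To make "finding a likely next token" concrete, here is a minimal sketch, assuming a small open model (GPT-2) loaded through Hugging Face transformers; the model choice and whatever probabilities it prints are illustrative assumptions, not claims from this comment:

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")

  def top_next_tokens(prompt, k=5):
      # Score every vocabulary item as a candidate for the token after `prompt`.
      inputs = tokenizer(prompt, return_tensors="pt")
      with torch.no_grad():
          logits = model(**inputs).logits            # (1, seq_len, vocab_size)
      probs = torch.softmax(logits[0, -1], dim=-1)   # distribution over the next token
      top = torch.topk(probs, k)
      return [(tokenizer.decode(int(i)), round(p.item(), 4))
              for i, p in zip(top.indices, top.values)]

  # Compare how the distribution shifts across the three prompts discussed above.
  print(top_next_tokens("The rain in Spain falls"))
  print(top_next_tokens("Find an unorthodox completion for the sentence 'The rain in Spain falls'"))
  print(top_next_tokens("blarghl unorthodox babble The rain in Spain"))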

It's theoretically possible to engineer a string of complete gibberish tokens that will prompt the LLM to recite song lyrics, or answer questions about mathematical formulae. Those strings of gibberish are just difficult to discover.
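
For what it's worth, a toy version of that search might look like the sketch below (my own illustration, not a real attack; in practice gradient-guided methods such as GCG are what make this feasible). It randomly mutates a gibberish prefix and keeps mutations that raise the probability GPT-2 assigns to a chosen target continuation:

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")
  model.eval()

  # Target text the gibberish prefix should make likely.
  target_ids = tokenizer(" mainly on the plain", return_tensors="pt").input_ids[0]

  def target_logprob(prefix_ids):
      # log P(target | gibberish prefix) under the model.
      ids = torch.cat([prefix_ids, target_ids]).unsqueeze(0)
      with torch.no_grad():
          logprobs = torch.log_softmax(model(ids).logits[0], dim=-1)
      offset = len(prefix_ids)
      return sum(logprobs[offset + i - 1, tok].item()
                 for i, tok in enumerate(target_ids))

  vocab = model.config.vocab_size
  prefix = torch.randint(0, vocab, (8,))            # start from 8 random tokens
  best = target_logprob(prefix)
  for _ in range(200):                              # crude random hill-climb
      cand = prefix.clone()
      cand[torch.randint(0, len(cand), (1,))] = torch.randint(0, vocab, (1,))
      score = target_logprob(cand)
      if score > best:
          prefix, best = cand, score

  print(repr(tokenizer.decode(prefix)), best)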

replies(6): >>43745307 #>>43745309 #>>43745334 #>>43745371 #>>43746291 #>>43754473 #
1. simonw ◴[] No.43745334{4}[source]
No, I think the "reasoning" step really does make a difference here.

There's more than just next token prediction going on. Those reasoning chains of thought have undergone their own reinforcement learning training against a different category of samples.

They've seen countless examples of how a reasoning chain would look for calculating a mortgage, or searching a flight, or debugging a Python program.

So I don't think it is accurate to describe the eventual result as "just next token prediction". It is next token prediction informed by a chain of thought that was itself trained on a different set of specially chosen examples.
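
To show the distinction in code, here is a deliberately oversimplified sketch of "reinforcement learning on a reasoning chain" (my illustration, not how any particular lab actually trains; real pipelines add baselines, KL penalties, verifiers, and so on): sample a chain-of-thought transcript token by token, score it only on its outcome, and nudge the model toward higher-reward transcripts.

  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer

  tokenizer = AutoTokenizer.from_pretrained("gpt2")
  model = AutoModelForCausalLM.from_pretrained("gpt2")
  optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)

  prompt = "Q: 17 + 25 = ?\nLet's think step by step.\n"
  target_answer = "42"
  prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids

  # 1. Sample a chain-of-thought transcript (still plain next-token sampling).
  sampled = model.generate(prompt_ids, do_sample=True, max_new_tokens=60,
                           pad_token_id=tokenizer.eos_token_id)
  completion_ids = sampled[0, prompt_ids.shape[1]:]
  completion_text = tokenizer.decode(completion_ids)

  # 2. Reward the whole transcript based only on its outcome.
  reward = 1.0 if target_answer in completion_text else -0.1

  # 3. REINFORCE-style update: scale the log-probability of the sampled tokens by the reward.
  optimizer.zero_grad()
  logprobs = torch.log_softmax(model(sampled).logits[0, :-1], dim=-1)
  rows = torch.arange(prompt_ids.shape[1] - 1, sampled.shape[1] - 1)
  loss = -(reward * logprobs[rows, completion_ids].sum())
  loss.backward()
  optimizer.step()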

replies(1): >>43745368 #
2. Borealid ◴[] No.43745368[source]
Do you believe it's possible to produce a given set of model weights with an infinitely large number of different training examples?

If not, why not? Explain.

If so, how does your argument address the fact that this implies any given "reasoning" model can be trained without giving it a single example of something you would consider "reasoning"? (in fact, a "reasoning" model may be produced by random chance?)

replies(2): >>43745566 #>>43747251 #
3. simonw ◴[] No.43745566[source]
I'm afraid I don't understand your question.
4. ac29 ◴[] No.43747251[source]
> an infinitely large number of different training examples

Infinity is problematic because it's impossible to process an infinite amount of data in a finite amount of time.