265 points ctoth | 24 comments
sejje ◴[] No.43744995[source]
In the last example (the riddle)--I generally assume the AI isn't misreading, rather that it assumes you couldn't give it the riddle correctly, but it has seen it already.

I would do the same thing, I think. It's too well-known.

The variation doesn't read like a riddle at all, so it's confusing even to me as a human. I can't find the riddle part. Maybe the AI is confused, too. I think it makes an okay assumption.

I guess it would be nice if the AI asked a follow-up question like "are you sure you wrote down the riddle correctly?", and I think it could if instructed to, but right now they don't generally do that on their own.

replies(5): >>43745113 #>>43746264 #>>43747336 #>>43747621 #>>43751793 #
Jensson ◴[] No.43745113[source]
> generally assume the AI isn't misreading, rather that it assumes you couldn't give it the riddle correctly, but it has seen it already.

LLMs don't assume; they're text completers. An LLM sees something that looks almost like a well-known problem and it will complete with that well-known problem. It's a problem specific to being a text completer, and it's hard to get around.

replies(6): >>43745166 #>>43745289 #>>43745300 #>>43745301 #>>43745340 #>>43754148 #
simonw ◴[] No.43745166[source]
These newer "reasoning" LLMs really don't feel like pure text completers any more.
replies(3): >>43745252 #>>43745253 #>>43745266 #
1. Borealid ◴[] No.43745266{3}[source]
What your parent poster said is nonetheless true, regardless of how it feels to you. Getting text from an LLM is a process of iteratively attempting to find a likely next token given the preceding ones.

If you give an LLM "The rain in Spain falls" the single most likely next token is "mainly", and you'll see that one proportionately more than any other.

If you give an LLM "Find an unorthodox completion for the sentence 'The rain in Spain falls'", the most likely next token is something other than "mainly" because the tokens in "unorthodox" are more likely to appear before text that otherwise bucks statistical trends.

If you give the LLM "blarghl unorthodox babble The rain in Spain" it's likely the results are similar to the second one but less likely to be coherent (because text obeying grammatical rules is more likely to follow other text also obeying those same rules).

In any of the three cases, the LLM is predicting text, not "parsing" or "understanding" a prompt. The fact it will respond similarly to a well-formed and unreasonably-formed prompt is evidence of this.

It's theoretically possible to engineer a string of complete gibberish tokens that will prompt the LLM to recite song lyrics, or answer questions about mathematical formulae. Those strings of gibberish are just difficult to discover.
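(For concreteness, here's a rough sketch of that loop in Python using the Hugging Face transformers library, with GPT-2 purely as a stand-in base model -- the specific model and the five-token cutoff are arbitrary choices, not anything inherent to the argument.)

    # Minimal greedy next-token loop with a raw (non-chat) model.
    # Assumes: pip install torch transformers
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The rain in Spain falls", return_tensors="pt").input_ids
    with torch.no_grad():
        for _ in range(5):                      # five iterations of "one more token"
            logits = model(ids).logits[0, -1]   # scores for the next token position only
            next_id = torch.argmax(logits)      # greedy: pick the single most likely token
            ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tok.decode(ids[0]))  # with GPT-2 this will most likely continue with "mainly..."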

replies(6): >>43745307 #>>43745309 #>>43745334 #>>43745371 #>>43746291 #>>43754473 #
2. Workaccount2 ◴[] No.43745307[source]
The problem is showing that humans aren't just doing next word prediction too.
replies(2): >>43745388 #>>43758748 #
3. dannyobrien ◴[] No.43745309[source]
So I just gave your blarghl line to Claude, and it replied "It seems like you included a mix of text including "blarghl unorthodox babble" followed by the phrase "The rain in Spain."

Did you mean to ask about the well-known phrase "The rain in Spain falls mainly on the plain"? This is a famous elocution exercise from the musical "My Fair Lady," where it's used to teach proper pronunciation.

Or was there something specific you wanted to discuss about Spain's rainfall patterns or perhaps something else entirely? I'd be happy to help with whatever you intended to ask. "

I think you have a point here, but maybe re-express it? Because right now your argument seems trivially falsifiable even under your own terms.

replies(1): >>43745400 #
4. simonw ◴[] No.43745334[source]
No, I think the "reasoning" step really does make a difference here.

There's more than just next-token prediction going on. Those reasoning chains of thought have undergone their own reinforcement learning training against a different category of samples.

They've seen countless examples of how a reasoning chain would look for calculating a mortgage, or searching a flight, or debugging a Python program.

So I don't think it is accurate to describe the eventual result as "just next token prediction". It is next token prediction that has been informed by a chain of thought, which was itself trained on a different set of specially chosen examples.

replies(1): >>43745368 #
5. Borealid ◴[] No.43745368[source]
Do you believe it's possible to produce a given set of model weights with an infinitely large number of different training examples?

If not, why not? Explain.

If so, how does your argument address the fact that this implies any given "reasoning" model can be trained without giving it a single example of something you would consider "reasoning"? (in fact, a "reasoning" model may be produced by random chance?)

replies(2): >>43745566 #>>43747251 #
6. wongarsu ◴[] No.43745371[source]
> The fact it will respond similarly to a well-formed and unreasonably-formed prompt is evidence of this.

Don't humans do the same in conversation? How should an intelligent being (constrained to the same I/O system) respond here to show that it is in fact intelligent?

replies(1): >>43745500 #
7. Borealid ◴[] No.43745388[source]
I don't see that as a problem. I don't particularly care how human intelligence works; what matters is what an LLM is capable of doing and what a human is capable of doing.

If those two sets of accomplishments are the same there's no point arguing about differences in means or terms. Right now humans can build better LLMs but nobody has come up with an LLM that can build better LLMs.

replies(2): >>43746308 #>>43746612 #
8. Borealid ◴[] No.43745400[source]
If you feed text to Claude, you're getting Claude's "system prompt" prepended before the text you give it.

If you want to test convolution you have to use a raw model with no system prompt. You can do that with a Llama or similar. Otherwise your context window is full of words like "helpful" and "answer" and "question" that guide the response and make it harder (not impossible) to see the effect I'm talking about.
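(A rough sketch of that difference, assuming the Hugging Face transformers API; TinyLlama is just one example of an instruction-tuned model with a chat template, chosen because it's ungated, not because it's special.)

    from transformers import AutoTokenizer

    user_text = "blarghl unorthodox babble The rain in Spain"

    # 1) Chat/instruct model: your text gets wrapped in a template, typically
    #    alongside a system prompt full of words like "helpful" and "assistant".
    chat_tok = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama-1.1B-Chat-v1.0")
    wrapped = chat_tok.apply_chat_template(
        [{"role": "system", "content": "You are a helpful assistant."},
         {"role": "user", "content": user_text}],
        tokenize=False, add_generation_prompt=True)
    print(wrapped)  # user_text is now buried inside a much larger prompt

    # 2) Raw base model: the context window contains only your tokens,
    #    so the completion is driven by your text alone.
    raw_prompt = user_text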

replies(3): >>43746165 #>>43747139 #>>43754494 #
9. Borealid ◴[] No.43745500[source]
Imagine a Rorschach Test of language, where a certain set of non-recognizable-language tokens invariably causes an LLM to talk about flowers. These strings exist by necessity due to how the LLM's layers are formed.

There exists no similar set of tokens for humans, because our process is to parse the incoming sounds into words, use grammar to extract conceptual meaning from those words, and then shape a response from that conceptual meaning.

Artists like Lewis Carroll and Stanislaw Lem play with this by inserting non-words at certain points in sentences to get humans to infer the meaning of those words from surrounding context, but the truth remains that an LLM will gladly convolute a wholly non-language input into a response as if it were well-formed, whereas a human can't/won't do that.

I know this is hard to understand, but the current generation of LLMs are working directly with language. Their "brains" are built on language. Some day we might have some kind of AI system that's built on some kind of meaning divorced from language, but that's not what's happening here. They're engineering matrices that repeatedly perform "context window times model => one more token" operations.

replies(2): >>43745659 #>>43745736 #
10. simonw ◴[] No.43745566{3}[source]
I'm afraid I don't understand your question.
11. og_kalu ◴[] No.43745659{3}[source]
I think you are begging the question here.

For one thing, LLMs absolutely form responses from conceptual meanings. This has been demonstrated empirically multiple times now, including again by Anthropic only a few weeks ago. 'Language' is just the input and output, the first and last few layers of the model.

So okay, there exists some set of 'gibberish' tokens that will elicit meaningful responses from LLMs. How does your conclusion - "Therefore, LLMs don't understand" - follow from that? Would you also conclude that humans have no understanding of what they see because of the Rorschach test?

>There exists no similar set of tokens for humans, because our process is to parse the incoming sounds into words, use grammar to extract conceptual meaning from those words, and then shape a response from that conceptual meaning.

Grammar is a useful fiction, an incomplete model of a demonstrably probabilistic process. We don't use 'grammar' to do anything.

12. wongarsu ◴[] No.43745736{3}[source]
> Imagine a Rorschach Test of language, where a certain set of non-recognizable-language tokens invariably causes an LLM to talk about flowers. These strings exist by necessity due to how the LLM's layers are formed.

Maybe not for humanity as a species, but for individual humans there are absolutely token sequences that lead them to talk about certain topics, with nobody able to bring them back on topic. Now you'd probably say those are recognizable token sequences, but do we have a fair process to decide what's recognizable that isn't inherently biased towards making humans the only rational actor?

I'm not disputing at all that LLMs are only built on language. Their lack of a physical reference point is sometimes laughably obvious. We could argue about whether there are signs they also form a world model and reasoning that abstracts from language alone, but that's not even my point. My point is rather that any test or argument that attempts to say that LLMs can't "reason" or "assume" or whatever has to be a test a human could pass. Preferably a test a random human would pass with flying colors.

13. itchyjunk ◴[] No.43746165{3}[source]
At this point, you might as well be claiming that a completions model behaves differently from a fine-tuned model. Which is true, but a prompt sent through the API without any system message also doesn't seem to match your prediction.
replies(1): >>43746827 #
14. baq ◴[] No.43746291[source]
This again.

It’s predicting text. Yes. Nobody argues about that. (You’re also predicting text when you’re typing it. Big deal.)

How it is predicting the text is the question to ask, and indeed it’s being asked, and we’re getting glimpses of understanding, and lo and behold it’s a damn complex process. See the recent Anthropic research paper for details.

15. baq ◴[] No.43746308{3}[source]
That’s literally the definition of takeoff: when it starts, it gets us to singularity in a decade. And there’s no publicly available evidence that it’s started… emphasis on publicly available.
replies(1): >>43746658 #
16. johnisgood ◴[] No.43746612{3}[source]
> but nobody has come up with an LLM that can build better LLMs.

Yet. Not that we know of, anyway.

replies(1): >>43769194 #
17. myk9001 ◴[] No.43746658{4}[source]
> it gets us to singularity

Are we sure it's actually taking us along?

18. tough ◴[] No.43746827{4}[source]
the point is that when there’s a system prompt you didn’t write, you get autocomplete of your input + said system prompt, and as such it biases all outputs
19. dannyobrien ◴[] No.43747139{3}[source]
I'm a bit confused here. Are you saying that if I zero out the system prompt on any LLM, including those fine-tuned to give answers in an instructional form, they will follow your effect -- that nonsense prompts will get similar results to coherent prompts if they contain many of the same words?

Because I've tried it on a few local models I have handy, and I don't see that happening at all. As someone else says, some of that difference is almost certainly due to supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF) -- but it's weird to me, given the confidence with which you made your prediction, that you didn't exclude those from your original statement.

I guess, maybe the real question here is: could you give me a more explicit example of how to show what you are trying to show? And explain why I'm not seeing it while running local models without system prompts?

20. ac29 ◴[] No.43747251{3}[source]
> an infinitely large number of different training examples

Infinity is problematic because it's impossible to process an infinite amount of data in a finite amount of time.

21. int_19h ◴[] No.43754473[source]
It's not an either-or. The fact that an LLM completes text does not preclude it from meaningfully reasoning, as anyone who has used reasoning models on real-world tasks is well aware.
22. int_19h ◴[] No.43754494{3}[source]
True but also irrelevant. The "AI" is the entirety of the system, which includes the model itself as well as any prompts and other machinery around it.

I mean, if you dig down enough, the LLM doesn't even generate tokens - it merely gives you a probability distribution, and you still need to explicitly pick the next token based on those probabilities, append it to the input, and start next iteration of the loop.
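(A minimal sketch of that last point; the function name sample_next, the 0.8 temperature, and the commented-out driver loop are illustrative choices, not any particular library's API.)

    import torch

    # The model's forward pass only yields logits (unnormalized scores over the
    # whole vocabulary); turning them into an actual token is a separate choice.
    def sample_next(logits: torch.Tensor, temperature: float = 0.8) -> int:
        probs = torch.softmax(logits / temperature, dim=-1)  # scores -> probability distribution
        return int(torch.multinomial(probs, num_samples=1))  # explicitly pick one token id

    # Hypothetical outer loop (model and ids assumed to come from any causal LM):
    # while not finished:
    #     logits = model(ids).logits[0, -1]   # distribution over the next position
    #     next_id = sample_next(logits)       # the "picking" step the model itself doesn't do
    #     ids = torch.cat([ids, torch.tensor([[next_id]])], dim=1)  # append and loop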

23. joquarky ◴[] No.43758748[source]
I feel like people are going to find it hard to accept that this is how most of us think (at least when thinking in language). They will resist this like heliocentrism.

I'm curious what others who are familiar with LLMs and have practiced open monitoring meditation might say.

24. Aeolos ◴[] No.43769194{4}[source]
Given the dramatic uptake of Cursor / Windsurf / Claude Code etc, we can be 100% certain that LLM companies are using LLMs to improve their products.

The improvement loop is likely not fully autonomous yet - it is currently more efficient to have a human in the loop - but there is certainly a lot of LLMs improving LLMs going on today.