323 points by steerlabs | 17 comments
keiferski No.46192154
The thing that bothers me the most about LLMs is how they never seem to understand "the flow" of an actual conversation between humans. When I ask a person something, I expect a short reply that includes a follow-up question or asks for details or clarification. A conversation is thus an ongoing "dance" in which the questioner and answerer gradually arrive at a shared meaning.

LLMs don't do this. Instead, every question is immediately answered, with extreme confidence, in a paragraph or more of text. I know you can minimize this by configuring the settings on your account, but to me it just highlights that the model isn't operating in a way remotely similar to the human-human exchange I described above. I constantly find myself saying, "No, I meant [concept] in this way, not that way," and then getting annoyed at the robot because it's masquerading as a human.

replies(37): >>46192230 #>>46192268 #>>46192346 #>>46192427 #>>46192525 #>>46192574 #>>46192631 #>>46192754 #>>46192800 #>>46192900 #>>46193063 #>>46193161 #>>46193374 #>>46193376 #>>46193470 #>>46193656 #>>46193908 #>>46194231 #>>46194299 #>>46194388 #>>46194411 #>>46194483 #>>46194761 #>>46195048 #>>46195085 #>>46195309 #>>46195615 #>>46195656 #>>46195759 #>>46195794 #>>46195918 #>>46195981 #>>46196365 #>>46196372 #>>46196588 #>>46197200 #>>46198030 #
motoboi No.46192754
Reflect for a moment on the fact that LLMs are currently just text generators.

Also on the fact that the conversational behavior we see is just the model mimicking example conversations from its training data: when we write "System: you are a helpful assistant. User: let's talk. Assistant:", it completes the text in a way that mimics a conversation.
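
A minimal sketch of that "one string to complete" view, using the Hugging Face transformers pipeline (the API call is real; the model choice and the toy prompt are illustrative assumptions):

    from transformers import pipeline

    # Any base (non-chat) causal language model works as a stand-in;
    # gpt2 is just small enough to run locally.
    generator = pipeline("text-generation", model="gpt2")

    # To the model, the whole "conversation" is one flat string to continue.
    prompt = (
        "System: you are a helpful assistant.\n"
        "User: let's talk.\n"
        "Assistant:"
    )

    # No notion of turns or speakers here, just next-token prediction.
    out = generator(prompt, max_new_tokens=40)
    print(out[0]["generated_text"])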

Yes, we improved on that by using reinforcement learning to steer the text generation down paths that lead to problem solving and more "agentic" traces ("I need to open the file the user talked about and read it, and then I should run grep over it to find the function the user cited"), but that's just a clever way we found to let the model itself discover which text-generation paths we like most (or find most useful).
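
A toy illustration of that selection idea (this is best-of-n sampling against a hard-coded reward, not any lab's actual RL loop; every name and string here is invented for the example):

    import random

    def toy_model(prompt):
        # Stand-in "model": randomly emits one of two generation paths.
        return random.choice([
            "I need to open the file the user mentioned and grep for the function.",
            "Here is a long essay about files in general.",
        ])

    def reward(continuation):
        # Stand-in reward: humans (or a learned reward model) score the
        # agentic, problem-solving trace higher than the rambling one.
        return 1.0 if "grep" in continuation else 0.0

    def best_of_n(prompt, n=8):
        # Sample n candidate paths and keep the one the reward prefers;
        # training then nudges the model toward producing such paths directly.
        candidates = [toy_model(prompt) for _ in range(n)]
        return max(candidates, key=reward)

    print(best_of_n("User: find the function I cited in utils.py"))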

So, to address your discomfort: we (humans) trained the model to spill out answers. There are thousands of human beings right now writing nicely thought-out, formatted answers to common questions, precisely so that we can train the models on them.
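
What one of those training examples might look like, just to show the shape of the data (the pair below is entirely made up):

    # A single supervised fine-tuning example: a prompt plus the polished,
    # formatted answer a human wrote for it (contents invented).
    sft_example = {
        "prompt": "User: What causes tides?\nAssistant:",
        "completion": (
            " Tides are caused mainly by the gravitational pull of the Moon"
            " (and, to a lesser extent, the Sun) on Earth's oceans."
        ),
    }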

If we tried to train the models to mimic long dances toward shared meaning, we would probably decrease their utility. And we couldn't do it anyway, because we would need customized text traces for each individual instead of generic question-answer pairs.

Downvoters: I simplified things a lot here, in the name of understanding, so bear with me.

replies(1): >>46192850 #
1. MangoToupe No.46192850
> Reflect for a moment on the fact that LLMs are currently just text generators.

You could say the same thing about humans.

replies(5): >>46192891 #>>46192928 #>>46193536 #>>46194166 #>>46196120 #
2. smikhanov No.46192891
You could, but you’d be missing a big part of the picture. Humans are also (at least) symbol manipulators.
replies(1): >>46196206 #
3. y0eswddl No.46192928
No, you actually can't.

Humans existed for tens to hundreds of thousands of years without text, or even words for that matter.

replies(1): >>46194095 #
4. nosianu No.46193536
The human world model is based on physical sensors and actions. LLMs are based on our formal text communication. Very different!

Just yesterday I observed myself acting on an external stimulus without any internal words (this happens continuously, but it is hard to notice, because we usually don't pay attention to how we do things): I was sitting in the waiting area of a cinema. A woman walked by and dropped her scarf without noticing. Automatically, without thinking, I raised my arm and pointer finger toward her, and when I had her attention, I pointed behind her. I did not have time to think even a single word while that happened.

Most of what we do does not involve any words or even "symbols", not even internally. Instead, a neural signal travels from sensors into the brain, does some loops, and goes directly to muscle activation, without passing through the add-on complexity of language or symbols.

Our word generator is not the core of our being; it is an add-on. When we do generate words, they are also very far from a direct representation of our internal state. Instead, we have to meander and iterate to come up with appropriate words for an internal state we are not even quite aware of. That's why artists came up with all kinds of experiments to better represent our internal state: people have always known that the words we produce don't represent it very well.

That is also why people constantly get into arguments about definitions. Because the words are secondary, the further you get from the center of a word's established meaning, the more the differences between people show. (The best option is to stop insisting that words are the center of the universe, even just the human universe, and/or to choose words that have the subject of discussion more firmly in the center of their established use.)

We are text generators in some areas, I don't doubt that. Just a few months ago I listened to a guy speaking at a small rally. I am certain that not a single sentence he said was of his own making; he was just parroting things he had read (as a former East German, I know enough Marx/Engels/Lenin to recognize it). I don't want to single that person out; we all have those moments when we speak about things we have no experience with. We read text, and when prompted we regurgitate a version of it. In those moments we are probably closest to LLM output: we cannot fall back on generating fresh text from our own actual experience, so we keep reusing text we heard or read, with only very superficial understanding, and as soon as an actual expert shows up we become defensive and try to change the frame of the conversation.

replies(1): >>46196125 #
5. MangoToupe No.46194095
I disagree: it is language that makes us human.
replies(1): >>46196805 #
6. vlowther No.46194166
No, you cannot. Our abstract language abilities (especially the written word) are a very thin layer on top of hundreds of millions of years of evolution in an information-dense environment.
replies(1): >>46196114 #
7. MangoToupe No.46196114
Sure, but language is the only thing that meaningfully separates us from the other great apes.
replies(1): >>46197664 #
8. emp17344 No.46196120
How do you reconcile this belief with the fact that we evolved from organisms that had no concept of text?
replies(1): >>46197242 #
9. MangoToupe No.46196125
Without language we're just bald, bipedal chimps. Language is what makes us human.

> The human world model

Bruh this concept is insane

10. MangoToupe No.46196206
Same thing
11. andoando No.46196805
I disagree. You're still human if you're deaf and mute. Our intellectual processing power, or that of animals for that matter, has nothing to do with language.
replies(1): >>46197236 #
12. MangoToupe No.46197236
Being deaf and mute doesn't imply lack of language. But being unable to communicate absolutely strikes me as non-human.
replies(1): >>46197632 #
13. MangoToupe No.46197242
What is there to reconcile? Humans are not the things we evolved from.
14. andoando No.46197632
OK, say you grew up alone in the woods: are you no longer human? The capability to learn language is no doubt unique, but language itself isn't the basis of intelligence.
replies(1): >>46198221 #
15. 1718627440 No.46197664
No, it isn't. Most animals also have a language, and humans do far more things differently than just speak.
replies(1): >>46198244 #
16. MangoToupe No.46198221
> OK, say you grew up alone in the woods: are you no longer human?

No. You are not. You are a hairless, bipedal ape.

> but language itself isn't the basis of intelligence.

Intelligence is an illusion based in language. Without language, intelligence is meaningless.

17. MangoToupe No.46198244
> most animals also have a language

Bruh