A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 33 comments
Al-Khwarizmi ◴[] No.44487564[source]
I have the technical knowledge to know how LLMs work, but I still find it pointless to not anthropomorphize, at least to an extent.

The language of "generator that stochastically produces the next word" is just not very useful when you're talking about, e.g., an LLM that is answering complex world modeling questions or generating a creative story. It's at the wrong level of abstraction, just as if you were discussing a UI events API and you were talking about zeros and ones, or voltages in transistors. Technically fine, but totally useless for reaching any conclusion about the high-level system.
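
(To be concrete about the low-level view: stripped of everything else, "stochastically produces the next word" amounts to something like the toy sketch below - the four-word vocabulary and the logits are made up, purely for illustration.)

    import numpy as np

    def sample_next_token(logits, temperature=1.0):
        # Turn raw scores into a probability distribution over the vocabulary,
        # then draw one token at random -- that is the entire claim being made
        # at this level of abstraction.
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        return np.random.choice(len(probs), p=probs)

    vocab = ["the", "cat", "sat", "mat"]      # toy vocabulary
    logits = np.array([2.0, 0.5, 1.0, 0.1])   # invented scores
    print(vocab[sample_next_token(logits)])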

We need a higher abstraction level to talk about higher level phenomena in LLMs as well, and the problem is that we have no idea what happens internally at those higher abstraction levels. So, considering that LLMs somehow imitate humans (at least in terms of output), anthropomorphization is the best abstraction we have, hence people naturally resort to it when discussing what LLMs can do.

replies(18): >>44487608 #>>44488300 #>>44488365 #>>44488371 #>>44488604 #>>44489139 #>>44489395 #>>44489588 #>>44490039 #>>44491378 #>>44491959 #>>44492492 #>>44493555 #>>44493572 #>>44494027 #>>44494120 #>>44497425 #>>44500290 #
grey-area ◴[] No.44487608[source]
On the contrary, anthropomorphism IMO is the main problem with narratives around LLMs - people genuinely talk about them thinking and reasoning when they are doing nothing of the sort (a framing actively encouraged by the companies selling them), and it is completely distorting discussions of their use and perceptions of their utility.
replies(13): >>44487706 #>>44487747 #>>44488024 #>>44488109 #>>44489358 #>>44490100 #>>44491745 #>>44493260 #>>44494551 #>>44494981 #>>44494983 #>>44495236 #>>44496260 #
cmenge ◴[] No.44487706[source]
I kinda agree with both of you. It might be a required abstraction, but it's a leaky one.

Long before LLMs, I would talk about classes / functions / modules like "it then does this, decides the epsilon is too low, chops it up and adds it to the list".

The difference, I guess, is that it was only to a technical crowd, and nobody would mistake this for anything it wasn't. Everybody knew that "it" didn't "decide" anything.

With AI being so mainstream and the math being much more elusive than a simple if..then, I guess it's just too easy to take this simple speaking convention at face value.

EDIT: some clarifications / wording

replies(4): >>44488265 #>>44488849 #>>44489378 #>>44489702 #
1. flir ◴[] No.44488265[source]
Agreeing with you, this is a "can a submarine swim" problem IMO. We need a new word for what LLMs are doing. Calling it "thinking" is stretching the word to breaking point, but "selecting the next word based on a complex statistical model" doesn't begin to capture what they're capable of.

Maybe it's cog-nition (emphasis on the cog).

replies(9): >>44488292 #>>44488690 #>>44489190 #>>44489381 #>>44489974 #>>44491127 #>>44491731 #>>44495034 #>>44497480 #
2. whilenot-dev ◴[] No.44488292[source]
"predirence" -> prediction meets inference and it sounds a bit like preference
replies(1): >>44489227 #
3. LeonardoTolstoy ◴[] No.44488690[source]
What does a submarine do? Submarine? I suppose you "drive" a submarine, which gets at the idea: submarines don't swim because ultimately they are "driven". I guess the issue is that we don't make up a new word for what submarines do, we just don't use human words.

I think the above poster gets a little distracted by suggesting the models are creative, which is itself disputed. Perhaps a better term, as above, would be to just use "model". They are models, after all. We don't make up a new portmanteau for submarines. They float, or drive, or submarine around.

So maybe an LLM doesn't "write" a poem, but instead "models a poem", which might indeed take away a little of the sketchy magic and fake humanness they tend to be imbued with.

replies(7): >>44488901 #>>44489424 #>>44489509 #>>44490723 #>>44490885 #>>44491594 #>>44492786 #
4. FeepingCreature ◴[] No.44488901[source]
Humans certainly model inputs. This is just using an awkward word and then making a point that it feels awkward.
5. psychoslave ◴[] No.44489190[source]
It does some kind of automatic inference (AI), and that's it.
6. psychoslave ◴[] No.44489227[source]
Except -ence is a regular morph, and you would rather suffix it to predict(at)-.

And prediction is already a hyponym of inference. Why not just use inference, then?

replies(1): >>44489726 #
7. JimDabell ◴[] No.44489381[source]
> this is a "can a submarine swim" problem IMO. We need a new word for what LLMs are doing.

Why?

A plane is not a fly and does not stay aloft like a fly, yet we describe what it does as flying despite the fact that it does not flap its wings. What are the downsides we encounter that are caused by using the word “fly” to describe a plane travelling through the air?

replies(4): >>44489398 #>>44490978 #>>44491449 #>>44495276 #
8. flir ◴[] No.44489398[source]
I was riffing on that famous Dijkstra quote.
9. flir ◴[] No.44489424[source]
I really like that, I think it has the right amount of distance. They don't write, they model writing.

We're very used to "all models are wrong, some are useful", "the map is not the territory", etc.

replies(2): >>44489602 #>>44499791 #
10. ◴[] No.44489509[source]
11. galangalalgol ◴[] No.44489602{3}[source]
No one was as bothered when we anthropomorphized CRUD apps simply for the purpose of conversing about "them". "Ack! The thing is corrupting tables again because it thinks we are still using API v3! Who approved that last MR?!" The fact that people are bothered by the same language now is indicative in itself.

If you want to maintain distance, pre-prompt models to structure all conversations, without pronouns, as being between a non-sentient language model and a non-sentient AGI. You can have the model call you out for referring to the model as existing. The language style that forces is interesting, and potentially more productive, except that there are fewer conversations formed like that in the training dataset. Translation being a core function of language models makes that less important, though.

As for confusing the map for the territory, that is precisely what philosophers like Metzinger say humans are doing by considering "self" to be a real thing and believing they are conscious, when they are just using the reasoning shortcut of narrating the meta-model to be the model.
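
(The pre-prompt idea above, as a rough sketch - the wording is invented here, just to show the shape, using the common system/user message format.)

    # Hypothetical pre-prompt along the lines described above; not a tested recipe.
    system_prompt = (
        "Structure every reply without personal pronouns, as an exchange "
        "between a non-sentient language model and a non-sentient AGI. "
        "If the user refers to the model as something that exists, point "
        "out the category error."
    )
    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Why does the thing keep corrupting tables?"},
    ]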
replies(1): >>44490309 #
12. whilenot-dev ◴[] No.44489726{3}[source]
I didn't think of prediction in the statistical sense here, but rather as a prophecy based on a vision, something that is inherently stored in a model without the knowledge of the modelers. I don't want to imply any magic or something supernatural here, it's just the juice that goes off the rails sometimes, and it gets overlooked due to the sheer quantity of the weights. Something like unknown bugs in production, but, because they still just represent a valid number in some computation that wouldn't cause any panic, these few bits can show a useful pattern under the right circumstances.

Inference would be the part that is deliberately learned and drawn from conclusions based on the training set, like in the "classic" sense of statistical learning.

13. intended ◴[] No.44489974[source]
It will help significantly to realize that the only thinking happening is when the human looks at the output and attempts to verify whether it is congruent with reality.

The rest of the time it’s generating content.

14. flir ◴[] No.44490309{4}[source]
> You can have the model call you out for referring to the model as existing.

This tickled me. "There ain't nobody here but us chickens".

I have other thoughts which are not quite crystallized, but I think UX might be having an outsized effect here.

replies(1): >>44491120 #
15. irthomasthomas ◴[] No.44490723[source]
Depends on whether you are talking about an LLM or to the LLM. Talking to the LLM, it would not understand that "model a poem" means to write a poem. Well, it will probably guess right in this case, but if you go too far out of band it won't understand you. The hard problem today is rewriting out-of-band tasks to be in band, and that requires anthropomorphizing.
replies(1): >>44493763 #
16. thinkmassive ◴[] No.44490885[source]
GenAI _generates_ output
17. dotancohen ◴[] No.44490978[source]
For what it's worth, in my language the motion of birds and the motion of aircraft _are_ two different words.
18. galangalalgol ◴[] No.44491120{5}[source]
In addition to he/she etc. there is a need for a button for no pronouns. "Stop confusing metacognition for conscious experience or qualia!" doesn't fit well. The UX for these models is extremely malleable. The responses are misleading mostly to the extent the prompts were already misled. The sorts of responses that arise from ignorant prompts are those found within the training data in the context of ignorant questions. This tends to make them ignorant as well. There are absolutely stupid questions.
19. Atlas667 ◴[] No.44491127[source]
A machine that can imitate the products of thought is not the same as thinking.

All imitations require analogous mechanisms, but that is the extent of their similarity: syntax. Thinking requires networks of billions of neurons, and then, not only that, but words can never exist on a plane because they do not belong to a plane. Words can only be stored on a plane; they are not useful on a plane.

Because of this LLMs have the potential to discover new aspects and implications of language that will be rarely useful to us because language is not useful within a computer, it is useful in the world.

It's like seeing loosely related patterns in a picture and continuing to derive from those patterns, which are real, but loosely related.

LLMs are not intelligence, but it's fine that we use that word to describe them.

20. Tijdreiziger ◴[] No.44491449[source]
Flying isn’t named after flies, they both come from the same root.

https://www.etymonline.com/search?q=fly

21. jorvi ◴[] No.44491594[source]
A submarine is propelled by a propeller and helmed by a controller (usually a human).

It would be swimming if it were propelled by drag (well, technically a propeller also uses drag via thrust, but you get the point). Imagine a submarine with a fish tail.

Likewise we can probably find an apt description in our current vocabulary to fittingly describe what LLMs do.

22. delusional ◴[] No.44491731[source]
> "selecting the next word based on a complex statistical model" doesn't begin to capture what they're capable of.

I personally find that description perfect. If you want it shorter you could say that an LLM generates.

23. j0057 ◴[] No.44492786[source]
A submarine is a boat and boats sail.
replies(2): >>44493077 #>>44496482 #
24. TimTheTinker ◴[] No.44493077{3}[source]
An LLM is a stochastic generative model and stochastic generative models ... generate?
replies(1): >>44493630 #
25. LeonardoTolstoy ◴[] No.44493630{4}[source]
And we are there. A boat sails, and a submarine sails. "A model generates" makes perfect sense to me. And saying ChatGPT generated a poem feels correct, personally. Indeed, a model (e.g. a linear regression) generates predictions for the most part.
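
(For what it's worth, that reads naturally even in code - a toy example with invented numbers, where the fitted model "generates" a prediction:)

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    y = np.array([2.1, 3.9, 6.2, 8.1])       # invented data
    slope, intercept = np.polyfit(x, y, 1)   # fit a linear regression
    print(slope * 5.0 + intercept)           # the model generates a prediction for x = 5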
26. dcookie ◴[] No.44493763{3}[source]
> it won't understand you

Oops.

replies(1): >>44493995 #
27. irthomasthomas ◴[] No.44493995{4}[source]
That's consistent with my distinction between talking about them vs to them.
28. ryeats ◴[] No.44495034[source]
It's more like muscle memory than cognition. So maybe procedural memory, but that isn't catchy.
replies(1): >>44495379 #
29. lelanthran ◴[] No.44495276[source]
> A plane is not a fly and does not stay aloft like a fly, yet we describe what it does as flying despite the fact that it does not flap its wings.

Flying doesn't mean flapping, and the word has a long history of being used to describe inanimate objects moving through the air.

"A rock flies through the window, shattering it and spilling shards everywhere" - see?

OTOH, we have never used the word "swim" in the same way - "The rock hit the surface and swam to the bottom" is wrong!

30. 01HNNWZ0MV43FF ◴[] No.44495379[source]
They certainly do act like a thing which has a very strong "System 1" but no "System 2" (per Thinking, Fast and Slow).
31. floam ◴[] No.44496482{3}[source]
Submarines dive.
32. seanhunter ◴[] No.44497480[source]
This is a total non-problem that has been invented by people so they have something new and exciting to be pedantic about.

When we need to speak precisely about a model and how it works, we have a formal language (mathematics) which allows us to be absolutely specific. When we need to empirically observe how the model behaves, we have a completely precise method of doing this (running an eval).
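
(A bare-bones sketch of what "running an eval" means here - the model and the cases are stand-ins, not a real harness:)

    cases = [("2+2", "4"), ("capital of France", "Paris")]

    def run_eval(model, cases):
        # Score the model by exact match against the expected answers.
        hits = sum(1 for prompt, expected in cases if model(prompt).strip() == expected)
        return hits / len(cases)

    print(run_eval(lambda prompt: "4", cases))   # dummy model scores 0.5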

Any other time, we use language in a purposefully intuitive and imprecise way, and that is a deliberate tradeoff which sacrifices precision for expressiveness.

33. seyebermancer ◴[] No.44499791{3}[source]
What about "they synthesize"?

It ties in with creation from many sources and with synthetic/artificial data. In prompts, I usually instruct my coding models with “synthesize” more than “generate”.