
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points zdw | 21 comments
1. szvsw ◴[] No.44484909[source]
So the author’s core view is ultimately a Searle-like view: a computational, functional, syntactic rules based system cannot reproduce a mind. Plenty of people will agree, plenty of people will disagree, and the answer is probably unknowable and just comes down to whatever axioms you subscribe to in re: consciousness.

The author largely takes the view that it is more productive for us to ignore any anthropomorphic representations and focus on the more concrete, material, technical systems - I’m with them there… but only to a point. The flip side of all this is of course the idea that there is still something emergent, unplanned, and mind-like. So even if it is a stochastic system following rules, clearly the rules are complex enough (to the tune of billions of operations, with signals propagating through some sort of resonant structure, if you take a more filter-impulse-response-like view of sequential matmuls) to result in emergent properties. Even if we (people interested in LLMs with at least some level of knowledge of ML mathematics and systems) “know better” than to believe these systems possess morals, ethics, feelings, personalities, etc., the vast majority of people have no access to a meaningful understanding of the mathematical, functional representation of an LLM and will not take that view. For all intents and purposes the systems will at least seem to have those anthropomorphic properties, and so it seems like it is in fact useful to ask questions from that lens as well.

In other words, just as it’s useful to analyze and study these things as the purely technical systems they ultimately are, it is also, probably, useful to analyze them from the qualitative, ephemeral, experiential perspective that most people engage with them from, no?
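
A minimal toy sketch of the “stochastic system following rules” / sequential-matmuls picture described above (sizes and names are purely illustrative, nothing like a real model):

  import numpy as np

  rng = np.random.default_rng(0)
  vocab, dim, layers = 50, 16, 4
  E = rng.normal(size=(vocab, dim))    # toy embedding table
  Ws = [rng.normal(size=(dim, dim)) for _ in range(layers)]
  U = rng.normal(size=(dim, vocab))    # toy output projection

  def next_token(token_id):
      h = E[token_id]
      for W in Ws:                     # signal propagating through sequential matmuls
          h = np.tanh(h @ W)
      logits = h @ U
      p = np.exp(logits - logits.max())
      p /= p.sum()                     # softmax over the toy vocabulary
      return rng.choice(vocab, p=p)    # stochastic: sample rather than argmax

  print(next_token(3))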

replies(5): >>44485119 #>>44485130 #>>44485421 #>>44487589 #>>44488863 #
2. gtsop ◴[] No.44485119[source]
No.

Why would you ever want to amplify a false understanding that has the potential to affect serious decisions across various topics?

LLMs reflect (and badly, I might add) aspects of the human thought process. If you take a leap and say they are anything more than that, you might as well start considering the person appearing in your mirror as a living being.

Literally (and I literally mean it) there is no difference. The fact that a human image comes out of a mirror has no relation whatsoever with the mirror's physical attributes and functional properties. It has to do just with the fact that a man is standing in front of it. Stop feeding the LLM with data artifacts of human thought and it will immediately stop reflecting back anything resembling a human.

replies(2): >>44485259 #>>44485325 #
3. CharlesW ◴[] No.44485130[source]
> The flip side of all this is of course the idea that there is still something emergent, unplanned, and mind-like.

For people who have only a surface-level understanding of how they work, yes. A nuance of Clarke's law that "any sufficiently advanced technology is indistinguishable from magic" is that the bar is different for everybody, depending on the depth of their understanding of the technology in question. That bar is so low for our largely technologically-illiterate public that a bothersome percentage of us have started to augment and even replace religious/mystical systems with AI-powered godbots (LLMs fed "God Mode"/divination/manifestation prompts).

(1) https://www.spectator.co.uk/article/deus-ex-machina-the-dang... (2) https://arxiv.org/html/2411.13223v1 (3) https://www.theguardian.com/world/2025/jun/05/in-thailand-wh...

replies(3): >>44486372 #>>44490927 #>>44496159 #
4. szvsw ◴[] No.44485259[source]
I don’t mean to amplify a false understanding at all. I probably did not articulate myself well enough, so I’ll try again.

I think it is inevitable that some - many - people will come to the conclusion that these systems have “ethics”, “morals,” etc, even if I or you personally do not think they do. Given that many people may come to that conclusion though, regardless of if the systems do or do not “actually” have such properties, I think it is useful and even necessary to ask questions like the following: “if someone engages with this system, and comes to the conclusion that it has ethics, what sort of ethics will they be likely to believe the system has? If they come to the conclusion that it has ‘world views,’ what ‘world views’ are they likely to conclude the system has, even if other people think it’s nonsensical to say it has world views?”

> The fact that a human image comes out of a mirror has no relation what so ever with the mirror's physical attributes and functional properties. It has to do just with the fact that a man is standing in front of it.

Surely this is not quite accurate - the material properties (surface roughness, reflectivity, geometry, etc.) all influence the appearance of a perceptible image of a person. Look at yourself in a dirty mirror, a new mirror, a shattered mirror, a funhouse distortion mirror, a puddle of water, a window… all of these produce different images of a person, with different attendant phenomenological experiences for the person seeing their reflection. To take that a step further: the entire practice of portrait photography is predicated on the idea that the collision of different technical systems with the real world can produce different semantic experiences, and it’s the photographer’s role to tune and guide the system to produce some sort of contingent affect on the person viewing the photograph at some point in the future. No, there is no “real” person in the photograph, and yet that photograph can still convey something of person-ness, emotion, memory, etc. This contingent intersection of optics, chemical reactions, lighting, posture, and so on has the capacity to transmit something through time and space to another person. It’s not just a meaningless arrangement of chemical structures on paper.

> Stop feeding the LLM with data artifacts of human thought and it will immediately stop reflecting back anything resembling a human.

But, we are feeding it with such data artifacts and will likely continue to do so for a while, and so it seems reasonable to ask what it is “reflecting” back…

replies(1): >>44491529 #
5. degamad ◴[] No.44485325[source]
> Why would you ever want to amplify a false understanding that has the potential to affect serious decisions across various topics?

We know that Newton's laws are wrong, and that you have to take special and general relativity into account. Why would we ever teach anyone Newton's laws any more?

replies(1): >>44485627 #
6. brookst ◴[] No.44485421[source]
Thank you for a well thought out and nuanced view in a discussion where so many are clearly fitting arguments to foregone, largely absolutist, conclusions.

It’s astounding to me that so much of HN reacts so emotionally to LLMs, to the point of denying there is anything at all interesting or useful about them. And don’t get me started on the “I am choosing to believe falsehoods as a way to spite overzealous marketing” crowd.

7. ifdefdebug ◴[] No.44485627{3}[source]
Newton's laws are a good enough approximation for many tasks so it's not a "false understanding" as long as their limits are taken into account.
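As a back-of-the-envelope illustration of just how good that approximation is at everyday speeds (numbers are illustrative only):

  c = 299_792_458.0                            # speed of light, m/s
  v = 30.0                                     # roughly highway speed, m/s
  gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5    # Lorentz factor
  print(gamma - 1.0)                           # ~5e-15: Newton is off by parts in 10^15 here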
8. lostmsu ◴[] No.44486372[source]
Nah, as a person who knows in detail how LLMs work, with a probably unique alternative perspective in addition to the commonplace one, I find any claims of them not having emergent behaviors to be the same fallacy as claiming that crows can't be black because they have the DNA of a bird.
replies(1): >>44488448 #
9. tomhow ◴[] No.44487882[source]
Please don't do this here. If a comment seems unfit for HN, please flag it and email us at hn@ycombinator.com so we can have a look.
replies(1): >>44513878 #
10. latexr ◴[] No.44488448{3}[source]
> the same fallacy as claiming that crows can't be black because they have DNA of a bird.

What fallacy is that? I’m a fan of logical fallacies and have never heard that claim before, nor am I finding any reference with a quick search.

replies(3): >>44488627 #>>44489014 #>>44490632 #
11. quantumgarbage ◴[] No.44488627{4}[source]
I think s/he meant swans instead (in ref. to Popperian epistemology).

Not sure though; the point s/he is making isn't really clear to me either.

replies(1): >>44488674 #
12. latexr ◴[] No.44488674{5}[source]
I was thinking of the black swan fallacy as well. But it doesn’t really support their argument, so I remained confused.
13. imiric ◴[] No.44488863[source]
> The flip side of all this is of course the idea that there is still something emergent, unplanned, and mind-like.

What you identify as emergent and mind-like is a direct result of these tools being able to mimic human communication patterns unlike anything we've ever seen before. This capability is very impressive and has a wide range of practical applications that can improve our lives, and also cause great harm if we're not careful, but any semblance of intelligence is an illusion. An illusion that many people in this industry obsessively wish to propagate, because thar be gold in them hills.

14. FeepingCreature ◴[] No.44489014{4}[source]
(Not the parent)

It doesn't have a name, but I have repeatedly noticed arguments of the form "X cannot have Y, because <explains in detail the mechanism that makes X have Y>". I wanna call it "fallacy of reduction" maybe: the idea that because a trait can be explained with a process, this proves the trait absent.

(Ie. in this case, "LLMs cannot think, because they just predict tokens." Yes, inasmuch as they think, they do so by predicting tokens. You have to actually show why predicting tokens is insufficient to produce thought.)

replies(1): >>44502172 #
15. Xss3 ◴[] No.44489394[source]
Ok. How do you know?
16. iluvlawyering ◴[] No.44490632{4}[source]
Good catch. No such fallacy exists. Contextually, the implied reasoning (though faulty) relies on the fallacy of denying the antecedent. Modus ponens - if A then B - does NOT imply that if not A then not B. So if you see B, that doesn't mean A any more than not seeing A means not B. It's the difference between a necessary and sufficient condition - A is a sufficient condition for B, but modus ponens alone is not sufficient for determining whether either A or B is a necessary condition of the other.
17. naasking ◴[] No.44490927[source]
> For people who have only a surface-level understanding of how they work, yes.

This is too dismissive because it's based on an assumption that we have a sufficiently accurate mechanistic model of the brain that we can know when something is or is not mind-like. This just isn't the case.

18. gtsop ◴[] No.44491529{3}[source]
> I think it is useful and even necessary to ask questions like the following: “if someone engages with this system, and comes to the conclusion that it has ethics, what sort of ethics will they be likely to believe the system has? If they come to the conclusion that it has ‘world views,’ what ‘world views’ are they likely to conclude the system has, even if other people think it’s nonsensical to say it has world views?”

Maybe there is some scientific aspect of interest here that I do not grasp; I would assume it can make sense in some context of psychological study. My point is that if you go that route you accept the premise that "something human-like is there", which, by that person's understanding, will have tremendous consequences. Them seeing you accepting their premise (even for study) amplifies their wrong conclusions, that's all I'm saying.

> Surely this is not quite accurate - the material properties - surface roughness, reflectivity, geometry, etc - all influence the appearance of a perceptible image of a person.

These properties are completely irrelevant to the image of the person. They will reflect a rock, a star, a chair, a goose, a human. My point about LLMs is similar: they reflect what you put in there.

It is like putting veggies in the fridge, then opening it up the next day and saying "Whoa! There are veggies in my fridge, just like on my farm! My fridge is farm-like because veggies come out of it."

19. DennisP ◴[] No.44496159[source]
I've seen some of the world's top AI researchers talk about the emergent behaviors of LLMs. It's been a major topic over the past couple years, ever since Microsoft's famous paper on the unexpected capabilities of GPT4. And they still have little understanding of how it happens.
20. lostmsu ◴[] No.44502172{5}[source]
It's much simpler than that. X is in B therefore X is not in A is what being said, and this statement simply doesn't make sense unless you have a separate proof that A and B don't intersect.
21. szvsw ◴[] No.44513878{3}[source]
Hey out of curiosity were there any issues with my top level comment? Seemed pretty innocuous, curious what the problem was. Feel free to email me if it’s better suited for discussion outside of post context.