I am baffled by seriously intelligent people attributing almost magical, never-to-be-replicated powers to something that - in my mind - is just a biological robot driven by an SNN with a bunch of hardwired machinery. Let alone attributing "human intelligence" to a single individual, when it's clearly distributed between biological evolution, social processes, and individuals.
>something that - in my mind - is just MatMul with interspersed nonlinearities
Processes in all huge models (not necessarily LLMs) can be described using very different formalisms, just like Newtonian and Lagrangian mechanics describe the same stuff in physics. You can say that an autoregressive model is a stochastic parrot that learned the input distribution, a next-token predictor, or that it does progressive pathfinding in a hugely multidimensional space, or pattern matching, or implicit planning, or, or, or... All of these descriptions are true, but only some are useful for predicting behavior.
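To make the point concrete, here is a toy sketch (all weights random, vocabulary and shapes invented for illustration, nothing like a real LLM): the very same computation can be read as "MatMul with interspersed nonlinearities" or as "a next-token predictor" - both descriptions are literally true of the same code.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]   # invented toy vocabulary
d = 8                                        # embedding width (arbitrary)
E = rng.normal(size=(len(vocab), d))         # token embeddings
W1 = rng.normal(size=(d, d))                 # hidden layer weights
W2 = rng.normal(size=(d, len(vocab)))        # output projection

def next_token_probs(token_id: int) -> np.ndarray:
    h = np.tanh(E[token_id] @ W1)            # MatMul + nonlinearity...
    logits = h @ W2                          # ...another MatMul...
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()                   # ...softmax: a next-token distribution

p = next_token_probs(vocab.index("cat"))
print({w: round(float(pr), 3) for w, pr in zip(vocab, p)})
```

Whether you call this "just matrix multiplication" or "predicting the next token" changes nothing about what it computes; it only changes which intuitions you bring to it.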
Given all that, I see absolutely no problem with anthropomorphizing an LLM to a certain degree, if it makes the meaning easier to convey, and I do not understand the nitpicking. Yeah, it's not an exact copy of a single Homo sapiens specimen. Who cares.