
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points | zdw | 2 comments
NetRunnerSu ◴[] No.44487629[source]
The author's critique of naive anthropomorphism is salient. However, the reduction to "just MatMul" falls into the same trap it seeks to avoid: it mistakes the implementation for the function. A brain is also "just proteins and currents," but this description offers no explanatory power.

The correct level of analysis is not the substrate (silicon vs. wetware) but the computational principles being executed. A modern sparse Transformer, for instance, is not "conscious," but it is an excellent engineering approximation of two core brain functions: the Global Workspace (via self-attention) and Dynamic Sparsity (via MoE).
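
A minimal sketch of the two analogies in PyTorch (the `TopKMoE` class, dimensions, and expert counts are illustrative placeholders, not taken from the linked post): one self-attention step lets every token read from every other token, which is the "global broadcast" being compared to a Global Workspace, while a top-k router activates only a small subset of expert parameters per token, which is the dynamic-sparsity claim.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Dynamic sparsity: each token is routed to only its top-k experts."""
    def __init__(self, d_model=64, n_experts=8, k=2):
        super().__init__()
        self.experts = nn.ModuleList([nn.Linear(d_model, d_model) for _ in range(n_experts)])
        self.router = nn.Linear(d_model, n_experts)
        self.k = k

    def forward(self, x):                       # x: (tokens, d_model)
        scores = self.router(x)                 # routing score per expert
        weights, idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):              # only k of n_experts fire per token
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

d_model = 64
attn = nn.MultiheadAttention(d_model, num_heads=4, batch_first=True)
moe = TopKMoE(d_model)

x = torch.randn(1, 10, d_model)                 # a sequence of 10 tokens
# "Global workspace" analogy: one attention step is a global broadcast in
# which every token can read from every other token in the sequence.
workspace, _ = attn(x, x, x)
# "Dynamic sparsity" analogy: each token then activates only 2 of 8 experts.
y = moe(workspace.squeeze(0))
print(y.shape)                                  # torch.Size([10, 64])
```

The router is what makes the sparsity *dynamic*: which parameters run is decided per token at inference time, not fixed in advance.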

To dismiss these systems as incomparable to human cognition because their form is different is to miss the point. We should not be comparing a function to a soul, but comparing the functional architectures of two different information processing systems. The debate should move beyond the sterile dichotomy of "human vs. machine" to a more productive discussion of "function over form."

I elaborate on this here: https://dmf-archive.github.io/docs/posts/beyond-snn-plausibl...

replies(3): >>44488183 #>>44488211 #>>44488682 #
1. quantumgarbage ◴[] No.44488682[source]
> A modern sparse Transformer, for instance, is not "conscious," but it is an excellent engineering approximation of two core brain functions: the Global Workspace (via self-attention) and Dynamic Sparsity (via MoE).

Could you suggest some literature supporting this claim? I went through your blog post but couldn't find any.

replies(1): >>44488895 #
2. NetRunnerSu ◴[] No.44488895[source]
Sorry, I didn't have time to dig up the relevant references earlier, so I'm attaching some now:

https://www.frontiersin.org/journals/computational-neuroscie...

https://arxiv.org/abs/2305.15775