
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 3 comments
NetRunnerSu No.44487629
The author's critique of naive anthropomorphism is salient. However, the reduction to "just MatMul" falls into the same trap it seeks to avoid: it mistakes the implementation for the function. A brain is also "just proteins and currents," but this description offers no explanatory power.

The correct level of analysis is not the substrate (silicon vs. wetware) but the computational principles being executed. A modern sparse Transformer, for instance, is not "conscious," but it is an excellent engineering approximation of two core brain functions: the Global Workspace (via self-attention) and Dynamic Sparsity (via MoE).
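For concreteness, the "Dynamic Sparsity (via MoE)" point can be sketched as top-k expert routing: every expert is scored, but only the k highest-scoring experts actually run for a given token. This is a minimal illustrative sketch, not code from the linked post; the function names, toy experts, and dimensions are all made up for the example.

```python
import math
import random

random.seed(0)

def top_k_moe(x, gate_w, experts, k=2):
    """Dynamic sparsity: score all experts, but run only the top-k for this token."""
    # gate_w[i] is the gating weight vector for expert i (illustrative layout)
    logits = [sum(xi * wi for xi, wi in zip(x, w)) for w in gate_w]
    chosen = sorted(range(len(logits)), key=lambda i: logits[i])[-k:]
    m = max(logits[i] for i in chosen)
    probs = [math.exp(logits[i] - m) for i in chosen]
    z = sum(probs)
    probs = [p / z for p in probs]          # softmax over the chosen experts only
    outs = [experts[i](x) for i in chosen]  # the unchosen experts never execute
    return [sum(p * o[j] for p, o in zip(probs, outs)) for j in range(len(x))]

# Toy experts: each just scales the token by a different factor.
experts = [lambda x, s=s: [s * xi for xi in x] for s in (0.5, 1.0, 2.0, 3.0)]
gate_w = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
token = [1.0, -0.5, 0.25, 2.0]

out = top_k_moe(token, gate_w, experts, k=2)
print(len(out))  # 4
```

The point of the sketch is that computation is conditional on input, which is the functional property being compared to the brain, independent of whether the substrate is silicon or wetware.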

To dismiss these systems as incomparable to human cognition because their form is different is to miss the point. We should not be comparing a function to a soul, but comparing the functional architectures of two different information processing systems. The debate should move beyond the sterile dichotomy of "human vs. machine" to a more productive discussion of "function over form."

I elaborate on this here: https://dmf-archive.github.io/docs/posts/beyond-snn-plausibl...

replies(3): >>44488183 #>>44488211 #>>44488682 #
1. ACCount36 No.44488211
"Not conscious" is a silly claim.

We have no agreed-upon definition of "consciousness", no accepted understanding of what gives rise to "consciousness", no way to measure or compare "consciousness", and no test we could administer to either confirm presence of "consciousness" in something or rule it out.

The only answer to "are LLMs conscious?" is "we don't know".

It helps that the whole question is rather meaningless to practical AI development, which is far more concerned with (measurable and comparable) system performance.

replies(1): >>44488625 #
2. NetRunnerSu No.44488625
Now we do:

https://github.com/dmf-archive/IPWT

https://dmf-archive.github.io/docs/posts/backpropagation-as-...

But you're right, capital only cares about performance.

https://dmf-archive.github.io/docs/posts/PoIQ-v2/

replies(1): >>44498421 #
3. ACCount36 No.44498421
This looks to me like the usual "internet schizophrenics inventing brand new theories of everything".