
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points zdw | 8 comments
1. NetRunnerSu ◴[] No.44487629[source]
The author's critique of naive anthropomorphism is salient. However, the reduction to "just MatMul" falls into the same trap it seeks to avoid: it mistakes the implementation for the function. A brain is also "just proteins and currents," but this description offers no explanatory power.

The correct level of analysis is not the substrate (silicon vs. wetware) but the computational principles being executed. A modern sparse Transformer, for instance, is not "conscious," but it is an excellent engineering approximation of two core brain functions: the Global Workspace (via self-attention) and Dynamic Sparsity (via MoE).
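To make this concrete, here is a rough toy sketch (plain numpy; the single head, the shapes, and the per-token top-k routing are illustrative simplifications, not any production architecture): self-attention lets every token read from every other token's state, while an MoE router activates only a few experts per token.

    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def self_attention(x, Wq, Wk, Wv):
        # every token attends over the whole sequence: a shared, globally
        # broadcast context (the loose "global workspace" analogy)
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        return softmax(q @ k.T / np.sqrt(k.shape[-1])) @ v

    def moe_layer(x, experts, router_W, k=2):
        # each token is routed to its top-k experts; the rest stay inactive
        # (the loose "dynamic sparsity" analogy)
        logits = x @ router_W                      # (tokens, num_experts)
        top = np.argsort(logits, axis=-1)[:, -k:]  # k best experts per token
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            gates = softmax(logits[t, top[t]])
            for g, e in zip(gates, top[t]):
                out[t] += g * (x[t] @ experts[e])
        return out

    d, seq, n_experts = 16, 8, 4
    x = rng.normal(size=(seq, d))
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
    experts = rng.normal(size=(n_experts, d, d))
    router_W = rng.normal(size=(d, n_experts))

    y = moe_layer(self_attention(x, Wq, Wk, Wv), experts, router_W)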

To dismiss these systems as incomparable to human cognition because their form is different is to miss the point. We should not be comparing a function to a soul, but comparing the functional architectures of two different information processing systems. The debate should move beyond the sterile dichotomy of "human vs. machine" to a more productive discussion of "function over form."

I elaborate on this here: https://dmf-archive.github.io/docs/posts/beyond-snn-plausibl...

replies(3): >>44488183 #>>44488211 #>>44488682 #
2. quonn ◴[] No.44488183[source]
> A brain is also "just proteins and currents,"

This is actually not comparable, because the brain has a much more complex structure that is _not_ learned, even at that level. The proteins and their structure are not a result of training. The fixed part of LLMs is rather trivial and is, in fact, not much more than MatMul, which is very easy to understand - and we do. The fixed part of the brain, including the structure of all the proteins, is enormously complex and very difficult to understand - and we don't.
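To illustrate (a toy numpy sketch of my own, with arbitrary shapes, not any particular model): the entire fixed structure of a transformer block is a handful of matrix multiplies plus a softmax and a ReLU; everything hard to understand lives in the learned weight values.

    import numpy as np

    def transformer_block(x, W):
        # attention: three matmuls, a softmax, one more matmul
        q, k, v = x @ W["q"], x @ W["k"], x @ W["v"]
        s = q @ k.T / np.sqrt(x.shape[-1])
        a = np.exp(s - s.max(-1, keepdims=True))
        a /= a.sum(-1, keepdims=True)
        x = x + (a @ v) @ W["o"]
        # feed-forward: two matmuls with a ReLU in between
        return x + np.maximum(x @ W["up"], 0) @ W["down"]

    d = 8
    rng = np.random.default_rng(1)
    W = {n: rng.normal(size=(d, d)) for n in ["q", "k", "v", "o", "up", "down"]}
    y = transformer_block(rng.normal(size=(4, d)), W)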

replies(1): >>44488661 #
3. ACCount36 ◴[] No.44488211[source]
"Not conscious" is a silly claim.

We have no agreed-upon definition of "consciousness", no accepted understanding of what gives rise to "consciousness", no way to measure or compare "consciousness", and no test we could administer to either confirm presence of "consciousness" in something or rule it out.

The only answer to "are LLMs conscious?" is "we don't know".

It helps that the whole question is rather meaningless to practical AI development, which is far more concerned with (measurable and comparable) system performance.

replies(1): >>44488625 #
4. NetRunnerSu ◴[] No.44488625[source]
Now we have.

https://github.com/dmf-archive/IPWT

https://dmf-archive.github.io/docs/posts/backpropagation-as-...

But you're right, capital only cares about performance.

https://dmf-archive.github.io/docs/posts/PoIQ-v2/

replies(1): >>44498421 #
5. NetRunnerSu ◴[] No.44488661[source]
The brain is also trained: it performs hybrid supervised and unsupervised learning on the environment's uninterrupted multimodal input.

Please do not ignore your childhood.

6. quantumgarbage ◴[] No.44488682[source]
> A modern sparse Transformer, for instance, is not "conscious," but it is an excellent engineering approximation of two core brain functions: the Global Workspace (via self-attention) and Dynamic Sparsity (via MoE).

Could you suggest some literature supporting this claim? Went through your blog post but couldn't find any.

replies(1): >>44488895 #
7. NetRunnerSu ◴[] No.44488895[source]
Sorry, I didn't have time to find the relevant references earlier, so I'm attaching some now:

https://www.frontiersin.org/journals/computational-neuroscie...

https://arxiv.org/abs/2305.15775

8. ACCount36 ◴[] No.44498421{3}[source]
This looks to me like the usual "internet schizophrenics inventing brand new theories of everything".