
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 2 comments
dtj1123 ◴[] No.44488004[source]
It's possible to construct a similar description of whatever it is that the human brain is doing that clearly fails to capture the fact that we're conscious. If you take a cross section of every nerve feeding into the human brain at a given time T, the action potentials across those cross sections can be embedded in R^n. If you take the history of those action potentials across the lifetime of the brain, you get a path through R^n that is continuous and maps roughly onto your subjectively experienced personal history, since your brain necessarily builds your experienced reality from this signal data moment to moment. If you then take the cross sections of every nerve feeding OUT of your brain at time T, you have another set of action potentials that can be embedded in R^m, which partially determines the state of the R^n embedding at time T + delta. This is not meaningfully different from the higher-dimensional game of snake described in the article, more or less reducing the experience of being a human to 'next nerve impulse prediction', but it obviously fails to capture the significance of the computation that determines what that next output should be.
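
For concreteness, the framing above amounts to a discrete-time input/output loop. Here is a minimal Python sketch of that abstraction; the dimensions, the linear readout, and the toy environment are all hypothetical stand-ins for whatever the brain (or an LLM) actually computes, chosen only to show the shape of 'next output prediction':

```python
import numpy as np

# Hypothetical sizes: n input channels (nerves feeding in), m output channels (nerves feeding out).
N_IN, N_OUT = 8, 3

rng = np.random.default_rng(0)
W = rng.normal(size=(N_OUT, N_IN))  # stand-in for whatever computation the system performs

def next_output(input_history):
    """Map the history of R^n input vectors to the next R^m output vector.
    Here it is just a linear readout of the latest input; only the shape of
    the computation matters for the analogy, not its content."""
    return W @ input_history[-1]

def environment(prev_input, output):
    """Toy world: the R^m output partially determines the next R^n input state."""
    feedback = np.zeros(N_IN)
    feedback[:N_OUT] = output
    return 0.9 * prev_input + 0.1 * feedback + rng.normal(scale=0.01, size=N_IN)

# Run the loop: a path through R^n (inputs) and R^m (outputs) over time T, T + delta, ...
history = [rng.normal(size=N_IN)]
for _ in range(5):
    y = next_output(history)                      # point in R^m at time T
    history.append(environment(history[-1], y))   # point in R^n at time T + delta
```

Nothing in that loop refers to consciousness; the description can be accurate at this level and still say nothing about it either way.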
replies(2): >>44488152 #>>44488197 #
Voloskaya ◴[] No.44488197[source]
I don’t see how your description “clearly fails to capture the fact that we're conscious”, though. There are many examples in nature of emergent phenomena that would be very hard to predict just by looking at their components.

This is the crux of the disagreement between those who believe AGI is possible and those who don’t. Some are convinced that we are “obviously” more than the sum of our parts, and thus that an LLM can’t achieve consciousness because it’s missing this magic ingredient; others believe consciousness is just an emergent behaviour of a complex device (the brain), and thus that we might be able to recreate it simply by scaling up the complexity of another system.

replies(1): >>44488579 #
dtj1123 ◴[] No.44488579[source]
Where exactly in my description do I invoke consciousness?

Where does the description given imply that consciousness is required in any way?

The fact that there's a non-obvious emergent phenomenon which is apparently responsible for your subjective experience, and that it's possible to provide a superficially accurate description of you as a system without referencing that phenomenon in any way, is my entire point. The fact that we can provide such a reductive description of LLMs without referencing consciousness has literally no bearing on whether or not they're conscious.

To be clear, I'm not making a claim as to whether they are or aren't, I'm simply pointing out that the argument in the article is fallacious.

replies(1): >>44489520 #
1. Voloskaya ◴[] No.44489520[source]
My bad, we are saying the same thing. I misinterpreted your last sentence as saying that the simplistic view of the brain you described does not account for consciousness.
replies(1): >>44489643 #
2. dtj1123 ◴[] No.44489643[source]
Ultimately my bad for letting my original comment turn into a word salad. Glad we've ended up on the same page though.