A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
477 points by zdw | 13 comments
1. elliotto No.44485762
To claim that LLMs do not experience consciousness requires a model of how consciousness works. The author has not presented a model, and instead relies on emotive language leaning on the apparent absurdity of the claim. I would say that any model of consciousness one does present tends to come off as just as absurd as the claim that LLMs experience it. It's a great exercise to sit down and write out your own perspective on how consciousness works, to feel out where the holes are.

The author also claims that a function (R^n)^c -> (R^n)^c is dramatically different from the human experience of consciousness. Yet the author's text that I am reading, and any information they can communicate to me, exists entirely in (R^n)^c.

replies(4): >>44485798, >>44487957, >>44488208, >>44490162
2. shevis No.44485798
> requires a model of how consciousness works.

Not necessarily an entire model, just a single defining characteristic that can serve as a falsifying example.

> any information they can communicate to me, exists entirely in (R^n)^c

Also no. This is just a result of the digital medium we are currently communicating over. Merely standing in the same room as them would communicate information outside (R^n)^c.

3. seadan83 No.44487957
I believe the author is rather drawing this distinction:

LLMs: (R^n)^c -> (R^n)^c

Humans: [set of potentially many and complicated inputs that we effectively do not understand at all] -> (R^n)^c

The point is that the model of how consciousness works is unknown; that is precisely why the author does not present one.

4. quonn No.44488208
> To claim that LLMs do not experience consciousness requires a model of how consciousness works.

Nope. What can be asserted without evidence can also be dismissed without evidence. Hitchens's razor.

You know you have consciousness (by the very definition that you can observe it in yourself), and that's evidence. Because other humans are genetically and in every other way nearly identical to you, you can infer it for them as well. Because mammals are very similar, many people (but not everyone) infer it for them too. There is zero evidence for LLMs, and their _very_ construction suggests that they are like a calculator, or like Excel, or like any other piece of software, no matter how smart they may be or how many tasks they can do in the future.

Additionally, I am really surprised by how many people here confuse consciousness with intelligence. Have you never paused for a second in your life to "just be"? Done any meditation? Or even just existed for a few seconds without a train of thought? It is very obvious that language and consciousness are completely unrelated: there is no need for language, and I doubt there is even a need for intelligence, to be conscious.

Consider this:

In the end an LLM could be executed (slowly) on a CPU that accepts only very basic _discrete_ instructions, such as ADD and MOV. We know this for a fact. Those instructions can be executed arbitrarily slowly. There is no reason whatsoever to suppose that it should feel like anything to be that CPU, to say nothing of how it would subjectively feel to be a MOV instruction. It's ridiculous. It's unscientific. It's like believing that there's a spirit in the tree you see outside, just because - why not? - why wouldn't there be a spirit in the tree?
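
To make that concrete, here's a minimal sketch (illustrative Python, not any particular implementation) of how the core of an LLM layer reduces to elementary multiply-and-add steps - and multiplication itself, in turn, reduces to shifts and ADDs:

    # Minimal sketch: the heart of every LLM layer is the dot product,
    # which is nothing but repeated multiply-and-add - exactly the kind
    # of discrete step a CPU executes one instruction at a time.
    def dot(xs, ys):
        acc = 0.0
        for x, y in zip(xs, ys):
            acc = acc + x * y  # one multiply, one add per step
        return acc

    # A matrix-vector product (one layer's worth of work) is just many
    # dot products, and nothing stops you from running it arbitrarily slowly.
    def matvec(matrix, vector):
        return [dot(row, vector) for row in matrix]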

replies(1): >>44495735
5. tdullien No.44490162
Author here. What's the difference, in your perception, between an LLM and a large-scale meteorological simulation, if there is any?

If you're willing to ascribe the possibility of consciousness to any complex-enough computation of a recurrence equation (and hence to something like ... "earth"), I'm willing to agree that under that definition LLMs might be conscious. :)

replies(1): >>44495716
6. elliotto No.44495716
My personal views are an animist / panpsychist / pancomputationalist combination, drawing most of my inspiration from the works of Joscha Bach and Stephen Wolfram (https://writings.stephenwolfram.com/2021/03/what-is-consciou...). I think that the underlying substrate of the universe is consciousness, and that human, animal, and computer minds give rise to structures that are able to tell narratives about themselves, isolating themselves from the other (avidya in Buddhism). I certainly don't claim to be correct, but I present a model that others can interrogate and look for holes in.

Under my model, the systems you have described are conscious, but not in a way that lets them communicate, or experience time and memory, the way human beings do.

My general list of questions for anyone presenting a model of consciousness is: 1) Are you conscious? (hopefully you say yes, or our friend Descartes would like a word with you!) 2) Am I conscious? How do you know? 3) Is a dog conscious? 4) Is a worm conscious? 5) Is a bacterium conscious? 6) Is a human embryo / baby conscious? And if so, was there a point at which it was not conscious, and what does it mean for that switch to occur?

What is your view of consciousness?

replies(1): >>44495851
7. elliotto No.44495735
It seems like you are doing a lot of inferring about mammals experiencing consciousness, and you have drawn a line somewhere beyond them, while claiming that your process is scientific. Could I present the list of questions I put to the OP and ask where you draw the line, and why there?

My general list of questions for anyone presenting a model of consciousness is: 1) Are you conscious? (hopefully you say yes, or our friend Descartes would like a word with you!) 2) Am I conscious? How do you know? 3) Is a dog conscious? 4) Is a worm conscious? 5) Is a bacterium conscious? 6) Is a human embryo / baby conscious? And if so, was there a point at which it was not conscious, and what does it mean for that switch to occur?

I agree about the confusion of consciousness with intelligence, but these are complicated terms that aren't well suited to a forum where most people are interested in JavaScript type errors and RSUs. I usually use the term qualia. As to your example about existing for a few seconds without a train of thought: the Buddhists call this nirvana, and it's quite difficult to actually achieve.

replies(1): >>44526140
8. hiAndrewQuinn No.44495851
I'm a mind-body dualist and just happened to come across this list, and I think it's an interesting one. #1 we can answer Yes to; #2 through #6 are all strictly unknowable. The best we might be able to claim is some probability distribution over whether these things are conscious.

The intuitive ordering looks like 100% chance > P(#2 is conscious) > P(#6) > P(#3) > P(#4) > P(#5) > 0% chance, but the problem is that solipsism is a real motherfucker, and it's entirely possible qualia is meted out according to some wacko distance metric that couldn't possibly feel intuitive. There are many more such metrics out there than there are intuitive ones, so a prior of indifference doesn't help us much. Any ordering could theoretically turn out to be ontologically privileged; we simply have no way of knowing.

replies(1): >>44496099
9. elliotto No.44496099
I think you've fallen into the trap of Descartes' deus deceptor! Not only is #1 the only question from my list we can definitely answer yes to, but due to this demon it is actually the only postulate of anything at all that we can answer yes to. All else could be an illusion.

Assuming we escape the null space of solipsism and can reason about anything at all, we can think about what a model might look like that generates some ordering of P(#). Of course, without a hypothetical consciousness detector (one might believe or not believe that such a thing could exist), P(#) cannot be measured, and therefore falls outside the realm of the hypothetico-deductive model of science. This is often a point of contention for rationality-pilled science-cels.

Some of these models might be incoherent: a model that denies P(#1) doesn't seem very good, and a model that denies P(#2) but accepts P(#3) is a bit strange. We can't verify these, but we do need to operate under one (or, as you suggest, under a probability distribution over these models) if we want to make coherent statements about what is and isn't conscious.

replies(1): >>44498474
10. hiAndrewQuinn No.44498474
To be explicit, my P(#) is meant to be the Bayesian probability an observer gives to # being conscious, not the proposition that # is conscious. It's meant to model Descartes's deceptor, as well as disagreement of the kind "my friend thinks week-28 fetuses are probably (~80%) conscious, and I think they're probably (~20%) not". P(week-28 fetuses) itself is not true or false.

I don't think it's incoherent to make probabilistic claims like this. It might be incoherent to make deeper claims about what laws govern the distribution itself. Either way, what I find interesting is that if we also think there is such a thing as an amount of consciousness a thing can have, as in the panpsychist view, these two things create an inverse-square law of moral consideration that matches the shape of most people's intuitions oddly well.

For example: let's say a rock is probably not conscious, P(rock) < 1%. Even if it is, it doesn't seem like it would be very conscious. A low probability of a low amount multiplies out to a very low expected value, and that matches our intuitions about how much moral weight to give rocks.
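
Spelled out as arithmetic (the numbers here are purely illustrative, not claims):

    # Purely illustrative numbers: expected moral weight as
    # P(conscious) times an assumed "amount" of consciousness.
    p_rock, amount_rock = 0.01, 0.001  # barely possible, and barely conscious if so
    p_dog, amount_dog = 0.90, 0.20     # likely conscious, and moderately so

    print(p_rock * amount_rock)  # ~1e-05 -> negligible consideration
    print(p_dog * amount_dog)    # ~0.18  -> substantially more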

replies(1): >>44498635
11. elliotto No.44498635
Ah, I understand, you're exactly right, I misinterpreted the notation P(#). I was considering each model as assigning binary truth values to the propositions (e.g., physicalism might reject all but postulate #1, while an anthropocentric model might affirm only #1, #2, and #6), and modeling the probability distribution over those models instead. I think the expected value computation ends up with the same downstream result: distributions over propositions.
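
A sketch of what I had in mind (the model weights are made up purely for illustration):

    # Each model assigns binary truth values to propositions #1..#6,
    # and we hold a probability distribution over the models themselves.
    models = {
        "strict physicalism": {1: True, 2: False, 3: False, 4: False, 5: False, 6: False},
        "anthropocentric":    {1: True, 2: True, 3: False, 4: False, 5: False, 6: True},
        "panpsychism":        {1: True, 2: True, 3: True, 4: True, 5: True, 6: True},
    }
    weights = {"strict physicalism": 0.2, "anthropocentric": 0.5, "panpsychism": 0.3}

    # Marginalizing over models recovers a credence per proposition -
    # the same downstream object as assigning P(#) directly.
    def p_conscious(q):
        return sum(w for name, w in weights.items() if models[name][q])

    print(p_conscious(3))  # #3 (dog): 0.3, since only panpsychism affirms it
    print(p_conscious(2))  # #2: ~0.8 (0.5 + 0.3)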

By incoherent I was referring to internal inconsistencies of a model, not the probabilistic claims; i.e., a model that denies your own consciousness but accepts the consciousness of others is a difficult one to defend. I agree with your statement here.

Thanks for your comment, I enjoyed thinking about this. I learned the estimating-distributions approach from the rationalist/betting/LessWrong folks and think it works really well, but I've never thought much about how it applies to something unfalsifiable.

replies(1): >>44499037
12. hiAndrewQuinn No.44499037
You're welcome! Probability distributions over inherently unfalsifiable claims are exotic territory at first, but when I see actual philosophers in the wild debate these things, I often find a back-and-forth of such claims that looks very much like two people shifting likelihood values around. I take this as evidence that such a process is what's "really" going on one level removed from the arguments and their background assumptions themselves.
13. quonn No.44526140
I think I already answered those above. I draw the line between #3 and #4, possibly between #4 and #5. I don't know for sure, but there are good reasons to hold this belief.

> the Buddhists call this nirvana, and it's quite difficult to actually achieve.

Not really. The Zen Buddhists call what I described kensho, and it's not very hard to achieve. I specifically said a few seconds. Probably anyone who has wholeheartedly meditated for some time has experienced this.

Nirvana, on the other hand, is just the other side of practice-and-enlightenment as a drawn-out process. You may call it hard to achieve; others may call it the dharma gate of ease and joy.