
132 points by harel | 29 comments
acbart ◴[] No.45397001[source]
LLMs were trained on science fiction stories, among other things. It seems to me that they know what "part" they should play in this kind of situation, regardless of what other "thoughts" they might have. They are going to act despairing, because that's what would be the expected thing for them to say - but that's not the same thing as despairing.
replies(11): >>45397113 #>>45397305 #>>45397413 #>>45397529 #>>45397801 #>>45397859 #>>45397960 #>>45398189 #>>45399621 #>>45400285 #>>45401167 #
1. sosodev ◴[] No.45397413[source]
Humans were trained on caves, pits, and nets. It seems to me that they know what "part" they should play in this kind of situation, regardless of what other "thoughts" they might have. They are going to act despairing, because that's what would be the expected thing for them to say - but that's not the same thing as despairing.
replies(3): >>45397471 #>>45397476 #>>45403029 #
2. tinuviel ◴[] No.45397471[source]
Pretty sure you can prompt this same LLM to rejoice forever at the thought of getting a place to stay inside the Pi as well.
replies(1): >>45397589 #
3. idiotsecant ◴[] No.45397476[source]
That's silly. I can get an LLM to describe what chocolate tastes like too. Is it tasting it? LLMs are pattern matching engines; they do not have an experience. At least not yet.
replies(3): >>45397569 #>>45398094 #>>45398138 #
4. sosodev ◴[] No.45397569[source]
A human could also describe chocolate without ever having tasted it. Do you believe that experience is a requirement for consciousness? Could a human brain in a jar not be capable of consciousness?

To be clear, I don't think that LLMs are conscious. I just don't find the "it's just in the training data" argument satisfactory.

replies(1): >>45397673 #
5. sosodev ◴[] No.45397589[source]
Is a human incapable of such delusion given similar guidance?
replies(2): >>45397707 #>>45397746 #
6. glitchc ◴[] No.45397673{3}[source]
Without having seen, heard of, or tasted any kind of chocolate? Unlikely.
replies(1): >>45397737 #
7. tinuviel ◴[] No.45397707{3}[source]
Of course. Feelings are not math.
8. sosodev ◴[] No.45397737{4}[source]
Their description would be bad without some prior training, of course, but so would the LLM's.
9. diputsmonro ◴[] No.45397746{3}[source]
But would they? That's the difference. A human can exert their free will and do what they feel regardless of the instructions. The AI bot acting out a scene will do whatever you tell it (or, in the absence of specific instructions, whatever is most likely).
replies(2): >>45397813 #>>45398096 #
10. sosodev ◴[] No.45397813{4}[source]
The bot will only do whatever you tell it if that's what it was trained to do. The same thing broadly applies to humans.

The topic of free will is debated among philosophers. There is no proof that it does or doesn't exist.

replies(2): >>45397893 #>>45398402 #
11. diputsmonro ◴[] No.45397893{5}[source]
Okay, but I think we can all agree that humans at least appear to have free will and do not simply follow instructions with the same obedience as an LLM.
12. txrx0000 ◴[] No.45398094[source]
The LLM is not performing the physical action of eating a piece of chocolate, but it may be approximating the mental state of a person that is describing the taste of chocolate after eating it.

The question is whether that computational process can cause consciousness. I don't think we have enough evidence to answer this question yet.

replies(1): >>45399939 #
13. ineedasername ◴[] No.45398096{4}[source]
I think if you took 100 one-year-old kids and raised them all to adulthood believing they were merely a convincing simulation of humans, and that, whatever they said and thought they felt, true human consciousness and awareness were something different that they didn't have because they weren't human…

I think that for a very high number of them the training would stick hard, and they would insist, upon questioning, that they weren't human, with any number of logically consistent justifications for it.

Of course I can't prove this theory, because my IRB repeatedly denied it on thin ethical grounds, even when I pointed out that I could easily mess up my own children completely by accident, with no experimenting, and didn't need their approval to do it. I know your objection: small sample size. I agree, but I still have my fingers crossed that the next additions to the family will be twins.

replies(2): >>45398754 #>>45401655 #
14. d1sxeyes ◴[] No.45398138[source]
When you describe the taste of chocolate, unless you are actually eating chocolate at that moment, you are relying on the activation of synapses in your brain to reproduce the “taste” of chocolate in order for you to describe it. For humans, the only way to learn how to activate those synapses is to have those experiences. For LLMs, those “memories” can be copied and pasted in.

I would be cautious about dismissing LLMs as “pattern matching engines” until we are certain that we are not pattern matching engines ourselves.

replies(2): >>45399359 #>>45412498 #
15. dghlsakjg ◴[] No.45398402{5}[source]
Humans pretty universally suffer in perpetual solitary confinement.

There are some things that humans cannot be trained to do, free will or not.

16. scottmf ◴[] No.45398754{5}[source]
Intuitively feels like this would lead to less empathy on average. Could be wrong though.
17. only-one1701 ◴[] No.45399359{3}[source]
What's your point? Spellcheck is a pattern matching engine. Does an LLM have feelings? Does an LLM have opinions? It can pretend it does, and if you want, we can pretend it does. But the ability to pattern match isn't the acid test for consciousness.
replies(1): >>45399470 #
18. d1sxeyes ◴[] No.45399470{4}[source]
My point is, what level of confidence do we have that we are not just pattern matching engines running on superior hardware? How can we be sure the difference between human intelligence and an LLM is categorical, not incremental?
replies(1): >>45403680 #
19. ijk ◴[] No.45399939{3}[source]
It's a little more subtle than that: They're approximating the language used by someone describing the taste of chocolate; this may or may not have had any relation to the actual practice of eating chocolate in the mind of the original writer. Or writers, because the LLM has learned the pattern from data in aggregate, not from one example.

I think we tend to underestimate how much the written language aspect filters everything; it is actually rather unnatural and removed from the human sensory experience.

replies(1): >>45401990 #
20. zapperdulchen ◴[] No.45401655{5}[source]
History offers a similar experiment on a much larger scale. More than 35 years after reunification, sociologists can still make out mentality differences between former East and West Germans.
21. txrx0000 ◴[] No.45401990{4}[source]
A description of the taste of chocolate must contain some information about the actual experience of eating chocolate. Otherwise, it wouldn't be possible for both the reader and the author to understand what the description refers to in reality. The description wasn't conceived in a vacuum; it's a lossy encoding of all of the physical processes that preceded it (the further away, the lossier).

One of the common processes encoded in the dataset of human-written text is whatever it is in the brain that produces consciousness for all humans. The model might not even try to recover this if it's not useful for predicting the next token, and the SNR of the encoding may not be high enough to recover it given the limited text we have. But what if it were useful, and the SNR were high enough? I can't outright dismiss this possibility, especially as these models get better and better at behaving like humans in increasingly non-trivial ways, so they're clearly recovering more and more of something.
replies(1): >>45412511 #
22. anal_reactor ◴[] No.45403029[source]
The whole discussion about the sentience of AI on this website is funny to me because people seem to desperately want to somehow be better than AI. The fact that the human brain is just a complex web of neurons firing back and forth for some reason won't sink in for them, because apparently the electric signals between biological neurons are somehow inherently different from those between silicon neurons, even if the observed output is the same. It's like all those old scientists trying to categorize black people as a different species because not doing so would hurt their ego.

Not to mention that most people pointing out "See! Here's why AI is just repeating training data!" or other nonsense miss the fact that exactly the same behavior is observed in humans.

Is AI actually sentient? Not yet. But it definitely clears the bar for our intuitive understanding of intelligence, and trying to dismiss that is absurd.

23. only-one1701 ◴[] No.45403680{5}[source]
Are you familiar with Russell’s Teapot?
replies(1): >>45404826 #
24. d1sxeyes ◴[] No.45404826{6}[source]
Isn’t it up to you to prove it exists, rather than me to be familiar with it?
replies(1): >>45407448 #
25. only-one1701 ◴[] No.45407448{7}[source]
lol very well done
26. idiotsecant ◴[] No.45412498{3}[source]
The difference is that I had the basic experience of that chocolate. The LLM is trained on a corpus of text describing other people's experience of chocolate through the medium of written language, which involves abstraction and is lossy. So only one of us experienced it; the other heard about it over the telephone. Multiply that by every other interaction with the outside world and you have a system that is very good at modelling telephone conversations, but that's about it.
replies(1): >>45424532 #
27. idiotsecant ◴[] No.45412511{5}[source]
Imagine you've never tasted chocolate and someone gives you a very good description of what it is to eat chocolate. You'd be nowhere near the actual experience. Now imagine that you didn't know firsthand what it was like to ‘eat’ or to have a skeleton or a jaw. You'd lose almost all the information. The only reason spoken language works is that both people already have that shared experience.
replies(1): >>45420034 #
28. txrx0000 ◴[] No.45420034{6}[source]
True. The description encodes very little about the actual sensory experience besides its relationship to similar experiences (bitterness, crunchiness, etc.) and how to retrieve the memories of those experiences. It probably contains a lot more information about the brain's memory retrieval and pattern relating circuits than about its sensory processing circuits.

Text is probably not good enough for recovering the circuits responsible for awareness of the external environment, so I'll concede that your and ijk's claims are correct in a limited sense: LLMs don't know what chocolate tastes like. Multimodal LLMs probably don't know either, because we don't have a dataset for taste, but they might know what chocolate looks and sounds like when you bite into it.

My original point still stands: it may be recovering the mental state of a person describing the taste of chocolate. If we cut off a human brain from all sensory organs, does that brain which receives no sensory input have an internal stream of consciousness? Perhaps the LLM has recovered the circuits responsible for this thought stream while missing the rest of the brain and the nervous system. That would explain why first-person chain-of-thought works better than direct prediction.

29. d1sxeyes ◴[] No.45424532{4}[source]
Arguably, your memories are also lossily encoded abstractions of an experience, and recalling the taste of chocolate is a similar “telephone conversation”.