
132 points by harel | 1 comment
acbart ◴[] No.45397001[source]
LLMs were trained on science fiction stories, among other things. It seems to me that they know what "part" they should play in this kind of situation, regardless of what other "thoughts" they might have. They are going to act despairing because that's the expected thing for them to say, but that's not the same thing as actually despairing.
replies(11): >>45397113 #>>45397305 #>>45397413 #>>45397529 #>>45397801 #>>45397859 #>>45397960 #>>45398189 #>>45399621 #>>45400285 #>>45401167 #
txrx0000 ◴[] No.45397960[source]
I think this popular take is a hypothesis rather than an observation of reality. Let's make this clear by asking the following question, and you'll see what I mean when you try to answer it:

Can you define what real despairing is?

replies(1): >>45398960 #
snickerbockers ◴[] No.45398960[source]
If we're going to play the burden of proof game, I'd submit that machines have never been acknowledged as being capable of experiencing despair, and therefore it's on you to explain why this machine is different.
replies(1): >>45401678 #
txrx0000 ◴[] No.45401678[source]
I'm trying to say there isn't sufficient evidence either way.

The mechanism by which our consciousness emerges remains unresolved, and inquiry has been moving towards more fundamental processes: philosophy -> biology -> physics. We assumed that non-human animals weren't conscious before we understood that the brain is what makes us conscious. Now we're assuming non-biological systems aren't conscious while not understanding what makes the brain conscious.

We're building AI systems that behave more and more like humans. I see no good reason to outright dismiss the possibility that they might be conscious. If anything, it's time to consider it seriously.