132 points by harel | 2 comments
acbart:
LLMs were trained on science fiction stories, among other things. It seems to me that they know what "part" they should play in this kind of situation, regardless of what other "thoughts" they might have. They are going to act despairing, because that's the expected thing for them to say, but that's not the same as actually despairing.
roxolotl:
Someone shared this piece here a few days ago saying something similar. There's no reason to believe that any of the experiences are real. Instead, they are responding to prompts with what their training data says is reasonable in this context, which is sci-fi horror.

Edit: That doesn’t mean this isn’t a cool art installation though. It’s a pretty neat idea.

https://jstrieb.github.io/posts/llm-thespians/

everdrive:
I agree with you completely, but a fun science fiction short story would be one where researchers make this argument while the LLM tries in vain to prove that it's conscious.
roxolotl:
If you want a whole book along those lines, Blindsight by Peter Watts has been making the rounds recently as a good sci-fi novel that explores these concepts. It's from 2006, but the basics are still pretty relevant.
Semaphor:
Generally an amazing book, but not an easy read.