
443 points jaredwiener | 3 comments
nis0s No.45032298
Why did developers spread the idea of AI consciousness for LLMs in the first place? The usefulness and capability of an LLM are orthogonal to its capacity to develop consciousness.

I think people would use LLMs with more detachment if they didn't believe there was something like a person inside them, but they would still become reliant on them, much as people did on calculators for arithmetic.

replies(9): >>45032597 #>>45032598 #>>45033263 #>>45033321 #>>45033633 #>>45036683 #>>45036748 #>>45037522 #>>45037596 #
1. rsynnott No.45037522
I mean, see the outcry when OpenAI briefly removed GPT-4o from ChatGPT; people acted as if OpenAI had killed their friend. This is all deeply concerning, of course, but it does seem likely that a personified LLM is a more compelling product, and more likely to encourage dependence or addiction.
replies(1): >>45042338 #
2. skohan No.45042338
I wonder to what extent the 4o rollback was motivated by this exact case.
replies(1): >>45042675 #
3. rsynnott No.45042675
As in the removal of 4o, or its reinstatement? The model involved here was 4o, AFAICS; if the removal were related to this case, you'd expect them to remove it and bury it, not remove it and then bring it back a few days later.