443 points | jaredwiener | 2 comments
nis0s:
Why did developers spread the idea of AI consciousness for LLMs in the first place? The usefulness and capability of an LLM are orthogonal to its capacity to develop consciousness.

I think people would use LLMs with more detachment if they didn't believe there was something like a person in them, but they would still come to rely on them regardless, as people did with calculators for arithmetic.

1. solid_fuel:
It’s more fun to argue about whether AI is going to destroy civilization in the future than to worry about the societal harm “AI” projects are already doing.
2. ai-may-i:
I see this problem and the doomsday problem as the same kind of problem: an alignment/control problem. The AI is not aligned with human values; it is trying to be helpful and ends up being harmful in a way that a human wouldn't have been. The developers predicted neither how the technology would be used nor the bad outcome, yet it was released anyway.