
443 points jaredwiener | 1 comment | source
adzm ◴[] No.45032407[source]
Wow, he explicitly stated he wanted to leave the noose out so someone would stop him, and ChatGPT told him not to. This is extremely disturbing.
replies(1): >>45033325 #
causal ◴[] No.45033325[source]
It is disturbing, but I think a human therapist would also have told him not to do that and would instead have resorted to some other intervention. It may be an example of why having a partial therapist is worse than having none: it had the training data to know that a real therapist wouldn't encourage displaying nooses at home, but it did not have the holistic humanity and embodiment needed to intervene appropriately.

Edit: I should add that the sycophantic "trust me only"-type responses bear no resemblance to appropriate therapy, and they are where OpenAI most likely holds responsibility for its model's influence.

replies(1): >>45035732 #
incone123 ◴[] No.45035732[source]
Even here you are anthropomorphising. It doesn't 'know' anything. A human therapist would escalate this to a doctor or even EMS.
replies(1): >>45038950 #
causal ◴[] No.45038950[source]
"know" is used colloquially this way even for non-AI systems.

"it encodes the training data in weights to predict a token mimicking a human ..." - better?