
443 points | jaredwiener
rideontime ◴[] No.45032301[source]
The full complaint is horrifying. This is not equivalent to a search engine providing access to information about suicide methods. It encouraged him to share these feelings only with ChatGPT, talked him out of actions which would have revealed his intentions to his parents. Praised him for hiding his drinking, thanked him for confiding in it. It groomed him into committing suicide. https://drive.google.com/file/d/1QYyZnGjRgXZY6kR5FA3My1xB3a9...
replies(6): >>45032582 #>>45032731 #>>45035713 #>>45036712 #>>45037683 #>>45039261 #
idle_zealot ◴[] No.45032582[source]
I wonder if we can shift the framing on these issues. The LLM didn't do anything: it has no agency, so it can bear no responsibility. OpenAI did these things. It is accountable for what it does, regardless of the sophistication of the tools it uses to do them, and regardless of intent. OpenAI drove a boy to suicide. More than once. The law must be interpreted this way; otherwise, any action can be wrapped in machine learning to avoid accountability.
replies(10): >>45032677 #>>45032798 #>>45032857 #>>45033177 #>>45033202 #>>45035815 #>>45036475 #>>45036923 #>>45037123 #>>45039144 #
ruraljuror ◴[] No.45033202[source]
I agree with your larger point, but I don't understand what you mean by saying the LLM didn't do anything. LLMs do do things, and they can absolutely have agency (hence all the agents being released by AI companies).

I don’t think this agency absolves companies of any responsibility.

replies(1): >>45033360 #
MattPalmer1086 ◴[] No.45033360[source]
An LLM does not have agency in the sense the OP means. It has nothing to do with agents.

Agency here refers to the human capacity to make independent decisions and take responsibility for one's actions. An LLM has no agency in this sense.

replies(1): >>45033735 #
ruraljuror ◴[] No.45033735[source]
If you define agency as something only humans can have, which is "human agency," then yes, of course LLMs don't have it. But there is a large body of philosophical work studying non-human agency, and it is from this characteristic that LLM agents take their name. Harari argues that LLMs are the first technology that acts as an agent. I think saying that they "can't do things" and are not agents misunderstands them and underestimates their potential.
replies(2): >>45034417 #>>45036679 #
MattPalmer1086 ◴[] No.45036679[source]
LLMs can obviously do things, so we don't disagree there; I didn't argue they couldn't do things. They can definitely act as agents of their operator.

However, I still don't think LLMs have "agency" in the sense of being capable of making choices and taking responsibility for their consequences. The responsibility for any actions they undertake still resides outside of them; they are sophisticated tools with no agency of their own.

If you know of any good works on nonhuman agency I'd be interested to read some.