
42 points by Liwink | 1 comment
DaveZale No.45088683
Why is this stuff legal?

There should be a prominent "black box" warning on every chatbot message from AI, like "This is AI guidance which can potentially result in grave bodily harm to yourself and others."

lukev No.45088906
That would not help in the slightest, any more than a "surgeon general's warning" stops people from smoking.

The problem is calling it "AI" to start with. This (along with the chat format itself) primes users to think of it as an entity... something with care, volition, motive, goals, and intent. Although it can emulate these traits, it doesn't have them.

Chatting with an LLM is entering a one-person echo chamber, a funhouse mirror that reflects back whatever semantic region your initial query landed it in. And the longer you chat, the deeper that rabbit hole goes.

duskwuff No.45090045
> That would not help in the slightest, any more than a "surgeon general's warning" helps stop smokers.

Particularly given documented instances where a user has asked the language model about similar warnings, and the model has responded by downplaying them or telling the user to disregard them.