
179 points by martinald
ryao ◴[] No.44538755[source]
Am I the only one who thinks the mention of “safety tests” for LLMs is a marketing scheme? Cars, planes and elevators have safety tests. LLMs don’t. Nobody is going to die if an LLM gives an output that its creators do not like; when they say “safety tests”, they mean they are checking to what extent the LLM will say things they do not like.
replies(10): >>44538785 #>>44538805 #>>44538808 #>>44538903 #>>44538929 #>>44539030 #>>44539924 #>>44540225 #>>44540905 #>>44542283 #
eviks ◴[] No.44538785[source]
Why is your definition of safety so limited? Death isn't the only type of harm...
replies(1): >>44538796 #
ryao ◴[] No.44538796{3}[source]
There are other forms of safety, but whether a digital parrot says something that people do not like is not a form of safety. They are abusing the term safety for marketing purposes.
replies(1): >>44538885 #
eviks ◴[] No.44538885{4}[source]
You're abusing the terms by picking either the overly limited ("death") or the overly expansive ("not like") definition to fit your conclusion. Unless you reject the fact that harm can come from words/images, a parrot can parrot harmful words/images and thus be unsafe.
replies(2): >>44538891 #>>44539234 #
ryao ◴[] No.44538891{5}[source]
The maxim “sticks and stones may break my bones, but words can never hurt me” comes to mind here. That said, I think this misses the point that the LLM is not a gatekeeper to any of this.
replies(2): >>44538911 #>>44539028 #
jiggawatts ◴[] No.44539028{6}[source]
I find it particularly irritating that the models are so puritanical that they refuse to translate subtitles merely because the dialogue mentions violence.