224 points | martinald | 1 comment
ryao ◴[] No.44538755[source]
Am I the only one who thinks the mention of “safety tests” for LLMs is a marketing scheme? Cars, planes and elevators have safety tests. LLMs don’t. Nobody is going to die if an LLM gives an output that its creators do not like; when they say “safety tests”, they mean they are checking to what extent the LLM will say things they do not like.
replies(12): >>44538785 #>>44538805 #>>44538808 #>>44538903 #>>44538929 #>>44539030 #>>44539924 #>>44540225 #>>44540905 #>>44542283 #>>44542952 #>>44543574 #
1. simianwords ◴[] No.44540225[source]
I hope the same people questioning AI safety (which is reasonable) aren’t also concerned about Grok after the recent incident.

You have to understand that a lot of people do care about these kinds of things.