←back to thread

159 points martinald | 2 comments | | HN request time: 0.571s | source
Show context
ryao ◴[] No.44538755[source]
Am I the only one who thinks mention of “safety tests” for LLMs is a marketing scheme? Cars, planes and elevators have safety tests. LLMs don’t. Nobody is going to die if a LLM gives an output that its creators do not like, yet when they say “safety tests”, they mean that they are checking to what extent the LLM will say things they do not like.
replies(9): >>44538785 #>>44538805 #>>44538808 #>>44538903 #>>44538929 #>>44539030 #>>44539924 #>>44540225 #>>44540905 #
1. recursive ◴[] No.44538903[source]
I also think it's marketing but kind of for the opposite reason. Basically I don't think any of the current technology can be made safe.
replies(1): >>44538982 #
2. nomel ◴[] No.44538982[source]
Yes, perfection is difficult, but it's relative. It can definitely be made much safer. Looking at the analysis of pre vs post alignment makes this obvious, including when the raw unaligned models are compared to "uncensored" models.