
159 points martinald | 2 comments
ryao ◴[] No.44538755[source]
Am I the only one who thinks the mention of “safety tests” for LLMs is a marketing scheme? Cars, planes and elevators have safety tests. LLMs don’t. Nobody is going to die if an LLM gives an output that its creators do not like, yet when they say “safety tests”, they mean that they are checking to what extent the LLM will say things they do not like.
replies(9): >>44538785 #>>44538805 #>>44538808 #>>44538903 #>>44538929 #>>44539030 #>>44539924 #>>44540225 #>>44540905 #
natrius ◴[] No.44538808[source]
An LLM can trivially instruct someone to take medications with adverse interactions, steer a mental health crisis toward suicide, or make a compelling case that a particular ethnic group is the cause of your society's biggest problem so they should be eliminated. Words can't kill people, but words can definitely lead to deaths.

That's not even considering tool use!

replies(9): >>44538847 #>>44538877 #>>44538896 #>>44538914 #>>44539109 #>>44539685 #>>44539785 #>>44539805 #>>44540111 #
ryao ◴[] No.44538847[source]
This is analogous to saying a computer can be used to do bad things if it is loaded with the right software. Indeed, people do load computers with the right software to do bad things, yet people are overwhelmingly opposed to measures that would stifle such things.

If you hook a chatbot up to a chat interface, or add tool use, it is probable that it will eventually output something that it should not, and that output will cause a problem. Preventing that is an unsolved problem, just as preventing people from abusing computers is an unsolved problem.

replies(3): >>44538876 #>>44539033 #>>44540550 #
ronsor ◴[] No.44538876[source]
As the runtime of any program approaches infinity, the probability of the program behaving in an undesired manner approaches 1.
replies(1): >>44538887 #
ryao ◴[] No.44538887[source]
That is not universally true. The yes program is a counterexample:

https://www.man7.org/linux/man-pages/man1/yes.1.html
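
The behavior of yes is trivially fixed: it prints the same line forever, so longer runtime never changes what it does. A minimal sketch of that idea in Python (the real coreutils yes is written in C and also handles write errors, so this is only an illustration):

    import sys

    def yes(text="y"):
        # Emit the same line forever; the output never drifts from what was asked.
        while True:
            print(text)

    if __name__ == "__main__":
        # With no arguments, print "y", matching the man page's default.
        yes(" ".join(sys.argv[1:]) or "y")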

replies(1): >>44538973 #
cgriswald ◴[] No.44538973[source]
Devil's advocate:

(1) Execute yes (with or without arguments, whatever you desire).

(2) Let the program run as long as you desire.

(3) When you stop desiring the program to spit out your argument,

(4) Stop the program.

Between (3) and (4) some time must pass. During this time the program is behaving in an undesired way. Ergo, yes is not a counterexample to the GP's claim.

replies(1): >>44539002 #
1. ryao ◴[] No.44539002[source]
I upvoted your reply for its clever (ab)use of ambiguity to argue the other side of a fairly open-and-shut case.

That said, I suspect the other person was actually agreeing with me, and tried to argue that software incorporating LLMs will eventually malfunction by claiming that this is true of all software. The yes program was an obvious counterexample. It is almost certain that any LLM will eventually generate some output that is undesired, given that it determines the next token to output based on probabilities. I say “almost” only because I do not know how to prove the conjecture. There is also some ambiguity in what counts as an LLM, since the first L stands for “large” and nobody has given a precise definition of large. If you look at literature from several years ago, you will find people calling 100 million parameters large, while some people these days will refuse to use the term LLM to describe a model of that size.
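
A back-of-the-envelope way to see the “almost certain” part, under the strong (and made-up) assumption that each sampled token independently carries some fixed probability p of being one the developers consider undesired:

    # Toy model, not a measurement: p is an assumed per-token probability
    # of sampling an undesired token.
    p = 1e-6
    for n in (10**6, 10**7, 10**8):
        risk = 1 - (1 - p) ** n  # chance of at least one undesired token in n tokens
        print(f"{n:>12,} tokens: {risk:.5f}")

Since (1 - p)^n tends to 0 for any p > 0, the chance of at least one undesired token tends to 1 as output length grows; the part I cannot prove is whether p is really bounded away from zero for every model and prompt.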

replies(1): >>44539039 #
2. cgriswald ◴[] No.44539039[source]
Thanks, it was definitely tongue-in-cheek. I agree with you on both counts.