
46 points | petethomas | 1 comment
gchamonlive No.44397333
If you put lemons in a blender and add water, it'll produce lemon juice. If you put your hand in a blender, however, you'll get a mangled hand. Is this exposing dark tendencies of mangling bodies hidden deep down in blenders all across the globe? Or is it just doing what it's supposed to be doing?

My point is, we can add all sorts of security measures, but at the end of the day nothing is a replacement for user education and intention.

hiatus No.44397460
I disagree. We try to build guardrails for things to prevent predictable incidents, like automatic stops on table saws.
gchamonlive No.44403688
As we should, but if the table saw's automatic stopping mechanism breaks and you just bypass it, that's on you, not the table saw.

So if you get an LLM to spit out malware by crafting a prompt specifically to do that, it's not the model's fault. Moderating output may matter for companies that profit from selling inference time to users, but for us regular users it's completely tangential.