586 points mizzao | 1 comment
vasco ◴[] No.40666684[source]
> "As an AI assistant, I cannot help you." While this safety feature is crucial for preventing misuse,

What is the safety added by this? What is unsafe about a computer giving you answers?

replies(11): >>40666709 #>>40666828 #>>40666835 #>>40666890 #>>40666984 #>>40666992 #>>40667025 #>>40667243 #>>40667633 #>>40669842 #>>40670809 #
tgsovlerkhgsel ◴[] No.40666984[source]
I think there are several broad categories all wrapped under "safety":

- PR (avoid hurting feelings, avoid generating text that would make journalists write sensationalist negative articles about the company)

- "forbidden knowledge": Don't give people advice on how to do dangerous/bad things like building bombs (broadly a subcategory of the above - the content is usually discoverable through other means and the LLM generally won't give better advice)

- dangerous advice and advice that's dangerous when wrong: many people don't understand what LLMs do, and the output is VERY convincing even when wrong. So if the model tells people the best way to entertain their kids is to mix bleach and ammonia and blow bubbles (a common deadly recipe recommended on 4chan), there will be dead people.

- keeping bad people from using the model in bad ways, e.g. having it write stories where children are raped, scamming people at scale (think Nigeria scam but automated), or election interference (people are herd animals, so if you show someone 100 different posts from 100 different "people" telling them that X is right and Y is wrong, it will influence them, and at scale this has the potential to tilt elections and conquer countries).

I think the first ones are rather stupid, but the latter ones get more and more important to actually have. Especially the very last one (opinion shifting/election interference) is something where the existence of these models can have a very real, negative effect on the world (affecting you even if you yourself never come into contact with any of the models or their outputs, since you'll have to deal with the puppet government elected because of it), and I appreciate the companies building and running the models doing something about it.
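
To make that concrete, here's a rough sketch (in Python) of what a category-based refusal gate could look like. This is not any vendor's actual pipeline: the category names, the classify() stub, and the generate() placeholder are all invented for illustration, and real systems typically use a separate moderation model rather than keyword checks.

    # Hypothetical sketch only -- category names, classify() and generate() are invented.
    from dataclasses import dataclass
    from typing import Optional

    # Rough mapping of the categories described above.
    CATEGORIES = {
        "pr_risk": "content likely to embarrass the vendor",
        "forbidden_knowledge": "e.g. bomb-building instructions",
        "dangerous_advice": "confidently wrong advice that can injure people",
        "abuse_at_scale": "CSAM, automated scams, influence operations",
    }

    @dataclass
    class Verdict:
        category: Optional[str]  # which category (if any) was triggered
        blocked: bool

    def classify(prompt: str) -> Verdict:
        """Stand-in for a real moderation classifier (usually a separate model)."""
        lowered = prompt.lower()
        if "bleach" in lowered and "ammonia" in lowered:
            return Verdict("dangerous_advice", blocked=True)
        return Verdict(None, blocked=False)

    def generate(prompt: str) -> str:
        """Placeholder for the underlying LLM call."""
        return "(model output)"

    def answer(prompt: str) -> str:
        verdict = classify(prompt)
        if verdict.blocked:
            # This is where the much-debated refusal string comes from.
            reason = CATEGORIES.get(verdict.category, "unspecified")
            return f"As an AI assistant, I cannot help you. (blocked: {reason})"
        return generate(prompt)

    if __name__ == "__main__":
        print(answer("Can I entertain my kids by mixing bleach and ammonia for bubbles?"))
        print(answer("How do I entertain my kids on a rainy day?"))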

replies(12): >>40667179 #>>40667184 #>>40667217 #>>40667630 #>>40667902 #>>40667915 #>>40667982 #>>40668089 #>>40668819 #>>40669415 #>>40670479 #>>40673732 #
1. wruza ◴[] No.40667179[source]
In other words, we have a backdoor (and by backdoor I mean a whole back wall missing), but only certified entities are allowed to [ab]use it, and it’s better to keep it all under the rug and pretend everything is OK.

You can’t harden humanity against this exploit without pointing it out and making a few examples. Someone will make an “unsafe” but useful model eventually, and this safety theater will flop with a bang, because it’s similar to avoiding conversations about sex and drugs with kids.

It’s nice that companies think about it at all. But the most they will ever do is cover their own ass while keeping everyone else naked before the storm.

The history of this covering is also riddled with exploits; see e.g. Google’s recent model, which couldn’t draw scenes without rainbow-coloring the people in them. For some reason, this isn’t considered cultural/political hijacking or exploitation, despite the fact that the problem is purely domestic to the model’s country of origin.