What is the safety added by this? What is unsafe about a computer giving you answers?
- PR (avoid hurting feelings, avoid generating text that would make journalists write sensationalist negative articles about the company)
- "forbidden knowledge": Don't give people advice on how to do dangerous/bad things like building bombs (broadly a subcategory of the above - the content is usually discoverable through other means and the LLM generally won't give better advice)
- dangerous advice and advice that's dangerous when wrong: many people don't understand what LLMs do, and the output is VERY convincing even when wrong. So if the model tells people the best way to entertain their kids is to mix bleach and ammonia and blow bubbles (a common deadly recipe recommended on 4chan), there will be dead people.
- keeping bad people from using the model in bad ways, e.g. having it write stories where children are raped, scamming people at scale (think Nigerian-prince scams, but automated), or election interference (people are herd animals: show someone 100 different posts from 100 different "people" telling them that X is right and Y is wrong, and it will influence them; at scale this has the potential to tilt elections and conquer countries).
I think the first ones are rather stupid, but the latter ones become more and more important to actually get right. Especially the very last one (opinion shifting/election interference) is an area where the existence of these models can have a very real, negative effect on the world, one that affects you even if you never come into contact with any of the models or their outputs, since you'll still have to deal with the puppet government elected because of them. I appreciate the companies building and running the models doing something about it.
The last ones are rather stupid too. Bad people can just write stories or create drawings about disgusting things themselves. Should we censor all computers to prevent such things from happening? Or hands and paper?
We won't keep the bottle corked forever though. It's like we're just buying ourselves time to figure out how we're going to deal with the deluge of questionable generated content that's about to hit us.