
586 points mizzao | 1 comments
vasco ◴[] No.40666684[source]
> "As an AI assistant, I cannot help you." While this safety feature is crucial for preventing misuse,

What is the safety added by this? What is unsafe about a computer giving you answers?

replies(11): >>40666709 #>>40666828 #>>40666835 #>>40666890 #>>40666984 #>>40666992 #>>40667025 #>>40667243 #>>40667633 #>>40669842 #>>40670809 #
rustcleaner ◴[] No.40667025[source]
If I can ask the question, I can take the answer. It's not up to daddy $AI_SAFETY_CHIEF to decide what an infohazard is for me.
replies(3): >>40667474 #>>40667943 #>>40670670 #
1. stefs ◴[] No.40667474[source]
They're not only there to protect you, but also to protect third parties from you: bad actors generating fake nudes of your ex and distributing them online, for instance. That used to be an expensive operation, either monetarily (hiring unscrupulous photoshoppers) or in time (doing it yourself).

Another example is fake news for influencing people on social media. Sure, you could write lies by hand, or you could automatically tailor lies to each person based on their personal profile.

Or how about using it to power a bot that writes personalized death threats to thousands of people voting for a political opponent, to keep them out of the voting booths?