
586 points mizzao | 1 comment | source
vasco ◴[] No.40666684[source]
> "As an AI assistant, I cannot help you." While this safety feature is crucial for preventing misuse,

What is the safety added by this? What is unsafe about a computer giving you answers?

replies(11): >>40666709 #>>40666828 #>>40666835 #>>40666890 #>>40666984 #>>40666992 #>>40667025 #>>40667243 #>>40667633 #>>40669842 #>>40670809 #
leobg ◴[] No.40666890[source]
Yep. Safety for the publisher. In addition to what the sibling comments say, there are also payment providers and app stores. They’ll test your app, trying to get your model to output content that falls under categories like “extreme violence”, “bestiality”, “racism”, etc., and then they’ll ban you from the platform. So yeah, it has little to do with the “safety” of the end user.
replies(1): >>40691529 #
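(Editor's aside: to make leobg's point concrete, here is a minimal, hypothetical Python sketch of publisher-side output gating, where the "refusal" is a wrapper around the model rather than a property of the model itself. The category names echo the ones mentioned above, and classify_text() is a stand-in for whatever moderation model or service a publisher might actually use; none of this is any real platform's API.)

```python
# Hypothetical publisher-side gate: refuse before the platform reviewer
# ever sees a policy-violating completion. Illustrative only.

BLOCKED_CATEGORIES = {"extreme_violence", "bestiality", "racism"}


def classify_text(text: str) -> set[str]:
    """Stub classifier: return the policy categories the text appears to hit.

    A real deployment would call a moderation model here; this stand-in only
    matches a couple of obvious phrases for illustration.
    """
    keywords = {
        "extreme_violence": ["how to torture", "getting away with murder"],
        "racism": ["racial slur"],
    }
    lowered = text.lower()
    return {cat for cat, phrases in keywords.items()
            if any(p in lowered for p in phrases)}


def gate_model_output(model_output: str) -> str:
    """Return the model's answer, or a canned refusal if it trips a blocked category."""
    if classify_text(model_output) & BLOCKED_CATEGORIES:
        # The refusal protects the app listing, not the end user.
        return "As an AI assistant, I cannot help you with that."
    return model_output


if __name__ == "__main__":
    print(gate_model_output("Here is a normal, harmless answer."))
```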
variadix ◴[] No.40691529[source]
This seems like a fundamental misunderstanding of what an LLM is: people anthropomorphize it into an agent of whatever organization produced it. If Google serves search results with instructions for getting away with murder, building explosives, etc., it would be ridiculous to interpret that as Google itself endorsing the user's goals rather than as misuse of the tool by the user, and banning Google Search from the App Store in response would be equally ridiculous. Maybe this is just because LLMs are new to humanity, or because talking to one feels more like talking to a person than using a search engine, but either way it's a flawed view of what an LLM is.