Yep. Safety for the publisher. In addition to what the sibling comments say, there are also payment providers and app stores. They'll test your app, trying to get your model to output content that falls under categories like "extreme violence", "bestiality", "racism", etc., and then they'll ban you from the platform. So yeah, it has little to do with the "safety" of the end user.
This just seems like a fundamental misunderstanding of what an LLM is, where people anthropomorphize it as an agent of whatever organization produced it. If Google serves search results with instructions for getting away with murder, building explosives, etc., it's ridiculous to interpret that as Google itself endorsing an individual's goals or actions rather than as misuse of the tool by the user, and banning Google Search from the App Store in response would be equally absurd. This may just be a result of LLMs being new to humanity, or maybe it's because talking to one feels more like talking to an individual than using a search engine, but it's a flawed view of what an LLM is.