To be clear, I 100% support AI safety regulations. "Safety" to me means that a rogue AI shouldn't have access to launch nuclear missiles, or control over an army of factory robots without multiple redundant local and remote kill switches, or unfettered CLI access on a machine containing credentials which grant access to PII — not censorship of speech. Someone privately having thoughts or viewing genAI outputs we don't like won't cause Judgement Day, but distracting from real safety issues with safety theater might.
"For the children" isn't, and never has been, a convincing excuse to encroach on the personal freedom of legal adults. This push for AI censorship is no different from previous panics over violent video games and "satanic" music.
(I know this comment wasn't explicitly directed at me, but for the record, I don't necessarily believe that all or even most "AI 'safety'" advocacy is in bad faith. It's psychologically much easier to treat LLM output as speech made on behalf of its provider, whereas search engine results are more clearly attributed to other entities. That said, I do agree with the parent comment that it's driven in large part by the self-interest of LLM providers.)