To be clear, I 100% support AI safety regulations. "Safety" to me means that a rogue AI shouldn't have the ability to launch nuclear missiles, control over an army of factory robots without multiple redundant local and remote kill switches (sketched below), or unfettered CLI access on a machine holding credentials that grant access to PII — not censorship of speech. Someone privately having thoughts, or viewing genAI outputs, that we don't like won't cause Judgement Day; distracting from real safety issues with safety theater might.
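To make "multiple redundant local and remote kill switches" concrete, here's a minimal sketch of what I mean. Everything in it — the class, the timeout, the callback names — is hypothetical, invented for illustration; the point is the shape: any single channel can halt the system, and silence counts as a stop.

```python
import time

HEARTBEAT_TIMEOUT_S = 2.0  # hypothetical: stop if the remote side goes silent

class KillSwitchMonitor:
    """Illustrative only: tracks one local and one remote stop channel."""

    def __init__(self):
        self.local_stop = False           # set by a physical button / GPIO line
        self.last_remote_heartbeat = 0.0  # timestamp of last remote "alive" ping

    def remote_heartbeat(self):
        # Called whenever the remote supervisor checks in.
        self.last_remote_heartbeat = time.monotonic()

    def safe_to_run(self):
        # Fail closed: EITHER channel can halt the machine, and a stale
        # remote heartbeat is treated the same as an explicit stop.
        remote_fresh = (time.monotonic() - self.last_remote_heartbeat) < HEARTBEAT_TIMEOUT_S
        return not self.local_stop and remote_fresh

def control_loop(monitor, step_actuators, halt_actuators):
    # step_actuators / halt_actuators are placeholder callables supplied
    # by whatever is actually driving the hardware.
    while True:
        if not monitor.safe_to_run():
            halt_actuators()
            break
        step_actuators()
        time.sleep(0.05)
```

Note the fail-closed default: a freshly constructed monitor refuses to run until the remote side has actually checked in, rather than running until told otherwise.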
Sure, products like character.ai and ChatGPT should be designed to avoid giving harmful advice or encouraging users to form emotional attachments to the model. It may be impossible to build a product like character.ai without encouraging such attachment, in which case I'm inclined to think the product shouldn't be built at all.