To be clear, I 100% support AI safety regulations. "Safety" to me means that a rogue AI shouldn't have access to launch nuclear missiles, control over an army of factory robots without multiple redundant local and remote kill switches, or unfettered CLI access on a machine holding credentials that grant access to PII. It does not mean censorship of speech. Someone privately having thoughts or viewing genAI outputs we don't like won't cause Judgment Day, but distracting from real safety issues with safety theater might.
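To make the CLI point concrete, here's a minimal sketch of the sort of guardrail I have in mind: a human-in-the-loop gate on whatever shell commands an agent proposes. Everything here (the allowlist, the confirmation prompt) is a hypothetical illustration, not any real product's safeguard:

```python
import shlex
import subprocess

# Hypothetical allowlist of read-only commands the agent may run unattended.
ALLOWED_COMMANDS = ("ls", "cat", "grep")

def run_agent_command(command: str) -> str:
    """Run an agent-proposed shell command only if allowlisted or human-approved."""
    argv = shlex.split(command)
    if not argv:
        return ""
    if argv[0] not in ALLOWED_COMMANDS:
        # Anything outside the allowlist needs explicit operator sign-off:
        # the software analogue of a local kill switch.
        answer = input(f"Agent wants to run {command!r}. Allow? [y/N] ")
        if answer.strip().lower() != "y":
            raise PermissionError(f"Command blocked by operator: {command!r}")
    result = subprocess.run(argv, capture_output=True, text=True, timeout=30)
    return result.stdout
```

Note that nothing in this gate cares whether the caller is an LLM; it constrains *access*, which is the whole point.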
Are you saying you're opposed to letting AI perform physical labor, or that you're opposed to requiring safeguards that allow humans to physically shut it off?
Ultimately, this isn't an issue specific to genAI. If a "script roulette" program that downloaded and executed random GitHub Gists somehow became popular, or if someone built a web app that let anyone anonymously pilot a fleet of robots, I'd argue those should be subject to exactly the same safety regulations I proposed. A sketch of the former follows.
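For illustration, "script roulette" needs nothing AI-shaped at all. The public-gists endpoint below is GitHub's real API; the gist-picking logic and the opt-in flag are invented for the sketch, and actually running it with the flag would obviously be a terrible idea:

```python
import json
import random
import subprocess
import sys
import tempfile
import urllib.request

def fetch_random_gist_source() -> str:
    """Pick a random recent public gist and return the raw text of one file."""
    with urllib.request.urlopen("https://api.github.com/gists/public") as resp:
        gists = json.load(resp)
    gist = random.choice(gists)
    file_info = next(iter(gist["files"].values()))
    with urllib.request.urlopen(file_info["raw_url"]) as resp:
        return resp.read().decode("utf-8", errors="replace")

def main() -> None:
    source = fetch_random_gist_source()
    print(source)
    if "--i-accept-the-risk" in sys.argv:
        # The dangerous step: executing arbitrary code from a stranger.
        # (Naively assumes the gist is Python; that's part of the roulette.)
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(source)
        subprocess.run([sys.executable, f.name])

if __name__ == "__main__":
    main()
```

Twenty-odd lines of ordinary code, zero models involved: the hazard is arbitrary code execution, regardless of whether an LLM or a random-number generator picked the script.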
Any such regulations should be generically written, not narrowly targeted at AI algorithms. I'd still call that "AI safety", because in practice it's a far more useful definition than the one being pushed today. "Non-determinism safety" doesn't quite have the same ring to it.