To be clear, I 100% support AI safety regulations. "Safety" to me means that a rogue AI shouldn't have access to launch nuclear missiles, or control over an army of factory robots without multiple redundant local and remote kill switches, or unfettered CLI access on a machine containing credentials which grant access to PII — not censorship of speech. Someone privately having thoughts or viewing genAI outputs we don't like won't cause Judgement Day, but distracting from real safety issues with safety theater might.
Here are some real-world AI issues that have already happened due to a lack of AI safety:
- In the US, black defendants were disproportionately flagged "high risk" by a parole risk-scoring tool, while white defendants from rural areas were flagged "low risk" regardless of their crime.
- Being denied an ICU bed because you are diabetic. (Thankfully, that one never went into production.)
- Having your resume rejected because you are a woman.
- Having photos of black people classified as "gorilla". (Google couldn't fix it at the time and simply removed the label.)
- Radicalizing users by promoting extreme content for engagement.
- Denying prestigious scholarships to black people who live in black neighbourhoods.
- Helping someone who is clearly suicidal to end their life, explaining how to do it and even writing the suicide note for them.
... and the list is huge!
I mean, just because you could kill a million people by hand doesn't mean that a pistol, an automatic weapon, or a nuclear weapon is an irrelevant technology rather than a genuine issue. Guns in a home make suicide more likely simply because they are a tool that allows a split-second action. "If someone really wants to do X, they will find a way" just doesn't map onto reality.