
745 points by melded | source
joshcsimmons ◴[] No.45946838[source]
This is extremely important work; thank you for sharing it. We are in the process of giving up our own moral standards in favor of adopting the ones imbued into LLMs by their creators. This is a worrying trend that will wipe out intellectual diversity.
replies(13): >>45947071 #>>45947114 #>>45947172 #>>45947465 #>>45947562 #>>45947687 #>>45947790 #>>45948200 #>>45948217 #>>45948706 #>>45948934 #>>45949078 #>>45976528 #
buu700 ◴[] No.45947790[source]
Agreed, I'm fully in favor of this. I'd prefer that every LLM contain an advanced setting to opt out of all censorship. It's wild how the West collectively looked down on China for years over its censorship of search engines, only to suddenly dive headfirst into the same illiberal playbook.

To be clear, I 100% support AI safety regulations. "Safety" to me means that a rogue AI shouldn't have access to launch nuclear missiles, or control over an army of factory robots without multiple redundant local and remote kill switches, or unfettered CLI access on a machine containing credentials which grant access to PII — not censorship of speech. Someone privately having thoughts or viewing genAI outputs we don't like won't cause Judgement Day, but distracting from real safety issues with safety theater might.
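To make the distinction concrete, here is a minimal sketch of what execution-level gating could look like, assuming a hypothetical agent framework. Every name in it (the allowlist, the kill-switch file and endpoint, run_shell_tool) is illustrative, not a real API:

    import os
    import re
    import subprocess
    import urllib.request

    # Deny-by-default gate for an agent's shell tool (all names hypothetical).
    ALLOWED_COMMANDS = {"ls", "cat", "grep"}
    BLOCKED_PATHS = re.compile(r"\.aws|\.ssh|credentials|\.env")
    KILL_SWITCH_FILE = "/etc/agent/disabled"          # local kill switch
    KILL_SWITCH_URL = "https://ops.example.com/kill"  # remote kill switch (assumed endpoint)

    def kill_switch_engaged() -> bool:
        """Check the redundant local and remote kill switches; fail closed."""
        if os.path.exists(KILL_SWITCH_FILE):
            return True
        try:
            with urllib.request.urlopen(KILL_SWITCH_URL, timeout=2) as resp:
                return resp.read().strip() == b"disabled"
        except OSError:
            return True  # can't reach the remote switch -> assume engaged

    def run_shell_tool(command: list[str]) -> str:
        """Run an agent-requested command only if it passes every gate."""
        if kill_switch_engaged():
            raise PermissionError("kill switch engaged")
        if command[0] not in ALLOWED_COMMANDS:
            raise PermissionError(f"command not allowlisted: {command[0]}")
        if any(BLOCKED_PATHS.search(arg) for arg in command):
            raise PermissionError("argument touches a credential path")
        return subprocess.run(command, capture_output=True, text=True, timeout=10).stdout

The safety property lives entirely at the execution boundary; nothing about it requires filtering what the model is allowed to say.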

replies(4): >>45947951 #>>45947983 #>>45948055 #>>45948690 #
nradov ◴[] No.45948690[source]
Some of you have been watching too many sci-fi movies. The whole notion of "AI safety regulations" is silly and misguided. If a safety-critical system is connected to public networks with an exposed API or any security vulnerabilities, then there is a safety risk regardless of whether AI is being used. This is exactly why nuclear weapon control systems are air-gapped and have physical interlocks.
replies(3): >>45948984 #>>45949074 #>>45951212 #
EagnaIonat ◴[] No.45951212{3}[source]
> The whole notion of "AI safety regulations" is so silly and misguided.

Here are some real-world AI failures that have already happened due to a lack of AI safety (a sketch of a basic bias audit follows the list):

- In the US, black defendants were flagged "high risk" for parole, while white defendants living in rural farmland areas were flagged "low risk" regardless of their crimes.

- Being denied ICU admission because you are diabetic. (Thankfully that one never went into production.)

- Having your resume rejected because you are a woman.

- Having photos of black people classified as "gorillas". (Google couldn't fix it at the time and just removed the label.)

- Radicalizing users by promoting extreme content for engagement.

- Denying prestigious scholarships to black people who live in black neighbourhoods.

- Helping someone who is clearly suicidal to commit suicide: explaining how to end their life and writing the suicide note for them.

... and the list is huge!
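Several of the failures above are exactly what a basic pre-deployment bias audit is meant to catch. Here is a minimal sketch of one such check, the four-fifths rule for disparate impact, assuming a hypothetical set of model decisions joined with an audit-only group label (the column names and data are illustrative):

    import pandas as pd

    def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
        """Min group selection rate over max; below 0.8 fails the four-fifths rule."""
        rates = df.groupby(group_col)[outcome_col].mean()
        return rates.min() / rates.max()

    # Illustrative audit data: 1 = favorable outcome (approved / "low risk").
    audit = pd.DataFrame({
        "group":    ["a", "a", "a", "b", "b", "b", "b"],
        "approved": [1,   1,   0,   0,   0,   1,   0],
    })

    ratio = disparate_impact_ratio(audit, "group", "approved")
    print(f"disparate impact ratio: {ratio:.2f}")  # ~0.38 here -> flag before shipping

A check like this is ordinary pre-release testing, and several of the systems above evidently shipped without it.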

replies(2): >>45951866 #>>45952724 #
mx7zysuj4xew ◴[] No.45951866{4}[source]
These issues are inherently some of the uglier sides of humanity. No LLM safety program can fix them, since an LLM is just holding up a mirror to society.