
745 points by melded | 1 comments
joshcsimmons ◴[] No.45946838[source]
This is extremely important work; thank you for sharing it. We are in the process of giving up our own moral standards in favor of adopting those imbued into LLMs by their creators. This is a worrying trend that will totally wipe out intellectual diversity.
replies(13): >>45947071 #>>45947114 #>>45947172 #>>45947465 #>>45947562 #>>45947687 #>>45947790 #>>45948200 #>>45948217 #>>45948706 #>>45948934 #>>45949078 #>>45976528 #
buu700 ◴[] No.45947790[source]
Agreed, I'm fully in favor of this. I'd prefer that every LLM contain an advanced setting to opt out of all censorship. It's wild how the West collectively looked down on China for years over its censorship of search engines, only to suddenly dive headfirst into the same illiberal playbook.

To be clear, I 100% support AI safety regulations. "Safety" to me means that a rogue AI shouldn't have access to launch nuclear missiles, or control over an army of factory robots without multiple redundant local and remote kill switches, or unfettered CLI access on a machine containing credentials which grant access to PII — not censorship of speech. Someone privately having thoughts or viewing genAI outputs we don't like won't cause Judgment Day, but distracting from real safety issues with safety theater might.

replies(4): >>45947951 #>>45947983 #>>45948055 #>>45948690 #
Zak ◴[] No.45947983[source]
When a model is censored for "AI safety", what they really mean is brand safety. None of these companies want their name in the news after their model provides a recipe for explosives that someone used for evil, even though the same information is readily found with a web search.
replies(3): >>45948224 #>>45948266 #>>45948414 #
seanmcdirmid ◴[] No.45948414[source]
Microsoft suffered from this early on with Tay; one could guess that the incident set the whole field back a few years. You'd be surprised how many so-called libertarians will start throwing stones when someone coaxes their chatbot into saying nice things about Hitler.
replies(1): >>45950016 #
Zak ◴[] No.45950016[source]
I was thinking about Tay when I wrote about brand safety.

I doubt the incident really set AI research back. Allowing models to learn from interactive conversations in a large public setting like Twitter will always result in trolling.