
544 points | tosh | 1 comment
hmottestad No.43464637
Just don’t ask it about the Tiananmen Square massacre or you’ll get a security warning. Even if you rephrase it.

It’ll happily talk about Bloody Sunday.

Probably a great model, but it worries me that it has such restrictions.

Sure, OpenAI also has lots of restrictions, but this feels more like straight-up censorship, since it’ll happily go on about bad things Western governments have done.

BoorishBears No.43464832
Daily reminder that all commercial LLMs are going to align with the governments their corporations exist under.

https://imgur.com/a/censorship-much-CBxXOgt

It's not even nefarious: they don't want the model spewing out content that will get them in trouble, in the most general sense. It just so happens that most governments have things that will get you in trouble.

The US is very focused on voter manipulation these days, so OpenAI's and Anthropic's models are extra sensitive when the wording implies they're being used for that.

China doesn't like talking about past or ongoing human rights violations, so their models will be extra sensitive about that.