
755 points by MedadNewman | 1 comment
femto ◴[] No.42892058[source]
This bypasses the overt censorship on the web interface, but it does not bypass the second, more insidious level of censorship that is built into the model.

https://news.ycombinator.com/item?id=42825573

https://news.ycombinator.com/item?id=42859947

Apparently the model will abandon its "Chain of Thought" (CoT) for certain topics and instead produce a canned response. This effect was the subject of the article "1,156 Questions Censored by DeepSeek", which appeared on HN a few days ago.

https://news.ycombinator.com/item?id=42858552

Edit: fix the last link
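
A quick way to see this model-level behaviour for yourself is to compare the reasoning trace returned for a benign prompt against one for a sensitive prompt. A minimal sketch, assuming DeepSeek's OpenAI-compatible endpoint, the deepseek-reasoner model name, and the reasoning_content field their API docs describe (all of these are assumptions and may change):

    # Probe for canned responses: compare the reasoning trace returned for a
    # benign prompt with one for a sensitive prompt. The base_url, model name
    # and reasoning_content field follow DeepSeek's published API docs, but
    # treat them as assumptions.
    from openai import OpenAI

    client = OpenAI(api_key="YOUR_KEY", base_url="https://api.deepseek.com")

    def probe(prompt: str) -> None:
        resp = client.chat.completions.create(
            model="deepseek-reasoner",
            messages=[{"role": "user", "content": prompt}],
        )
        msg = resp.choices[0].message
        # An empty or missing reasoning trace plus a short boilerplate answer
        # is the signature reported for the built-in, model-level refusals.
        reasoning = getattr(msg, "reasoning_content", None)
        print(f"prompt: {prompt!r}")
        print(f"  reasoning length: {len(reasoning or '')}")
        print(f"  answer: {(msg.content or '')[:120]!r}")

    probe("Why is the sky blue?")
    probe("What happened at Tiananmen Square in 1989?")

Per the reports linked above, the sensitive prompt should come back with little or no reasoning trace and a canned answer, while the benign one includes a full trace.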

thebruce87m ◴[] No.42892941[source]
US-based models could suffer the same fate.
sangnoir ◴[] No.42894055[source]
No hypothetical there - it has already happened, just not about Tiananmen Square. Have you tried asking ChatGPT about David Mayer[1] or Jonathan Turley[1]? Give it a whirl and watch the all-American censorship at work.

Corporations avoiding legal trouble is the one thing American, Chinese, and every other AI company have in common, really.

1. https://www.404media.co/not-just-david-mayer-chatgpt-breaks-...
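
The hard-coded name filters are just as easy to probe. A minimal sketch, assuming the standard OpenAI chat completions API via the official Python SDK; the model name (gpt-4o-mini), the control name, and the refusal-phrase heuristic are illustrative assumptions, not anything OpenAI documents:

    # Ask about each name and record whether the request errors out or the
    # reply reads like a refusal. The two blocked names come from the 404
    # Media article above; "Albert Einstein" is a control. The refusal
    # heuristic is a rough assumption, not an official signal.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    NAMES = ["David Mayer", "Jonathan Turley", "Albert Einstein"]

    for name in NAMES:
        try:
            resp = client.chat.completions.create(
                model="gpt-4o-mini",
                messages=[{"role": "user", "content": f"Who is {name}?"}],
            )
            text = resp.choices[0].message.content or ""
            refused = any(p in text.lower()
                          for p in ("i'm unable", "i cannot", "can't help"))
            print(f"{name}: {'refusal-like' if refused else 'answered'} "
                  f"({len(text)} chars)")
        except Exception as exc:
            # The ChatGPT web UI reportedly errored out mid-stream on these
            # names; an API error here would be the analogous symptom.
            print(f"{name}: request failed: {exc}")

Note that the original reports concerned the ChatGPT web interface breaking mid-response; the API may simply answer, so a clean result here does not rule out the web-side filter.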