←back to thread

755 points MedadNewman | 1 comments | | HN request time: 0.422s | source
Show context
femto ◴[] No.42892058[source]
This bypasses the overt censorship on the web interface, but it does not bypass the second, more insidious, level of censorship that is built into the model.

https://news.ycombinator.com/item?id=42825573

https://news.ycombinator.com/item?id=42859947

Apparently the model will abandon its "Chain of Thought" (CoT) for certain topics and instead produce a canned response. This effect was the subject of the article "1,156 Questions Censored by DeepSeek", which appeared on HN a few days ago.

https://news.ycombinator.com/item?id=42858552

Edit: fix the last link

replies(10): >>42892216 #>>42892648 #>>42893789 #>>42893794 #>>42893914 #>>42894681 #>>42895397 #>>42896346 #>>42896895 #>>42903388 #
pgkr ◴[] No.42893914[source]
Correct. The bias is baked into the weights of both V3 and R1, even in the largest 671B parameter model. We're currently conducting analysis on the 671B model running locally to cut through the speculation, and we're seeing interesting biases, including differences between V3 and R1.

Meanwhile, we've released the first part of our research including the dataset: https://news.ycombinator.com/item?id=42879698

replies(2): >>42896337 #>>42900659 #
nicce ◴[] No.42896337[source]
Is it really in the model? I haven’t found any censoring yet in the open models.
replies(3): >>42896411 #>>42897572 #>>42918952 #
homebrewer ◴[] No.42897572[source]
Really? Local DeepSeek refuses to talk about certain topics (like Tiananmen) unless you prod it again and again, just like American models do about their sensitive stuff (which DeepSeek is totally okay with — I spent last night confirming just that). They're all badly censored which is obvious to anyone outside both countries.
replies(2): >>42900663 #>>42902553 #
1. mmazing ◴[] No.42900663[source]
Not my experience - https://imgur.com/xanNjun just ran this moments ago.