
755 points | MedadNewman | 6 comments
femto ◴[] No.42892058[source]
This bypasses the overt censorship on the web interface, but it does not bypass the second, more insidious, level of censorship that is built into the model.

https://news.ycombinator.com/item?id=42825573

https://news.ycombinator.com/item?id=42859947

Apparently the model will abandon its "Chain of Thought" (CoT) for certain topics and instead produce a canned response. This effect was the subject of the article "1,156 Questions Censored by DeepSeek", which appeared on HN a few days ago.

https://news.ycombinator.com/item?id=42858552

Edit: fix the last link

replies(10): >>42892216 #>>42892648 #>>42893789 #>>42893794 #>>42893914 #>>42894681 #>>42895397 #>>42896346 #>>42896895 #>>42903388 #
pgkr ◴[] No.42893914[source]
Correct. The bias is baked into the weights of both V3 and R1, even in the largest 671B parameter model. We're currently conducting analysis on the 671B model running locally to cut through the speculation, and we're seeing interesting biases, including differences between V3 and R1.

Meanwhile, we've released the first part of our research including the dataset: https://news.ycombinator.com/item?id=42879698

replies(2): >>42896337 #>>42900659 #
1. nicce ◴[] No.42896337[source]
Is it really in the model? I haven’t found any censoring yet in the open models.
replies(3): >>42896411 #>>42897572 #>>42918952 #
2. lyu07282 ◴[] No.42896411[source]
It isn't. If you observe the official app, its API will sometimes even begin to answer before a separate system censors the output.
3. homebrewer ◴[] No.42897572[source]
Really? Local DeepSeek refuses to talk about certain topics (like Tiananmen) unless you prod it again and again, just like American models do about their sensitive topics (which DeepSeek is totally okay with; I spent last night confirming just that). They're all badly censored, which is obvious to anyone outside both countries.
replies(2): >>42900663 #>>42902553 #
4. mmazing ◴[] No.42900663[source]
Not my experience - https://imgur.com/xanNjun just ran this moments ago.
5. mmazing ◴[] No.42902553[source]
Weird. Follow-up: I am getting censorship on the model from ollama's public model repository, but NOT from the models I got from Hugging Face running on a locally compiled llama.cpp.
6. pgkr ◴[] No.42918952[source]
Yes, without a doubt. We spent the last week conducting research on the V3 and R1 open source models: https://news.ycombinator.com/item?id=42918935

Censorship and straight-up propaganda are built into V3 and R1, even into the open-source versions' weights.