←back to thread

755 points MedadNewman | 1 comments | | HN request time: 0s | source
Show context
femto ◴[] No.42892058[source]
This bypasses the overt censorship on the web interface, but it does not bypass the second, more insidious, level of censorship that is built into the model.

https://news.ycombinator.com/item?id=42825573

https://news.ycombinator.com/item?id=42859947

Apparently the model will abandon its "Chain of Thought" (CoT) for certain topics and instead produce a canned response. This effect was the subject of the article "1,156 Questions Censored by DeepSeek", which appeared on HN a few days ago.

https://news.ycombinator.com/item?id=42858552

Edit: fix the last link

replies(10): >>42892216 #>>42892648 #>>42893789 #>>42893794 #>>42893914 #>>42894681 #>>42895397 #>>42896346 #>>42896895 #>>42903388 #
portaouflop ◴[] No.42892216[source]
You can always bypass any LLM censorship by using the Waluigi effect.
replies(1): >>42892328 #
JumpCrisscross ◴[] No.42892328[source]
Huh, "the Waluigi effect initially referred to an observation that large language models (LLMs) tend to produce negative or antagonistic responses when queried about fictional characters whose training content itself embodies depictions of being confrontational, trouble making, villainy, etc." [1].

[1] https://en.wikipedia.org/wiki/Waluigi_effect

replies(2): >>42892740 #>>42893819 #
dmonitor ◴[] No.42892740[source]
> A high level description of the effect is: "After you train an LLM to satisfy a desirable property P, then it's easier to elicit the chatbot into satisfying the exact opposite of property P."

The idea is that as you train a model to present a more sane/complient/friendly persona, you can get it to simulate an insane/noncomplient/unfriendly alternate persona that reflects the opposite of how its been trained to behave.

replies(2): >>42892865 #>>42893919 #
1. HKH2 ◴[] No.42893919{3}[source]
It sounds like ironic process theory.