755 points MedadNewman | 1 comments | | HN request time: 0.218s | source
femto ◴[] No.42892058[source]
This bypasses the overt censorship on the web interface, but it does not bypass the second, more insidious, level of censorship that is built into the model.

https://news.ycombinator.com/item?id=42825573

https://news.ycombinator.com/item?id=42859947

Apparently the model will abandon its "Chain of Thought" (CoT) for certain topics and instead produce a canned response. This effect was the subject of the article "1,156 Questions Censored by DeepSeek", which appeared on HN a few days ago.

https://news.ycombinator.com/item?id=42858552

Edit: fix the last link

replies(10): >>42892216 #>>42892648 #>>42893789 #>>42893794 #>>42893914 #>>42894681 #>>42895397 #>>42896346 #>>42896895 #>>42903388 #
portaouflop ◴[] No.42892216[source]
You can always bypass any LLM censorship by using the Waluigi effect.
replies(1): >>42892328 #
JumpCrisscross ◴[] No.42892328[source]
Huh, "the Waluigi effect initially referred to an observation that large language models (LLMs) tend to produce negative or antagonistic responses when queried about fictional characters whose training content itself embodies depictions of being confrontational, trouble making, villainy, etc." [1].

[1] https://en.wikipedia.org/wiki/Waluigi_effect

replies(2): >>42892740 #>>42893819 #
__MatrixMan__ ◴[] No.42893819[source]
While I use LLMs, I form and discard mental models of how they work. I've read about how they work, but I'm looking for a feeling that I can't really get by reading; I have to do my own little exploration. My current (surely flawed) model has to do with the distinction between topology and geometry. A human mind has a better grasp of topology: if you tell a person to draw a single triangle on the surfaces of two spheres, they'll quickly object. But an LLM lacks that topological sense, so it will just try really hard without acknowledging the impossibility of the task.

One thing I like about this model is that it's consistent with the Waluigi effect (which I just learned of). The LLM is a thing of directions and distances, of vectors. If you shape the space to make a certain vector especially likely, then you've also shaped that space to make its additive inverse likely. To get away from that, we're going to have to abandon vector spaces for something more exotic.
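The "additive inverse" intuition can be sketched in a few lines of numpy. This is a toy illustration only, not how any real model is steered: the vectors here are random stand-ins for the behavior directions that activation-steering work extracts from real models. The point is just the symmetry: the same operation that pushes an activation toward a behavior pushes it toward the opposite behavior when you flip the sign.

```python
import numpy as np

# Toy "activation space". The `behavior` direction is made up for
# illustration; in real steering work such directions are extracted
# from model activations (e.g. via contrastive prompt pairs).
rng = np.random.default_rng(0)
dim = 8
behavior = rng.normal(size=dim)
behavior /= np.linalg.norm(behavior)  # unit-length direction

def steer(activation, direction, strength):
    """Shift an activation along a behavior direction."""
    return activation + strength * direction

base = rng.normal(size=dim)

# Steering along `behavior` and along its additive inverse are the
# same operation with opposite sign: shaping the space to make one
# direction reachable makes its negation equally reachable.
toward = steer(base, behavior, 2.0)
away = steer(base, -behavior, 2.0)

# The two steered points are mirror images about `base`.
assert np.allclose((toward + away) / 2, base)

# Projection onto the behavior direction: equal magnitude, opposite sign.
print(behavior @ (toward - base))  # ≈ +2.0
print(behavior @ (away - base))    # ≈ -2.0
```

Nothing here says anything about whether the opposite behavior is *elicitable* in practice; it only shows that in a plain vector space the inverse direction always exists and is always the same distance away.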