
755 points | MedadNewman | 1 comment
tossaway2000 | No.42891368
> I wagered it was extremely unlikely they had trained censorship into the LLM model itself.

I wonder why that would be unlikely. It seems better to me to apply censorship at the training phase: then the model can be genuinely naive about the topic, and there's no separate censor layer to circumvent with clever tricks at inference time.
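
For illustration, a minimal sketch of what that training-phase approach might look like, assuming a simple keyword blocklist applied to the corpus before training (the terms, helper names, and document format here are invented for the example, not any lab's actual pipeline):

    # Hypothetical sketch: training-time censorship via corpus filtering.
    # BLOCKED_TERMS and the plain-string document format are assumptions
    # made for illustration only.
    BLOCKED_TERMS = {"sensitive topic a", "sensitive topic b"}

    def mentions_blocked_topic(document: str) -> bool:
        """True if the document contains any blocked term (case-insensitive)."""
        lowered = document.lower()
        return any(term in lowered for term in BLOCKED_TERMS)

    def filter_corpus(documents: list[str]) -> list[str]:
        """Drop every training document that touches a blocked topic,
        so the model never sees the material in the first place."""
        return [doc for doc in documents if not mentions_blocked_topic(doc)]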

plasticeagle | No.42891833
I would imagine the difficulty lies in finding effective ways to remove that information from the training data in the first place. There's an enormous amount of data, and LLMs are probably pretty good at piecing information together from different sources.
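
As a toy example of why that's hard: an exact-match filter only removes literal mentions, while documents that reference the topic obliquely slip through, and together they can still carry enough signal for the model to reconstruct it (all strings below are invented placeholders):

    # Toy example: exact-match filtering misses indirect references.
    # The blocked term and the documents are placeholders, not real data.
    BLOCKED_TERMS = {"forbidden event"}

    docs = [
        "The forbidden event took place in the capital.",    # caught and dropped
        "Large crowds gathered in the square that spring.",  # survives: no blocked term
        "Coverage of what happened there was suppressed.",   # survives: oblique reference
    ]

    kept = [d for d in docs if not any(t in d.lower() for t in BLOCKED_TERMS)]
    print(kept)
    # Two of the three documents survive the filter, and between them a
    # model can still connect the dots about the censored topic.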