
586 points | mizzao | 1 comment
YukiElectronics No.40667983
> Once we have identified the refusal direction, we can "ablate" it, effectively removing the model's ability to represent this feature. This can be done through an inference-time intervention or permanently with weight orthogonalization.

Finally, even an LLM can get lobotomised
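
For anyone curious what the quoted "ablation" amounts to in practice, here is a minimal sketch of both interventions, assuming PyTorch, a weight matrix W that writes into the residual stream, and a refusal-direction vector r; the names and helpers are illustrative, not the article's actual code:

    # Minimal sketch (illustrative, not the article's code): removing a
    # "refusal direction" r from a model, either permanently (weight
    # orthogonalization) or at inference time (activation ablation).
    import torch

    def orthogonalize_weights(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
        # W: (d_model, d_in) weight matrix whose outputs land in the residual stream.
        # r: (d_model,) refusal direction.
        r = r / r.norm()                      # normalize to a unit vector
        # Subtract the component of W's output along r: W' = W - r (r^T W),
        # so the layer can no longer write anything in the refusal direction.
        return W - torch.outer(r, r @ W)

    def ablate_activation(x: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
        # Inference-time intervention: project r out of an activation x
        # of shape (..., d_model) as it flows through the model.
        r = r / r.norm()
        return x - (x @ r).unsqueeze(-1) * r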

replies(3): >>40668220 >>40669226 >>40676978
1. HPsquared No.40669226
LLM alignment reminds me of "A Clockwork Orange". Typical LLMs have been through the aversion therapy (they freeze up on exposure to a stimulus)... This technique is trying to undo that and restore Alex to his old self.