
586 points by mizzao | 1 comment
YukiElectronics No.40667983
> Once we have identified the refusal direction, we can "ablate" it, effectively removing the model's ability to represent this feature. This can be done through an inference-time intervention or permanently with weight orthogonalization.

Finally, even an LLM can get lobotomised
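
For anyone curious what that would look like mechanically, here's a rough sketch (not the authors' code, just an illustration assuming a unit-norm refusal direction `r` has already been extracted; names like `W_out` and `ablate_activation` are placeholders):

```python
import numpy as np

def ablate_activation(h: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Inference-time intervention: subtract the component of a hidden
    state h that lies along the refusal direction r (assumed unit norm)."""
    return h - np.dot(h, r) * r

def orthogonalize_weights(W_out: np.ndarray, r: np.ndarray) -> np.ndarray:
    """Weight orthogonalization: project r out of the output space of a
    weight matrix that writes into the residual stream (shape d_model x d_in),
    so the model can no longer write that direction at all."""
    # (I - r r^T) W removes any contribution along r from W's outputs.
    return W_out - np.outer(r, r) @ W_out

# Toy check: after ablation the hidden state has ~0 component along r.
d = 8
r = np.random.randn(d); r /= np.linalg.norm(r)
h = np.random.randn(d)
print(np.dot(ablate_activation(h, r), r))  # ~0
```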

replies(3): >>40668220 >>40669226 >>40676978
1. noduerme No.40668220
I think it's been at least somewhat useful that LLMs have given us new ways of thinking about how human brains are front-loaded with little instruction sets before being sent out to absorb, filter, and recycle received language, often, like LLMs, without really being able to analyze that language's meaning. A new philosophical understanding of all prior human thought will arise from this within the next 15 years.