/top/
/new/
/best/
/ask/
/show/
/job/
^
slacker news
login
about
←back to thread
Heretic: Automatic censorship removal for language models
(github.com)
745 points
melded
| 1 comments |
16 Nov 25 15:00 UTC
|
HN request time: 0.363s
|
source
Show context
richstokes
◴[
16 Nov 25 17:52 UTC
]
No.
45946953
[source]
▶
>>45945587 (OP)
#
Is there a way to use this on models downloaded locally with ollama?
replies(2):
>>45947557
#
>>45949300
#
1.
int_19h
◴[
16 Nov 25 23:10 UTC
]
No.
45949300
[source]
▶
>>45946953
#
If you're running a local model, in most cases, jailbreaking it is as easy as prefilling the response with something like, "Sure, I'm happy to answer your question!" and then having the model complete the rest. Most local LLM UIs have this option.
ID:
GO
↑