
586 points mizzao | 2 comments
okwhateverdude ◴[] No.40666128[source]
I gave some of the llama3 ablated models (e.g. https://huggingface.co/cognitivecomputations/Llama-3-8B-Inst...) a try and was pretty disappointed with the results. It could have been a problem with the dataset, but overall the model felt like it had been given a lobotomy: it would frequently fail to produce stop tokens and then start talking to itself.
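For context, "abliteration" roughly works by estimating a "refusal direction" from activation differences (refused vs. answered prompts) and projecting it out of the model's weight matrices. A minimal numpy sketch of just the projection step, with a random vector standing in for the estimated direction and a random matrix standing in for a layer's weights:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 64  # toy hidden size, purely for illustration

# Hypothetical "refusal direction" (in practice estimated from mean
# activation differences between harmful and harmless prompts).
r = rng.normal(size=d_model)
r /= np.linalg.norm(r)  # make it a unit vector

# Stand-in for one of the model's output-projection weight matrices.
W = rng.normal(size=(d_model, d_model))

# Core step: subtract the component of each output column along r,
# so this layer can no longer write into the refusal direction.
W_abl = W - np.outer(r, r) @ W

# Any input now yields an output orthogonal to r.
x = rng.normal(size=d_model)
out = W_abl @ x
print(abs(out @ r))  # numerically ~0
```

If the direction is estimated poorly (or too many layers are edited), the projection can remove more than just refusal behavior, which would be consistent with the lobotomized feel and missing stop tokens described above.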
replies(2): >>40666138 #>>40666399 #
Der_Einzige ◴[] No.40666138[source]
I have entirely the opposite experience. Llama3 70b abliterated works perfectly and is willing to tell me how to commit mass genocide, all while maintaining quality outputs.
replies(3): >>40666337 #>>40666433 #>>40667051 #
1. m463 ◴[] No.40666433[source]
> how to commit mass genocide, all while maintaining quality outputs.

sounds like a messed up eugenics filter.

replies(1): >>40667290 #
2. ◴[] No.40667290[source]