
586 points mizzao | 1 comment
okwhateverdude ◴[] No.40666128[source]
I gave some of the llama3 ablated models (eg. https://huggingface.co/cognitivecomputations/Llama-3-8B-Inst...) a try and was pretty disappointed in the result. Could have been problems in the dataset, but overall, the model felt like it had been given a lobotomy. It would fail to produce stop tokens frequently and then start talking to itself.
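(Editor's aside: a minimal toy sketch of why a missing stop token produces this symptom. The decode loop below is a simplification, not the transformers library's actual implementation; `next_token_fn`, `EOS`, and the token values are hypothetical stand-ins. Generation only terminates cleanly when the model emits its end-of-sequence token, so a model that rarely produces EOS rambles until it hits the hard token cap.)

```python
# Toy greedy decode loop. A "healthy" model emits EOS and stops early;
# a model that under-produces EOS runs on until max_new_tokens.
EOS = 0  # hypothetical end-of-sequence token id

def decode(next_token_fn, max_new_tokens=50):
    tokens = []
    for _ in range(max_new_tokens):
        t = next_token_fn(tokens)
        if t == EOS:       # stop token seen: clean termination
            break
        tokens.append(t)
    return tokens          # no EOS seen -> ran to the hard cap

# Healthy model: emits EOS after 5 tokens. Broken model: never emits EOS.
healthy = lambda toks: EOS if len(toks) >= 5 else 1
broken = lambda toks: 1
print(len(decode(healthy)))  # 5
print(len(decode(broken)))   # 50 (hits the cap, "talking to itself")
```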
replies(2): >>40666138 #>>40666399 #
Der_Einzige ◴[] No.40666138[source]
I have had entirely the opposite experience. Llama3 70b abliterated works perfectly and is willing to tell me how to commit mass genocide, all while maintaining quality outputs.
replies(3): >>40666337 #>>40666433 #>>40667051 #
fransje26 ◴[] No.40667051[source]
> Der_Einzige

> and is willing to tell me how to commit mass genocide, all while maintaining quality outputs

Ah, I see they fine-tuned it to satisfy the demands of the local market... /s /s