
156 points martinald | 1 comment | source
krackers ◴[] No.44538619[source]
Probably the results were worse than the K2 model released today. No serious engineer would say it's for "safety" reasons, given that ablation nullifies any safety post-training.
replies(1): >>44538817 #
simonw ◴[] No.44538817[source]
I'm expecting (and indeed hoping) that the open weights OpenAI model is a lot smaller than K2. K2 is 1 trillion parameters and almost a terabyte to download! There's no way I'm running that on my laptop.

I think the sweet spot for local models may be around the 20B size - that's Mistral Small 3.x and some of the Gemma 3 models. They're very capable and run in less than 32GB of RAM.

I really hope OpenAI put one out in that weight class, personally.
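As a back-of-envelope sketch of the sizes being discussed: the RAM needed to hold a model's weights is roughly parameter count times bits per weight, plus some runtime overhead. The ~20% overhead margin below is an assumption to cover KV cache and runtime buffers, not a spec; actual usage varies with context length and runtime.

```python
# Rough RAM estimate for running an LLM locally.
# Assumes weights dominate; the 20% overhead margin is a guess
# covering KV cache and runtime buffers, not a measured figure.

def model_ram_gb(params_billions: float, bits_per_weight: float,
                 overhead: float = 0.2) -> float:
    """Approximate GB needed to hold the weights plus overhead."""
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9

# A 20B model at common quantization levels:
for bits in (16, 8, 4):
    print(f"20B @ {bits}-bit: ~{model_ram_gb(20, bits):.0f} GB")

# K2-scale (1T params) at 8 bits, weights alone -- about a terabyte:
print(f"1000B @ 8-bit weights: ~{model_ram_gb(1000, 8, overhead=0):.0f} GB")
```

Under these assumptions, a 20B model quantized to 4 bits fits in roughly 12 GB, comfortably under the 32 GB figure mentioned above, while a 1-trillion-parameter model at 8 bits is about a terabyte of weights on its own.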

replies(1): >>44539739 #
NitpickLawyer ◴[] No.44539739[source]
Early rumours (from a hosting company that apparently got early access) were that you'd need "multiple H100s to run it", so I doubt it's a Gemma / Mistral Small tier model.