> I ran all the open models (anything not from OpenAI, meaning anything that doesn’t start with gpt or o1) myself using Q5_K_M quantization, whatever that is.
It's just lossy compression of all of the parameters. Probably not important, right?
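For anyone wondering what that lossiness actually looks like, here's a rough sketch of the general idea (this is plain block-wise 5-bit rounding with a shared scale, not the actual Q5_K_M format, and the block size and distribution are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(0, 0.02, size=256).astype(np.float32)  # one block of weights

levels = 2**5 - 1                                   # 31 steps representable in 5 bits
scale = (weights.max() - weights.min()) / levels    # shared scale for the block
q = np.round((weights - weights.min()) / scale).astype(np.uint8)  # 5-bit codes
reconstructed = q * scale + weights.min()           # what the model actually computes with

print(np.abs(weights - reconstructed).max())        # small but nonzero: information is gone
```

The error per weight is tiny, but it's applied to every parameter in the model, which is exactly why "whatever that is" matters when you're comparing against unquantized API models.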