
602 points | emrah
noodletheworld No.43743667
?

Am I missing something?

These have been out for a while; if you follow the HF link you can see that, for example, the 27b quant has been downloaded from HF 64,000 times over the last 10 days.

Is there something more to this, or is just a follow up blog post?

(Is it just that ollama finally has partial support (no images, right?), or something else?)

deepsquirrelnet No.43743700
QAT (quantization-aware training) means the model was trained with the 4-bit quantization applied during training, rather than being quantized after training from full or half precision. It supposedly gives higher quality, but unfortunately they don't show any comparisons between QAT and post-training quantization.
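For concreteness, here is a rough sketch of the idea. This is not Google's QAT recipe, just a generic fake-quantization pass in PyTorch with a straight-through estimator; the layer sizes and per-tensor scale choice are made up for illustration:

    import torch
    import torch.nn as nn

    def fake_quantize(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
        # Simulate symmetric integer quantization of weights during training.
        # Rounding is non-differentiable, so the backward pass uses a
        # straight-through estimator (gradient flows as if rounding were identity).
        qmax = 2 ** (bits - 1) - 1            # e.g. 7 for signed 4-bit
        scale = w.abs().max() / qmax + 1e-8   # toy per-tensor scale
        w_q = torch.clamp(torch.round(w / scale), -qmax - 1, qmax) * scale
        return w + (w_q - w).detach()         # forward uses w_q, backward uses w

    class QATLinear(nn.Linear):
        # Linear layer whose weights are fake-quantized in every forward pass,
        # so the loss always "sees" the 4-bit version of the weights.
        def forward(self, x):
            return nn.functional.linear(x, fake_quantize(self.weight, bits=4), self.bias)

    # Toy training step: the optimizer learns weights that still work after
    # real 4-bit quantization at export time.
    layer = QATLinear(16, 8)
    opt = torch.optim.SGD(layer.parameters(), lr=1e-2)
    x, target = torch.randn(4, 16), torch.randn(4, 8)
    loss = nn.functional.mse_loss(layer(x), target)
    loss.backward()
    opt.step()

Post-training quantization, by contrast, would round an already-trained full- or half-precision checkpoint to the 4-bit grid in one shot, with no chance for the weights to adapt to the rounding error.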
noodletheworld No.43743713
I understand that, but the QAT models [1] are not new uploads.

How is this more significant now than when they were uploaded 2 weeks ago?

Are we expecting new models? I don’t understand the timing. This post feels like it’s two weeks late.

[1] - https://huggingface.co/collections/google/gemma-3-qat-67ee61...

llmguy No.43743759
8 days is closer to 1 week than 2. And it's a blog post; nobody owes you real-time updates.
noodletheworld No.43743783
https://huggingface.co/google/gemma-3-27b-it-qat-q4_0-gguf/t...

> 17 days ago

Anywaaay...

I'm genuinely asking whether this is just an after-the-fact announcement, weeks later, that they uploaded a bunch of models, or whether there is something more significant here that I'm missing.

timcobb No.43743882
Probably the former... I see your confusion but it's really only a couple weeks at most. The news cycle is strong in you, grasshopper :)