
544 points | tosh | 1 comment
gatienboquet ◴[] No.43464396[source]
So today is Qwen. Tomorrow a new SOTA model from Google apparently, R2 next week.

We haven't hit the wall yet.

replies(6): >>43464672 >>43464706 >>43464975 >>43465234 >>43465549 >>43472639
zamadatix ◴[] No.43464672[source]
Qwen 3 is coming imminently as well (https://github.com/huggingface/transformers/pull/36878), and it feels like Llama 4 should arrive in the next month or so.

That said, none of the recent string of releases has done much yet to "smash a wall"; they've just met the larger proprietary models where those models already were. I'm hoping R2 or the like really changes that by showing GPT-3 → 3.5 or 3.5 → 4 level generational jumps are still possible beyond the current state of the art, not just beyond current models of a given size.

replies(1): >>43468250
1. YetAnotherNick ◴[] No.43468250[source]
> met the larger proprietary models where they already were

This is smashing the wall.

Also, if you just care about breaking absolute numbers: OpenAI released 4.5 a month back, which is SOTA as a base model, and plans to release full o3 in maybe a month; and DeepSeek released a new V3, which is again SOTA in many respects.