
164 points ksec | 2 comments
vessenes ◴[] No.44498842[source]
Short version: A Qwen-2.5 7B model that has been turned into a diffusion model.

A couple of notable things: first, that you can do this at all (fine-tuning a left-to-right model into an out-of-order diffusion model), which is really interesting. Second, the final version beats the original by a small margin on some benchmarks. Third, it's in the ballpark of Gemini Diffusion, although not competitive, which is to be expected from any 7B-parameter model.
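
To make the "left to right -> out of order" conversion concrete, here is a minimal, hedged sketch of the general idea behind that kind of fine-tuning (randomly mask positions as the "noise", train the model to recover them). The names (model, mask_id, the single-tensor call signature) are placeholders for illustration, not the actual DiffuCoder recipe:

    import torch
    import torch.nn.functional as F

    def masked_diffusion_step(model, input_ids, mask_id, optimizer):
        # Sample a noise level t in (0.1, 1]; higher t masks more positions.
        noise_level = 0.1 + 0.9 * torch.rand(1).item()
        corrupt = torch.rand(input_ids.shape) < noise_level
        noisy_ids = torch.where(corrupt, torch.full_like(input_ids, mask_id), input_ids)

        # Needs bidirectional (non-causal) attention so every position can see
        # the whole noisy sequence -- part of what the fine-tuning has to change.
        logits = model(noisy_ids)                    # (batch, seq_len, vocab)
        loss = F.cross_entropy(logits[corrupt], input_ids[corrupt])

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()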

A diffusion model comes with a lot of benefits in terms of parallelization and therefore speed; to my mind the architecture is a better fit for coding than strict left-to-right generation.
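
For intuition on the speed claim, a toy sketch (pure stand-ins, no real model; VOCAB, score_masked, and tokens_per_step are invented for illustration): diffusion-style decoding can commit several tokens per forward pass, while strict left-to-right decoding pays one pass per token.

    import random

    VOCAB = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "+"]
    MASK = "<mask>"

    def next_token(prefix):
        """Stand-in for one autoregressive forward pass over the prefix."""
        return random.choice(VOCAB)

    def score_masked(tokens):
        """Stand-in for one diffusion forward pass: a guess and a confidence
        for every currently-masked position (a real model returns logits)."""
        return {i: (random.choice(VOCAB), random.random())
                for i, t in enumerate(tokens) if t == MASK}

    def autoregressive_decode(length):
        seq, calls = [], 0
        while len(seq) < length:
            seq.append(next_token(seq))      # one forward pass per token
            calls += 1
        return seq, calls

    def diffusion_decode(length, tokens_per_step=4):
        seq, calls = [MASK] * length, 0
        while MASK in seq:
            preds = score_masked(seq)        # one pass scores all masked slots
            calls += 1
            # Commit the most confident predictions in parallel.
            best = sorted(preds.items(), key=lambda kv: -kv[1][1])[:tokens_per_step]
            for i, (tok, _) in best:
                seq[i] = tok
        return seq, calls

    _, ar_calls = autoregressive_decode(16)
    _, df_calls = diffusion_decode(16, tokens_per_step=4)
    print(f"left-to-right:   {ar_calls} forward passes")   # 16
    print(f"diffusion-style: {df_calls} forward passes")   # ~4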

Overall, interesting. At some point these local models will get good enough for ‘real work’ and they will be slotted in at API providers rapidly. Apple’s game is on-device; I think we’ll see descendants of these start shipping with Xcode in the next year as just part of the coding experience.

replies(6): >>44498876 #>>44498921 #>>44499170 #>>44499226 #>>44499376 #>>44501060 #
baobun ◴[] No.44499170[source]
Without having tried it, what keeps surprising me is how apparently widely different architectures (and, in other cases, training data) lead to very similar outcomes. I'd expect results to vary a lot more.
replies(3): >>44499473 #>>44499659 #>>44500645 #
hnaccount_rng ◴[] No.44500645[source]
But if the limiting factor is the data on which the models are trained, and not the actual “computation”, then this would be exactly what you'd expect, right?
replies(1): >>44500914 #
1. Ldorigo ◴[] No.44500914[source]
The data might be the limiting factor of current transformer architectures, but there's no reason to believe it's a general limiting factor of any language model (e.g. human brains are "trained" on orders of magnitude less data and still generally perform better than any model available today).
replies(1): >>44501486 #
2. hnaccount_rng ◴[] No.44501486[source]
That depends on whether these current learning models can really generalise, or whether they can only interpolate within their training set.