There's a world of difference between machine teaching approaches that distill a less complex model from a more capable one and bootstrapping a more capable model from synthetic data. And don't get me wrong, it's still very useful to be able to distill models this way! In many cases it's low-hanging fruit for optimizing the parameter count or other resource bottlenecks of the models in question: maybe the original learned representation wasn't the simplest neural network that could approximate the same function to the tolerance we care about. This streamlining can sometimes even induce distillation of certain abstractions, which I think has been put to best use in motion transfer results like Nvidia's MegaPortraits or, more recently, Alibaba's EMO. However, suppose there's a scale-oriented path to language models that generalize better - or are more controllable, or just do better on established benchmarks - and it's currently bottlenecked by available data. It seems unlikely that relying on synthetic data from extant models will get it over that hurdle, and this should roughly match your intuition if you're familiar with the information theory underlying statistical models, which neural networks of any kind are:
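To make the distillation side concrete, here's a minimal sketch of the soft-label objective used in Hinton-style knowledge distillation: the student is trained to match the teacher's temperature-softened output distribution rather than hard labels. The specific logits and temperature here are made-up illustration values, not from any real model.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T smooths the distribution,
    # exposing the teacher's "dark knowledge" about non-top classes.
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    # KL(teacher || student) on temperature-softened distributions,
    # the core term of the soft-label distillation objective.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q))))

teacher = [4.0, 1.0, 0.5]        # hypothetical teacher logits
student_close = [3.8, 1.1, 0.4]  # student roughly matching the teacher
student_far = [0.5, 4.0, 1.0]    # student with a very different ranking

# A student that mimics the teacher's distribution gets a lower loss.
assert distillation_loss(teacher, student_close) < distillation_loss(teacher, student_far)
```

In a real training loop this loss is computed per-batch and backpropagated through the student only; the point here is just that the target is the teacher's full distribution, which is why a smaller network can often recover the same function.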
A model's predictions are necessarily a compression of the data available to it, so the information-theoretic best case is that a model trained on its own outputs - or on those of models trained in a similar way on similar volumes of data - generates data diverse enough to train a new model that merely replicates its performance. In practice, even that tends not to happen. Curating the available data can produce models with more focused distributions within the space of models we can feasibly train with the data and resources at hand, and you can layer on ensemble learning techniques or, I guess, stuff like RLHF (which is kind of a silly framing of that concept, as some RL people have pointed out, but it's the one people are familiar with now). All of this, though, is essentially just moving around on a Pareto front which may not contain any "strictly better" model for whatever criteria we care about.
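You can see the diversity-loss mechanism in a toy setting. Below, the "model" is just the empirical distribution over categories, and each generation is trained on samples from the previous generation's model - a stand-in assumption, obviously far simpler than an LLM, but it isolates the relevant property: anything the model assigns zero probability can never reappear in its synthetic data, so the support of the distribution can only shrink.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CATEGORIES = 50
N_SAMPLES = 500

# Ground-truth distribution and an initial real dataset drawn from it.
true_p = rng.dirichlet(np.ones(N_CATEGORIES))
data = rng.choice(N_CATEGORIES, size=N_SAMPLES, p=true_p)

support = []  # number of categories still present each generation
for generation in range(10):
    counts = np.bincount(data, minlength=N_CATEGORIES)
    support.append(int(np.count_nonzero(counts)))
    # "Train" the next model on the current data (fit the empirical
    # distribution), then generate the next dataset from it.
    p_model = counts / counts.sum()
    data = rng.choice(N_CATEGORIES, size=N_SAMPLES, p=p_model)

# Once a category's probability hits zero it can never come back,
# so the support is monotonically non-increasing across generations.
assert all(later <= earlier for earlier, later in zip(support, support[1:]))
```

Real generative models smooth their distributions rather than zeroing tails outright, but the direction of the effect is the same: each generation of self-training discards some of the original diversity.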
I think the scaling laws of these things are running up against some fundamental limits in the useful diversity of available data and the computational feasibility of meaningful increases in scale. Hype likes to pretend that anything that grows fast for a while is "exponential", but lots of other families of functions shoot upward before plateauing at some fundamental limit - like a sigmoid! To me it makes more intuitive sense that the capacity of a given model family will hit a plateau than that it will keep scaling indefinitely, especially as we start to run up against dataset limits. And I'd be shocked if there's enough data left, beyond what the major tech companies have already gotten their hands on, to make a dent at current scales.
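The exponential-vs-sigmoid point is easy to check numerically: on its left tail, a logistic curve (which saturates at 1) is within a few percent of a pure exponential, so fast early growth alone can't tell you whether a trend will keep climbing or plateau. The window chosen below is just an illustrative slice of the curve.

```python
import numpy as np

# "Early" window of the curve, well before the logistic's inflection point.
t = np.linspace(-8.0, -3.0, 100)

exponential = np.exp(t)                # grows without bound
logistic = 1.0 / (1.0 + np.exp(-t))    # saturates at 1

# Over this early window the two are nearly indistinguishable...
rel_err = np.abs(logistic - exponential) / exponential
assert rel_err.max() < 0.05  # within 5% everywhere in the window

# ...but their long-run behavior diverges completely.
assert np.exp(10.0) > 20000.0
assert 1.0 / (1.0 + np.exp(-10.0)) < 1.0
```

So extrapolating "exponential" from the steep part of a curve is exactly the mistake the hype makes; the data consistent with unbounded growth is equally consistent with an imminent plateau.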
That's not to say impressive results aren't still happening; they're just mostly tackling different problems - various modality transfers, distillation-like improvements that make extant capability sets cheaper (in computational terms) to run, superficial capability shifts that refine a language model for a particular use case, and so on. LLMs in their current form probably need another significant qualitative breakthrough to overcome their fundamental problems. They're clearly quite useful to a lot of people as they are; they just don't live up to all the hype that's flying around.