That's an interesting limitation. They can't make the LLMs (I still refuse to call them AIs) better with the data currently available. So with the sum of all human knowledge, more or less, mixed in with the dumpster fire that is Internet comments, this is the best we can do with the current models.
I don't know much about LLMs, but that seems to indicate a sort of dead end. The models are still useful, but limited in their abilities. So now developers and researchers need to start looking for new ways to use all this data, which in some sense resets the game. Sucks to be OpenAI: billions of dollars spent on a product that has been matched or even outmatched by the competition in a few short years, not nearly enough time to make any of it back.
If there is a takeaway, it might be that it takes billions, if not trillions, of dollars to develop an AI, that the result may still be less than what you hoped for, and that the investment may be really hard to recoup.