Just looking at what happened with chess, Go, strategy games, protein folding, etc., it's obvious that pretty much any field/problem that can be formalised and cheaply verified - e.g. mathematics, algorithms - will be solved, and that it's only a matter of time before we have domain-specific ASI.
I strongly encourage everyone to read about the bitter lesson [0] and verifier's law [1].
[0] http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[1] https://www.jasonwei.net/blog/asymmetry-of-verification-and-...
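To make the verification asymmetry concrete, here's a toy Python sketch using subset-sum as the stand-in problem. propose_candidate is a placeholder for any generator (search, an LLM, whatever): producing a solution may take many attempts, but checking one is a cheap linear pass, so you can brute-force the generator and keep only verified answers.

    # Toy illustration of verifier's law: checking a candidate answer is
    # far cheaper than producing one, so generate-and-verify works.
    # propose_candidate is a hypothetical stand-in for any generator.
    import random

    def propose_candidate(nums):
        # Placeholder generator: just a random subset of nums.
        return [x for x in nums if random.random() < 0.5]

    def verify(subset, nums, target):
        # Verification is a cheap linear pass: sum and membership check.
        return sum(subset) == target and all(x in nums for x in subset)

    nums, target = [3, 34, 4, 12, 5, 2], 9
    for _ in range(10_000):
        cand = propose_candidate(nums)
        if verify(cand, nums, target):
            print("verified solution:", cand)
            break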
It isn't entirely clear what problem LLMs are solving or what they're optimizing towards... They sound humanlike and give good solutions to some things, but there are so many glaring holes. How are we this many years and billions of dollars in, and I still can't reliably play a coherent game of chess with ChatGPT, let alone have it be useful at one?
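For what it's worth, the legality half of chess is trivially machine-checkable, which is exactly what makes the failure so frustrating. A sketch using the python-chess library (pip install chess), where llm_propose_move is a hypothetical stand-in for whatever move string the model returns:

    # Sketch: verify an LLM's proposed chess move with python-chess.
    # llm_propose_move is a hypothetical placeholder for a model call
    # that returns a move in SAN, e.g. "Nf3".
    import chess

    def llm_propose_move(board: chess.Board) -> str:
        return "e4"  # placeholder; imagine a model call here

    board = chess.Board()
    san = llm_propose_move(board)
    try:
        move = board.parse_san(san)  # raises if illegal or ambiguous
        board.push(move)
        print("legal:", san)
    except ValueError:
        print("illegal move proposed:", san)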
Why would it play like the average player? LLMs pick tokens to try to maximize a reward function; they don't just pick the most common word from the training set.
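To illustrate the distinction with made-up numbers: at each step the model produces a context-conditioned distribution over the vocabulary and decodes from that, which is not the same thing as emitting the corpus-wide most frequent word:

    # Toy decoding step: the model outputs context-dependent logits over a
    # vocabulary; sampling (or argmax) happens over THIS distribution, not
    # over raw corpus word frequencies. Numbers are invented for illustration.
    import math, random

    vocab = ["the", "pawn", "e4", "banana"]
    logits = [0.2, 1.5, 3.0, -2.0]  # hypothetical scores given some chess context

    # softmax turns logits into probabilities
    exps = [math.exp(l) for l in logits]
    probs = [e / sum(exps) for e in exps]

    token = random.choices(vocab, weights=probs, k=1)[0]
    print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", token)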