Just looking at what happened with chess, Go, strategy games, protein folding, etc., it's obvious that pretty much any field or problem that can be formalised and cheaply verified - e.g. mathematics, algorithms and the like - will be solved, and that it's only a matter of time before we have domain-specific ASI.
I strongly encourage everyone to read about the bitter lesson [0] and verifier's law [1].
[0] http://www.incompleteideas.net/IncIdeas/BitterLesson.html
[1] https://www.jasonwei.net/blog/asymmetry-of-verification-and-...
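To make "cheaply verified" concrete, here's a toy sketch of the asymmetry (my own illustration, not taken from the linked posts): producing a solution can be exponentially harder than checking one.

    # Verification asymmetry in miniature: subset-sum. Finding a subset of
    # `nums` that sums to `target` is exponential in the worst case, while
    # checking a proposed answer is cheap and mechanical.
    from itertools import chain, combinations

    def find_subset(nums, target):
        """Brute-force search over all 2^n subsets."""
        all_subsets = chain.from_iterable(
            combinations(nums, r) for r in range(len(nums) + 1)
        )
        for subset in all_subsets:
            if sum(subset) == target:
                return list(subset)
        return None

    def verify_subset(nums, target, candidate):
        """Polynomial check: candidate is drawn from nums and sums to target."""
        pool = list(nums)
        for x in candidate:
            if x not in pool:
                return False
            pool.remove(x)
        return sum(candidate) == target

    nums, target = [3, 34, 4, 12, 5, 2], 9
    solution = find_subset(nums, target)                    # hard to produce
    print(solution, verify_subset(nums, target, solution))  # easy to check

Anything with that shape - cheap, mechanical checking of expensive-to-find answers - is exactly what RL-style training can grind away at.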
I don't mind if software jobs move from writing software to verifying software, if that makes the whole process more efficient and the software better as a result. But again, that's not what is happening here.
What is happening, at least in the minds of AI-optimist CEOs, is "disruption": drop the quality while cutting costs dramatically.
So... where's the kaboom? Where's the giant, earth-shattering kaboom? There are solid applications for AI in computer vision and sentiment analysis right now, but even these are fallible and have limited effectiveness when you do deploy them. The grander ambitions, even for pared-back "ASI" definitions, are just kicking the can further down the road.
But the next step is obviously increased formalism via formal methods, deterministic simulators, etc., basically so that one could define an environment for an RL agent.
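As a concrete sketch of what that could look like (my own toy example using the gymnasium API, not anything proposed in the thread): a deterministic task whose verifier supplies the reward signal.

    # Toy RL environment: the "verifier" (exact equality) is the reward.
    # Assumes the gymnasium package; the task itself is deliberately trivial.
    import gymnasium as gym
    from gymnasium import spaces

    class ArithmeticEnv(gym.Env):
        """One-shot task: observe two digits, propose their sum."""

        def __init__(self):
            self.observation_space = spaces.MultiDiscrete([10, 10])  # (a, b)
            self.action_space = spaces.Discrete(19)  # proposed sum, 0..18

        def reset(self, *, seed=None, options=None):
            super().reset(seed=seed)
            self.a = int(self.np_random.integers(0, 10))
            self.b = int(self.np_random.integers(0, 10))
            return (self.a, self.b), {}

        def step(self, action):
            # Deterministic, cheap verification doubles as the reward function.
            reward = 1.0 if action == self.a + self.b else 0.0
            return (self.a, self.b), reward, True, False, {}  # episode ends

A formal proof checker or a deterministic simulator would slot in where the equality check is; the point is just that the reward comes from a verifier, not from human labels.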
It isn't entirely clear what problem LLMs are solving or what they are optimizing towards... They sound humanlike and give some good solutions to stuff, but there are so many glaring holes. How are we so many years and billions of dollars in, and I still can't reliably play a coherent game of chess with ChatGPT, let alone have it be useful?
For the average consumer, LLM chatbots are infinitely better than Google at search-like tasks, and in effect solve that problem. Remember when we had to roll our eyes at dad because he asked Google "what are some cool restaurants?" instead of "nice restaurants SF 2018 reddit"? Well, that is over: he can ask ChatGPT exactly that, and it will make the most effective searches for him, aggregate, and answer. Remember when a total noob had to familiarize himself with a language by figuring out hello world, then functions, etc.? That's over too: these people can instantly draft a toy example of what they want to build with Cursor, tell it to make everything nice and simple, and then have ChatGPT guide them through what is happening.
In some industries you just don't need much more code quality than what LLMs give you. A quick .bat script doesn't require knowing the best implementation of anything, and neither does a Python scraper using only the stdlib, but these were locked behind programming knowledge before LLMs.
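For instance, the stdlib-only scraper meant here really is short (the URL is a placeholder):

    # Minimal scraper with no third-party packages: fetch a page, list links.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)

    with urlopen("https://example.com") as resp:  # placeholder URL
        html = resp.read().decode("utf-8", errors="replace")

    collector = LinkCollector()
    collector.feed(html)
    print(collector.links)

Nothing here is hard, but before LLMs you still had to know that urllib and html.parser exist and how to wire them together.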
Sometimes I have the feeling that what happened with LLMs is so enormous that many researchers and philosophers still haven't had time to gather their thoughts and process it.
I mean, shall we have a nice discussion about the possibility of "philosophical zombies"? On whether the Chinese room understands or not? Or maybe on the feasibility of the mythical Turing test? There's half a century or more of philosophical questions and scenarios that are not just theory anymore; maybe they're not even questions anymore, and almost from one day to the next.
There’s this paper [1] you should read; it sparked an entire new AI dawn, and it might answer your question.
"What happened with LLMs" is what exactly? From some impressive toy examples like chatbots we as a society decided to throw all our resources into these models and they still can't fit anywhere in production except for assistant stuff
Many of us have been through previous hype cycles like the dot-com boom, and have learned to be skeptical. Some of that learning has been "reinforced" by layoffs in the ensuing busts (reinforcement learning). A few claims in your note, like "it's only a matter of time before we have domain-specific ASI", are jarring, as you are "assuming the sale". LLMs are great as a tool for some use cases - nobody denies that.
The investment dollars are creating a class of people who are fed by those dollars, and have the incentive to push the agenda. The skeptics in contrast have no ax to grind.
I think they have the capability to do it, yes. Maybe it's not the best tool you can use (too expensive, or too flexible to focus with high accuracy on that single task), but yes, you can definitely use LLMs to understand literary style and extract data from it. Depending on the complexity of the text, I'm sure they can do jobs that BERT can't.
> they still can't fit anywhere in production
Not sure what you mean by "production", but there's an enormous number of people using them for work.
Why would it play like the average? LLMs pick tokens to try to maximize a reward function; they don't just pick the most common word from the training data set.
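A toy sketch of decoding (made-up logits, not a real model) shows the mechanism: the model outputs a whole distribution over next tokens, and sampling from it can pick something other than the single most likely word.

    # Illustrative next-token sampling; the logits are invented for the example.
    import math, random

    def sample_next(logits, temperature=0.8):
        """Softmax over logits, then sample; lower temperature is greedier."""
        scaled = [v / temperature for v in logits.values()]
        m = max(scaled)  # subtract the max for numerical stability
        weights = [math.exp(s - m) for s in scaled]
        return random.choices(list(logits), weights=weights, k=1)[0]

    # Made-up scores for the reply to 1. e4 in a chess transcript:
    logits = {"e5": 2.1, "c5": 1.9, "Nf6": 0.4, "a6": -1.0}
    print(sample_next(logits))  # usually "e5" or "c5", but not always

(random.choices normalizes the weights internally, so the explicit division by their sum can be skipped.)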
At this point, I think it can only be explained by ignorance, bad faith, or fear of becoming irrelevant.
It can already be "cheaply verified" in the sense that if you write a proof in, say, Lean, the compiler will tell you if it's valid. The hard part is coming up with the proof.
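For example, this is the kind of thing Lean's kernel checks mechanically (a trivial Lean 4 theorem; Nat.add_comm is in the core library):

    -- Verification is instant and mechanical; the work was writing the term.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b

Checking this takes milliseconds; for a research-level statement, producing the proof term is where the years go.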
It may be possible that some sort of AI at some stage becomes as good as, or even better than, research mathematicians in coming up with novel proofs. But so far it doesn't look like it - LLMs seem to be able to help a little bit with finding theorems (e.g. stuff like https://leansearch.net/), but to my understanding they are rather poor beyond that.
If the questions were given as-is (without a human formalizing them), and the LLM didn't need domain solvers, and the LLM was not trained on them already (which happened with FrontierMath), I would be impressed.
Based on past history with FrontierMath [1][2], I remain skeptical. The skeptic in me says that this happens prior to big announcements (GPT-5) to create hype.
Finally, this article shows that LLMs were just bluffing in the USAMO 2025 [3].
[1] https://www.reddit.com/r/slatestarcodex/comments/1i53ih7/fro...
Based on past history with FrontierMath & AIME 2025 [1][2], I would not trust announcements which can't be independently verified. I am excited to try it out though.
Also, the performance of LLMs was not even bronze [3].
Finally, this article shows that LLMs were just mostly bluffing [4].
[1] https://www.reddit.com/r/slatestarcodex/comments/1i53ih7/fro...