I see people saying that these kinds of things are happening behind closed doors, but I haven't seen any convincing evidence of it, and there is enormous propensity for AI speculation to run rampant.
As others have pointed out in other threads, RLHF has moved models beyond plain next-token prediction, and modern models appear to be modeling concepts [1].
[0] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
[1] https://www.anthropic.com/news/tracing-thoughts-language-mod...
Intelligence as humans have it seems like a "know it when you see it" thing to me, and metrics that attempt to define and compare it will always capture only a narrow slice of the whole picture. To put it simply, the gut feeling I get from my interactions with current AI, and from how it has developed over the past couple of years, is that AI is missing key elements of general intelligence at its core. While there's lots more room for its current approaches to get better, I think something different will be needed for AGI.
I'm not an expert, just a human.
I'd label that difference as long-term planning plus executive function, and wherever that overlaps with or includes delegation.
Most long-term projects are not done by a single human, so delegation almost always plays a big part. To delegate, tasks must be broken down in useful ways. To break tasks down, you need a holistic model of the goal in which separable components can be identified.
I think a lot of those individual elements are within reach of current model architectures, but they are likely out of distribution. How many Gantt charts, project plans, and project-manager meetings are in the pretraining datasets? My guess is few; they're rarely published internal artifacts. Books and articles touch on the concepts, but I think the models learn best from the raw data; they can probably recite all of the steps of good project management because the descriptions are all over the place. The actual doing of it is farther toward the tail of the distribution.
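To make the "breaking tasks down" point concrete, here's a minimal toy sketch (my own illustration, not taken from any real planning tool) of the structure delegation implicitly requires: a goal decomposed into subtasks, where only the leaves can actually be handed to someone, and finding those leaves presupposes that the whole goal has already been decomposed.

```python
# Toy sketch, entirely hypothetical: the structure that delegation implies.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    owner: str | None = None                           # who the piece is delegated to
    subtasks: list["Task"] = field(default_factory=list)

def delegable_leaves(task: Task) -> list[Task]:
    """Only leaf tasks can be handed off; identifying them presupposes
    that the whole goal has already been broken down."""
    if not task.subtasks:
        return [task]
    leaves: list[Task] = []
    for sub in task.subtasks:
        leaves.extend(delegable_leaves(sub))
    return leaves

# Example: a project broken down two levels deep.
project = Task("ship feature", subtasks=[
    Task("design API", subtasks=[Task("draft spec"), Task("review spec")]),
    Task("implement", subtasks=[Task("backend"), Task("frontend")]),
])
print([t.name for t in delegable_leaves(project)])
# ['draft spec', 'review spec', 'backend', 'frontend']
```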
It reminds me of the difference between a fresh college graduate and an engineer with 10 years of experience. There are many really smart and talented college graduates.
But, while I am struggling to articulate exactly why, I know that when I was a fresh graduate, despite my talent and ambition, I would have failed miserably at delivering some of the projects that I now routinely deliver over time periods of ~1.5 years.
I think LLMs are really good at emulating the kinds of things I might say would make someone successful at this, if I were to write them down in a couple of paragraphs, an article, or maybe even a book.
But... knowing those things as written by others just would not quite cut it. Learning at those time scales is just very different from what we're good at training LLMs to do.
A college graduate is in many ways infinitely more capable than an LLM. Yet there are a great many tasks you just can't give an intern if you want them to be successful.
There are at least half a dozen different 1000-page manuals that one must reference to do a bare-bones version of my job. And there are dozens of different constituents and many thousands of design parameters I must adhere to. Fundamentally, these things are often in conflict, and it is my job to sort out the conflicts and come up with the best compromise. It's... really hard to do. Knowing what to bend so that other requirements may be kept rock solid, who to negotiate with for the different compromises needed, which fights to fight, and what a "good" design looks like among alternatives that all seem to mostly meet the requirements. It's a very complicated chess game: hopeless to brute force, so you must see the patterns along the way that will point you, like signposts, into a good position in the endgame.
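To put a rough number on why brute force is hopeless (figures invented purely for illustration, not from my actual job):

```python
# Back-of-the-envelope sketch with made-up numbers: take just 40 of the
# "many thousands" of design parameters and give each one only three options
# (tighten, relax, leave alone). The space is already beyond exhaustive search.
parameters = 40
choices_each = 3
combinations = choices_each ** parameters
print(f"{combinations:.2e} candidate designs")   # ~1.2e19
```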
The way we currently train LLMs will not get us there.
Until an LLM can take the things in its context window, assess them for importance, dismiss what doesn't work or turns out to be wrong, abandon everything it knows when the right new paradigm comes along, and then permanently alter its decision-making by incorporating all of that information in an intelligent way, it just won't be a replacement for a human being.