I see people saying that these kinds of things are happening behind closed doors, but I haven't seen any convincing evidence of it, and there is enormous propensity for AI speculation to run rampant.
IMO this out-of-distribution learning is all we need to scale to AGI. Sure, there are still issues: it doesn't always know which distribution to pick from. Neither do we, hence car crashes.
[1]: https://arxiv.org/pdf/2303.12712 or on YT https://www.youtube.com/watch?v=qbIk7-JPB2c
As others have pointed out in other threads, RLHF has progressed beyond next-token prediction, and modern models are modeling concepts [1].
[0] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...
[1] https://www.anthropic.com/news/tracing-thoughts-language-mod...
Intelligence as humans have it seems like a "know it when you see it" thing to me, and metrics that attempt to define and compare it will always be looking at only a narrow slice of the whole picture. To put it simply, the gut feeling I get based on my interactions with current AI, and how it has developed over the past couple of years, is that AI is missing key elements of general intelligence at its core. While there's lots more room for its current approaches to get better, I think there will be something different needed for AGI.
I'm not an expert, just a human.
Anthropic recently released research where they saw that when Claude attempted to compose poetry, it didn't simply predict token by token, "reacting" when it thought it might need a rhyme and then looking at its context to think of something appropriate; instead, it looked several tokens ahead and adjusted, ahead of time, for where it would likely end up.
Anthropic also says this adds to evidence seen elsewhere that language models seem to sometimes "plan ahead".
Please check out the section "Planning in poems" here; it's pretty interesting!
https://transformer-circuits.pub/2025/attribution-graphs/bio...
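If it helps to picture the distinction, here's a toy sketch in Python (not Anthropic's method, and nothing like a real LLM; the word lists are made up) of the difference between reacting to the rhyme only at the last token versus committing to the line ending first and writing toward it:

```python
import random

# Toy illustration only -- contrast "react at the last token"
# with "pick the rhyme target first, then write toward it".

RHYMES_WITH_LIGHT = ["night", "bright", "sight"]
LEAD_INS = {
    "night": ["the", "stars", "come", "out", "at"],
    "bright": ["the", "moon", "is", "full", "and"],
    "sight": ["a", "quiet", "wonder", "fills", "my"],
}

def reactive_line(rng):
    # Write the line greedily, then scramble for any rhyme at the last position.
    words = ["the", "stars", "come", "out", "at"]
    return " ".join(words + [rng.choice(RHYMES_WITH_LIGHT)])

def planned_line(rng):
    # Commit to the ending word first, then build the rest of the line toward it.
    target = rng.choice(RHYMES_WITH_LIGHT)
    return " ".join(LEAD_INS[target] + [target])

rng = random.Random(0)
print("reactive:", reactive_line(rng))  # ending may not fit the rest of the line
print("planned: ", planned_line(rng))   # line is built to land on its ending
```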
I don't really get this. Are you saying autoregressive LLMs won't qualify as AGI, by definition? What about diffusion models, like Mercury? Does it really matter how inference is done if the result is the same?
I'd label that difference as long-term planning plus executive function, and wherever that overlaps with or includes delegation.
Most long-term projects are not done by a single human and so delegation almost always plays a big part. To delegate, tasks must be broken down in useful ways. To break down tasks a holistic model of the goal is needed where compartmentalization of components can be identified.
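A rough sketch of what I mean by that kind of breakdown, assuming you represent the goal as a tree of subtasks (the names, estimates, and threshold below are made up for illustration, not any real tool's API):

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    estimate_days: float
    subtasks: "list[Task]" = field(default_factory=list)

def delegable_leaves(task, max_days=5.0):
    """Walk the breakdown and collect pieces small enough to hand to one person.
    Anything larger with no subtasks still needs further breakdown, so it's skipped."""
    if not task.subtasks:
        return [task] if task.estimate_days <= max_days else []
    leaves = []
    for sub in task.subtasks:
        leaves.extend(delegable_leaves(sub, max_days))
    return leaves

project = Task("Ship new billing system", 90, [
    Task("Design data model", 8, [Task("Draft schema", 3), Task("Review with team", 2)]),
    Task("Build invoicing service", 30, [Task("API skeleton", 4), Task("Tax rules", 5)]),
])

for t in delegable_leaves(project):
    print(t.name, t.estimate_days)
```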
I think a lot of those individual elements are within reach of current model architectures, but they are likely out of distribution. How many Gantt charts and project plans and project manager meetings are in the pretraining datasets? My guess is few; they're rarely published internal artifacts. Books and articles touch on the concepts, but I think the models learn best from the raw data; they can probably tell you very well all of the steps of good project management because the descriptions are all over the place. The actual doing of it is farther toward the tail of the distribution.
It reminds me of the difference between a fresh college graduate and an engineer with 10 years of experience. There are many really smart and talented college graduates.
But, while I am struggling to articulate exactly why, I know that when I was a fresh graduate, despite my talent and ambition, I would have failed miserably at delivering some of the projects that I now routinely deliver over time periods of ~1.5 years.
I think LLMs are really good at emulating the kinds of things I might say would make someone successful at this, if I were to write them down in a couple of paragraphs, or an article, or maybe even a book.
But... knowing those things as written by others just would not quite cut it. Learning at those time scales is just very different from what we're good at training LLMs to do.
A college graduate is in many ways infinitely more capable than an LLM. Yet there are a great many tasks that you just can't give an intern if you want them to be successful.
There are at least half a dozen different 1000-page manuals that one must reference to do a bare-bones version of my job. And there are dozens of different constituents, and many thousands of design parameters I must adhere to. Fundamentally, these things are often in conflict, and it is my job to sort out the conflicts and come up with the best compromise. It's... really hard to do. Knowing what to bend so that other requirements may be kept rock solid, who to negotiate with for the different compromises needed, which fights to fight, and what a "good" design looks like among alternatives that all seem to mostly meet the requirements. It's a very complicated chess game that's hopelessly impossible to brute force; you must see the patterns along the way that will point you, like signposts, into a good position in the endgame.
The way we currently train LLMs will not get us there.
Until an LLM can take things in its context window, assess them for importance, dismiss what doesn't work or turns out to be wrong, completely dismiss everything it knows when the right new paradigm comes up, and then permanently alter its decision making by incorporating all of that information in an intelligent way, it just won't be a replacement for a human being.
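To be concrete about the loop I'm imagining, here's a hedged toy sketch: claims with confidences, weak signals dropped, contradictions resolved toward the stronger evidence, and the surviving claims folded into a persistent store. The representation is made up for illustration and is not how any real model stores knowledge.

```python
# Toy sketch: assess, dismiss, and permanently incorporate new information.

def negate(claim):
    return claim[4:] if claim.startswith("not ") else "not " + claim

def update_beliefs(beliefs, new_items):
    """beliefs / new_items: dicts mapping a claim string to a confidence in [0, 1]."""
    kept = {}
    for claim, conf in new_items.items():
        if conf < 0.2:                                        # assess importance, drop weak signals
            continue
        if negate(claim) in kept and kept[negate(claim)] >= conf:
            continue                                          # dismiss what conflicts with stronger evidence
        kept[claim] = conf
    for claim, conf in kept.items():
        beliefs.pop(negate(claim), None)                      # new paradigm displaces the old one
        beliefs[claim] = max(conf, beliefs.get(claim, 0.0))   # permanent update to future decisions
    return beliefs

beliefs = {"the bug is in the parser": 0.7}
beliefs = update_beliefs(beliefs, {"not the bug is in the parser": 0.9,
                                   "the bug is in the tokenizer": 0.8})
print(beliefs)
```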
The signs are not there, but while we may not be on an exponential curve (which would be difficult to see), we are definitely on a steep upward one, which may get steeper or may fizzle out if LLMs can only reach human-level 'intelligence' but not surpass it. The original article was a fun read though, and 360,000 words shorter than my very similar fiction novel :-)
The surest things we know are that it is a physical system, and that it does feel like something to be one of these systems.
The threshold would be “produce anything that isn’t identical or a minor transfiguration of input training data.”
In my experience my AI assistant in my code editor can’t do a damn thing that isn’t widely documented, and sometimes botches tasks that are thoroughly documented (such as hallucinating parameter names that don’t exist). I can witness this when I reach the edge of common use cases, where extending beyond the documentation requires following an implication.
For example, AI can’t seem to understand how to help me in any way with Terraform dynamic credentials, because the documentation is very sparse and it appears in almost no blog posts or examples online. By definition the variable is populated dynamically, so real values aren’t shown anywhere. I get a lot of irrelevant nonsense suggestions on how to fix it.
AI is a great “amazing search engine” and it can string together combinations of logic that already exist in documentation and examples while changing some names here and there, but what looks like true understanding really is just token prediction.
IMO the massive amount of training data is making the man behind the curtain look way better than he is.
That _probably_ won't capture everything, but for all practical purposes it's indistinguishable from reality (yes, yes, time is not some constant everywhere)
Historically, a computer with these sorts of capabilities has always been considered true AI, going back to Alan Turing. Also of course including all sorts of science fiction, from recent movies like Her to older examples like The Moon Is a Harsh Mistress.
https://old.reddit.com/r/singularity/comments/1jl5qfs/its_ju...
It's like saying that both a baby who can make a few steps and an adult have capability of "walking". It's just wrong.
But when we get a big aggregate of all of these little rules and quirks and improvements and subsystems for triggering different behaviours and processes - isn't that all humans are?
I don't think it'll happen for a long ass time, but I'm not one of those individuals who, for some reason, desperately want to believe that humans are special, that we're some magical thing that's unexplainable or can't be recreated.
I will feel an itch and subconsciously scratch it, especially if I'm concentrating on something. That's a subsystem independent of conscious thought.
I suppose it does make sense - that our early evolution consisted of a bunch of small, specific background processes that enable an individual's life to continue; a single-celled organism doesn't have neurons, but it has exactly these processes - chemical reactions that keep it "alive".
Then I imagine that some of these processes became complex enough that they needed to be represented by some form of logic, hence evolving neurons.
Subsequently, organisms comprised of many thousands or more of such neuronal subsystems developed higher order subsystems to be able to control/trigger those subsystems based on more advanced stimuli or combinations thereof.
And finally us. I imagine that at the next step, evolution found that consciousness/intelligence, an overall direction of the efforts of all of these subsystems (still not all consciously controlled), and therefore direction of an individual, was much more effective: anticipation, planning and other behaviours of the highest order.
I wouldn't be surprised if, given enough time and the right conditions, sustained evolution would result in any or most creatures on this planet evolving a conscious brain - I suppose we were just lucky.
Let's say we have a humanoid robot standing in a room that has a window open, at what point would the AI powering the robot decide that it's time to close the window?
That's probably one of the reasons why I don't really see LLMs as much more than just algorithms that give us different responses just because we keep changing the seed...
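A minimal sketch of what I mean, using a made-up next-token table in place of a real model: the distribution stays fixed, and only the seed changes the answer.

```python
import random

# Toy next-token distribution, invented for illustration; a real LLM's
# distribution is learned, but the sampling idea is the same.
NEXT = {
    "the": [("cat", 0.5), ("dog", 0.3), ("robot", 0.2)],
    "cat": [("sat", 0.6), ("slept", 0.4)],
    "dog": [("barked", 0.7), ("slept", 0.3)],
    "robot": [("paused", 1.0)],
}

def sample_sentence(seed, start="the", max_len=4):
    rng = random.Random(seed)          # only the seed differs between runs
    words = [start]
    while words[-1] in NEXT and len(words) < max_len:
        tokens, weights = zip(*NEXT[words[-1]])
        words.append(rng.choices(tokens, weights=weights)[0])
    return " ".join(words)

for seed in (1, 2, 3):
    print(seed, sample_sentence(seed))  # same "model", different continuations
```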
I also think the difference between primitive brains and conscious, reasoning, high level brains could be more quantitative than qualitative. I certainly believe that all mammals (and more) have some sort of an internal conscious experience. And experiments have shown that all sorts of animals are capable of solving simple logical problems.
Also, related article from a couple of days ago: Intelligence Evolved at Least Twice in Vertebrate Animals
I'm not sure about the quantitative thing, seeing as there are creatures with brains physically much larger than ours, or with more neurons than we have. We currently have the most known synapses, though that also seems to be because the count hasn't been estimated for so many species.