
AI 2027

(ai-2027.com)
949 points by Tenoke | 2 comments
Vegenoid No.43585338
I think we've now had capable AIs for long enough to see that this kind of exponential advance to AGI in 2 years is extremely unlikely. The AI we have today isn't radically different from the AI we had in 2023. The models are much better at the things they were already good at, and some of the new capabilities are big, but they are still fundamentally next-token predictors. They still fail at larger-scope, longer-term tasks in mostly the same ways, and they are still much worse than humans at learning from small amounts of data. Despite their ability to write decent code, we haven't seen the signs of a runaway singularity that some thought were likely.
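For concreteness, here is a minimal sketch of the autoregressive loop that "next-token predictor" refers to. It assumes the Hugging Face transformers library and the small public GPT-2 checkpoint; the prompt and token count are arbitrary.

```python
# Greedy autoregressive decoding: the model only ever predicts the
# single next token, and "generation" is just that step in a loop.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

ids = tokenizer.encode("The year 2027 will be", return_tensors="pt")
with torch.no_grad():
    for _ in range(10):                    # emit 10 tokens, one at a time
        logits = model(ids).logits         # shape [1, seq_len, vocab_size]
        next_id = logits[0, -1].argmax()   # greedy: most probable next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```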

I see people saying that these kinds of things are happening behind closed doors, but I haven't seen any convincing evidence of it, and there is enormous propensity for AI speculation to run rampant.

replies(8): >>43585429 #>>43585830 #>>43586381 #>>43586613 #>>43586998 #>>43587074 #>>43594397 #>>43619183 #
benlivengood No.43585830
METR [0] explicitly measures progress on long-horizon tasks; so far it's as steep a sigmoid as the other progress curves, with no inflection point yet.
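The underlying claim is a straight line on a log plot: the task horizon a model can handle doubles on a roughly fixed schedule. A sketch of that extrapolation, with made-up horizon numbers standing in for METR's actual measurements [0]:

```python
# Fit exponential growth in task horizon and extrapolate it forward.
# The (year, minutes) points below are hypothetical, for illustration only.
import numpy as np

years   = np.array([2019.5, 2020.5, 2022.0, 2023.2, 2024.5, 2025.2])
horizon = np.array([0.03, 0.1, 0.6, 4.0, 30.0, 60.0])  # task length, minutes

# A line in log2 space is exponential growth; 1/slope is the doubling time.
slope, intercept = np.polyfit(years, np.log2(horizon), 1)
print(f"doubling time: {1 / slope:.2f} years")

# When would the fit reach an 8-hour (480-minute) task horizon?
year_480 = (np.log2(480) - intercept) / slope
print(f"8-hour horizon reached ~{year_480:.1f}, if the trend holds")
```

Whether the line keeps going is, of course, the entire disagreement.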

As others have pointed out in other threads, RLHF has moved training beyond pure next-token prediction, and modern models are modeling concepts [1].
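The tracing work in [1] goes well beyond this, but the weaker version of "models represent concepts" is easy to illustrate: if a concept is encoded as a direction in the hidden states, a simple linear probe can read it out. Everything below is synthetic stand-in data; real probes are trained on actual model activations.

```python
# Linear-probe sketch: recover a planted "concept direction" from
# synthetic 64-d hidden states. Illustrative only, not real activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 64
concept = rng.normal(size=d)
concept /= np.linalg.norm(concept)        # hypothetical concept direction

labels = rng.integers(0, 2, 500)          # concept present (1) or absent (0)
states = rng.normal(size=(500, d)) + np.outer(labels, concept) * 2.0

probe = LogisticRegression(max_iter=1000).fit(states[:400], labels[:400])
print("held-out probe accuracy:", probe.score(states[400:], labels[400:]))
```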

[0] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

[1] https://www.anthropic.com/news/tracing-thoughts-language-mod...

replies(2): >>43585918 #>>43586196 #
Fraterkes No.43585918
The METR graph proposes a 6-year trend based largely on 4 data points from before 2024. I get that it's hard to do analysis since we're in uncharted territory, and I personally find a lot of the AI stuff impressive, but this just doesn't strike me as great statistics.
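One way to see how fragile such an extrapolation is: bootstrap-resample a handful of points like those sketched above and watch the predicted crossover date swing. Again, the numbers are hypothetical stand-ins for the real METR data.

```python
# Bootstrap the log-linear fit: with ~6 points, the year the trend hits
# an 8-hour task horizon moves around a lot between resamples.
import numpy as np

rng = np.random.default_rng(0)
years = np.array([2019.5, 2020.5, 2022.0, 2023.2, 2024.5, 2025.2])
log_h = np.log2([0.03, 0.1, 0.6, 4.0, 30.0, 60.0])

dates = []
for _ in range(1000):
    idx = rng.integers(0, len(years), len(years))  # resample with replacement
    if np.unique(years[idx]).size < 2:
        continue                                   # need two distinct x values
    s, b = np.polyfit(years[idx], log_h[idx], 1)
    if s > 0:
        dates.append((np.log2(480) - b) / s)       # year the fit hits 8 hours

lo, hi = np.percentile(dates, [5, 95])
print(f"90% of resampled fits cross 8 hours between {lo:.1f} and {hi:.1f}")
```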
replies(1): >>43586603 #
benlivengood No.43586603
I agree that we don't have any good statistical models for this. If AI development were that predictable, we'd likely already be past a singularity of some sort, or in a very long winter, just from reverse-engineering what makes the statistical model tick.