AI 2027

(ai-2027.com)
949 points by Tenoke | 1 comment

Vegenoid No.43585338
I think we've now had capable AIs for long enough to see that this kind of exponential advance to AGI in 2 years is extremely unlikely. The AI we have today isn't radically different from the AI we had in 2023. Models are much better at the things they were already good at, and there are some significant new capabilities, but they are still fundamentally next-token predictors. They still fail at larger-scope, longer-term tasks in mostly the same ways, and they are still much worse than humans at learning from small amounts of data. Despite their ability to write decent code, we haven't seen signs of the runaway singularity that some thought was likely.

I see people saying that these kinds of things are happening behind closed doors, but I haven't seen any convincing evidence of it, and there is enormous propensity for AI speculation to run rampant.
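
To make "next-token predictor" concrete, here is a minimal sketch of the autoregressive loop all current LLMs run at inference time. This is not any real library's API; `model` is a hypothetical stand-in that returns one score per vocabulary entry:

    import numpy as np

    def generate(model, prompt_ids, max_new_tokens=50, temperature=1.0):
        # Autoregressive decoding: each new token is sampled from a
        # distribution conditioned only on the tokens before it.
        ids = list(prompt_ids)
        for _ in range(max_new_tokens):
            logits = model(ids)             # hypothetical: one score per vocab entry
            logits = logits - logits.max()  # numerical stability
            probs = np.exp(logits / temperature)
            probs /= probs.sum()
            next_id = np.random.choice(len(probs), p=probs)
            ids.append(next_id)             # the new token feeds the next step
        return ids

Everything a chat model appears to do, from answering questions to emitting tool calls, is some variant of this loop.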

jug No.43586381
> there are some new capabilities that are big, but they are still fundamentally next-token predictors

Anthropic recently released research in which they observed that when Claude composes poetry, it doesn't simply predict token by token, "reacting" when it thinks it might need a rhyme and then scanning its context for something appropriate; instead, it looks several tokens ahead and adjusts in advance for where it is likely to end up.

Anthropic also says this adds to evidence seen elsewhere that language models seem to sometimes "plan ahead".

Please check out the section "Planning in poems" here; it's pretty interesting!

https://transformer-circuits.pub/2025/attribution-graphs/bio...
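
For a rough feel of the kind of evidence involved, here is a toy sketch of a simpler interpretability technique, a linear probe (the paper itself uses attribution graphs, which are more involved). The data below is randomly generated purely for illustration; on real activations, above-chance accuracy would indicate the rhyme word is encoded early:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Hypothetical data: hidden_states[i] is the model's residual-stream
    # vector at the FIRST token of poem line i; rhyme_ids[i] is the word
    # that eventually ends that line.
    rng = np.random.default_rng(0)
    hidden_states = rng.normal(size=(1000, 512))
    rhyme_ids = rng.integers(0, 20, size=1000)

    X_tr, X_te, y_tr, y_te = train_test_split(hidden_states, rhyme_ids, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

    # With real activations, accuracy well above chance (here 1/20) would
    # mean the line-ending word is encoded before it is generated.
    print("probe accuracy:", probe.score(X_te, y_te))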

percentcer No.43586541
Isn't this just a form of next-token prediction? I.e., you keep your options open for a potential rhyme if you select words that have many associated rhyming pairs, and you keep them open further if you stick to broad topics over niche ones.
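
As a toy illustration of that "keep your options open" idea, a scorer can be purely one-step and still bake the future in, e.g. by crediting words that admit many rhyming continuations. All numbers below are invented:

    import math

    # Invented counts of usable rhymes for each candidate line-ending word,
    # plus an invented "how well it fits right now" fluency score.
    rhyme_counts = {"cat": 40, "orange": 1, "day": 60, "silver": 0}
    fluency = {"cat": 1.2, "orange": 1.5, "day": 1.0, "silver": 1.4}

    def score(word):
        # One-step score that still bakes in the future: how well the word
        # fits now, plus how many rhyming continuations it leaves open.
        return fluency[word] + math.log1p(rhyme_counts[word])

    best = max(rhyme_counts, key=score)
    print(best)  # "day": not the most fluent choice, but the most rhymable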

DennisP No.43587041
Assuming the task remains just generating tokens, what sort of reasoning or planning would you say is the threshold before it's no longer "just a form of next-token prediction"?

Vegenoid No.43590445
This is an interesting question, but it seems at least possible that, as long as the fundamental operation is simply "generate tokens", it can't go beyond being just a form of next-token prediction. I don't think people thought of human thought as a stream of tokens until LLMs came along. This isn't a very well-formed idea, but we may require an AI for which "generating tokens" is just one subsystem of a larger system, rather than the only form of output and interaction.

DennisP No.43593351
But that means any AI that just talks to you can't be AI, by definition. It doesn't matter how decisively the AI passes the Turing test. It could converse with the top expert in any field as an equal, solve any problem you ask it to solve in math or physics, write stunningly original philosophy papers, or gather evidence from a variety of sources, evaluate it, and reach defensible conclusions. It's all just generating tokens.

Historically, a computer with these sorts of capabilities has always been considered true AI, going back to Alan Turing. That also holds across all sorts of science fiction, from recent movies like Her to older examples like The Moon Is a Harsh Mistress.

Vegenoid No.43636122
I don't mean that the primary (or only) way it interacts with a human can't be just text. My point is that right now, the only way it interacts with anything is by generating a stream of tokens: to make an API call, use a tool, or query a knowledge source, it predicts tokens in exactly the same way it does when a human asks it a question. There may need to be other subsystems that the LLM subsystem interfaces with to form a more complete intelligence, one that can internally represent reality and fully utilize abstraction and relations.
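
Here is a sketch of what that looks like in a typical agent harness today; the `CALC:` convention and `model_generate` are hypothetical, but the point stands that the "tool call" is just tokens until outside code parses and executes them:

    import re

    def model_generate(transcript: str) -> str:
        # Hypothetical stand-in for an LLM: its only output channel is text.
        if "[tool result:" in transcript:
            return "17 * 23 = 391."
        return "CALC: 17 * 23"

    def run_agent(user_msg: str) -> str:
        transcript = user_msg
        out = model_generate(transcript)
        m = re.match(r"CALC: (\d+) \* (\d+)", out)
        if m:
            # The "API call" happens out here, in ordinary harness code; the
            # model itself only ever produced the tokens "CALC: 17 * 23".
            result = int(m.group(1)) * int(m.group(2))
            transcript += "\n" + out + "\n[tool result: " + str(result) + "]"
            out = model_generate(transcript)
        return out

    print(run_agent("What is 17 times 23?"))  # -> 17 * 23 = 391.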

Hugsun No.43722963
I have not yet found any compelling evidence suggesting that there are limits to the maximum intelligence of a next-token predictor.

Models can be trained to generate tokens with many different meanings, including visual, auditory, textual, and locomotive. Those alone seem sufficient to emulate a human to me.
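
One plausible way to realize that is a single flat token vocabulary with ID ranges reserved per modality, so one next-token predictor can emit any of them; the modality names and sizes below are made up:

    # Hypothetical layout: one flat ID space shared by all modalities, so a
    # single next-token predictor can emit text, image patches, or actions.
    SIZES = {"text": 50_000, "image": 8_192, "audio": 4_096, "action": 256}

    offsets, base = {}, 0
    for modality, size in SIZES.items():
        offsets[modality] = base
        base += size

    def encode(modality, local_id):
        return offsets[modality] + local_id

    def decode(token_id):
        for modality in reversed(list(SIZES)):
            if token_id >= offsets[modality]:
                return modality, token_id - offsets[modality]
        raise ValueError("bad token id")

    tok = encode("image", 42)
    print(tok, decode(tok))  # 50042 ('image', 42)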

It would certainly be cool to integrate some subsystems like a symbolic reasoner or calculator or something, but the bitter lesson tells us that we'd be better off just waiting for advancements in computing power.