AI 2027

(ai-2027.com)
949 points by Tenoke | 7 comments
Vegenoid No.43585338
I think we've actually had capable AIs for long enough now to see that this kind of exponential advance to AGI in 2 years is extremely unlikely. The AI we have today isn't radically different from the AI we had in 2023. They are much better at the things they are good at, and there are some new capabilities that are big, but they are still fundamentally next-token predictors. They still fail at larger-scope, longer-term tasks in mostly the same ways, and they are still much worse than humans at learning from small amounts of data. Despite their ability to write decent code, we haven't seen the signs of a runaway singularity that some thought was likely.
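
To make "next-token predictor" concrete, here is a minimal sketch of the autoregressive loop I mean (Python; "model" is just a stand-in for any function that scores candidate next tokens, not any particular lab's API):

    import math, random

    def sample_next(logits):
        # Softmax over the candidate tokens, then sample one of them.
        m = max(logits.values())
        weights = {tok: math.exp(score - m) for tok, score in logits.items()}
        total = sum(weights.values())
        r = random.random() * total
        for tok, w in weights.items():
            r -= w
            if r <= 0:
                return tok
        return tok

    def generate(model, tokens, max_new=20):
        # The model is only ever asked: given everything so far, what comes next?
        tokens = list(tokens)
        for _ in range(max_new):
            logits = model(tokens)             # dict: candidate token -> score
            tokens.append(sample_next(logits))
        return tokens

Everything such a system produces, however impressive, comes out of that loop one token at a time.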

I see people saying that these kinds of things are happening behind closed doors, but I haven't seen any convincing evidence of it, and there is enormous propensity for AI speculation to run rampant.

replies(8): >>43585429 #>>43585830 #>>43586381 #>>43586613 #>>43586998 #>>43587074 #>>43594397 #>>43619183 #
jug No.43586381
> there are some new capabilities that are big, but they are still fundamentally next-token predictors

Anthropic recently released research in which they observed that when Claude composed poetry, it didn't simply predict token by token, "reacting" when it thought it might need a rhyme and then searching its context for something appropriate; instead, it looked several tokens ahead and adjusted, ahead of time, for where it would likely end up.

Anthropic also says this adds to evidence seen elsewhere that language models seem to sometimes "plan ahead".

Please check out the section "Planning in poems" here; it's pretty interesting!

https://transformer-circuits.pub/2025/attribution-graphs/bio...

replies(2): >>43586541 #>>43592440 #
percentcer No.43586541
Isn't this just a form of next-token prediction? I.e., you'll keep your options open for a potential rhyme if you select words that have many associated rhyming pairs, and you'll keep your options open further if you focus on broad topics over niche ones.
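
A toy way to picture that (the rhyme table and numbers below are invented purely for illustration): ordinary next-token scoring, with a small bonus for line-ending words that leave many rhymes available.

    # Still plain next-token selection; candidates that keep more rhyming
    # options open just get a small bonus. All values are made up.
    RHYMES = {
        "cat":    ["hat", "mat", "bat", "flat"],
        "orange": [],
        "light":  ["night", "sight", "bright"],
    }

    def score(word, base_logprob, rhyme_bonus=0.1):
        options_open = len(RHYMES.get(word, []))
        return base_logprob + rhyme_bonus * options_open

    candidates = {"cat": -1.2, "orange": -1.0, "light": -1.4}
    best = max(candidates, key=lambda w: score(w, candidates[w]))
    print(best)  # "cat": weaker base score, but it leaves the most rhymes open

No planning machinery needed; the "lookahead" is baked into which next token looks best.
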
replies(6): >>43586729 #>>43587041 #>>43588233 #>>43591952 #>>43592212 #>>43620308 #
1. throwuxiytayq No.43586729
In the same way that human brains are just predicting the next muscle contraction.
replies(2): >>43586923 #>>43620367 #
2. alfalfasprout No.43586923
Except that's not how it works...
replies(2): >>43587523 #>>43595541 #
3. Workaccount2 No.43587523
To be fair, we don't actually know how the human mind works.

The surest things we know are that it is a physical system, and that it does feel like something to be one of these systems.

4. ToValueFunfetti No.43595541
It may well be: https://en.m.wikipedia.org/wiki/Predictive_coding
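
The core loop in that theory is simple enough to sketch (a deliberately minimal caricature, not anyone's actual model): keep an internal estimate, predict the incoming signal, and update the estimate in proportion to the prediction error.

    def predictive_update(signal, mu=0.0, lr=0.2):
        # mu is the internal estimate; at each step we predict the input,
        # measure the prediction error, and nudge mu to reduce it.
        for x in signal:
            prediction = mu
            error = x - prediction
            mu += lr * error
        return mu

    print(predictive_update([1.0, 1.2, 0.9, 1.1, 1.0]))  # drifts toward ~1.0
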
5. fennecfoxy No.43620367
Potentially, but I'd say we're more reacting.

I will feel an itch and subconsciously scratch it, especially if I'm concentrating on something. That's a subsystem independent of conscious thought.

I suppose it does make sense - that our early evolution consisted of a bunch of small, specific background processes that enable an individual's life to continue; a single-celled organism doesn't have neurons, but it has exactly these processes - chemical reactions that keep it "alive".

Then I imagine that some of these processes became complex enough that they needed to be represented by some form of logic, hence the evolution of neurons.

Subsequently, organisms composed of many thousands or more of such neuronal subsystems developed higher-order subsystems that could control/trigger those subsystems based on more advanced stimuli, or combinations thereof.

And finally us. I imagine that in the next step, evolution found that consciousness/intelligence - an overall direction for the efforts of all of these subsystems (still not all consciously controlled), and therefore for the individual - was much more effective, enabling anticipation, planning, and other behaviours of the highest order.

I wouldn't be surprised if, given enough time and the right conditions, sustained evolution would result in any or most creatures on this planet evolving a conscious brain - I suppose we were just lucky.

replies(1): >>43641486 #
6. throwuxiytayq No.43641486
I feel like the barrier between conscious and unconscious thinking is pretty fuzzy, but that could be down to the individual.

I also think the difference between primitive brains and conscious, reasoning, high-level brains could be more quantitative than qualitative. I certainly believe that all mammals (and more) have some sort of internal conscious experience. And experiments have shown that all sorts of animals are capable of solving simple logical problems.

Also, related article from a couple of days ago: Intelligence Evolved at Least Twice in Vertebrate Animals

replies(1): >>43642723 #
7. fennecfoxy No.43642723
Great points, but my apologies - I meant to say "sentience". Certainly many, many animals are already conscious.

I'm not sure about the quantitative thing, seeing as there are creatures with brains physically much larger than ours, or with more neurons than we have. We currently have the most known synapses, though that also seems to be because the count hasn't been estimated for so many species.