I see people saying that these kinds of things are happening behind closed doors, but I haven't seen any convincing evidence of it, and there is enormous propensity for AI speculation to run rampant.
Anthropic recently released research showing that when Claude composes poetry, it doesn't simply predict token by token, "reacting" when it thinks it might need a rhyme and then scanning its context for something appropriate; instead, it looks several tokens ahead and adjusts early for where the line is likely to end up.
Anthropic also says this adds to evidence seen elsewhere that language models sometimes seem to "plan ahead".
Please check out the section "Planning in poems" here; it's pretty interesting!
https://transformer-circuits.pub/2025/attribution-graphs/bio...
Historically, a computer with these sorts of capabilities has always been considered true AI, going back to Alan Turing. That of course includes all sorts of science fiction, from recent movies like Her to older novels like The Moon Is a Harsh Mistress.
Let's say we have a humanoid robot standing in a room with a window open: at what point would the AI powering the robot decide that it's time to close the window?
That's probably one of the reasons why I don't really see LLMs as much more than algorithms that give us different responses only because we keep changing the seed...
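To illustrate what I mean by "changing the seed": here's a minimal toy sketch, with a made-up vocabulary and made-up probabilities (this is not how any real LLM is implemented, just the general idea of seeded sampling). With the seed fixed, the "model" always produces the same output; vary the seed and you get different responses from the exact same distribution.

```python
import random

# Hypothetical toy "model": a fixed next-token distribution.
# VOCAB and WEIGHTS are made up for illustration only.
VOCAB = ["the", "cat", "sat", "mat", "hat"]
WEIGHTS = [0.4, 0.2, 0.2, 0.1, 0.1]

def generate(seed, length=5):
    # All randomness comes from the seed; nothing else changes between runs.
    rng = random.Random(seed)
    return [rng.choices(VOCAB, weights=WEIGHTS)[0] for _ in range(length)]

print(generate(0) == generate(0))  # True: same seed, identical output
print(generate(0) == generate(1))  # usually False: different seed, different output
```

The point being: the weights never change between runs, so all the variation we perceive as "creativity" in this toy setup comes purely from the seed.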