
3337 points keepamovin | 1 comment | source
iambateman ◴[] No.46207321[source]
This was a fun little lark. Great idea!

It’s interesting to notice how bad AI is at gaming out a 10-year future. It’s very good at predicting the next token but maybe even worse than humans—who are already terrible—at making educated guesses about the state of the world in a decade.

I asked Claude: “Think ten years into the future about the state of software development. What is the most likely scenario?” And the answer it gave me was the correct answer for today and definitely not a decade into the future.
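The experiment above can be reproduced programmatically. This is a minimal sketch using the official `anthropic` Python SDK, assuming the package is installed and an `ANTHROPIC_API_KEY` is set in the environment; the model name is an assumption and may need updating.

```python
import os

# The exact prompt quoted in the comment above.
PROMPT = (
    "Think ten years into the future about the state of software "
    "development. What is the most likely scenario?"
)

def ask_claude(prompt: str) -> str:
    """Send a single-turn message to Claude and return the reply text."""
    import anthropic  # pip install anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # assumed model name
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    # Responses arrive as a list of content blocks; take the first text block.
    return message.content[0].text

if __name__ == "__main__" and os.environ.get("ANTHROPIC_API_KEY"):
    print(ask_claude(PROMPT))
```

Running this repeatedly (or with a system prompt pinning the date to 2035) is one way to check how consistently the model's "decade out" answer collapses back to a description of the present.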

This is why it’s so dangerous to ask an LLM for personal advice of any kind. It isn’t trained to consider second-order effects.

Thanks for the thought experiment!

replies(7): >>46207366 #>>46207493 #>>46207650 #>>46207837 #>>46207954 #>>46208746 #>>46215784 #
vidarh ◴[] No.46207837[source]
I thought the page was a hilarious joke, not a bad prediction. A lot of these are fantastic observational humour about HN and tech. Gary Marcus still insisting AI progress is stalling 10 years from now, for example. Several digs at language rewrites. ITER hardly having nudged forwards. Google killing another service. And so on.
replies(4): >>46208068 #>>46208268 #>>46208762 #>>46210419 #
MontyCarloHall ◴[] No.46208068[source]
That's what makes this so funny: the AI was earnestly attempting to predict the future, but it's so bad at truly out-of-distribution predictions that an AI-generated 2035 HN frontpage is hilariously stuck in the past. "The more things change, the more they stay the same" is a source of great amusement to us, but deliberately capitalizing on this was certainly not the "intent" of the AI.
replies(3): >>46208460 #>>46208486 #>>46208554 #
1. vidarh ◴[] No.46208554[source]
There is just no reason whatsoever to believe this is someone "earnestly attempting to predict the future", and ending up with this.