
3337 points keepamovin | 1 comment
iambateman ◴[] No.46207321[source]
This was a fun little lark. Great idea!

It’s interesting to notice how bad AI is at gaming out a 10-year future. It’s very good at predicting the next token but maybe even worse than humans—who are already terrible—at making educated guesses about the state of the world in a decade.

I asked Claude: “Think ten years into the future about the state of software development. What is the most likely scenario?” And the answer it gave me was the correct answer for today and definitely not a decade into the future.
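
For anyone who wants to try the same prompt programmatically, here's a rough sketch using the Anthropic Python SDK (the model name is just a placeholder; swap in whatever is current):

    import anthropic

    # Reads ANTHROPIC_API_KEY from the environment.
    client = anthropic.Anthropic()

    # Same ten-year question; the model name here is illustrative.
    message = client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Think ten years into the future about the state of "
                       "software development. What is the most likely scenario?",
        }],
    )
    print(message.content[0].text)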

This is why it’s so dangerous to ask an LLM for personal advice of any kind. It isn’t trained to consider second-order effects.

Thanks for the thought experiment!

replies(7): >>46207366 #>>46207493 #>>46207650 #>>46207837 #>>46207954 #>>46208746 #>>46215784 #
vidarh ◴[] No.46207837[source]
I thought the page was a hilarious joke, not a bad prediction. A lot of these are fantastic bits of observational humour about HN and tech. Gary Marcus still insisting AI progress is stalling 10 years from now, for example. Several digs at language rewrites. ITER hardly having nudged forwards. Google killing another service. And so on.
replies(4): >>46208068 #>>46208268 #>>46208762 #>>46210419 #
iambateman ◴[] No.46210419[source]
I totally agree that it was a funny joke.

But I've noticed that a lot of people think of LLMs as being _good_ at predicting the future, and that's what I find concerning.

replies(1): >>46218116 #
1. vidarh ◴[] No.46218116[source]
That's a valid concern, but so is the number of people who think humans are good at predicting the future.

(I'll make my prediction: 10 years from now, most things will be more similar to how they are today than most people expect them to be.)