Aurornis No.44065615
Some useful context from Scott Alexander's blog reveals that the authors don't actually believe the 2027 target:

> Do we really think things will move this fast? Sort of no - between the beginning of the project last summer and the present, Daniel’s median for the intelligence explosion shifted from 2027 to 2028. We keep the scenario centered around 2027 because it’s still his modal prediction (and because it would be annoying to change). Other members of the team (including me) have medians later in the 2020s or early 2030s, and also think automation will progress more slowly. So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.

They went from "this represents roughly our median guess" on the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in follow-up discussions.

Claiming that one reason they didn't change the website is that it would be "annoying" to change the date is a good barometer for how seriously anyone should take this exercise.

magicalist No.44066207
> They went from "this represents roughly our median guess" on the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in follow-up discussions.

His post also just reads like they think they're Hari Seldon (oh Daniel's modal prediction, whew, I was worried we were reading fanfic) while being horoscope-vague enough that almost any possible development will fit into the "predictions" in the post for the next decade. I really hope I don't have to keep reading references to this for the next decade.

1. Aurornis No.44073094
> while being horoscope-vague enough that almost any possible development will fit into the "predictions" in the post for the next decade.

This is a recurring theme in rationalist blogs like Scott Alexander's: they mix a lot of low-risk claims in with heavily hedged high-risk claims. The low-risk claims (AI will continue to advance) inevitably come true, so the blog post looks mostly accurate in hindsight.

When reading the blog post in the current context, the hedging mostly goes unnoticed because everyone clicked on the article for the main claim, not the hedging.

When reviewing blog posts from the past that didn’t age well, that hedging suddenly becomes the main thing their followers want you to see.

So in future discussions there are two outcomes: He’s always either right or “not entirely wrong”. Once you see it, it’s hard to unsee. Combine that with the almost parasocial relationship that some people develop with prominent figures in the rationalist sphere and there are a lot of echo chambers that, ironically, think they’re the only rational ones who see it like it really is.