
129 points NotInOurNames | 1 comment
Aurornis No.44065615
Some useful context from Scott Alexander's blog reveals that the authors don't actually believe the 2027 target:

> Do we really think things will move this fast? Sort of no - between the beginning of the project last summer and the present, Daniel’s median for the intelligence explosion shifted from 2027 to 2028. We keep the scenario centered around 2027 because it’s still his modal prediction (and because it would be annoying to change). Other members of the team (including me) have medians later in the 2020s or early 2030s, and also think automation will progress more slowly. So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.

They went from "this represents roughly our median guess" in the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in followup discussions.
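
For what it's worth, the mode/median/percentile distinction they're leaning on is easy to see on a toy distribution. A minimal sketch in Python with invented numbers (nothing below comes from the actual forecast):

    # Illustrative only: a made-up discrete forecast over "year of the
    # intelligence explosion", showing how the mode can stay at 2027 while
    # the median sits at 2028 and the 80th percentile lands around 2030.
    from bisect import bisect_left
    from itertools import accumulate

    years   = [2027, 2028, 2029, 2030, 2031, 2032, 2033]
    weights = [  28,   24,   16,   12,   10,    6,    4]  # percent, sums to 100

    cdf = list(accumulate(weights))  # [28, 52, 68, 80, 90, 96, 100]

    def percentile(p):
        """Smallest year by which at least p percent of the mass has fallen."""
        return years[bisect_left(cdf, p)]

    mode = years[weights.index(max(weights))]
    print("mode:", mode)                       # 2027 -- single most likely year
    print("median:", percentile(50))           # 2028 -- half the mass at or before it
    print("80th percentile:", percentile(80))  # 2030 -- an "80th percentile fast scenario"

So a mode at 2027 and a median at 2028 are perfectly consistent on the same distribution; the complaint is about which one gets presented as the headline date.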

Claiming that one reason they didn't change the website is that it would be "annoying" to change the date is a good barometer for how seriously anyone should take this exercise.

replies(7): >>44065741 #>>44065924 #>>44066032 #>>44066207 #>>44066383 #>>44067813 #>>44068990 #
magicalist No.44066207
> They went from "this represents roughly our median guess" in the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in followup discussions.

His post also just reads like they think they're Hari Seldon (oh Daniel's modal prediction, whew, I was worried we were reading fanfic) while being horoscope-vague enough that almost any possible development will fit into the "predictions" in the post for the next decade. I really hope I don't have to keep reading references to this for the next decade.

replies(3): >>44066794 #>>44070233 #>>44073094 #
amarcheschi No.44066794
Yud is also something like 50% sure we'll die within a few years, if I'm not mistaken.

I guess they'll have to update their priors if we survive
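
For concreteness, that update is just Bayes' rule; a toy sketch with invented numbers:

    # Purely illustrative: how much surviving to 2030 should move you away from
    # a "90% we all die by 2030" model. Every number here is made up.
    prior_doom = 0.5            # prior credence in the doom model
    p_survive_if_doom = 0.10    # doom model: 90% chance we're dead by 2030
    p_survive_if_safe = 0.95    # alternative model: dying that soon is unlikely

    # P(doom model | we survived) via Bayes' rule
    evidence = prior_doom * p_survive_if_doom + (1 - prior_doom) * p_survive_if_safe
    posterior_doom = prior_doom * p_survive_if_doom / evidence

    print(f"posterior on the doom model after surviving: {posterior_doom:.2f}")  # ~0.10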

replies(1): >>44068009 #
ben_w No.44068009
I think Yudkowsky is more like 90% sure of us all dying in a few (<10) years.

I mean, this is their new book: https://ifanyonebuildsit.com/

replies(2): >>44079881 #>>44101889 #
trod1234 No.44101889
There are a lot of people who believe most of us will die within the next 10 years, and a rational discussion of these subjects is largely grounded in the fact that for the last three generations we have faced numerous existential threats and, instead of solving them, have kicked the can down the road every time.

What inevitably happens is that those deferred threats converge in time, at a point where you simply do not have the resources to address them all, and with today's risk factors that convergence may cause societal failure.

Superintelligent AI alone, yeah, that probably is not a threat because it's so highly (astronomically) unlikely; but socio-economic collapse into starvation, now that's a very real possibility when you create something that destroys an individual's ability to form capital, or breaks other underlying aspects which have underpinned societal organization for hundreds of years.

Now, these things won't happen overnight, but that's not the danger either. The danger is the hysteresis: by the time you can find out and objectively show it's happening in order to react, it's impossible to change the outcome. Your goose is just cooked as a species, and the cycle of doom just circles until no one's left.

Few realize that food today is wholly dependent on Haber-Bosch chemistry. Without it you get roughly a quarter of the yield, and following Catton, in a post-extraction phase the sustainable population may be a fraction of last century's (when the population was around 4bn). People break quite easily under certain circumstances, and so any leaders following MAD doctrine will likely actually use it when they realize everything is failing and see what's ahead.
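
Back-of-envelope, taking the 4x yield figure above at face value (a sketch, not an independent estimate):

    # Rough arithmetic on the claim above; illustrative only.
    current_population_bn = 8.0        # approximate world population today
    yield_without_haber_bosch = 1 / 4  # claim: ~4x less yield without synthetic nitrogen

    # Assuming food output scales linearly with yield, the population supportable
    # by pre-Haber-Bosch agriculture would be on the order of:
    supportable_bn = current_population_bn * yield_without_haber_bosch
    print(f"~{supportable_bn:.0f}bn people supportable without synthetic nitrogen")  # ~2bn

Roughly 2bn under those assumptions, i.e. a fraction of last century's ~4bn.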

These are just things that naturally happen when the long-forgotten mechanics that underpin the way things work fall into ruin. The loss of objective reality is a warning sign of such things on the horizon.