
129 points NotInOurNames | 3 comments
Aurornis No.44065615
Some useful context from Scott Alexander's blog reveals that the authors don't actually believe the 2027 target:

> Do we really think things will move this fast? Sort of no - between the beginning of the project last summer and the present, Daniel’s median for the intelligence explosion shifted from 2027 to 2028. We keep the scenario centered around 2027 because it’s still his modal prediction (and because it would be annoying to change). Other members of the team (including me) have medians later in the 2020s or early 2030s, and also think automation will progress more slowly. So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.

They went from "this represents roughly our median guess" on the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in follow-up discussions.

Claiming that one reason they didn't change the website was that it would be "annoying" to change the date is a good barometer for how seriously anyone should take this exercise.

bpodgursky No.44066032
Do you feel that you are shifting the goalposts a bit when quibbling over whether AI will kill everyone in 2030 or 2035? Ten years ago, the entire conversation would have seemed ridiculous.

Now we're talking about single-digit differences in the timeline to the singularity or extinction. Come on, man.

sigmaisaletter No.44067144
> 10 years ago, the entire conversation would have seemed ridiculous

Bostrom's book[1] is 11 years old. The Basilisk is 15 years old. The Singularity Summit was nearly 20 years ago. And Yudkowsky was there for all of it. If you frequented LessWrong in the 2010s, most of this is very, very old hat.

[1]: https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

[2]: Ford (2015) "Our Fear of Artificial Intelligence", MIT Tech Review: https://www.technologyreview.com/2015/02/11/169210/our-fear-...

throw310822 No.44067642
It is a bit disquieting, though, that these predictions, instead of being pushed farther out, are converging on a date even closer than originally imagined. Some breakthroughs and doomsday scenarios are perpetually placed thirty years in the future; this one actually seems to be getting closer, sooner than imagined.
1. jazzyjackson No.44069411
For the people imagining it, yes.

For many of us, the conversation hasn't gotten any less ridiculous just because computers can talk now.

2. throw310822 No.44070449
> just because computers can talk now

I find it astounding that some people appear completely unable to grasp what this really means and what the implications are.

3. jazzyjackson No.44078065
I see them as funhouse mirrors, the kind that reflect your image back skinny or fat, except they do it with semantics. Big deal. I've never had an interaction with an LLM that wasn't just repeating what I said more verbosely, or with compressed, fuzzy facts sprinkled in.

There is no machine spirit that exists in a box separately from us; it's just a means for people to amplify and multiply their voice into ten thousand sock-puppet bot accounts. That's all I'm able to grasp, anyway. Curious to hear the experience that's led you to believe something different.