
129 points NotInOurNames | 3 comments
Aurornis ◴[] No.44065615[source]
Some useful context from Scott Alexander's blog reveals that the authors don't actually believe the 2027 target:

> Do we really think things will move this fast? Sort of no - between the beginning of the project last summer and the present, Daniel’s median for the intelligence explosion shifted from 2027 to 2028. We keep the scenario centered around 2027 because it’s still his modal prediction (and because it would be annoying to change). Other members of the team (including me) have medians later in the 2020s or early 2030s, and also think automation will progress more slowly. So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.

They went from "this represents roughly our median guess" in the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in followup discussions.

Claiming that one reason they didn't change the website was because it would be "annoying" to change the date is a good barometer for how seriously anyone should be taking this exercise.

replies(7): >>44065741 #>>44065924 #>>44066032 #>>44066207 #>>44066383 #>>44067813 #>>44068990 #
magicalist ◴[] No.44066207[source]
> They went from "this represents roughly our median guess" in the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in followup discussions.

His post also just reads like they think they're Hari Seldon (oh Daniel's modal prediction, whew, I was worried we were reading fanfic) while being horoscope-vague enough that almost any possible development will fit into the "predictions" in the post for the next decade. I really hope I don't have to keep reading references to this for the next decade.

replies(3): >>44066794 #>>44070233 #>>44073094 #
amarcheschi ◴[] No.44066794[source]
Yud is also something like 50% sure we'll die in a few years - if I'm not wrong

I guess they'll have to update their priors if we survive

replies(1): >>44068009 #
ben_w ◴[] No.44068009[source]
I think Yudkowsky is more like 90% sure of us all dying in a few (<10) years.

I mean, this is their new book: https://ifanyonebuildsit.com/

replies(2): >>44079881 #>>44101889 #
1. godelski ◴[] No.44079881[source]

  > We do not mean that as hyperbole. We are not exaggerating for effect. We think that is the most direct extrapolation from the knowledge, evidence, and institutional conduct around artificial intelligence today. In this book, we lay out our case.
Take us seriously, buy our book!

We're real researchers, so we make our definitely scientific case available to anyone who will give us $15-$30! It's the most important book ever, says some actor. Read it, so we all don't die!

For Christ's sake, how does anyone take this Harry Potter fanfiction writer seriously?

replies(1): >>44086895 #
2. ben_w ◴[] No.44086895[source]
Because of what else he writes besides the fanfic. (A better question is why anyone takes JKR herself seriously.)

But if you insist on only listening to people with academic accolades or industrial output, there's this other guy, who got the Rumelhart Prize (2001), Turing Award (2018), Dickson Prize (2021), Princess of Asturias Award (2022), Nobel Prize in Physics (2024), VinFuture Prize (2024), Queen Elizabeth Prize for Engineering (2025), Order of Canada, Fellow of the Royal Society, and Fellow of the Royal Society of Canada.

That's one person with all that, and he says there's a "10 to 20 per cent chance" that AI would be the cause of human extinction within the following three decades, and that "it is hard to see how you can prevent the bad actors from using [AI] for bad things": https://en.wikipedia.org/wiki/Geoffrey_Hinton

Myself, I'm closer to Hinton's view than Yudkowsky's: path dependency, i.e. I expect that before we get an existential threat from AI, we get a catastrophic economic threat that precludes the existential one.

replies(1): >>44090740 #
3. godelski ◴[] No.44090740[source]
I do say the same thing about JKR, btw, and for the same reasons: the content she writes. I think you focused on the fanfic part and not the part I'm actually criticizing: they say their stuff is the most important thing for keeping humanity alive, and they're charging money for it. Meanwhile, you may notice that in academia we publish papers to make them freely available, like on arXiv. If it is that important for people to know, you make it available.

The second person, Hinton, is not as good an authority as you'd expect, though I do understand why people take him seriously. Fwiw, his Nobel was wildly controversial. Be careful: prizes often have political components. I have degrees in both CS and physics (and am an ML researcher), and both communities thought it was really weird. I'll let you guess which community found it insulting.

I want to remind you, in 2016 Hinton famously said[0]

  | Let me start by saying a few things that seem obvious. I think if you work as a radiologist, you're like the coyote that's already over the edge of the cliff, but hasn't yet looked down so doesn't realize there's no ground beneath him. People should stop training radiologists now. It's just completely obvious that within 5 years that deep learning is going to be better than radiologists because it's going to get a lot more experience. It might take 10 years, but we've got plenty of radiologists already.
We're nearly 10 years in now. Hinton has shown he's about as good at making predictions as Musk: what he thought was laughably obvious didn't happen. He's made a number of such predictions. I'll link another short explanation from him[1], because it is so similar to something Sutskever said[2]. I can tell you with high certainty that every physicist laughs at such a claim. We've long known that being able to predict data does not equate to understanding that data[3].

I care very much about alignment myself[4,5,6,7]. The reason I push back on Yud and others making claims like they do is that they are actually helping create the future they say we should be afraid of. I'm not saying they're evil or directly building evil superintelligences. Rather, they're pulling attention and funds away from the problems that need to be solved. They are guessing about things we don't need to guess about. They are confidently asserting claims we know to be false (e.g., that being able to make accurate predictions requires accurate understanding[8]). If we can't openly and honestly speak to the limitations of our machines (mainly because we're blinded by excitement), we create the exact dangers we worry about.

I'm not calling for a pause on research; I'm calling for more research and for more people to pay attention to the subtle nature of everything. In a way I am saying "slow down", but only in that I'm saying don't ignore the small stuff. We move so fast that we keep pushing off the small stuff, but the AI risk comes through the accumulation of debt. You need to be very careful not to let that debt get out of control. You don't create safe systems by following the boom and bust hype cycles that CS is so famous for. You don't just wildly race to build a nuclear reactor and try to sell it to people while it is still a prototype.

[0] https://fastdatascience.com/ai-in-healthcare/ai-replace-radi...

[1] https://www.reddit.com/r/singularity/comments/1dhlvzh/geoffr...

[2] https://youtu.be/Yf1o0TQzry8?t=449

[3] https://www.youtube.com/watch?v=hV41QEKiMlM

[4] https://news.ycombinator.com/item?id=44068943

[5] https://news.ycombinator.com/item?id=44070101

[6] https://news.ycombinator.com/item?id=44017334

[7] https://news.ycombinator.com/item?id=43909711

[8] This is the link to [1,2]: you can reasonably create a data-generating process that is difficult or impossible to distinguish from the actual data-generating process, yet has a completely different causal structure, confounding variables, and all that fun stuff. Any physicist will tell you that fitting the data is the easy part (and it isn't easy). Interpreting and explaining the data is the hard part. That hard part is building the causal relationships. It is the "understanding" Hinton and Sutskever claim.
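
A minimal sketch of that point (my own toy example, assuming a linear-Gaussian setup; nothing from Hinton's or Sutskever's talks): two models with opposite causal directions can induce exactly the same joint distribution over (X, Y), so a model that fits the observed data perfectly still doesn't tell you which causal structure is the real one.

  # Two linear-Gaussian models, opposite causal directions, same joint distribution.
  import numpy as np
  rng = np.random.default_rng(0)
  n, a, sigma = 1_000_000, 0.8, 0.5
  # Model A: X causes Y
  x_a = rng.normal(0.0, 1.0, n)
  y_a = a * x_a + rng.normal(0.0, sigma, n)
  # Model B: Y causes X, with parameters chosen to match Model A's covariance
  var_y = a**2 + sigma**2
  b = a / var_y                      # regression coefficient of X on Y
  tau = np.sqrt(1.0 - a**2 / var_y)  # residual std of X given Y
  y_b = rng.normal(0.0, np.sqrt(var_y), n)
  x_b = b * y_b + rng.normal(0.0, tau, n)
  # Both print (approximately) the same 2x2 covariance matrix,
  # so the data alone cannot pick out the causal direction.
  print(np.cov(x_a, y_a))
  print(np.cov(x_b, y_b))

Both models "predict the data" equally well; choosing between them is exactly the interpretation and causal-structure work that prediction alone doesn't give you.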