Most active commenters
  • HDThoreaun(6)
  • tptacek(6)
  • amarcheschi(5)
  • throw310822(4)
  • pvg(3)


129 points NotInOurNames | 53 comments
1. Aurornis ◴[] No.44065615[source]
Some useful context from Scott Alexander's blog reveals that the authors don't actually believe the 2027 target:

> Do we really think things will move this fast? Sort of no - between the beginning of the project last summer and the present, Daniel’s median for the intelligence explosion shifted from 2027 to 2028. We keep the scenario centered around 2027 because it’s still his modal prediction (and because it would be annoying to change). Other members of the team (including me) have medians later in the 2020s or early 2030s, and also think automation will progress more slowly. So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.

They went from "this represents roughly our median guess" in the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in followup discussions.

Claiming that one reason they didn't change the website is that it would be "annoying" to change the date is a good barometer for how seriously anyone should take this exercise.

replies(7): >>44065741 #>>44065924 #>>44066032 #>>44066207 #>>44066383 #>>44067813 #>>44068990 #
2. pinkmuffinere ◴[] No.44065741[source]
Ya, multiple failed predictions are an indicator of systemically bad predictors imo. That said, Scott Alexander usually does serious analysis instead of handwavey hype, so I tend to believe him more than many others in the space.

My somewhat native take is that we're still close to peak hype, AI will under-deliver on the inflated expectations, and we'll head into another "winter". This pattern has repeated multiple times, so I think it's fairly likely based on that alone. Real progress is made during each cycle; I think humans are just bad at containing excitement.

replies(1): >>44067044 #
3. amarcheschi ◴[] No.44065924[source]
The other writings from Scott Alexander on scientific racism are another good point, imho
replies(1): >>44066295 #
4. bpodgursky ◴[] No.44066032[source]
Do you feel that you are shifting goalposts a bit when quibbling over whether AI will kill everyone in 2030 or 2035? As of 10 years ago, the entire conversation would have seemed ridiculous.

Now we're talking about single digit timeline differences to the singularity or extinction. Come on man.

replies(4): >>44066297 #>>44066346 #>>44067144 #>>44071660 #
5. magicalist ◴[] No.44066207[source]
> They went from "this represents roughly our median guess" in the website to "maybe think of it as an 80th percentile version of the fast scenario that we don't feel safe ruling out" in followup discussions.

His post also just reads like they think they're Hari Seldon (oh Daniel's modal prediction, whew, I was worried we were reading fanfic) while being horoscope-vague enough that almost any possible development will fit into the "predictions" in the post for the next decade. I really hope I don't have to keep reading references to this for the next decade.

replies(3): >>44066794 #>>44070233 #>>44073094 #
6. A_D_E_P_T ◴[] No.44066295[source]
What specifically would you highlight as being particularly egregious or wrong?

As a general rule, "it's icky" doesn't make something false.

replies(1): >>44066378 #
7. SketchySeaBeast ◴[] No.44066297[source]
Well, the first goal was 1997, but Skynet sure screwed that up.
replies(1): >>44117048 #
8. ewoodrich ◴[] No.44066346[source]
I'm in my 30s and remember my friend in middle school showing me a website he found with an ominous countdown to Kurzweil's "singularity" in 2045.
replies(1): >>44066398 #
9. amarcheschi ◴[] No.44066378{3}[source]
And it doesn't make it true either

Human biodiversity theories are a bunch of dogwhistles for racism

https://en.m.wikipedia.org/wiki/Human_Biodiversity_Institute

And his blog's survey reports a lot of users actually believing in those theories https://reflectivealtruism.com/2024/12/27/human-biodiversity...

(I wasn't referring to AI 2027 specifically)

replies(1): >>44066836 #
10. throw310822 ◴[] No.44066383[source]
Yes and no: does it actually matter whether it's 2027, '28, or 2032? The scenario is such that a difference of a couple of years is basically irrelevant.
replies(1): >>44067712 #
11. throw310822 ◴[] No.44066398{3}[source]
> ominous countdown to Kurzweil's "singularity" in 2045

And then it didn't happen?

replies(1): >>44067207 #
12. amarcheschi ◴[] No.44066794[source]
Yud is also something like 50% sure we'll die in a few years - if I'm not wrong

I guess they'll have to update their priors if we survive

replies(1): >>44068009 #
13. HDThoreaun ◴[] No.44066836{4}[source]
Try steel manning in order to effectively persuade. This comment does not address the argument being made; it just calls a field of study icky. The unfortunate reality is that shouting down questions like this only empowers the racist HBI people, who are effectively leeches.
replies(3): >>44067116 #>>44067293 #>>44068280 #
14. sigmaisaletter ◴[] No.44067044[source]
I think you mean "somewhat naive" instead of "somewhat native". :)

But yes, this: in my mind the peak[1] bubble times ended with the DeepSeek shock earlier this year, and we are slowly on the downward trajectory now.

It won't be slow for long, once people start realizing Sama was telling them a fairy tale, and AGI/ASI/singularity isn't "right around the corner", but (if achievable at all) at least two more technology triggers away.

We got reasonably useful tools out of it, and thanks to Zuck, mostly for free (if you are an "investor", terms and conditions apply).

[1] https://en.wikipedia.org/wiki/Gartner_hype_cycle

15. amarcheschi ◴[] No.44067116{5}[source]
Scott effectively defended the Lynn study on IQ here https://www.astralcodexten.com/p/how-to-stop-worrying-and-le...

Citing another blog post that defends it, while conveniently ignoring every other point being made by researchers https://en.m.wikipedia.org/wiki/IQ_and_the_Wealth_of_Nations

replies(1): >>44067181 #
16. sigmaisaletter ◴[] No.44067144[source]
> 10 years ago, the entire conversation would have seemed ridiculous

Bostrom's book[1] is 11 years old. The Basilisk is 15 years old. The Singularity summit was nearly 20 years ago. And Yudkowsky was there for all of it. If you frequented LessWrong in the 2010s, most of this is very very old hat.

[1]: https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...

[2]: Ford (2015) "Our Fear of Artificial Intelligence", MIT Tech Review: https://www.technologyreview.com/2015/02/11/169210/our-fear-...

replies(1): >>44067642 #
17. goatlover ◴[] No.44067207{4}[source]
Not between 2027 and 2032 anyway.
18. magicalist ◴[] No.44067293{5}[source]
> Try steel manning in order to effectively persuade. This comment does not address the argument being made it just calls a field of study icky.

Disagree (the article linked in the GP is a great read with extensive and specific citations) and reminder that you can just make the comment you'd like to see instead of trying to meta sea lion it into existence. Steel man away.

replies(1): >>44080271 #
19. throw310822 ◴[] No.44067642{3}[source]
It is a bit disquieting, though, that these predictions, instead of being pushed farther away, are converging on a time even closer than originally imagined. Some breakthroughs and doomsday scenarios are perpetually placed thirty years into the future; this one seems to be getting closer faster than imagined.
replies(1): >>44069411 #
20. Jensson ◴[] No.44067712[source]
> The scenario is such that a difference of a couple of years is basically irrelevant.

2 years left and 7 years left is a massive difference; it is so much easier to deal with things 7 years in the future, especially since it's easier to see what's coming as we get closer.

replies(1): >>44067793 #
21. lm28469 ◴[] No.44067793{3}[source]
Yeah, for example we had decades to tackle climate change and we easily overcame the problem.
replies(1): >>44068912 #
22. merksittich ◴[] No.44067813[source]
Also, the relevant Manifold prediction market has it at low odds: https://manifold.markets/IsaacKing/ai-2027-reports-predictio...
23. ben_w ◴[] No.44068009{3}[source]
I think Yudkowsky is more like 90% sure of us all dying in a few (<10) years.

I mean, this is their new book: https://ifanyonebuildsit.com/

replies(2): >>44079881 #>>44101889 #
24. tptacek ◴[] No.44068280{5}[source]
Hold on a sec. "HBD" is not a field of study; it's a meme ecosystem. There are (at least) two fields of actual scientific study that intersect with HBD: psychometrics, the subspecialty of psychology that deals in IQ measurement and twin studies, and molecular genetics, the quantitative subspecialty of genetics that studies correlations in the genome across large populations with marked traits (which can include things like educational attainment).

Neither of these fields does "IQ maps", which are an article of faith in HBD circles. As soon as someone breaks out the Lynn IQ maps, they lose the "we're just doing science" card. Alexander did a whole recent article about them. We are past the point where anybody gets to high-horse criticism of his HBD stuff as un-rigorous.

replies(1): >>44070350 #
25. tptacek ◴[] No.44068283{7}[source]
They do not in fact have a point.
26. Jensson ◴[] No.44068912{4}[source]
Look at military tech in 1941 and 1946: it's a massive difference. A 5 year time difference means a ton when people take it seriously.

The only reason climate change isn't solved is that people don't care enough.

replies(2): >>44069055 #>>44069619 #
27. pvg ◴[] No.44068922{7}[source]
What is the point that they have?
replies(1): >>44069805 #
28. AnimalMuppet ◴[] No.44068990[source]
Well, if it happens in 2028 instead of 2027, I don't think anyone will be justified in yelling at them for being wrong.

But I can't help notice that in one year, the expected arrival date slipped by... one year. That does not bode well...

29. lostmsu ◴[] No.44069055{5}[source]
> people don't care enough

I think your parent meant exactly this.

30. jazzyjackson ◴[] No.44069411{4}[source]
For the people imagining it, yes.

For many of us, the conversation hasn't gotten any less ridiculous just because computers can talk now.

replies(1): >>44070449 #
31. adrianN ◴[] No.44069619{5}[source]
The reason climate change isn’t solved is that it’s a problem that can’t be solved by technology alone. We already have all the technology we need to decarbonize. We lack the political will to make the necessary investment.
32. HDThoreaun ◴[] No.44069805{8}[source]
https://www.astralcodexten.com/p/how-to-stop-worrying-and-le...
replies(1): >>44069825 #
33. pvg ◴[] No.44069825{9}[source]
I've seen it. What is the point you think it makes/has?
replies(1): >>44080249 #
34. Ey7NFZ3P0nzAe ◴[] No.44070233[source]
> Hari Seldon is a fictional character in the Foundation series of novels by Isaac Asimov. In his capacity as mathematics professor at Streeling University on the planet Trantor, Seldon develops psychohistory, an algorithmic science that allows him to predict the future in probabilistic terms

- https://en.m.wikipedia.org/wiki/Hari_Seldon

35. amarcheschi ◴[] No.44070350{6}[source]
I'm not sure I understood your point.

In my root comment, when I said "good point" I didn't mean it as a good thing; I meant it as one more good reason for maybe not taking seriously whatever he says. I realize it wasn't clear, because reading my root comment again I'm not sure myself how it was meant to be understood by readers.

36. throw310822 ◴[] No.44070449{5}[source]
> just because computers can talk now

I find it astounding that some people appear completely unable to grasp what this really means and what the implications are.

replies(1): >>44078065 #
37. stuaxo ◴[] No.44071660[source]
I mean... neither of those is going to happen so it's pretty silly.
38. Aurornis ◴[] No.44073094[source]
> while being horoscope-vague enough that almost any possible development will fit into the "predictions" in the post for the next decade.

This is a recurring theme in rationalist blogs like Scott Alexander's: they mix a lot of low-risk claims in with heavily hedged high-risk claims. The low-risk claims (AI will continue to advance) inevitably come true, and therefore the blog post looks mostly accurate in hindsight.

When reading the blog post in the current context the hedging goes mostly unnoticed because everyone clicked on the article for the main claim, not the hedging.

When reviewing blog posts from the past that didn’t age well, that hedging suddenly becomes the main thing their followers want you to see.

So in future discussions there are two outcomes: He’s always either right or “not entirely wrong”. Once you see it, it’s hard to unsee. Combine that with the almost parasocial relationship that some people develop with prominent figures in the rationalist sphere and there are a lot of echo chambers that, ironically, think they’re the only rational ones who see it like it really is.

39. jazzyjackson ◴[] No.44078065{6}[source]
I see them as funhouse mirrors, the kind that reflect your image to make you skinny or fat, except they do it with semantics, big deal. I've never had an interaction with an LLM that wasn't just repeating what I said more verbosely, or with compressed fuzzy facts sprinkled in.

There is no machine spirit that exists in a box separately from us, it's just a means for people to amplify and multiply their voice into ten thousand sock puppet bot accounts, that's all I'm able to grasp anyway. Curious to hear your experience that's led you to believe something different.

40. godelski ◴[] No.44079881{4}[source]

  > We do not mean that as hyperbole. We are not exaggerating for effect. We think that is the most direct extrapolation from the knowledge, evidence, and institutional conduct around artificial intelligence today. In this book, we lay out our case.
Take us seriously, buy our book!

We're real researchers, so we make our definitely scientific case available to anyone who will give us $15-$30! It's the most important book ever, says some actor. Read it, so we all don't die!

For Christ's sake, how does anyone take this Harry Potter fanfiction writer seriously?

replies(1): >>44086895 #
41. HDThoreaun ◴[] No.44080249{10}[source]
Intelligence distributions are not the same among the different racial groups
replies(1): >>44080301 #
42. HDThoreaun ◴[] No.44080271{6}[source]
The article linked is literally nothing but a collection of Alexander's comments and comments on his blog. It in no way provides any reason to believe he is wrong; it just laughs at him, strawmanning his takes.
replies(1): >>44082548 #
43. pvg ◴[] No.44080301{11}[source]
Oh, so your idea is we shouldn't criticize scientific racists because they're right? No, we criticize them because they're wrong; the whole thing is made-up pseudoscience as window dressing for bigotry.
replies(1): >>44087833 #
44. tptacek ◴[] No.44082548{7}[source]
He just wrote a piece about the Lynn IQ maps that is practically a self-parody. Nobody needs to strawman him on this topic. Here is a direct quote:

> Isn't It Super-Racist To Say That People In Sub-Saharan African Countries Have IQs Equivalent To Intellectually Disabled People? No. In fact, it would be super-racist not to say this!

The horse is completely out of the barn on this controversy about Alexander's views.

replies(1): >>44087883 #
45. ben_w ◴[] No.44086895{5}[source]
Because of what else he writes besides the fanfic. (A better question is why anyone takes JKR herself seriously.)

But if you insist on only listening to people with academic accolades or industrial output, there's this other guy who got the Rumelhart Prize (2001), Turing Award (2018), Dickson Prize (2021), Princess of Asturias Award (2022), Nobel Prize in Physics (2024), VinFuture Prize (2024), Queen Elizabeth Prize for Engineering (2025), Order of Canada, Fellow of the Royal Society, and Fellow of the Royal Society of Canada.

That's one person with all that, and he says there's a "10 to 20 per cent chance" that AI would be the cause of human extinction within the following three decades, and that "it is hard to see how you can prevent the bad actors from using [AI] for bad things": https://en.wikipedia.org/wiki/Geoffrey_Hinton

Myself, I'm closer to Hinton's view than Yudkowsky's: path dependency, i.e. I expect that before we get an existential threat from AI, we get a catastrophic economic threat that precludes the existential one.

replies(1): >>44090740 #
46. HDThoreaun ◴[] No.44087883{8}[source]
He is 100% right that it is more racist to ignore the IQ differences between countries, and therefore the factors that create them, by pretending all countries have the same average IQ than it is to acknowledge them. The only way to help people who have been hurt by environmental factors that hurt development is to study those factors and whom they affect.
replies(1): >>44088696 #
47. tptacek ◴[] No.44088696{9}[source]
No, there is no such thing as IQ maps. They're fraudulent. He's defending fraudulent data.

Now you understand the reaction you (and he) are getting on this.

replies(1): >>44089144 #
48. HDThoreaun ◴[] No.44089144{10}[source]
As I said, pretending different countries don't have different average IQs is a massive mistake and only leads to people listening to the racists more. You can't trick people on this by ignoring it.
replies(1): >>44089423 #
49. tptacek ◴[] No.44089423{11}[source]
You have no evidence to support that claim. The evidence Alexander mustered for it has been utterly discredited. The claim is itself racist! But that's not the biggest problem with it; the biggest problem is that it's fabricated.

There's a reason I'm confident about this, and it's not wokeness or an overconfidence in molecular genetics or whatever. It's that there has never been a global effort to collect per-country representative average IQs. The notion that there is, somewhere, a map relating countries to IQs is a racist fever dream.

50. tptacek ◴[] No.44089439{13}[source]
It is extremely deniable that intelligence distributions are "not the same across races" (whatever it is we might mean by "races"), and by citing Alexander as your support for that claim, your argument has become circular.
51. godelski ◴[] No.44090740{6}[source]
I do say the same thing about JKR, btw. And for the same reasons, because of the content she writes. I think you focused on the fanfic part and not the part where I'm criticizing them for saying their stuff is the most important thing for keeping humanity alive while charging money for it. Meanwhile, you may notice that in academia we publish papers to make them freely available, like on arXiv. If it is that important that people need to know, you make it available.

The second person, Hinton, is not as good an authority as you'd expect, though I do understand why people take him seriously. Fwiw, his Nobel was wildly controversial. Be careful, prizes often have political components. I have degrees in both CS and physics (and am an ML researcher) and both communities thought it was really weird. I'll let you guess which community found it insulting.

I want to remind you, in 2016 Hinton famously said[0]

  | Let me start by saying a few things that seem obvious. I think if you work as a radiologist, you're like the coyote that's already over the edge of the cliff, but hasn't yet looked down so doesn't realize there's no ground beneath him. People should stop training radiologists now. It's just completely obvious that within 5 years that deep learning is going to be better than radiologists because it's going to get a lot more experience. It might take 10 years, but we've got plenty of radiologists already.
We're nearly 10 years in now. Hinton has shown he's about as good at making predictions as Musk. What Hinton thought was laughably obvious actually didn't happen. He's made a number of such predictions. I'll link another short explanation from him[1] because it is so similar to something Sutskever said[2]. I can tell you with high certainty that every physicist laughs at such a claim. We've long experienced that being able to predict data does not equate to understanding that data[3].

I care very much about alignment myself[4,5,6,7]. The reason I push back on Yud and others making claims like they do is because they are actually helping create the future they say we should be afraid of. I'm not saying they're evil or directly making evil superintelligences. Rather, they're pulling attention and funds away from the problems that need to be solved. They are guessing about things we don't need to guess about. They are confidently asserting claims we know to be false (being able to make accurate predictions requires accurate understanding[8]). Without being able to openly and honestly speak to the limitations of our machines (mainly blinded by excitement), we create the exact dangers we worry about.

I'm not calling for a pause on research; I'm calling for more research and more people paying attention to the subtle nature of everything. In a way I am saying "slow down", but only in that I'm saying don't ignore the small stuff. We move so fast that we keep pushing off the small stuff, but the AI risk comes through the accumulation of debt. You need to be very careful to not let that debt get out of control. You don't create safe systems by following the boom and bust hype cycles that CS is so famous for. You don't just wildly race to build a nuclear reactor and try to sell it to people while it is still a prototype.

[0] https://fastdatascience.com/ai-in-healthcare/ai-replace-radi...

[1] https://www.reddit.com/r/singularity/comments/1dhlvzh/geoffr...

[2] https://youtu.be/Yf1o0TQzry8?t=449

[3] https://www.youtube.com/watch?v=hV41QEKiMlM

[4] https://news.ycombinator.com/item?id=44068943

[5] https://news.ycombinator.com/item?id=44070101

[6] https://news.ycombinator.com/item?id=44017334

[7] https://news.ycombinator.com/item?id=43909711

[8] This is the link to [1,2]. I mean you can reasonably create a data generating process that is difficult or impossible to distinguish from the actual data generating process but you have a completely different causal structure, confounding variables, and all that fun stuff. Any physicist will tell you that fitting the data is the easy part (it isn't easy). Interpreting and explaining the data is the hard part. That hard part is the building of the causal relationships. It is the "understanding" Hinton and Sutskever claim.

52. trod1234 ◴[] No.44101889{4}[source]
There are a lot of people who believe most of us will die within the next 10 years, and a rational discussion of these subjects is largely grounded in the fact that for the last three generations we have faced numerous existential threats that, instead of being solved, have all had the can kicked down the road.

What inevitably happens is that you eventually get a convergence in time where you simply do not have the resources, and with the risk factors today, that convergence may cause societal failure.

Super-intelligent AI alone, yeah, that probably is not a threat because it's so highly (astronomically) unlikely. But socio-economic collapse into starvation? Now that's a very real possibility when you create something that destroys an individual's ability to form capital, or breaks other underlying aspects that have underpinned all of societal organization going back hundreds of years.

Now, these things won't happen overnight, but that's not the danger either. The danger is the hysteresis: in other words, by the time you find out and can objectively show it's happening in order to react, it's impossible to change the outcome. Your goose is just cooked as a species, and the cycle of doom just circles until no one's left.

Few realize that food today is wholly dependent on Haber-Bosch chemistry. You get 4x less yield without it, and, following Catton, in a post-extraction phase sustainable population numbers may be a fraction of last century's (when the population was 4bn). People break quite easily under certain circumstances, and so any leaders following MAD doctrine will likely actually use it when they realize everything is failing and what's ahead.

These are just things that naturally happen when long-forgotten mechanics that underpin the way things work fall to ruin. The loss of objective reality is a warning sign of such things on the horizon.

53. falcor84 ◴[] No.44117048{3}[source]
Well, they can still go back in time and change things