
AI 2027 (ai-2027.com)
949 points by Tenoke | 2 comments
beklein:
An older, related article from one of the authors, titled "What 2026 looks like", which has held up very well over time. Written in mid-2021 (pre-ChatGPT):

https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-...

//edit: removed the referral tags from the URL

motoxpro:
It's incredible how much it broadly aligns with what has actually happened, especially given that it was written before ChatGPT.
reducesuffering:
Will people finally wake up that the AGI X-Risk people have been right and we’re rapidly approaching a really fucking big deal?

This forum has been so behind for too long.

Sama has been saying this for a decade now: "Development of Superhuman machine intelligence is probably the greatest threat to the continued existence of humanity" (2015): https://blog.samaltman.com/machine-intelligence-part-1

Hinton, Ilya, Dario Amodei, the inventor of RLHF, the DeepMind founders. They all get it, which is why they're the smart cookies in those positions.

The first stage is denial. I get it; it's not easy to swallow the gravity of what's coming.

hn_throwaway_99:
> Will people finally wake up that the AGI X-Risk people have been right and we’re rapidly approaching a really fucking big deal?

OK, say I totally believe this. What, pray tell, are we supposed to do about it?

Don't you at least see the irony of quoting Sama's dire warnings about the development of AI without mentioning that he is at the absolute forefront of the push to build the very technology that could destroy all of humanity? It's like he's saying "This potion can destroy all of humanity if we make it" while working faster and faster to figure out how to make it.

I mean, I get it, "if we don't build it, someone else will", but all of the discussion around "alignment" seems just blatantly laughable to me. If on one hand your goal is to build "super intelligence", i.e. way smarter than any human or group of humans, how do you expect to control that super intelligence when you're just acting at the middling level of human intelligence?

While I'm skeptical of the timeline, if we do ever end up building super intelligence, the idea that we can control it is a pipe dream. We may not be toast (I mean, we're smarter than dogs, and we keep them around), but we won't be in control.

So if you truly believe super intelligent AI is coming, you may as well enjoy the view now, because there ain't nothing you or anyone else will be able to do to "save humanity" if or when it arrives.

reducesuffering:
> If on one hand your goal is to build "super intelligence", i.e. way smarter than any human or group of humans, how do you expect to control that super intelligence when you're just acting at the middling level of human intelligence?

That's exactly what the true AGI X-Riskers think! Sama acknowledges the intense risk but thinks the path forward is inevitable anyway, so he is hoping that building the intelligence will give them the intelligence to solve alignment. The other camp, a la Yudkowsky, believes it's futile to just hope alignment gets solved before AGI capabilities become more intelligent and powerful and start disregarding our wishes; at that point we've ceded control of our future to an uncaring system that treats us as a means to its original goals, the way an ant is simply in the way of a Google datacenter. I don't see how anyone who thinks "maybe 'stock number go up' as your only goal is not the best way to make people happy" can miss this.

hollerith:
Slightly more detail: until about 2001 Yudkowsky was what we would now call an AI accelerationist. Then it dawned on him that creating an AI that is much "better at reality" than people are would probably kill all the people unless the AI had been carefully designed to stay aligned with human values (i.e., to want what we want), and that ensuring it stays aligned is a very thorny technical problem. Still, he remained hopeful that humankind would solve the thorny problem, and he worked full time on the alignment problem himself. In 2015 he came to believe that the alignment problem is so hard that it is very, very unlikely to be solved by the time it is needed (namely, when the first AI is deployed that is much "better at reality" than people are). He went public with his pessimism in April 2022, and his nonprofit, the Machine Intelligence Research Institute, fired most of its technical alignment researchers and changed its focus to lobbying governments to ban the dangerous kind of AI research.