
AI 2027

(ai-2027.com)
949 points by Tenoke
beklein No.43572674
An older, related article from one of the authors, titled "What 2026 looks like", which is holding up very well over time. Written in mid-2021 (pre-ChatGPT).

https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-...

//edit: removed the referral tags from the URL

motoxpro No.43572964
It's incredible how closely it broadly aligns with what has actually happened, especially given that it was written before ChatGPT.
reducesuffering No.43573807
Will people finally wake up that the AGI X-Risk people have been right and we’re rapidly approaching a really fucking big deal?

This forum has been so behind for too long.

Sama has been saying this for a decade now: "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity" (2015) https://blog.samaltman.com/machine-intelligence-part-1

Hinton, Ilya, Dario Amodei, the inventor of RLHF, the DeepMind founders. They all get it, which is why they're the smart cookies in those positions.

The first stage is denial. I get it; it's not easy to swallow the gravity of what's coming.

archagon No.43575158
And why are Altman's words worth anything? Is he some sort of great thinker? Or a leading AI researcher, perhaps?

No. Altman is in his current position because he's highly effective at consolidating power and has friends in high places. That's it. Everything he says can be seen as marketing for the next power grab.

skeeter2020 No.43576449
well, he did also have an early (failed) YC startup - does that add cred?