https://www.alignmentforum.org/posts/6Xgy6CAf2jqHhynHL/what-...
//edit: remove the referral tags from URL
This forum has been behind on this for too long.
Sama has been saying this for a decade now: “Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity” (2015) https://blog.samaltman.com/machine-intelligence-part-1
Hinton, Ilya Sutskever, Dario Amodei, the inventor of RLHF, the DeepMind founders: they all get it, which is why they're the smart cookies in those positions.
The first stage is denial. I get it, it's not easy to swallow the gravity of what's coming.
OK, say I totally believe this. What, pray tell, are we supposed to do about it?
Don't you at least see the irony of quoting Sama's dire warnings about the development of AI without mentioning that he is at the absolute forefront of the push to build this technology that can destroy all of humanity? It's like he's saying "This potion can destroy all of humanity if we make it" as he works faster and faster to figure out how to make it.
I mean, I get it, "if we don't build it, someone else will", but all of the discussion around "alignment" seems just blatantly laughable to me. If on one hand your goal is to build "super intelligence", i.e. way smarter than any human or group of humans, how do you expect to control that super intelligence when you're just acting at the middling level of human intelligence?
While I'm skeptical on the timeline, if we do ever end up building super intelligence, the idea that we can control it is a pipe dream. We may not be toast (I mean, we're smarter than dogs, and we keep them around), but we won't be in control.
So if you truly believe super intelligent AI is coming, you may as well enjoy the view now, because there ain't nothing you or anyone else will be able to do to "save humanity" if or when it arrives.
That's exactly what the true AGI x-riskers think! Sama acknowledges the intense risk but thinks the path forward is inevitable anyway, so he's hoping that building the intelligence will give us the intelligence to solve alignment. The other camp, à la Yudkowsky, believes it's futile to just hope alignment gets solved before AGI capabilities become more intelligent, more powerful, and indifferent to any of our wishes. At that point we've ceded any control of our future to an uncaring system that treats us as a means to its original goals, the way an ant is simply in the way of a Google datacenter. I don't see how anyone who already thinks "maybe making the stock number go up as your only goal is not the best way to make people happy" can miss this.