
AI 2027

(ai-2027.com)
949 points Tenoke | 2 comments
ivraatiems No.43577204
Though I think it is probably mostly science fiction, this is one of the more chillingly thorough descriptions of potential AGI takeoff scenarios that I've seen. I think part of the problem is that the world you get in the "Slowdown" (somewhat more aligned) ending is still pretty rough for humans: what's the point of our existence if we have no way to meaningfully contribute to our own world?

I hope we're wrong about a lot of this, and AGI turns out either to be impossible or to be much less useful than we think it will be. I hope we end up in a world where humans' value increases rather than decreases. At a minimum, if AGI is possible, I hope we can imbue it with ethics that allow it to make decisions that value other sentient life.

Do I think this will actually happen in two years, or even in five or ten or fifty? Not really. I think it is wildly optimistic to assume we can get there from here, where "here" is mostly LLM technology. But five years ago I thought the idea of LLMs speaking conversational English as well as they now do was essentially fiction, so really, anything is possible, or at least worth considering.

"May you live in interesting times" is a curse for a reason.

replies(8): >>43577330 #>>43577995 #>>43578252 #>>43578804 #>>43578889 #>>43580010 #>>43580150 #>>43583543 #
1. joshdavham No.43577995
> I hope we're wrong about a lot of this, and AGI turns out to either be impossible, or much less useful than we think it will be.

For me personally, I hope that we do get AGI. I just don't want it by 2027; that feels way too fast to me. But AGI in 2070 or 2100? That sounds much preferable.

replies(1): >>43610287 #
2. saagarjha No.43610287
Like, when you're retired or dead?