
AI 2027

(ai-2027.com)
949 points | Tenoke | 3 comments
ivraatiems ◴[] No.43577204[source]
Though I think it is probably mostly science-fiction, this is one of the more chillingly thorough descriptions of potential AGI takeoff scenarios that I've seen. I think part of the problem is that the world you get if you go with the "Slowdown"/somewhat more aligned world is still pretty rough for humans: What's the point of our existence if we have no way to meaningfully contribute to our own world?

I hope we're wrong about a lot of this, and AGI turns out to either be impossible, or much less useful than we think it will be. I hope we end up in a world where humans' value increases, instead of decreasing. At a minimum, if AGI is possible, I hope we can imbue it with ethics that allow it to make decisions that value other sentient life.

Do I think this will actually happen in two years, let alone five or ten or fifty? Not really. I think it is wildly optimistic to assume we can get there from here - where "here" is LLM technology, mostly. But five years ago, I thought the idea of LLMs themselves working as well as they do at speaking conversational English was essentially fiction - so really, anything is possible, or at least worth considering.

"May you live in interesting times" is a curse for a reason.

replies(8): >>43577330 #>>43577995 #>>43578252 #>>43578804 #>>43578889 #>>43580010 #>>43580150 #>>43583543 #
1. baron816 ◴[] No.43578804[source]
My vision for an ASI future involves humans living in simulations that are optimized for human experience. That doesn’t mean we just live in a paradise and are happy all the time. We’d experience dread and loss and fear, but it would ultimately lead to a deeply satisfying outcome. And we’d be able to choose to forget things, including whether we’re in a simulation, so that it feels completely indistinguishable from base reality. You’d live indefinitely, experiencing trillions of lifespans in which you get to explore the multiverse inside and out.

My solution to the alignment problem is that an ASI could just stick us in tubes deep in the Earth’s crust; it only needs to hijack our nervous systems to input signals from the simulation. The ASI could have the whole rest of the planet, or it could move us to some far-off moon in the outer solar system; I don’t care. It just needs to do two things for its creators: preserve lives and optimize for long-term human experience.

replies(1): >>43591858 #
2. danielbln ◴[] No.43591858[source]
Are you being facetious? Just asking, because this is literally the plot of the Matrix.
replies(1): >>43601573 #
3. baron816 ◴[] No.43601573[source]
I’m serious. The Matrix is a movie made for entertainment. If the plot of The Matrix were “people live in The Matrix, everyone is fine with that, and nothing interesting happens,” it would’ve been an awful movie.

Again, you’re not experiencing a mundane or perfect world. It would be like being in a video game or movie, if you wanted. Some people would experience the plot of The Matrix as any of the characters. Or you could travel around the galaxy solving mysteries and fighting evil as a Jedi Master. Or you could spend some time living a quiet pastoral life in the Shire with your hobbit friends. Or you could do it all over and over again experiencing the highs and lows each time.