
AI 2027

(ai-2027.com)
949 points by Tenoke | 4 comments
ivraatiems ◴[] No.43577204[source]
Though I think it is probably mostly science-fiction, this is one of the more chillingly thorough descriptions of potential AGI takeoff scenarios that I've seen. I think part of the problem is that the world you get if you go with the "Slowdown"/somewhat more aligned world is still pretty rough for humans: What's the point of our existence if we have no way to meaningfully contribute to our own world?

I hope we're wrong about a lot of this, and AGI turns out to either be impossible, or much less useful than we think it will be. I hope we end up in a world where humans' value increases, instead of decreasing. At a minimum, if AGI is possible, I hope we can imbue it with ethics that allow it to make decisions that value other sentient life.

Do I think this will actually happen in two years, let alone five or ten or fifty? Not really. I think it is wildly optimistic to assume we can get there from here - where "here" is LLM technology, mostly. But five years ago, I thought the idea of LLMs themselves working as well as they do at speaking conversational English was essentially fiction - so really, anything is possible, or at least worth considering.

"May you live in interesting times" is a curse for a reason.

replies(8): >>43577330 #>>43577995 #>>43578252 #>>43578804 #>>43578889 #>>43580010 #>>43580150 #>>43583543 #
abraxas ◴[] No.43577330[source]
I think, LLM or no LLM, the emergence of intelligence appears to be closely related to the number of synapses in a network, whether biological or digital. If my hypothesis is roughly true, it means we are several orders of magnitude away from AGI, at least the kind of AGI that can be embodied in a fully functional robot with a sensory apparatus that rivals the human body. Building circuits of that density is likely to take decades, and a transistor-based, silicon substrate most probably can't be pushed that far.
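
A rough back-of-the-envelope check on that "orders of magnitude" claim (illustrative figures only: commonly cited estimates put an adult human brain at ~1e14-1e15 synapses, and the largest publicly discussed models at roughly ~1e12 parameters):

    # Order-of-magnitude comparison; every figure here is an assumption.
    import math

    brain_synapses = 1e14        # low-end estimate of human synapse count
    frontier_llm_params = 1e12   # rough order of magnitude for today's largest models

    gap = brain_synapses / frontier_llm_params
    print(f"~{gap:.0f}x gap, i.e. ~{math.log10(gap):.0f} orders of magnitude")
    # -> ~100x gap, i.e. ~2 orders of magnitude (3-4 if you take the high-end
    #    synapse estimate or count a synapse as more than one parameter)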
replies(5): >>43577402 #>>43577908 #>>43578032 #>>43578329 #>>43579445 #
1. ivraatiems ◴[] No.43577402[source]
I think there is a good chance you are roughly right. I also think that the "secret sauce" of sapience is probably not something that can be replicated easily with the technology we have now, like LLMs. They're missing the contextual awareness and processing that are absolutely necessary for real reasoning.

But even so, solving that problem feels much more attainable than it used to be.

replies(2): >>43577671 #>>43578105 #
2. narenm16 ◴[] No.43577671[source]
I agree. It feels like scaling up these large models is such an inefficient route that it already seems to warrant new ideas (test-time compute, etc.).

We'll likely reach a point where it's infeasible for deep learning to completely encompass human-level reasoning, and we'll need neuroscience discoveries to continue progress. Altman seems to be hyping up "bigger is better," not just for model parameters but for OpenAI's valuation.

3. throwup238 ◴[] No.43578105[source]
I think the missing secret sauce is an equivalent to neuroplasticity. Human brains are constantly being rewired and optimized at every level: synapses and their channels undergo long-term potentiation and depression, new connections are formed and useless ones pruned, and the whole system can sometimes remap functions to a different part of the brain when another suffers catastrophic damage. I don't know enough about the matrix multiplication operations that power LLMs, but it's hard to imagine how that kind of organic reorganization would be possible with GPU matmuls. It'd require some sort of advanced "self-aware" profile-guided optimization, not just trial-and-error noodling with Torch ops or CUDA kernels.
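
To make that contrast concrete, here is a minimal toy sketch (purely assumed, not how any real model works): a frozen weight matrix of the kind used at LLM inference time next to a crude Hebbian-style plasticity rule that keeps rewiring the same matrix as inputs arrive.

    # Toy contrast between frozen inference-time weights and a crude
    # Hebbian-style plasticity rule. Purely illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(16, 16))        # "synaptic" weights

    def frozen_step(W, x):
        # Inference in a trained network: weights never change between calls.
        return np.tanh(W @ x)

    def plastic_step(W, x, lr=0.01, decay=0.001):
        # Co-active units strengthen their connection; the decay term is a
        # stand-in for pruning of unused connections.
        y = np.tanh(W @ x)
        W = W + lr * np.outer(y, x) - decay * W
        return W, y

    x = rng.normal(size=16)
    for _ in range(100):
        W, y = plastic_step(W, x)        # the "network" reorganizes as it runs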

I assume that, thanks to the universal approximation theorem, it's theoretically possible to emulate the physical mechanism, but at what hardware and training cost? I've done back-of-the-napkin math on this before [1], and the number of "parameters" in the brain is at least 2-4 orders of magnitude more than in state-of-the-art models. But that's just the current weights; what about the history that actually enables the plasticity? Channel threshold potentials are also continuous rather than discrete, and emulating them might require full fp64, so I'm not sure how we're even going to meet the memory requirements in the next decade, let alone whether any architecture on the horizon can emulate neuroplasticity.
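
To put the memory side of that in numbers (a crude estimate under loud assumptions: one stored value per synapse, a low-end synapse count, and none of the extra per-synapse state that plasticity would actually need):

    # Crude memory estimate; the synapse count and bytes-per-value are assumptions.
    SYNAPSES = 1e14  # low-end estimate; high-end figures run to ~1e15

    for name, bytes_per_value in [("fp8", 1), ("fp16", 2), ("fp32", 4), ("fp64", 8)]:
        petabytes = SYNAPSES * bytes_per_value / 1e15
        print(f"{name}: ~{petabytes:.1f} PB just for the weights")
    # fp64 over 1e14 synapses is ~0.8 PB; at 80 GB of HBM per GPU that is
    # roughly 10,000 GPUs of memory before any plasticity state is stored.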

Then there's the whole problem of a true physical feedback loop through which the AI can run experiments and learn against external reward functions, and the survival reward function at the core of evolution might itself be critical, but that's getting deep into the research and philosophy on the nature of intelligence.

[1] https://news.ycombinator.com/item?id=40313672

replies(1): >>43584548 #
4. lblume ◴[] No.43584548[source]
Transformers already are very flexible. We know that we can basically strip blocks at will, reorder modules, transform their inputs in predictable ways, or obstruct some features, and after a very short period of re-training they will get back to basically the same capabilities they had before. Fascinating stuff.
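
A rough sketch of that kind of surgery on a toy PyTorch encoder (an assumed setup, not a real LLM, with a dummy objective standing in for the brief re-training):

    # Drop a block from a transformer stack, then resume training briefly.
    import torch
    import torch.nn as nn

    layer = nn.TransformerEncoderLayer(d_model=128, nhead=4, batch_first=True)
    model = nn.TransformerEncoder(layer, num_layers=6)

    # "Strip a block at will": remove the 4th layer from the stack.
    model.layers = nn.ModuleList(
        [blk for i, blk in enumerate(model.layers) if i != 3]
    )
    model.num_layers = len(model.layers)

    # Short re-training step on a stand-in objective; in practice this would be
    # the model's original training objective.
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x = torch.randn(8, 32, 128)            # (batch, seq, d_model)
    loss = model(x).pow(2).mean()
    loss.backward()
    opt.step()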