
AI 2027

(ai-2027.com)
949 points Tenoke | 3 comments
visarga ◴[] No.43583532[source]
The story is entertaining, but it has a big fallacy: progress is not a function of compute or model size alone. That kind of mistake is almost magical thinking. What matters most is the training set.

During the GPT-3 era there was plenty of organic text to scale into, and compute seemed to be the bottleneck. But we quickly exhausted it, and now we are trying other ideas: synthetic reasoning chains, or just plain synthetic text, for example. But you can't do that fully in silico.

What is necessary to create new and valuable text is exploration and validation. LLMs can ideate very well, so we are covered on that side. But validation can only be automated in math and code, not in other fields.
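(To illustrate why validation automates so cleanly in code, here is a minimal sketch: a model-generated candidate function can be checked mechanically against test cases, with no human in the loop. The candidate and the test cases are hypothetical, chosen just for illustration.)

```python
# Hypothetical model-generated candidate: integer square root
# via Newton's method with integer division.
def candidate_isqrt(n: int) -> int:
    x = n
    while x * x > n:
        x = (x + n // x) // 2
    return x

# Mechanical validation: run the candidate against known cases.
# The verdict is computed, not judged by a human.
tests = [(0, 0), (1, 1), (8, 2), (9, 3), (10**6, 1000)]
passed = all(candidate_isqrt(n) == expected for n, expected in tests)
print("validated" if passed else "rejected")
```

Nothing comparable exists for, say, a generated claim about biology: checking it means running an experiment in the world, which is exactly the friction the comment describes.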

Real world validation thus becomes the bottleneck for progress. The world is jealously guarding its secrets and we need to spend exponentially more effort to pry them away, because the low hanging fruit has been picked long ago.

If I am right, this has implications for the speed of progress. The exponential friction of validation opposes the exponential scaling of compute. The story also says an AI could be created in secret, which goes against the validation principle: we validate faster together, and nobody can secretly out-validate humanity. It's like blockchain; we depend on everyone else.

replies(6): >>43584203 #>>43584778 #>>43585210 #>>43586239 #>>43587307 #>>43591163 #
1. nfc ◴[] No.43587307[source]
I agree with your point about the validation bottleneck becoming dominant over raw compute and simple model scaling. However, I wonder if we're underestimating the potential headroom for sheer efficiency breakthroughs at our current level of intelligence.

Von Neumann for example was incredibly brilliant, yet his brain presumably ran on roughly the same power budget as anyone else's. I mean, did he have to eat mountains of food to fuel those thoughts? ;)

So it looks like massive gains in intelligence or capability might not require proportionally massive increases in fundamental inputs, at least up to the highest levels of intelligence a human can reach. And if that's true for the human brain, why not for other architectures of intelligence?

P.S. It's funny, I was talking about something along the lines of what you said with a friend just a few minutes before reading your comment, so when I saw it I felt I had to comment :)

replies(1): >>43592953 #
2. visarga ◴[] No.43592953[source]
I think you are underestimating the role of context; we all stand on the shoulders of giants. Think about what would happen if kid Einstein, at the young age of 5, were marooned on an island and recovered 30 years later. Would he have any deep insights to dazzle us with? I don't think he would.
replies(1): >>43594645 #
3. 2snakes ◴[] No.43594645[source]
Hayy ibn Yaqdhan: nature vs. nurture and the relative nature of intelligence, iirc.