
AI 2027

(ai-2027.com)
949 points by Tenoke | 2 comments
visarga No.43583532
The story is entertaining, but it rests on a big fallacy: progress is not a function of compute or model size alone. Treating it that way is almost magical thinking. What matters most is the training set.

During the GPT-3 era there was plenty of organic text to scale into, and compute seemed to be the bottleneck. But we quickly exhausted that text, and now we are trying other ideas, such as synthetic reasoning chains or plain synthetic text. But you can't do that fully in silico.

Creating new and valuable text requires exploration and validation. LLMs can ideate very well, so we are covered on that side. But validation can only be automated in math and code, not in other fields.
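
To make that concrete, here is a minimal sketch (Python, with entirely made-up candidate snippets and a made-up test, not anyone's actual pipeline) of why validation is automatable for code: a machine can run generated candidates against a test and keep only what passes, with no human judgment in the loop.

    # Hypothetical LLM-generated candidates; snippets and test are illustrative
    # assumptions only.
    candidates = [
        "def add(a, b): return a - b",  # plausible but wrong
        "def add(a, b): return a + b",  # correct
    ]

    def passes_tests(src: str) -> bool:
        scope = {}
        try:
            exec(src, scope)                # load the candidate definition
            return scope["add"](2, 3) == 5  # mechanical check, no human needed
        except Exception:
            return False

    validated = [src for src in candidates if passes_tests(src)]
    print(len(validated))  # 1 -- only the correct candidate survives

There is no analogous loop for, say, a claim about biology or economics: there the "test" is the real world, which is exactly the friction described below.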

Real-world validation thus becomes the bottleneck for progress. The world jealously guards its secrets, and we need to spend exponentially more effort to pry them away, because the low-hanging fruit was picked long ago.

If I am right, this has implications for the speed of progress: the exponential friction of validation opposes the exponential scaling of compute. The story also says an AI could be created in secret, which goes against the validation principle; we validate faster together, and nobody can secretly out-validate humanity. It's like a blockchain: we depend on everyone else.

replies(6): >>43584203 #>>43584778 #>>43585210 #>>43586239 #>>43587307 #>>43591163 #
nikisil80 No.43584203
Best reply in this entire thread, and I align with your thinking entirely. I also absolutely hate this idea in tech-oriented communities that because an AI can do some algebra and program an 8-bit video game quickly and without any mistakes, it's already overtaking humanity. Extrapolate that idea to some future version of these models and they may be capable of solving grad-school physics problems and programming entire AAA video games, but again, that's not what _humanity_ is about. There is so much more to being human than fucking programming and science (and I'm saying this as an actual nuclear physicist). So, just like you said, the AI arms race is about getting it good at _known_ science/engineering, fields in which 'correctness' is very easy to validate. But most of human interaction exists in a grey zone.

Thanks for this.

replies(4): >>43584874 #>>43585958 #>>43587510 #>>43588739 #
1. loandbehold No.43584874
OK, but getting good at science/engineering is what matters, because that's what gives AI, and the people who wield it, power. Once AI is able to build chips and datacenters autonomously, that's when the singularity starts. AI doesn't need to understand humans or act human-like to do those things.
replies(1): >>43591832 #
2. 0x008 No.43591832
I think what they mean is that the fundamental question is IF any intelligence can really break out of its confined area of expertise and control a substantial part of the world just by excelling in highly verifiable domains, because much of what humans have to do comes down to decisions based on expertise and judgment that follow no transparent rules.

I guess it's the age-old question of whether we really know what we are doing ("experience") or whether we just tumble through life and it works out because the overall system of humans interacting with each other is big enough. The current state of world politics makes me think it's the latter.