
AI 2027

(ai-2027.com)
949 points Tenoke | 3 comments
moab No.43572725
> "OpenBrain (the leading US AI project) builds AI agents that are good enough to dramatically accelerate their research. The humans, who up until very recently had been the best AI researchers on the planet, sit back and watch the AIs do their jobs, making better and better AI systems."

I'm not sure what gives the authors the confidence to make such predictions. Wishful thinking? Worst-case paranoia? I agree that such an outcome is possible, but on 2--3 year timelines? This would imply that the approach everyone is taking right now is the right one, and that there are no hidden conceptual roadblocks to achieving AGI/superintelligence by DFS-ing down this path.

All of the predictions seem to ignore the possibility of such barriers, or at most acknowledge it but wave it away by appealing to the army of AI researchers and the industry funding being allocated to the problem. IMO the onus is on the proposers of such timelines to argue why there are no such barriers and why we should expect predictable scaling over a 2--3 year horizon.

replies(5): >>43572940 #>>43573646 #>>43577760 #>>43579347 #>>43610201 #
throwawaylolllm No.43572940
It's my belief (and I'm far from the only person who thinks this) that many AI optimists are motivated by an essentially religious belief that you could call Singularitarianism. So "wishful thinking" would be one answer. This document would then be the rough equivalent of a Christian fundamentalist outlining, on the basis of tangentially related news stories, how the Second Coming will come to pass in the next few years.
replies(5): >>43575932 #>>43576871 #>>43578522 #>>43581761 #>>43610239 #
1. Mali- No.43610239
This is a letter signed by the most lauded AI researchers on Earth, along with CEOs from the biggest AI companies and many other very credible professors of computer science and engineering:

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." https://www.safe.ai/work/statement-on-ai-risk

Laughing it off as the equivalent of the Second Coming CANNOT work, unless you think yourself cleverer and more capable of estimating the risk than all of these experts in the field.

Especially since many of them have incentives that should discourage them from signing such a letter.

replies(1): >>43642566 #
2. poutrathor No.43642566
It's troubling that these eminent leaders do not cite climate change among societal-scale risks, a bigger and more certain societal-scale risk than a pandemic.

Would be a shame to have energy consumption by datacenters regulated, am I right?

replies(1): >>43644208 #
3. Mali- No.43644208
Maybe global warming should be up there.

Perhaps they were trying to avoid any possible misunderstanding/misconstrual (there are misinformed people who don't believe in global warming).

As for avoiding all nitpicking: I think everyone who's not criminally insane believes in pandemics and nuclear bombs.