
AI 2027

(ai-2027.com)
949 points by Tenoke | 3 comments
superconduct123 ◴[] No.43575419[source]
Why are the biggest AI predictions always made by people who aren't deep in the tech side of it? Or actually trying to use the models day-to-day...
replies(8): >>43575588 #>>43575596 #>>43575598 #>>43575759 #>>43575806 #>>43576275 #>>43576313 #>>43579589 #
AlphaAndOmega0 ◴[] No.43575596[source]
Daniel Kokotajlo released the (excellent) 2021 forecast. He was then hired by OpenAI and was not at liberty to speak freely until he quit in 2024. He's part of the team making this forecast.

The others include:

Eli Lifland, a superforecaster who is ranked first on RAND’s Forecasting initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.

Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: an early-stage investment in Anthropic, now worth $60 billion.

Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.

Romeo Dean, a leader of Harvard’s AI Safety Student Team and budding expert in AI hardware.

And finally, Scott Alexander himself.

replies(7): >>43575677 #>>43576363 #>>43576422 #>>43578530 #>>43583355 #>>43584786 #>>43589696 #
kridsdale3 ◴[] No.43575677[source]
TBH, this kind of reads like the pedigrees of the former members of the OpenAI board. When the thing blew up, and people started to apply real scrutiny, it turned out that about half of them had no real experience in pretty much anything at all, except founding Foundations and instituting Institutes.

A lot of people (like the Effective Altruism cult) seem to have made a career out of selling their Sci-Fi content as policy advice.

replies(2): >>43575792 #>>43579438 #
flappyeagle ◴[] No.43575792[source]
c'mon man, you don't believe that, let's have a little less disingenuousness on the internet
replies(1): >>43576896 #
arduanika ◴[] No.43576896[source]
How would you know what he believes?

There's hype and there's people calling bullshit. If you work from the assumption that the hype people are genuine, but the people calling bullshit can't be for real, that's how you get a bubble.

replies(1): >>43587725 #
flappyeagle ◴[] No.43587725[source]
Because they are not the same in any way. It's not a bunch of junior academics; the team literally includes someone who worked at OpenAI.
replies(2): >>43595966 #>>43615730 #
◴[] No.43595966[source]
arduanika ◴[] No.43615730[source]
I asked you how you know kridsdale3 believes X, and your reply is basically "because I believe Y". I hope you don't call yourself a rationalist, given that you're hazy on the meaning of "because" and struggle with theory of mind.

Sure, OpenAI put up with one of these safety larpers for a few years while it was part of their brand. Reasonable people can disagree on how much that counts for.

You're right that it's not a bunch of junior academics; it doesn't even rise to that level. This stuff would never pass muster in a reputable peer-reviewed journal, so from an academic perspective it isn't even JV material. That's why they have to found their own bizarro network of foundations and so on, to give the appearance of seriousness and legitimacy. That might fool people who aren't looking closely, but the trick doesn't work on real academics, nor on the silent majority of those actually building these capabilities.