The others include:
Eli Lifland, a superforecaster who is ranked first on the RAND Forecasting Initiative's leaderboard. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.
Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: it made an early-stage investment in Anthropic, now worth $60 billion.
Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.
Romeo Dean, a leader of Harvard’s AI Safety Student Team and budding expert in AI hardware.
And finally, Scott Alexander himself.
A lot of people (like the Effective Altruism cult) seem to have made a career out of selling their sci-fi content as policy advice.
They are great at selling stories - they sold the story of the crypto utopia, and now they're switching their focus to AI.
This seems to be another appeal to enforce AI regulation in the name of 'AI safetyism'. It was made two years ago, and the threats it warned about haven't really panned out.
For example, an oft-repeated argument is the dangerous ability of AI to design chemical and biological weapons. I wish some expert would weigh in on this, but I believe the ability to theorycraft pathogens that are effective in the real world is absolutely marginal - you need actual lab work and lots of physical experiments to confirm your theories.
Likewise, the danger of AI systems exfiltrating themselves to the multi-million-dollar datacenter GPU systems everyone supposedly just has lying around is ... not super realistic.
The ability of AIs to hack computer systems is much less theoretical - however, as AIs get better at black-hat hacking, they'll get better at white-hat hacking as well, since there's literally no difference between the two other than intent.
And herein lies a crucial limitation of alignment and safetyism - sometimes there's no way to tell harmful and harmless actions apart, other than whether the person undertaking them means well.
There are engineers with AI predictions, but you aren't reading them, because building an audience like Scott Alexander takes decades.
The funny part, to me, is that it won't. They'll continue to toil and will move on to the next hustle just as fast as they jumped on this one.
And I say this from observation. Nearly all of the people I've seen pushing AI hyper-sentience are smug about it and, coincidentally, have never built anything on their own (besides a company or organization made up of other people).
Every single one of the rational "we're on the right path but not quite there" takes has come from seasoned engineers who have at least some hands-on experience with the underlying tech.
This bullshit article is written for that audience.
Say bullshit enough times and people will invest.
Not all these soft roles
There's hype and there's people calling bullshit. If you work from the assumption that the hype people are genuine, but the people calling bullshit can't be for real, that's how you get a bubble.
Which, to be fair, actually is kind of impressive if someone can make accurate predictions about the future that far ahead - but only because people are really bad at predicting the future.
Implicitly, when I hear "superforecaster" I assume someone who's really good at predicting the future, but deeper inspection often reveals that "the future" is constrained to the next two years. Beyond that, they tend to be as bad as any other "futurist".
Sure, OpenAI put up with one of these safety LARPers for a few years while it was part of their brand. Reasonable people can disagree on how much that counts for.
You're right that it's not a bunch of junior academics - it doesn't even rise to that level. This stuff would never pass muster in a reputable peer-reviewed academic journal, so from an academic perspective, this isn't even the JV team. That's why they have to found their own bizarro network of foundations and so on, to give the appearance of seriousness and legitimacy. That might fool people who aren't looking closely, but the trick doesn't work on real academics, nor on the silent majority of those who are actually building the tech.