The others include:
Eli Lifland, a superforecaster who is ranked first on RAND's Forecasting Initiative. You can read more about him and his forecasting team here. He cofounded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.
Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: they made an early-stage investment in Anthropic, now worth $60 billion.
Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.
Romeo Dean, a leader of Harvard’s AI Safety Student Team and budding expert in AI hardware.
And finally, Scott Alexander himself.
Which, to be fair, actually is kind of impressive if someone can make accurate predictions about the future that far ahead, but only because people are really bad at predicting the future.
Implicitly, when I hear "superforecaster" I think of someone who's really good at predicting the future, but deeper inspection often reveals that "the future" is constrained to the next two years. Beyond that, they tend to be as bad as any other "futurist."