In 2021, a researcher named Daniel Kokotajlo published a blog post called “What 2026 Looks Like”, where he laid out what he thought would happen in AI over the next five years.
The world delights in thwarting would-be prophets. The sea of possibilities is too vast for anyone to ever really chart a course. At best, we vaguely gesture at broad categories of outcome, then beg our listeners to forgive us the inevitable surprises. Daniel knew all this and resigned himself to it. But even he didn’t expect what happened next.
He got it all right.
Okay, not literally all. The US restricted chip exports to China in late 2022, not mid-2024. AI first beat humans at Diplomacy in late 2022, not 2025. A rise in AI-generated propaganda failed to materialize. And of course the mid-2025 to 2026 period remains to be seen. But to put its errors in context, Daniel’s document was written two years before ChatGPT existed. Nobody except researchers and a few hobbyists had ever talked to an AI. In fact, talking to an AI was a misnomer. There was no way to make them continue the conversation; they would free associate based on your prompt, maybe turning it into a paragraph-length short story. If you pulled out all the stops, you could make an AI add single-digit numbers and get the right answer more than 50% of the time. Yet if you read Daniel’s blog post without checking the publication date, you could be forgiven for thinking it was a somewhat garbled but basically reasonable history of the last four years.
I wasn’t the only one who noticed. A year later, OpenAI hired Daniel to their policy team. While he worked for them, he was limited in his ability to speculate publicly. “What 2026 Looks Like” promised a sequel about 2027 and beyond, but it never materialized.
Unluckily for Sam Altman but luckily for the rest of us, Daniel broke with OpenAI in mid-2024 in a dramatic split covered by the New York Times and others. He founded the AI Futures Project to produce the promised sequel, with a team including:
Eli Lifland, a superforecaster who is ranked first on RAND’s Forecasting Initiative. You can read more about him and his forecasting team here. He co-founded and advises AI Digest and co-created TextAttack, an adversarial attack framework for language models.
Jonas Vollmer, a VC at Macroscopic Ventures, which has done its own, more practical form of successful AI forecasting: an early-stage investment in Anthropic, now worth $60 billion.
Thomas Larsen, the former executive director of the Center for AI Policy, a group which advises policymakers on both sides of the aisle.
Romeo Dean, a leader of Harvard’s AI Safety Student Team and a budding expert in AI hardware.
…and me! Since October, I’ve been volunteering part-time, doing some writing and publicity work. I can’t take credit for the forecast itself - or even for the lion’s share of the writing and publicity - but it’s been an immense privilege to work alongside some of the smartest and most epistemically virtuous people I know, trying to absorb their worldview on a level deep enough to do it justice. We have no illusions that we’ll get as lucky as last time, but we still think it’s a valuable contribution to the discussion.
The summary: we think that 2025 and 2026 will see gradually improving AI agents. In 2027, coding agents will finally be good enough to substantially boost AI R&D itself, causing an intelligence explosion that plows through the human level sometime in mid-2027 and reaches superintelligence by early 2028. The US government wakes up in early 2027, perhaps after seeing the potential for AI to be a decisive strategic advantage in cyberwarfare, and starts pulling AI companies into its orbit - not fully nationalizing them, but pushing them into more of a defense-contractor-like relationship. China wakes up around the same time, steals the weights of the leading American AI, and maintains near-parity. There is an arms race which motivates both countries to cut corners on safety and pursue full automation over public objections; this goes blindingly fast, and most of the economy is automated by ~2029. If AI is misaligned, it could move against humans as early as 2030 (i.e., after it has automated enough of the economy to survive without us). If it gets aligned successfully, then by default power concentrates in a double-digit number of tech oligarchs and US executive branch members; this group is too divided to be crushingly dictatorial, but its reign could still fairly be described as technofeudalism. Humanity starts colonizing space at the very end of the 2020s / early 2030s.
Do we really think things will move this fast? Sort of no - between the beginning of the project last summer and the present, Daniel’s median for the intelligence explosion shifted from 2027 to 2028. We keep the scenario centered around 2027 because it’s still his modal prediction (and because it would be annoying to change). Other members of the team (including me) have medians later in the 2020s or early 2030s, and also think automation will progress more slowly. So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.
But even if things don’t go this fast (or if they go faster - another possibility we don’t rule out!), we’re also excited to be able to present a concrete scenario at all. We’re not the first to do this - previous contenders include L Rudolf L and Josh C - but we think this is a step up in terms of detail and ability to show receipts (see the Research section at the top for our model and justifications). We hope it will either make our 3-5 year timeline feel plausible, or at least get people talking specifics about why they disagree.
Dwarkesh Patel kindly invited me and Daniel on his podcast (sponsored by Jane Street, who also sponsored SSC many years ago!) to promote and explain the scenario. I don’t usually do podcasts, and I worry I was a bit of a third wheel in this one, but I’m hoping that my celebrity will get people to pay attention to what Daniel’s saying. Think of it as “International aid expert discusses the Ethiopian famine with concerned Hollywood actor,” with me in the role of the actor, and you won’t be disappointed.
But seriously, the Daniel-Dwarkesh parts are great and should answer all your objections and then some. Take a sip of your drink every time Dwarkesh asks a question about bottlenecks; take a shot every time Daniel answers that his model already includes them and without the bottlenecks it would be even faster.
You can read the full scenario here - and if you have questions, don’t miss the FAQ and supplements. I’ll give you some time to form your own opinions, then write more about my specific takeaways tomorrow or next week.