AI Futures Project is the group behind AI 2027. I’ve been helping them with their blog. Posts written or co-written by me include:
Beyond The Last Horizon - what’s behind that METR result showing that AI time horizons double every seven months? And is it really every seven months? Might it be faster?
AI 2027: Media, Reactions, Criticism - a look at some of the response to AI 2027, with links to some of the best objections and the team’s responses.
Why America Wins - why we predict that America will stay ahead of China on AI in the near future, and what could change this.
I will probably be shifting most of my AI blogging there for a while to take advantage of access to the team’s expertise. There’s also a post on transparency by Daniel Kokotajlo, and we hope to eventually host writing by other team members as well.
I’m especially happy with the horizons post, because we got it out just a few days before a new result that seems to support one of our predictions: OpenAI’s newest models’ time horizons land on the faster curve we predicted, rather than the slower seven-month doubling time highlighted in the METR report:

And speaking of expertise, the AIFP team has kindly volunteered to do an AMA (“ask me anything”, Q&A) here on ACX, this Friday, 3:30 - 6:00 PM California time. If you have any questions on the scenario, AI forecasting, or AI safety more generally, they can give you high-quality answers. I’ll make a separate post at the appointed time.