Announcing "Crunch Time for Humanity"
A new blog
I’ve been thinking about AI risk since long before it was cool, and have blogged about this and many other topics at Häggström hävdar since 2011. In my 2016 book Here Be Dragons: Science, Technology and the Future of Humanity, I framed AI as one item in a broader smorgasbord of technologies with the potential to radically transform society and our lives. While parts of the book have aged reasonably well, the chapter about AI is now very much out of date, so for an overview of how I currently think about AI, a better reference is the text Our AI future and the need to stop the bear, which I wrote in February 2025.
It seems to me that AI has the potential to make the world a much better place, but also that it carries with it enormous risks, even to the point of threatening the extinction of Homo sapiens. So we need to get it right. For a long time, this issue seemed to me somewhat abstract, due to my belief that the crucial transition was most likely at least decades away. From about 2019 onwards, however, my AI timelines gradually shrank, and when I came into contact with Daniel Kokotajlo (who was then at OpenAI) in early 2023, I finally realized the need to take seriously timelines measured in years, not decades. Crunch time is now.
A large part of getting AI right is global coordination. This requires spreading awareness about the situation we are in, and about the magnitude of the stakes. That is why, in an attempt to reach new audiences, I am creating this new blog. On lesser issues (some of which are mostly of interest to my Swedish compatriots) I plan to continue writing at Häggström hävdar, but when it comes to issues related to the AI-driven and extremely rough ride that humanity may be facing over the next five or ten years, I expect Crunch Time for Humanity to become my main outlet.
For more about me, see my homepage at Chalmers University of Technology in Gothenburg, Sweden, where I serve as a professor of mathematical statistics.