How not to defend science
The Togelius threnody
Following the launch of ChatGPT in November 2022, the amount of directly visible AI involvement in scientific papers began to rise sharply in 2023 and 2024. A common initial reaction to this was schadenfreude over cases where authors (along with reviewers and editors) carelessly allowed ChatGPT-talk such as “Certainly, here is a possible introduction for your topic” to slip into their papers. But for most of us, the gut reaction that using AI in writing papers should be forbidden was quickly recognized as overhasty. Of course misuse should be avoided, but done right, using AI in various stages of a research project — ranging from brainstorming and scanning of the relevant literature via the planning of experiments and analysis of data to the final writeup of the report or paper — can improve the quality of the research and the efficiency of the process.1 Tutorials and workshops on how best to harness AI in research were quickly rolled out and offered at universities all over the world.2 The discussion continues, and the use of large language models to advance research was recently named by the journal Science as one of the top ten scientific breakthroughs of 2025. Even more notable are the unabashedly grand visions from industry leaders like Dario Amodei and Sam Altman about how AI over the coming decade may come to accelerate scientific progress by as much as an order of magnitude or more, and thereby to transform the world.3
Such thoughts lead naturally to the idea of making the AI tools used in research more autonomous, thereby turning them into machines that are less tool-like and more agent-like, and ultimately making participation by human researchers unnecessary or even a burden. This aspect was brought up at a panel discussion at a workshop on The Reach and Limits of AI for Scientific Discovery in connection with the NeurIPS conference earlier this month. Present in the audience was the accomplished computer scientist and AI researcher Julian Togelius, who was provoked into the following threnody or lament over the prospect of cutting human scientists such as himself out of the loop. From his blog post the day after the event:
I was at an event on AI for science yesterday, a panel discussion here at NeurIPS. The panelists discussed how they plan to replace humans at all levels in the scientific process. So I stood up and protested that what they are doing is evil. Look around you, I said. The room is filled with researchers of various kinds, most of them young. They are here because they love research and want to contribute to advancing human knowledge. If you take the human out of the loop, meaning that humans no longer have any role in scientific research, you’re depriving them of the activity they love and a key source of meaning in their lives. And we all want to do something meaningful. Why, I asked, do you want to take the opportunity to contribute to science away from us?
My question changed the course of the panel, and set the tone for the rest of the discussion. Afterwards, a number of attendees came up to me, either to thank me for putting what they felt into words, or to ask if I really meant what I said.
I can very much relate to the sentiment expressed here. Throughout my adulthood, doing research has been my main professional activity (surpassing even teaching), and there is no denying that it adds meaning to my life. I will miss it the day I am no longer able to meaningfully contribute to research, whether due to old age on my part or to invincible competition from superhumanly capable AIs.
Yet I cannot take Togelius’ side in this discussion, because there are other aspects to be balanced besides the joy and creativity that we scientists get to experience through our work. Think of all the prosperity and good that the output of science produces — surely that must enter as a factor in the moral evaluation of whether to push ahead with AI-driven research or to hold back. It helps to think of a concrete example, and one that was raised in response to Togelius at the NeurIPS workshop was cancer research. This seems like quite a clear case in favor of pushing ahead with whatever AI systems are available to accelerate research, because holding back on Togeliusian grounds would mean letting patients die in the hospitals by not giving them access to the new and better treatments that these AI systems would otherwise have produced; it seems to me morally monstrous to tell these patients (and their loved ones) that their lives need to be sacrificed for the sake of not creating research shortcuts that would eradicate otherwise highly stimulating and interesting challenges for human scientists. Here is Togelius’ response, in his blog post:
One of the panelists asked whether I would really prefer the joy of doing science to finding a cure for cancer and enabling immortality. I answered that we will eventually cure cancer and at some point probably be able to choose immortality. Science is already making great progress with humans at the helm. We'll get fusion power and space travel some day as well. Maybe cutting humans out of the loop could speed up this process, but I don't think it would be worth it. I think it is of crucial importance that we humans are in charge of our own progress. Expanding humanity's collective knowledge is, I think, the most meaningful thing we can do. If humans could not usefully contribute to science anymore, this would be a disaster. So, no. I do not think it worth it to find a cure for cancer faster if that means we can never do science again.
I hear the argument, and like Togelius I feel queasy about a future where advanced AI has eliminated any room for humans to contribute to research, but my mind keeps returning to those present-day cancer patients dying in the hospitals. Should we really be telling them “sorry dude, you will just have to die, because although we could save you using advanced AI, that would not just steal a bunch of fun research problems from a bunch of leading present-day cancer researchers, but would also put society on a trajectory towards eliminating the joy of scientific discovery forever, not just for cancer researchers but for scientists of all kinds”? Letting them die in this way still seems morally outrageous to me, and even more so given what a deeply privileged minority scientists are, especially compared to those cancer patients whose lives Togelius seems to be happy to sacrifice in order to preserve some of those privileges.4
So, to summarize what I’ve said so far, I very much reject Togelius’ argument against AI automation of research. But there is also something more to be said about it, namely that regardless of its objective correctness or falsehood, I believe that if one’s aim is to defend the work of human scientists and the academic institutions supporting such work, it is strategically counterproductive to go public with a message that emphasizes the joy of creative scientific discovery as the main reason why we should uphold human-driven science. Science is a costly endeavor, and those paying for it (whether via tax money or otherwise) are mostly non-scientists, who will want some bang for the buck in terms of valuable new knowledge. If the supposed bang for the buck is instead the joy of discovery that gets lost if we let AIs do the work instead of human researchers, then… well, what’s in it for those non-scientists who support us? It’s not literally nothing (because a few of them may be altruistically interested in giving us joy), but it’s also not much, and I believe that in the vast majority of cases they will decide to prioritize causes other than supporting a very particular kind of joy experienced by a very small elite segment of the population at large.
I’ve been down this road before (asking fellow scientists to adopt a less navel-gazing attitude in their defense of science), and as a mathematician I am fairly used to encountering colleagues with an attitude towards their work that, if expressed to a broader audience, would be strategically counterproductive in much the same way as the one expressed by Julian Togelius. What I specifically have in mind here is the admiration, all too common among mathematicians, for G.H. Hardy’s view that lack of applicability of a body of work in mathematics is not a vice but a virtue.5 Here, translated from Swedish, is an extract from an editorial that I wrote for the newsletter of the Swedish Mathematical Society, way back in 2006 when I served as chairman of that organization:
The eminent British mathematician G.H. Hardy argued in his A Mathematician’s Apology from 1940 that the lack of applicability of a piece of mathematical work should be regarded as a virtue. One frequently hears mathematicians subscribing to this view, and in the TV show Snillen spekulerar we recently heard the economics laureate Robert Aumann express something similar. He told us that before taking up game theory, he had devoted his doctoral dissertation to knot theory, which he at the time believed to be devoid of any possibility of application (a belief that has since proved mistaken), and he emphasized that it is precisely the lack of applications that lends the area an additional appeal in the eyes of mathematicians.
Let me stress this point: it may of course be entirely justified to engage in mathematical research that appears to lack extra-mathematical applications. But if so, it is despite the absence of applicability — not because of it.
I believe that most people (Aumann included) who have expressed sympathy for Hardy’s anti-application line would not seriously stand by it if pressed on the matter; rather, I suspect that it reflects the same desire to appear rebellious or cool that leads many young people to adopt a musical or dress style that strikes adults as tasteless or even offensive. Hardy’s own text, moreover, seems intended ironically. Yet I would argue that even when such views are expressed more or less in jest, they nevertheless risk damaging the good standing of mathematics.
Things are at their worst when enthusiasm for inapplicable mathematics is justified on the grounds that at least it cannot be used to develop new nuclear weapons or SUVs, or otherwise cause harm (an argument given considerable weight in Hardy’s essay). Consider for a moment what this implies, namely that state-funded research in pure mathematics is a kind of labour-market measure for people so incapable of making a useful contribution that the best one can hope for is to keep them occupied without causing damage.
The Hardyesque position I criticized back then shares with the Togelius threnody a spoiled attitude that implicitly takes for granted that the society of non-scientists (taxpayers and other donors) will keep supporting us, coupled with an arrogant disregard for what (if anything) we give them in return. This is not a plausible way to defend science and its institutions.
Finally, let me return to my rejection of Togelius’ argument against automation of science. Does this commit me to being in favor of full AI automation of science and cutting human scientists out of the loop? Not necessarily, as there may well be other, better, arguments against such automation. And indeed, there are.
I suspect that full automation of science would require AIs with such broad capabilities that they would be at most a small step away from the ability to take over not just science, but all of human work and decision making.6 That would imply a total makeover of society, incomparably more radical than merely relieving Togelius and me and all of our colleagues across academic disciplines of our jobs and denying us the joy of scientific discovery. The transition to such a world without work is likely (unless managed with extreme care) to be very rough, but I believe that in the end it is possible to organize a society without work in an economically functioning way where everyone is materially well-off. But then what? What do we do with our lives? In his 2024 book Deep Utopia, Nick Bostrom sketches some ideas and insists on being optimistic, but the weirdness of his scenarios and of the book as a whole can equally well be read as a reductio ad absurdum. I think it is a legitimate position to be against such a restructuring of society, far more radical than the agricultural and industrial revolutions that humanity has previously passed through, and on that ground to oppose further development of advanced AI, including automation of science.7
Next, it is not a huge leap to go from accepting that AI might transform the world in such radical ways to realizing that it would be able to put an end to Homo sapiens, should it decide that doing so is a good idea. Avoiding such an outcome while still getting the benefits from superintelligent AI requires solving the AI alignment problem of giving advanced AIs goals and values in line with our own. We seem not to be on track towards solving this problem in time for superintelligence, and Soares and Yudkowsky may very well be right in their judgement that If Anyone Builds It, Everyone Dies. Therein lies a highly plausible case that we ought to pull the brakes on frontier AI development, and it is not unreasonable to take this to include a moratorium on further AI automation of science.
So Julian Togelius may well be right after all in his appeal, “Please, don’t automate science!”, although for the wrong reason.
Footnotes

1. When the quality improvement becomes sufficiently clear, it may even imply that it is wrong for a single scientist to insist on human purity and to unilaterally ignore this development, as thoughtfully discussed in a recent essay by Paul Bloom.
2. At one such event, at the University of Gothenburg in May 2024, I took part and accepted the organizers’ kind invitation to give a side show on AI risk.
3. An area where such acceleration seems particularly consequential is AI research itself, because it introduces a kind of turbo feedback into the process that might lead to so-called superintelligent machines much earlier than most of us have anticipated. This connects to old ideas about an intelligence explosion and a Singularity, as pioneered by Good (1965) and Solomonoff (1985). Good’s paper contains the following oft-quoted passage.
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. […] Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
Compared to Good and Solomonoff, recent analyses by, e.g., Aschenbrenner (2024) and Kokotajlo et al. (2025) of how scenarios driven by this feedback mechanism may play out within a single-digit number of years from now are far less hand-wavy (albeit not entirely free from such gestures), more concrete, and more quantitative.
4. I am not the first to react this way to Togelius’ blog post. Here is Andy Masley:
It is honestly alarming to me that stuff like this, the idea that we ought to significantly delay curing cancer exclusively to give human researchers the personal gratification of finding it without AI, is being taken seriously at conferences. The background attitude here that hundreds of millions of premature deaths are worth the trade-off of making people feel important is I think going to require a lot more direct scrutiny and condemnation in the next decade. This attitude is insane and presents itself as normal.
Those are harsh words, and even harsher is Twitter commentator Sarah, who describes Togelius’ position as being on a “cartoon character level of evil”. One might be tempted to criticize Sarah here for not employing a tone that is conducive to constructive dialogue, but in her defense, note that it wasn’t she who first used the term “evil” in this discussion; see my first quote from Togelius’ blog post.
5. Separately but in the same vein, there is the cold reception among my mathematical colleagues every time I have brought up Nick Bostrom’s radically outside-the-box view of advances in mathematics and other fields.
6. If nothing else, the potentially extremely powerful feedback into AI research that such a technology entails (discussed in Footnote 3 above) would likely create superintelligent AI that could do all that.
7. This is not to say that it is a trivial matter to defend such a position against accusations of being backward-looking, Luddite, and Kaczynskian. To be honest, I am not sure where I stand on this difficult issue, but I hope to come back to it.

