Those who welcome the end of the human race
On AI successionism
The successionists
If Homo sapiens goes extinct, is that good riddance? Personally, I would immediately say no, but not everyone is equally on board with the pro-human view. At 38:18 into the remarkable interview that New York Times journalist Ross Douthat did with Silicon Valley billionaire Peter Thiel this summer, there is the following unnerving passage.
RD: I think you would prefer the human race to endure, right?
PT: Uh...
RD: You’re hesitating.
PT: Well, I don’t know. I would... I would...
RD: This is a long hesitation!
PT: There’s so many questions implicit in this.
RD: Should the human race survive?
PT: Uh… yes.
RD: OK.
PT: But...
Thiel then goes on to talk about transcendence and immortality and overcoming nature with the help of God and whatnot. Perhaps we can take solace in the “yes” that he finally managed to produce. Still, even if we agree with him (as I do) that there are many further questions, including definitional issues about what it means to be a human, lurking underneath Douthat’s original question, there is something deeply disconcerting in Thiel’s hesitation and inability to give a quick and clear affirmative answer to that seemingly straightforward yes/no question. Is this influential thinker1 and tech entrepreneur fundamentally not on humanity’s side? Is he a successionist?
A successionist is someone who welcomes the end of the human race, provided we have — in Dan Faggella’s words — a worthy successor. And when the intended worthy successor is an AI, we may speak more specifically of AI successionism. This gives rise to further definitional issues, not just about the meaning of “human” but also that of “worthy”, and while these meanings are far from settled, I will refrain here from trying to fully pinpoint them. Instead, I will accept that the term “successionism” inherits a good deal of fuzziness from “human” and “worthy”, and that the concept therefore is not entirely black and white but leaves considerable grey areas and room for interpretation. With that said, it seems clear to me from the above quote that Peter Thiel at least leans deeply into successionism.
There are various more clear-cut examples of successionists in or around the AI and tech sphere, and let me mention a few.2 A prominent one is Larry Page, co-founder of Google, who on several occasions has expressed unabashedly successionist sentiments, the most famous instance being his quarrel with Elon Musk at a party in Napa Valley in 2013. The incident was reported by Max Tegmark (who witnessed it first-hand) in his 2017 book Life 3.0, and again by Walter Isaacson in his 2023 biography Elon Musk. From p. 241 of the latter book:
At Musk’s 2013 birthday party, [he and Page] got into a passionate debate in front of the other guests [...]. Musk argued that unless we built in safeguards, artificial intelligence systems might replace humans, making our species irrelevant or even extinct.
Page pushed back. Why would it matter, he asked, if machines someday surpassed humans in intelligence, even consciousness? It would simply be the next stage of evolution.
Human consciousness, Musk retorted, was a precious flicker of light in the universe, and we should not let it be extinguished. Page considered that sentimental nonsense. [...] He accused Musk of being a “speciesist,” someone who was biased in favor of their own species. “Well, yes, I am pro-human,” Musk responded. “I fucking like humanity, dude.”3
To continue the list of examples, consider the legendary AI researcher Richard Sutton, who advocates that we “prepare for, but not fear, the inevitable succession from humanity to AI”, or the likewise legendary Jürgen Schmidhuber, who spells out his successionist view as follows:
In the long run, humans will not remain the crown of creation. [...] But that’s okay because there is still beauty, grandeur, and greatness in realizing that you are a tiny part of a much grander scheme which is leading the universe from lower complexity towards higher complexity.
Two other prominent successionists are Hugo de Garis, who has proclaimed that “humans should not stand in the way of a higher form of evolution”, and Robin Hanson, who is fond of downplaying the drama of AI takeover by comparing it to handing over the world to our grandchildren, and who has likened AI safety concerns to “wanting to control [the AIs] somehow, such as via genocide, slavery, lobotomy, or mind-control”. And, to take an example from my own neighborhood, the well-known Swedish philosopher and ethicist Torbjörn Tännsjö reacted in the foremost Swedish newspaper Dagens Nyheter to my 2016 book Here Be Dragons by describing my view on the desirability of preventing AI-caused human extinction as “strange and misguided”, and going on to elaborate at some length on his own successionist standpoint; his article was appropriately headlined This is why the universe becomes a better place when man is gone.4
I could continue the list,5 but even a much longer list would say little about how widespread successionist ideas are among AI thinkers. To take a small step in the direction of actual scientific data, let me quote the following informal observation by Andrew Critch:
From my recollection, >5% of AI professionals I’ve talked to about extinction risk have argued human extinction from AI is morally okay, and another ~5% argued it would be a good thing. I’ve listed some of their views below. […]
These views are collected from the thousands of in-person conversations I’ve had about extinction risk, with hundreds of AI engineers, scientists, and professors, mostly spanning 2015-2023. Each is labeled with my rough recollection (r) of how common it seems to be, which add up to more than 10% because they overlap:
a) (r≈10%) AI will be morally superior to humans, either automatically due to its intelligence, or by design because we made it that way. The universe will be a better place if we let it replace us entirely.
b) (r≈5%) AIs will be humanity’s “children” and it’s morally good to be surpassed and displaced by one’s children.
c) (r≈5%) Evolution is inevitable and should be embraced. AI will be more fit for survival than humans, so we should embrace that and just go extinct like almost all species eventually do.
d) (r≈3%) AI can be designed to survive without suffering, so it should replace all life in order to end suffering.
e) (r≈2%) The world is very unfair, but if everyone dies from AI it will be more fair, which overall makes me feel okay with it. Dying together is less upsetting than dying alone.
I will come back to these five categories of arguments for AI successionism, but first I want to say something more general about the urgency of addressing successionism, and about how to go about such discussions.
How should we engage with the successionists?
If Critch’s experience is representative, something like 10% of AI professionals are AI successionists, or at least have a propensity for expressing successionist views. That’s a shockingly high number, to those of us who are not successionists ourselves.6 It might be a good thing for the prospects of humanity if we can talk these people out of their pro-extinctionist ideas. My current position is that we are in such dire time trouble over solving AI alignment in time for superintelligence that surviving as a species may require pulling the emergency brake on the AI race. From that vantage point, talking (non-successionist) sense into them is desirable for the simple reason that as long as they stick to their guns, they are an obstacle and an opposing force to the consensus formation and political mobilization that may be needed to pull that brake.7
There is also another, less obvious and less familiar, reason why talking sense into them might be a good idea. If an AI takeover begins before robotics has matured to the stage of enabling fully autonomous robots roaming the Earth, then at least in the early stage of the takeover the AI is likely to need the help of individual humans to do its bidding in the physical world. This tends to be an element of concrete AI takeover scenarios such as those outlined in the AI 2027 report and the recent book by Yudkowsky and Soares. Recruiting such human help requires powers of persuasion, which is part of why the AI evals frameworks that most of the leading AI companies employ prior to deployment of their models have tended to include persuasion among their classes of potentially dangerous capabilities.8 With high enough such capabilities, all of us are at risk of being persuaded, but susceptibility to persuasion by a rogue AI will obviously not be entirely homogeneous across the human population. Some of us are likely to be more susceptible to manipulation than others, and those who are can arguably be seen as disproportionately dangerous to the continued hegemony of Homo sapiens. Many psychological factors may contribute to such susceptibility, but an obvious one is an inclination towards successionist ideas: if someone already thinks it would be a good idea for humanity to be replaced by advanced AI, then it is probably easier to talk them into doing things that help bring about such an outcome. And so, the more people who have such an inclination, the greater the danger to the future flourishing of humanity.9
Assuming that all this is enough to get the reader on board with the desirability of convincing successionists to adopt a more humanity-friendly (or, shall we say, “speciesist”) attitude towards a potential AI takeover, how shall we go about such discussions? Here Critch has some advice. If someone feels “outrage” about the successionist position, he can respect that, but he asks us in such a case to hold back those feelings and to approach the successionists with “empathy and respect rather than derision”.10 This is not just to avoid contributing to an emotionally toxic atmosphere and a turmoil that might get us all killed, but also because a more civilized and low-key discussion may allow for finding common ground and compromise solutions “that are positive from many of the viewpoints and feelings hiding behind these pro-extinctionist views, without accepting extinction”.
All of which makes sense to me, but I would add one more reason (one that I’ve suggested elsewhere) for treading carefully in discussions with successionists. Namely, I think that in many cases their views have not been very carefully thought through, and they may be based on outdated ideas about long timelines until AGI — many decades or even centuries. Such long timelines encourage an abstract mode of thinking in which one feels the urge to take the broadest possible non-anthropocentric perspective, while at the same time not feeling the horrifying force of concrete scenarios where every single human dies. So chances are good that as it becomes clear to them that an AI apocalypse within 5 or 10 years from now is an entirely realistic prospect, they will see things in a different light and perhaps envision how AI brutally kills them and all of their loved ones, something that may well cause them to reconsider and adopt a more pro-human attitude. But if their opinions have already been firmly cemented through defense mechanisms kicking in due to our overly aggressive anti-successionist rhetoric, then perhaps it will be too late for such a change of heart. So rather than expressing indignation at these thinkers’ pro-extinctionist attitude, a better approach may be to gently steer the discussion towards the possibility of very short timelines to AGI and superintelligence, and the open question of what this might mean for how we think about AI-caused human extinction.
What are their reasons?
Central to any discussion of AI successionism should be the assumptions and considerations that lead to such a view. If we understand those, then not only are we better equipped to argue for a pro-human stance, but we might even find some valuable grain of truth or other merit in the successionist position. The ideal here is of course to let the successionists speak for themselves, but practice so far has shown that they rarely expand much on this beyond stating one or more of the five arguments listed by Critch in the quote I gave above.
In the absence of such deeper discussion, we may resort to speculation about their reasons. This borders on psychologizing, which is not always the wrong thing to do, but it needs to be done with care, so as not to fall into the trap of thinking that finding psychological reasons for one’s opponent’s ideas automatically obviates the need to address their actual arguments. As long as that is kept in mind, I strongly recommend Jan Kulveit’s essay The memetics of AI successionism, and in particular his emphasis on various tensions and cognitive dissonances that may lie at the root of successionist ideas, which may in turn serve to reconcile those very tensions. For someone working on advancing AI capabilities, one such tension is how concerns about AI existential risk clash with their desire, shared by almost everyone, to be “the hero of their own story”. A related tension occurs when the AI researcher considers how they are on a path towards making themselves obsolete, raising painful questions about self-worth and the meaning of life. Entangled with these are further tensions related to the desire to be on the right side of history, and to how the “progress is good” heuristic (which, to be sure, has been overwhelmingly vindicated by how, over the last couple of hundred years, advances in science and technology have boosted human prosperity) may suddenly be invalidated by an AI apocalypse.11
Let me now return to Andrew Critch’s categorization of five common kinds of argument for AI successionism. He emphasizes that he “firmly disagree[s] with all five of these views” (as do I!), yet he goes on to say that “behind each of [these] views there is a morally defensible core of values”. To evaluate that latter claim, it is worth having a look at each of the views in turn, but I’ll scramble the order a bit compared to Critch’s list.
First, consider view (a) that “AI will be morally superior to humans, either automatically due to its intelligence, or by design” and that “the universe will be a better place if we let it replace us entirely”. This is the most common view according to Critch’s informal study, and the one advocated in the aforementioned article by Tännsjö. It is strongly rooted in utilitarian thinking, but ignores any deontological side-constraints concerning, e.g., our right not to be murdered by robots. In Tännsjö’s version of (a), the moral realism that he adheres to ensures that the AIs’ moral superiority will be an automatic consequence of their superintelligence. I believe it is a mistake (even in the unexpected case where moral realism turns out to be true) to take such an outcome for granted, as I have argued elsewhere, including in my paper on the Müller-Cannon instrumental vs. general intelligence distinction.
View (d) that “AI can be designed to survive without suffering, so it should replace all life in order to end suffering” stands in the same relation to negative utilitarianism as (a) does to utilitarianism. Negative utilitarianism is the variant of hedonic utilitarianism that disregards everything on the positive side of the hedonic scale and cares only about minimizing suffering; it has some abhorrent consequences12 and has fewer adherents among professional philosophers than more standard brands of utilitarianism, yet I do think it counts as a legitimate moral philosophy. The more it can be argued that life consists overwhelmingly of suffering (as philosopher David Benatar does, and as Buddhists tend to do; see also the Dawkins quotes below, which can be taken as arguments in the same direction), the more tenable negative utilitarianism becomes. But one may wonder why a negative utilitarian would want the AI to continue existing at all, other than maybe to endlessly patrol the world and make sure no suffering-capable life comes into existence.
Next, regarding view (b) that “AIs will be humanity’s ‘children’ and it’s morally good to be surpassed and displaced by one’s children”, the second part is of course sufficiently widely held to deserve some respect, but the first part is just an analogy that we may or may not accept as sufficiently spot-on to be a good argument for AI successionism. Robin Hanson is fond of defending it, but I am not convinced. At the very least, we need some further property of AI beyond “we created it” for the argument to fly, because otherwise the same argument can equally well be used to defend, say, turd successionism.
This leaves the two views (c) on evolution and (e) on fairness. I will devote by far the most space to (c), but let me first quickly dismiss the view (e) that “the world is very unfair, but if everyone dies from AI it will be more fair”. That is such an utterly perverted application of the fairness ideal that all my attempts at looking at it forgivingly make my mind go blank, except for the brilliant gallows humor of Tom Lehrer’s We will all go together when we go. This is the one case where, in my opinion, Critch’s claim that all five views have “a morally defensible core of values” goes a bit overboard in its attempt at diplomacy.
Finally, there is view (c) about how “evolution is inevitable” and how “AI will be more fit for survival than humans, so we should embrace that and just go extinct like almost all species eventually do”. Here I am sympathetic to the implied broader notion of evolution that allows for it to make the jump from DNA strings carried by meatsacks to a silicon-based digital existence. Nevertheless, on the face of it, I find view (c) borderline incoherent. It is obviously an appeal to nature, but note that as far as we know, all species before us have fought tooth and claw for their existence,13 and none of them has deliberately stood back and said “for the good of evolution as a whole, we willfully surrender to those superior organisms over there”. So why in the world would it be more natural and more in the true spirit of evolution if this time we did just that, rather than exploiting for our own survival the unprecedented strategic capabilities that evolution has equipped us with?
But there is plenty more to say about (c). Philosopher Joe Carlsmith, in a recent episode of Faggella’s Our Worthy Successors podcast series, offers the following interpretation of much of evolution-based successionist thinking:
Here’s an observation that people make, that often, at least provisionally, increases their loyalty towards evolution as a normative standard. They say something like “It seems like evolution led to all this stuff I like. It seems like there’s this process that is selecting essentially for the most competitive or powerful […] and look, here I am, here’s consciousness, here’s beauty, all this stuff I like, so isn’t that enough to then have allegiance to wherever this process goes in the future?” […] I think the answer is importantly no.
I think he is right that this is roughly how many successionists think about evolution, and I very much agree with his final “no”. Defending this “no” is, however, quite involved. It can be done from many angles, and Carlsmith has touched upon a few of them. In his essay Deep atheism and AI risk, he asks us to be realistic about what evolution has brought about, and invokes a famous quote from Richard Dawkins’ 1995 book River Out of Eden:
The total amount of suffering per year in the natural world is beyond all decent contemplation. During the minute it takes me to compose this sentence, thousands of animals are being eaten alive; others are running for their lives, whimpering with fear; others are being slowly devoured from within by rasping parasites; thousands of all kinds are dying of starvation, thirst and disease.
Consider the kind of faith held by Carlsmith’s hypothetical evolution loyalist in a process that causes such an insane amount of suffering in its creation — surely such faith is misplaced?
The optimistic evolution loyalist might respond that while there is still a large amount of suffering in the world today, most of it is actually in the past, and with the emergence of Homo sapiens we have seen a shift in proportions towards more joy and various other desirable goods, and less suffering. So there is obviously a trend for the better, and it is the continued unfolding of this trend that we can safely sit back and enjoy!14
Such optimism seems to me incautious and misplaced. Consider how the Dawkins quote continues:
…dying of starvation, thirst, and disease. It must be so. If there ever is a time of plenty, this very fact will automatically lead to an increase in the population until the natural state of starvation and misery is restored. In a universe of electrons and selfish genes, blind physical forces and genetic replication, some people are going to get hurt, other people are going to get lucky, and you won’t find any rhyme or reason in it, nor any justice. The universe that we observe has precisely the properties we should expect if there is, at bottom, no design, no purpose, no evil, no good, nothing but pitiless indifference.
Here Dawkins anticipates Robin Hanson’s later idea that we live in an exceptional “dreamtime” caused by an anomalous but ultimately unsustainable decoupling between economic growth and population growth, and that evolutionary selection forces are bound to eventually throw us back into a Malthusian trap where our consumption is forced down to the level of subsistence. Scott Alexander elaborates on such outcomes in his classic Meditations on Moloch, and Carlsmith adds to the discussion in his recent talk Can goodness compete?, addressing certain aspects of what the world may look like in the long run, after superintelligence has been invented. Exactly what goodness is (love, beauty, positively valenced conscious experience, whatever) is left open, but the finding is that under the plausible assumption that goodness is not identical to power, sheer selection pressures make the long-term future look bleak. From the introductory part of the talk:
The vibe is something like this: Competition. Well, what wins competition? Power. And power is famously, unfortunately, not the same as goodness. So implication, maybe: every opportunity to sacrifice goodness for the sake of power makes you more likely to win. So if the future involves a lot of competition, then goodness loses.
A related theme in the talk concerns the so-called strategy-stealing assumption, which is the idea that agents with good values are at no competitive disadvantage, because they can copy the strategies of their adversaries who compete for the same resources. This symmetry may fail for various reasons, says Carlsmith. In particular, what he calls locusts — agents who simply value maximizing growth and resource consumption above all else — turn out to be especially difficult counterparties for someone who wants to avoid burning resources and instead use them for some specific, more constructive purpose. (In this sense humanists and paperclippers are more akin to each other than to locusts.) Selection pressures may therefore lead to a world dominated by locusts, a troubling convergence scenario somewhat in the same spirit as the one envisioned in the 2020 paper An AGI modifying its utility function in violation of the strong orthogonality thesis that I coauthored with James Miller and Roman Yampolskiy. If these kinds of long-term futures are what evolutionary pressure leads to, then surely evolution is not worthy of our loyalty, and we should instead do what we can to break free of its shackles and try to create a better coordinated world for ourselves.
A moderate successionism?
In a recent episode of the ForeCast podcast, Fin Moorhouse conducts an in-depth interview with Carlsmith on (among other issues) the ideas in his Can goodness compete? talk, and 1:34:07 into the recording Carlsmith offers a nice metaphor. He asks us to imagine a world with three kinds of fish — big, medium, and small — where fish of different sizes interact in two ways: smaller fish create bigger ones, and bigger fish eat smaller ones. Initially there are only small fish, and the first thing that happens is that the small fish create medium fish, only to promptly be eaten by those medium fish. Next:
Now you’re the middle fish. Should you create the big fish? You might be like “there’s this process of creating bigger fish, that has gone well, it created me”. But actually you shouldn’t, what you should do is be like “this process went this far, it created me, but no, I want to stop it here”. […] Just because you were the output of a process doesn’t mean that this process if continued will lead to good places.
The strength of this fable is not that it conveys any deep insight — there really isn’t any great depth to it — but rather that it helps bring to the forefront a crucial tension. For consider the obvious response from a successionist, who would surely complain that the recommendation to stop is middle-fishocentric, in that it is based solely on what outcome is good for the middle fish, while failing to consider what is best for the world as a whole: maybe a world with big fish is better than one with medium fish? And likewise (continues the successionist) we should not be overly anthropocentric when we consider the possibility of the human race being replaced by superintelligent AIs, because what matters is not just what is good for us humans, but what is good for the universe.
I actually think the successionist has a good point here — the broader perspective does matter. I will agree with them as long as they don’t insist on entirely throwing out the anthropocentric perspective, but merely point out that the point of view of the universe also needs to be considered.15
There seems to be some room for compromise here. As a human I want humanity to persist, but perhaps not at any cost to the universe. If we think sufficiently long-term — a million or a billion years from now, or maybe just a thousand years from now — it seems fine to me to have a world which by then is inhabited not by literal humans but by some sort of descendants of ours, as long as they have plenty of stuff sufficiently similar to the things we value most: love, beauty, knowledge, happiness, and so on. I might even be fine with a world where those particular values have been thrown overboard in favor of other, even better values discovered by our superhumanly intelligent descendants over the millennia. But I have one condition: Homo sapiens must not go extinct so violently or so abruptly that all our cherished life projects go down in flames. The process has to be gradual, nonviolent, harmonious and voluntary. Then it’s fine.
How about that as a reasonable compromise — a moderate successionism?16
Influential in some circles, that is. I don’t rule out that he has had some influence on me, but if so, it must have been indirect, because whenever I’ve tried to read or listen to him, I have recoiled from his arguments and ideas. These include his extreme fear of regulation and of the stagnation he considers its inevitable consequence (a fear seemingly trumping even concerns about existential risk), and his relentless talk about the antichrist, recently going so far as to suggest that Greta Thunberg or Eliezer Yudkowsky could be the literal antichrist. Given this, I find it jaw-dropping that a respected (by me!) intellectual like Tyler Cowen not only speaks of Thiel as “one of the greatest and most important public intellectuals of our entire time” but adds on top of this that “throughout the course of history, he will be recognized as such”.
Of course I am not the first to list revealing quotes from successionists; see, e.g., the section on zealots in The Compendium by Connor Leahy et al., and The endgame of edgelord eschatology by Emile Torres. Note, however, that the second of these references swings wildly in all directions, in a way that sometimes exhibits more heat than precision, such as in its treatment of Sam Altman. While Altman does deserve criticism on various grounds (an opinion returning readers of this blog are likely to know that I hold), pointing (as Torres does) to his choice to sign up for brain preservation in case he dies young, in the hope of eventually uploading his mind to the cloud, is not a valid argument for his being a successionist. This is like (say) pointing to someone’s enthusiasm for playing tennis as evidence that they wish to force everyone else to play tennis, or (to take an example that might be more familiar to Torres) accusing someone of wanting to exterminate men just because they themselves have transitioned from man to some other gender.
This incident had major consequences for subsequent developments in the AI sphere. It created lasting tension and distrust between the two former friends, to the extent that when Google in 2014 bought DeepMind (which at that time was in a unique position as the world’s foremost AI developer), Musk found the increased influence Page thereby acquired so frightening that he felt he needed to do something about it. This is what led him in 2015 to co-found OpenAI with Sam Altman and others, thereby kicking off the ongoing AI race.
My translation from the Swedish original.
Among thinkers with at least some successionist tendencies, one might even include Nick Bostrom, who with his seminal 2014 book Superintelligence has done more than almost anyone else to warn about the existential danger that AI may pose to us. Consider his oft-quoted definition of existential risk in his 2013 paper Existential risk prevention as global priority:
An existential risk is one that threatens the premature extinction of Earth-originating intelligent life or the permanent and drastic destruction of its potential for desirable future development.
The reason he speaks of “Earth-originating intelligent life” here, rather than simply “humanity”, is that he is able to envision scenarios where humanity has been replaced by successors so worthy that the scenarios should count as sufficiently benign to not warrant the label existential catastrophe.
Or am I in fact a successionist myself — an even more shocking suggestion! I very much do not identify as one, but I’ll get back to the question with some more nuance towards the end of this essay.
Not the only obstacle, however: the two other major (and highly overlapping) groups of adversaries that similarly stand in the way of the desired mobilization are those who think AI is just another normal technology which cannot plausibly develop into AGI or anything similarly threatening, and those who dismiss the idea that such an entity might want to kill us.
See, e.g., OpenAI’s Preparedness Framework from December 2023, and the video discussion of it that I did soon thereafter.
Someone whose liberal stance is less resolute than mine might be tempted to suggest that if worse comes to worst and we find ourselves having evidence of a rogue AI operating covertly to gain power, then in order to protect the human race we should preemptively lock up successionists and cut them off from Internet access. History offers plenty of examples where in a state of war similar actions have been taken against suspected or potential traitors. But no, we just cannot do that! Throwing people in jail based solely on their philosophical views strikes me as very much the kind of thing that is prohibited and wrong on deontological grounds, pretty much regardless of what a consequentialist calculation happens to recommend. If we are to draw any conclusion at all from this thought experiment, it would be that we should never ever risk releasing a misaligned AI with enough intelligence and agency to put us in such a dilemma.
I hope Critch will think this blog post meets his standards in terms of “empathy and respect”, but this remains to be seen.
The e/acc movement and Marc Andreessen’s Techno-Optimist Manifesto are expressions of denial of this tension and of the possibility that technological progress and what is fundamentally good may come apart.
As shown in Toby Ord’s Why I’m not a negative utilitarian, although a negative utilitarian might reply that the same goes for virtually every ethical system that has been proposed; see also Simon Knutsson’s more detailed reply to Ord.
What about the dodo, you say. To which I reply, come on, you know what I mean.
If I had written this blog post more in the style of a Socratic dialogue, I might have let the fictitious Simplicio go on to point to the books The Better Angels of Our Nature and Enlightenment Now by Steven Pinker for evidence of how much better the world is getting, upon which I could have triumphantly explained to Simplicio that the way the world gets better according to Pinker is not from a hands-off policy where we humans stand back and watch evolution (or whatever the exterior force would be) improve everything around us, but from our agency based in our enlightenment and ever-increasing ability to reason rationally.
Carlsmith is clearly with me here; see, e.g., his essay series Otherness and control in the age of AGI which largely revolves around the need to balance the point of view of humanity with a more impartial point of view.
Readers familiar with my earlier writings will — notwithstanding my overly theatrical Footnote 6 above — not be shocked by this suggestion. For instance, Chapter 3 in Here Be Dragons is devoted to issues around transhumanism and human enhancement, and for the most part I come out moderately in favor of such technologies.

