Contra DeBoer On Temporal Copernicanism

Freddie deBoer has a post on what he calls “the temporal Copernican principle.” He argues we shouldn’t expect a singularity, apocalypse, or any other crazy event in our lifetimes. Discussing celebrity transhumanist Yuval Harari, he writes:
What I want to say to people like Yuval Harari is this. The modern human species is about 250,000 years old, give or take 50,000 years depending on who you ask. Let’s hope that it keeps going for awhile - we’ll be conservative and say 50,000 more years of human life. So let’s just throw out 300,000 years as the span of human existence, even though it could easily be 500,000 or a million or more. Harari's lifespan, if he's lucky, will probably top out at about 100 years. So: what are the odds that Harari’s lifespan overlaps with the most important period in human history, as he believes, given those numbers? That it overlaps with a particularly important period of human history at all? Even if we take the conservative estimate for the length of human existence of 300,000 years, that means Harari’s likely lifespan is only about .33% of the entirety of human existence. Isn’t assuming that this .33% is somehow particularly special a very bad assumption, just from the basis of probability? And shouldn’t we be even more skeptical given that our basic psychology gives us every reason to overestimate the importance of our own time?
(I think there might be a math error here - 100 years out of 300,000 is 0.033%, not 0.33% - but this isn’t my main objection.)
He then condemns a wide range of people, including me, for failing to understand this:
Some people who routinely violate the Temporal Copernican Principle include Harari, Eliezer Yudkowsky, Sam Altman, Francis Fukuyama, Elon Musk, Clay Shirky, Tyler Cowen, Matt Yglesias, Tom Friedman, Scott Alexander, every tech company CEO, Ray Kurzweil, Robin Hanson, and many many more. I think they should ask themselves how much of their understanding of the future ultimately stems from a deep-seated need to believe that their times are important because they think they themselves are important, or want to be.
I deny misunderstanding this. Freddie is wrong.
Since we don’t know when a future apocalypse might happen, we can sanity-check ourselves by looking at past apocalyptic near-misses. The closest that humanity has come to self-annihilation in the past 300,000 years was probably the Petrov nuclear incident in 1983[1], i.e. within Freddie’s lifetime. Pretty weird that out of 300,000 years, this would be only 41 years ago!
Maybe you’re more worried about environmental devastation than nuclear war? The biggest climate shock of the past 300,000 years was . . . also during Freddie’s lifetime[2]. Man, these one-in-three-thousand coincidences keep adding up!
“Temporal Copernicanism”, as described, fails basic sanity checks. But we shouldn’t have even needed sanity checks as specific as these. Common sense already tells us that new apocalyptic weapons and environmental disasters were more likely to arise during the 20th century than, say, the century between 184,500 BC and 184,400 BC!
What’s Freddie doing wrong, and how can we do better? The following argument is loosely based on one by Toby Ord. Consider three types of events:
First, those uniformly distributed across calendar time. For example, asteroid strikes are like this. Here Freddie is completely right: if there are 300,000 years of human history, and you live 100 years, there’s a 0.03% chance that the biggest asteroid in human history strikes during your lifetime. Because of this, most people who think about existential risk don’t take asteroid strikes too seriously as a potential cause of near-term apocalypses.
Second, those uniformly distributed across humans. This is what you might use to solve Sam Bankman-Fried’s[3] Shakespeare problem - what’s the chance that the greatest playwright in human history is alive during a given period? Freddie sort of gets this far[4], and provides a number: 7% of humans who ever lived are alive today[5].
Third, those uniformly distributed across techno-economic advances. You’d use this to answer questions like “how likely is it that the most important discovery/invention in history thus far happens during my lifetime?” This seems like the right way to predict things like nuclear weapons, global warming, or the singularity. But it’s harder to measure than the previous two.
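The first two types lend themselves to one-line calculations. Here’s a minimal sketch in Python, using the round numbers above plus the standard demographic estimate of roughly 117 billion humans ever born (a figure not in Freddie’s post):

```python
# Type 1: events uniform across calendar time (e.g. asteroid strikes)
human_history_years = 300_000
lifespan_years = 100
print(f"P(biggest strike in your lifetime) = {lifespan_years / human_history_years:.2%}")
# -> 0.03%

# Type 2: events uniform across humans (e.g. the greatest playwright)
humans_ever_born = 117e9   # standard demographic estimate, not from the post
alive_today = 8e9
print(f"Share of all humans alive today = {alive_today / humans_ever_born:.0%}")
# -> 7%
```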
You could try using GDP growth. At the beginning of Freddie’s life, world GDP (measured in real dollars) was about $40 trillion per year. Now it’s about $120 trillion. So on this metric, about 66% of absolute techno-economic progress has happened during Freddie’s lifetime. But we might be more interested in relative techno-economic progress. That is, the Agricultural Revolution might have increased yields from 10 bushels to 100 bushels of corn. And some new tractor design invented yesterday might increase it from 10,000 bushels to 10,100 bushels. But that doesn’t mean the new tractor design was more important than the Agricultural Revolution. Here I think the right measure is log GDP growth; by this metric, about 20% of techno-economic progress has happened during Freddie’s lifetime.
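To make the absolute-versus-log distinction concrete, here’s a sketch of the arithmetic. The $40T and $120T figures are from above, but the post never states a baseline GDP for the start of human history, so the starting values below are labeled guesses, and the log share moves with them (a baseline around $500B reproduces the ~20% figure):

```python
import math

gdp_at_birth = 40.0   # world GDP in $T at the start of Freddie's life (from above)
gdp_now = 120.0       # world GDP in $T today (from above)

# Absolute share of all GDP-measured progress in one lifetime (~2/3)
print(f"absolute share = {(gdp_now - gdp_at_birth) / gdp_now:.0%}")

# The log share depends on an assumed GDP at the dawn of humanity --
# the post doesn't give one, so these baselines are illustrative guesses
for baseline in (0.01, 0.1, 0.5):   # $10B, $100B, $500B
    share = math.log(gdp_now / gdp_at_birth) / math.log(gdp_now / baseline)
    print(f"baseline ${baseline}T -> log share = {share:.0%}")
```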
Freddie sort of starts thinking in this direction[6], but shuts it down on the grounds that some people think technological growth rates have slowed down since the mid-20th century. Usually the metric that gets brought out to support this is changes in total factor productivity, which do show the mid-20th century as a more dynamic period than today. So fine, let’s do the same calculation with total factor productivity. My impression from eyeballing this paper is that about 35% of all absolute TFP growth and 15% of all log TFP growth has still happened during Freddie’s lifetime.
So what’s our prior that the most exciting single technological advance in history thus far happens during Freddie’s lifetime? My best guess is 15%[7].
How do we move from “most exciting advance in history” to questions about the singularity or the apocalypse?
Robin Hanson cashes out “the singularity” as an economic phase change of equal magnitude to the Agricultural or Industrial Revolutions. If we stick to that definition, we can do a little better at predicting it: it’s a change of a size that has happened twice before. Using our previous number - if about 15% of all log techno-economic progress happens during one lifetime, and changes this big have arrived twice across all of history - we estimate a ~30% chance that such a change happens in our lifetime.
(sanity check: the last such earth-shattering change was the Industrial Revolution, about 3 - 4 lifetimes ago.)
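Spelling out that arithmetic (a sketch, under the assumption that phase changes arrive in proportion to log techno-economic progress):

```python
import math

lifetime_share = 0.15      # share of all log techno-economic progress in one lifetime (from above)
phase_changes_so_far = 2   # the Agricultural and Industrial Revolutions

expected = phase_changes_so_far * lifetime_share   # 0.3 expected per lifetime
# The linear reading gives ~30%; modeling rare arrivals as Poisson gives
# a slightly lower chance of seeing at least one
print(f"P(at least one) = {1 - math.exp(-expected):.0%}")   # ~26%
```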
What about the apocalypse? This one is tougher. Freddie tries an argument from absurdity: suppose the apocalypse happened tomorrow. Wouldn’t it be crazy that you, of all the humans who have ever existed, were correct when you thought the apocalypse was nigh? No, it’s not crazy at all. If the apocalypse happens tomorrow, then 7% of humans throughout history would have been right to predict an apocalypse in their lifetime. That’s not such a low percent - your probability of being born in the final generation is about the same as (eg) your probability of being born in North America.
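That last comparison is easy to check with population shares (a rough proxy for birth probabilities; the ~600 million North America figure is my round number, not the post’s):

```python
humans_ever_born = 117e9   # standard demographic estimate, roughly
alive_today = 8e9
north_america = 0.6e9      # rough guess, including Central America and the Caribbean

print(f"final generation: {alive_today / humans_ever_born:.1%}")   # ~6.8%
print(f"North America:    {north_america / alive_today:.1%}")      # ~7.5%
```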
Here’s a question I don’t know how to answer - the number above (7%) is about how surprised you should be if the apocalypse happens in your lifetime. But I don’t think it’s the overall chance that the apocalypse happens in your lifetime, because the apocalypse could be millions of years away, after there had been trillions of humans, and then retroactively it would seem much less likely that the apocalypse happened during the 21st century. So: is it possible to calculate this chance? I think there ought to be a way to leverage the Carter Doomsday Argument here, but I’m not quite sure of the details.
Speaking of the Carter Doomsday Argument…
…Freddie is re-inventing anthropic reasoning, a well-known philosophical concept. The reason the hundreds of academics who have written books and papers about anthropics have never noticed that it disproves transhumanism and the singularity is that Freddie’s version has obvious mistakes that a sophomore philosophy student would know better than to make.
(local Substacker Bentham’s Bulldog is a sophomore philosophy student, and his anthropics mistakes are much more interesting.)
The world’s leading expert on anthropic reasoning is probably Oxford philosophy professor Nick Bostrom, who literally wrote the book on the subject. Awkwardly for Freddie, Bostrom is also one of the founders of the modern singularity movement. This is because, understood correctly, anthropics provides no argument against a singularity or any other transhumanist idea, and might (weakly) support them.
I think if you use anthropic reasoning correctly, you end up with a prior probability of something like 30% that the singularity (defined as a technological revolution as momentous as agriculture or industry) happens[8] during your lifetime, and a smaller percent that I’m not sure about (maybe 7%[9]?) that the apocalypse happens during your lifetime. None of these probabilities are lower than the probability that you’re born in North America, so people should stop acting like they’re so small as to be absurd or impossible.
But also, prior probabilities are easy-come, easy-go. The prior probability that you’re born in Los Angeles is only 0.05%. But if you look out your maternity ward window and see the Hollywood sign, ditch that number immediately and update to near certainty. No part of anthropics should be able to prevent you from updating on your observations about the world around you, and on your common sense.
(except maybe the part about how you’re in a simulation, or the part about how there’s definitely a God who created an infinite number of universes, or how there must be thousands of US states, or how the world must end before 10,000 AD, or how the Biblical Adam could use his reproductive decisions as a shortcut to supercomputation, or several other things along these same lines. I actually hate anthropic reasoning. I just think that if you’re going to do it, you should do it right.)
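As an arithmetic aside, the 0.05% Los Angeles figure above is just a population share (the ~3.9 million city population is my round number, not the post’s):

```python
la_population = 3.9e6     # city of Los Angeles, rough current figure
world_population = 8e9
print(f"{la_population / world_population:.2%}")   # ~0.05%
```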
[1] The Toba supervolcano is overrated. You could argue the Cuban Missile Crisis was worse than Petrov, but that just brings us back 60 years instead of 40, which I think still proves my point.
[2] Something called “the Eemian” 130,000 years ago was larger in magnitude, but happened gradually over several thousand years. Maybe I’m cheating by failing to rigorously define “biggest climate shock”, but I think we’re definitely in at least the biggest of the past few millennia.
[3] Better known for other work.
[4] If he got this far halfway down, why did he even present the obviously-wrong 0.03% number as his headline result? Was he hoping we wouldn’t read the rest of his post?
[5] This is slightly wrong for the exact framing of the question; your life is a span rather than a point, so probably by the time you die, about 10% of humans will have been alive during your lifespan. The exact way you think about this depends on how old you are, and I’ll stick with the 7% number for the rest of the essay.
[6] Again, I don’t understand why he bothered giving the earlier obviously-wrong-for-this-problem numbers, vaguely half-alluded to the existence of this one in order to complain that someone could miscalculate it, and then put no effort into calculating it correctly or at least admitting that he couldn’t calculate the number that mattered.
[7] Some of these numbers depend on how you’re thinking of “lifespan” vs. “lifespan so far” and how much of your actually-existing foreknowledge about the part of your life you’ve already lived you’re using. I’m just going to handwave all of that away since it depends on how you’re framing the question and doesn’t change results by more than a factor of two or three.
[8] Realistically the Agricultural and Industrial Revolutions were long processes instead of point events. I think the singularity will be shorter (just as the Industrial Revolution was shorter than the Agricultural), but if this bothers you, imagine we’re talking about the start (or peak) of each.
[9] It might be unfair for me to use this number as a central estimate instead of a lower bound, except that when I actually try to do the Carter Doomsday calculation I sometimes get higher estimates. I haven’t discussed these in the post because I’m very unsure I’m doing the calculation correctly.