436 Comments
Victor Chang:

Yeah I like some of FDB's stuff but this was particularly off the mark. I immediately wanted to correct him in the comments but annoyingly his comment section is limited to paid subscribers.

Dylan:

To be fair to Freddie, he writes exactly the sort of thing that makes public comments go to shit. The community here is mostly rationalists; the community there is a lot of right-wingers enjoying Freddie "goring the right ox" (as he puts it), with Marxists and people interested in the same subject matter a seeming minority. Even the paid comments can be pretty bad.

Victor Chang:

Yeah I feel like at some level FDB's super inflammatory rhetoric (while fun to read) naturally draws in more toxic commenters. This community is mostly rationalists but Scott also refrains from needless name-calling and assumes best intentions.

Catmint:

Except this post. If it does meet that bar, it's only by a technicality. But usually he's quite good at that.

Migratory:

Freddie has been putting out bad takes on AI for a long time now. He's so confidently wrong that it's grating. I lost patience with him a year ago, I'm not surprised Scott eventually snapped and wrote an aggressive article. We're only human.

Shockwell:

He hates his comments section, which I understand - it's full of obsessive unsophisticated antiwoke types who are not necessarily representative of his overall readership but (as always) dominate the conversation because they are obsessed.

At the same time... look, I'm a pretty strong believer that bloggers tend to get the commentariat they deserve. I don't mean that as a moral judgment - I really like Freddie and his writing, and I am a paid subscriber myself, but he is to a significant extent a "takedown" blogger, with a tone and approach that tends toward the hyperbolic and is sometimes more style than substance, insofar as he's not so much saying anything new as he is saying it very well. That is not a bad thing - some things deserve to be taken down, and Freddie does it better than anyone - but it is inevitably going to attract a type of reader that is less interested in ideas than in seeing their own preconceptions validated ("goring the right ox"). There's a reason the one-note antiwoke crowd hangs out at FdB rather than (say) ACX, even though this community is more sympathetic to a lot of the antiwoke talking points (eg trans-skepticism) than Freddie himself.

RenOS:

The paid comments are also "shit", as you put it. In general, I'm not convinced yet that restricting comments to paid improves the discourse beyond turning it more into an echo chamber; Nate Silver is an even better example where the articles are not even that inflammatory but the (paid) comment section is almost unreadable.

FionnM:

Half the time he closes the comments to paid subscribers as well when he doesn't get the reaction he wanted.

Forrest:

Based.

The Unimpressive Malcontent:

I've seen him outright ban people for criticizing his articles in ways that were far more civil and respectful than what was within the articles themselves. Weird to make your career on antagonistic writing while not permitting antagonism from others.

Baron Aardvark:

You’re better off. Freddie is constitutionally unable to handle reasoned disagreement. It always devolves into some version of an aggrieved and misunderstanding Freddie shouting, over and over, “you didn’t understand what I wrote you idiot.”

Michael Kelly:

More annoying than Freddie's comment section being limited to paid subscribers (which I get): he turns comments off when he finds them questioning trans issues, or when he suspects the thread will turn contra trans. All this despite declaring himself a free speech zealot ... but he's also a Marxist free speech zealot. Censorship is most likely the defining term for a Marxist free speech zealot.

darwin:

That's the game, restrict comments section to paid users, occasionally make glaring errors that everyone feels a need to correct you on in the comments.

Moon Moth:

If I only had a nickel for every time someone was upset that I was wrong on the Internet...

Eremolalos:

I feel bad for DeBoer because he’s bipolar, and had a manic episode a while back where he spewed a bunch of indefensible stuff. And he’s hypersensitive. Now Scott’s rebutted his argument, which is fair enough, and a bunch of ACX readers are criticizing him in a smirky kind of way, which sux. Jeez, let him be. He has his own kind of smarts.

Moon Moth:

Yeah. I don't like the pile-ons. I think Scott has the right of this argument, but abstractly, my main goal besides identifying the truth would be to convince Freddie of it. And that's best done by someone he knows and respects, like perhaps Scott. And probably best done without pile-ons or derogatory comments or anything that would get Freddie's hackles up, because that's just going to make it less likely that he'll change his mind.

I suppose "playing for an audience" and "increasing personal status" and "building political coalitions" are goals lots of people have, which might be served by mocking someone. I don't like it when I have those goals, and I don't like it when they shape my actions, but de-prioritizing those things is probably the cause of a number of misfortunes in my life. :-/ Mostly, it just seems cruel to sacrifice a random person due to a quirk of fate.

Eremolalos:

> de-prioritizing those things is probably the cause of a number of misfortunes in my life. :-/

Yeah, I understand about that. I seem to be wired to be unable to stick with playing for an audience, etc. I could make a case that's because I've got lotsa integrity etc etc., but I really don't think that's what it is. I am just not wired to be affiliative. I can be present in groups and participate in group activities and like many of the individuals, but I never have the feeling that these are my people, we're puppies from the same litter etc.

It's definitely cost me a lot. One cost, one I really don't mind that much, is money. If you're a psychologist who got Ivy League training and did a postdoc at a famous hospital that treats the rich and famous, you're in a great position to network your way into a private practice where you charge $300/hour, and see people to whom that sum is not a big deal. But I didn't do it. One reason I didn't is that it seems -- I don't know, kinda piggy -- to get good training then only give the benefits of it to Ivy Leaguers. But also, and this accounts for it more, I simply could not stand to do the required networking. Everybody knew me from work, but to really network I needed to like, accept Carol's suggestion to join her high end health club with flowers in the dressing room, and then have a little something with Carol in the club bar afterwards, and sometimes she'd bring along somebody she'd met on the stairmasters, and I realized I was supposed to bring along somebody I'd met now and then, and I just couldn't stand it. I hate talking to people while I'm working out. I like to close my eyes, put on headphones and try to get some self-hypnosis going where I have the feeling that the music is the power that's moving my legs, then sprint til I'm a sweaty red-faced mess.

Has it caused you misfortunes beyond the one big disaster?

Greg G:

He also tends to yell at people who disagree with him in the comments.

Jay:

I used to subscribe to Freddie. Don't really recommend it.

Sergei:

Speaking of anthropics and Doomsday, Sean Carroll addressed it, yet again, in his latest AMA:

https://www.preposterousuniverse.com/podcast/2024/09/02/ama-september-2024/

> The doomsday argument for those who want to... Who haven't heard of it, is an argument that doomsday for the human race is not that far off in the future, in some way of measuring, based on statistics and the fact that the past of the human race is not that far in the past. It would be unlikely to find ourselves in the first 10 to the minus five of the whole history of the universe, so the whole history of humanity. Or even the 10 to the minus three of the whole history of humanity. So therefore, probably the whole history of humanity is not stretched into the future very far. That's the doomsday argument. So you say the doomsday argument fails because you are not typical, but consider the chronological list of the n humans who will ever live. Almost all the humans have fractional position encoded by an algorithm of size log₂(n) bits.

> This implies their fractional position has a uniform probability density function on the interval zero to one, so the doomsday argument proceeds. Surely it is likely that you are one of those humans. No, I can't agree with any of this, really, to be honest. Sure, you can encode the fractional position with a certain string of a certain length n, okay? Great. Sorry, the log₂(n) is the length of the string. Yes, that is true. There's absolutely no justification to go from that to a uniform probability density function. In fact, I am absolutely sure that I am not randomly selected from a uniform probability distribution on the set of all human beings who ever existed, because most of those human beings don't have the first name Sean. There you go, I am atypical in this ensemble. But where did this probability distribution purportedly come from? And why does it get set on human beings.

> Why not living creatures? Or why not people with an IQ above a certain or below a certain threshold? Or why not people in technologically advanced societies? You get wildly different answers. If you depend... If you put different... If you have different... What are they called? Reference classes for the set of people in which you are purportedly typical, multi-celled organisms. So that's why it's pretty easy to see that this kind of argument can't be universally correct, because it's just no good way to decide the reference class. People try, Nick Bostrom, former Mindscape guest, has put a lot of work into this, wrote a book on it, and we talked about it in our conversation. But I find all the efforts to put that distribution on completely unsatisfying. The one possible counter example would be possible counter example, would be if we were somehow in equilibrium.

> If somehow there was some feature of humanity where every generation was more or less indistinguishable from the previous generation, then within that equilibrium era, if there was a finite number of people, you might have some justification for choosing that as your reference class. But we are clearly not in equilibrium, things are changing around us very, very rapidly. So no era in modern human history is the same as the next era, no generation is the same, there's no reason to treat them similarly in some typicality calculation.

Scott Alexander:

I actually think you can do Carter Doomsday with anything you want - humans, living creatures, people with IQ above some threshold that your IQ is also over, even something silly like Californians. You'll get somewhat different answers, but only in the same sense that if you tried to estimate the average temperature of the US by taking one sample measurement in a randomly selected place, you would get different answers depending on which place you took your sample measurement (also, you're asking different questions - when will humanity be destroyed vs. when will California be destroyed).

I think the stronger objection to Carter Doomsday is https://en.wikipedia.org/wiki/Self-indication_assumption_doomsday_argument_rebuttal ; I'm not sure it's true, but it at least makes me confused enough that it turns me away from thinking about the subject at all, which is a sort of victory.

Sergei:

I think Sean's point is that this argument is useless because your reference class is unstable or something. Hence the last paragraph, where he steelmans it.

I do not understand the point of SSA vs SIA, neither seems to have any predictive power.

The Ancient Geek:

Man does not live by prediction alone. SSA and SIA are trying to do something more like abduction.

SnapDragon:

Well, yes, you can do the Carter Doomsday argument with any reference class you want because it's wrong, and bad math can "prove" anything. The fact that it's so flexible, and that it's making predictions of the future that swing by orders of magnitude just based on what today's philosophy counts as consciousness, should warn you off of it!

The SIA at least has the property that you get the correct answer out of it (ie, you can't predict the future based on observing literally nothing). But the real problem is the idea that there's any "sampling" going on at all. Both the SSA and SIA have that bad assumption baked into them. There's one observer to a human, no more and no less. If you believe in solipsism - that you're the one true soul who is somehow picked (timelessly) out of a deterministic universe populated by p-zombies - THEN the Doomsday Argument applies. But I doubt that you do.

Note that the correct boring answer - that your existence doesn't inherently count as evidence for or against a future apocalypse - does NOT depend on fuzzy questions like what reference class you decide you count as. Good math doesn't depend on your frame of reference. :)

Bentham's Bulldog:

I think SIA shows exactly why Doomsday goes wrong, but that it's not hard to see that Doomsday does go wrong. Like, if Doomsday is right, then if the world would continue for a bunch of generations unless you get 10 royal flushes in a row in poker, you should be confident that you'll get 10 royal flushes in a row--not to mention that Adam and Eve stuff https://link.springer.com/article/10.1007/s11229-024-04686-w

Ape in the coat:

> I'm not sure it's true, but it at least makes me confused enough that it turns me away from thinking about the subject at all, which is a sort of victory.

Oh, come on, Scott, you know better than that!

Here is a common sense way to deal with the Doomsday Argument, which, I believe, resolves most of the confusion around the matter, without accepting the bizarreness of either SSA or SIA.

https://www.lesswrong.com/posts/YgSKfAG2iY5Sxw7Xd/doomsday-argument-and-the-false-dilemma-of-anthropic

SnapDragon:

Ape, I was so glad to read your post. For years, I've felt like I'm taking crazy pills whenever I see supposedly (or even actually) smart people discuss the DA. Like you said, "it's not a difficult problem to begin with" - well, ok, maybe it does take more than "a few minutes" to really get a solid grasp on it. But surely the rationality community should have collectively been able to make the correct answer (that the Doomsday Argument packs in a hidden assumption of a sampling process that doesn't exist) into common knowledge...? Correct ideas are supposed to be able to beat out pseudoscience in the marketplace of discourse, right? Sigh.

Ape in the coat:

You are most welcome. It's a rare pleasure to meet another person who reasons sanely about anthropics. And I extremely empathize with all the struggles you had to endure while doing so.

I have a whole series of posts about anthropic reasoning on LessWrong, which culminates in resolution of Sleeping Beauty paradox - feel free to check it out.

> But surely the rationality community should have collectively been able to make the correct answer (that the Doomsday Argument packs in a hidden assumption of a sampling process that doesn't exist) into common knowledge...? Correct ideas are supposed to be able to beat out pseudoscience in the marketplace of discourse, right? Sigh.

I know, right! I originally couldn't understand why otherwise reasonable people are so eager to accept nonsense as soon as they start talking about anthropics. I currently believe that the source of this confusion lies in a huge gap in understanding of the fundamentals of probability theory, namely the concept of a probability experiment, how to assign a sound sample space to a problem, and where the uniform prior even comes from. Sadly, Bayesians can be very susceptible to it, because all the talk about probability experiments has a "frequentist vibe".

Victualis:

Fewer words better. "The mutually exclusive events in the space are just two, whatever names you give them. They have equal probability. Ignore the flim-flam."

Scott Alexander:

This seems so confused that it's hard to even rebut. The fact that you are a specific person doesn't counteract the fact that you are a person.

Here's a cute counterexample: I was born at 11:30 AM on November 7, 1984. Suppose that I wanted to use this to determine the length of various units of time, something which for some reason I didn't know.

Knowing only that I was born 11.5 hours into the day, I should predict that a day is about 23 hours long (hey, that's pretty good!)

Knowing only that I was born 7 days into a month, I should predict that a month is 14 days long (not as good, but order of magnitude right, and probably the 95% error bars include 30).

Knowing only that I was born on the 311th day of the year, I should predict that a year is 622 days long (again, not as good, but right order of magnitude - compare to eg accidentally flipping the day-of-the-month and day-of-the-year and predicting a year is 14 days long!)

Knowing only that I was born on the 4th year of the decade, I should predict that a decade is 8 years long (again, pretty good).

Notice that by doing this I have a pretty low chance of getting anything wrong by many orders of magnitude. For example, to accidentally end up believing that a year lasted 10,000 days, I would have to have been born (by unlucky coincidence) very early in the morning of January 1.

Here I think anthropics just works. And here I think it's really obvious that "but your parents, who birthed you, are specific people" has no bearing whatsoever on any of these calculations. You can just plug in the percent of the way through the interval that you were born. I think this works the same way when you use "the length of time humans exist" as the interval.

(I'm slightly eliding time and population, but if I knew how many babies were born on each day of the 1980s, I could do these same calculations and they would also be right).
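
A minimal Python sketch of the doubling estimator Scott is using here, assuming your position is a single uniform draw over the unknown interval (an illustration, not part of the original comment):

```python
# The "Copernican" estimate: if a single observation is uniform over an
# interval, the median guess for the interval's total length is twice the
# elapsed amount, and a 95% interval runs from elapsed/0.975 to elapsed/0.025.

def copernican_estimate(elapsed):
    median = 2 * elapsed     # equally likely to be before or after the midpoint
    low = elapsed / 0.975    # true only if you landed in the final 2.5%
    high = elapsed / 0.025   # true only if you landed in the first 2.5%
    return median, low, high

for label, elapsed, true_length in [
    ("hours into the day", 11.5, 24),
    ("days into the month", 7, 30),
    ("days into the year", 311, 365),
    ("years into the decade", 4, 10),
]:
    med, low, high = copernican_estimate(elapsed)
    print(f"{label}: median {med:g}, 95% range {low:.1f}-{high:.0f}, true {true_length}")
```

Every median estimate lands within a factor of two or so of the true length, and the month example shows why the 95% error bars still cover 30.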

SnapDragon:

But wait ... "the length of time humans exist" is NOT the measure that Doomsday Argument proponents use! Continuing your analysis, since you're born roughly 20,000 years into homo sapiens' history, you should expect homo sapiens to go for 20,000 years more. Which is much more than an order of magnitude off from what Doomsday Argument proponents predict (because population growth has been exponential, and they use "birth order index" instead of date). You picked a slightly different way to slice the data and got an irreconcilably different result.

This should tell you that something is going badly wrong here, but what? All you've really shown here is that you can sample dates, from a large enough set that, say, "day of year" will be nice and uniformly distributed. Then the Copernican Principle can be used. (It's a mistake to call this "anthropics", though, as what you're measuring is uncorrelated with the fact of your existence. If, say, babies were more likely to be stillborn later in the year, then you could predict ahead of time that your day of birth would probably be on an early date. That's an anthropic argument.)

But how will you sample "birth order index"? Everybody you know will have roughly the same one, regardless of whether there are billions or quadrillions of humans in our future. You're not an eldritch being existing out of time, and can't pick this property uniformly at random.

To be honest, I'm not sure I'm doing a good job of explaining what's going wrong here. I also feel like this discussion is "so confusing it's hard to rebut". The real way to understand why the DA is wrong is just to create a (simple) formal model with a few populated toy universes, and analyze it. Not much English to obfuscate with... just math.

Ape in the coat:

Thanks for the reply! Let's clear the confusion, whether it's mine or yours.

> Here's a cute counterexample: I was born at 11:30 AM on November 7, 1984. Suppose that I wanted to use this to determine the length of various units of time, something which for some reason I didn't know.

First of all, let's be very clear that this isn't our crux. The reason why I claim Doomsday Inference is wrong is because you couldn't have been born in distant past or distant future, according to the rules of the causal process that produced you. Therefore your birth rank is necessarily among the 6th ten-billion group of people, and therefore you do not get to make the update in favor of a short timeline.

Whether you could've been born in any other hour of the day or in any other day of the month or in any other month of the year is, strictly speaking, irrelevant to this core claim. We can imagine a universe where the birth of a child happens truly randomly throughout a year after their parents had sex. In such a universe the Doomsday Inference would still be false.

That said, our universe isn't like that. And your reasoning here doesn't systematically produce correct answers in a way that would allow us to distinguish between things that are and are not randomly sampled, which we can see via a couple of simple sanity checks.

> Knowing only that I was born 11.5 hours into the day, I should predict that a day is about 23 hours long (hey, that's pretty good!)

Sanity check number one. Would your reasoning produce a different result if your birth hour were definitely not random?

Imagine a world where everyone is born 11.5 hours into the day. In such a world you would also make the exact same inference: notice that your predicted value is very close to the correct value and therefore assume that you were born at a random time throughout the day, even though it would be completely wrong. Sanity check one failed.

> Knowing only that I was born 7 days into a month, I should predict that a month is 14 days long (not as good, but order of magnitude right, and probably the 95% error bars include 30).

Sanity check number two. Would some other, clearly wrong, approach fail to produce a similarly good estimate?

I've generated a random english word, using this site: https://randomwordgenerator.com/

The word happened to be "inflation". It has 9 letters. Suppose for some reason I believe that the length of this word is randomly selected from the number of days in a month. That would give me an estimate of 18 days in a month. Even better than yours!

If I believed the same for the number of days in a year I'd be one order of magnitude wrong. On one hand, that's not good, but on the other, it's just one order of magnitude! If I generated a random number from minus infinity to infinity, in all likelihood it would be many more orders of magnitude worse! Sanity check two also failed.

Now, there is, of course, a pretty obvious reason why both your and my methods of estimating all these things seem to work well enough, which has little to do with random sampling or anthropics, but I think we do not need to go on this tangent now.

Jiro:

>Imagine a world where everyone is born 11.5 hours into the day. In such world you would also make the exact same inference

The randomness in this example is no longer the randomness of when in the day you are born, but the randomness of which hypothetical world you picked. You could have picked a hypothetical world where births are fixed to be anywhere from 0 to 24 hours into the day. So it's essentially the same deduction, randomly choosing fixed-birth-time worlds instead of randomly choosing birth times.

Ape in the coat:

> You could have picked a hypothetical world where births are fixed to be anywhere from 0 to 24 hours into the day.

And in the majority of such worlds Scott would still make a similar inference, see that the estimate is good enough and, therefore, wrongly conclude that people's birth times happen at random.

The point I'm making here is that his reasoning method clearly doesn't allow him to distinguish between worlds where things actually happen randomly and worlds where there is only one deterministic outcome.

Butlerian:

> First of all, let's be very clear that this isn't our crux. The reason why I claim Doomsday Inference is wrong is because you couldn't have been born in distant past or distant future, according to the rules of the causal process that produced you.

But every human who has ever and will ever live also has a causal process that produces them. My possession of a causal process therefore does nothing to distinguish me, in a temporal-Copernican sense, from all other possible observers.

I have read your LessWrong post all the way through twice, and, well, maybe I'm a brainlet, but I don't understand your line of argumentation at all. I'm not a random sample because... I have parents and grandparents? How does that make me non-random?

Humans had parents and grandparents in 202,024BC, and (pardon my speculation) humans will have parents and grandparents in 202,044AD (unless DOOM=T), therefore the fact that I have parents and grandparents in 2024 doesn't make any commentary on the sensible-ness of regarding myself as a random sample.

Ape in the coat:

This is not about your ability to distinguish things. This is about your knowledge of the nature of the probability experiment you are reasoning about.

Let's start from the beginning. When we are dealing with a random number generator, how come we can estimate the range in which it produces numbers from just one outcome?

We can do it due to the properties of the Normal distribution. Most of the generated values will be closer to the mean value than not - see the famous Bell curve picture for an illustration. So whatever value you received is likely to be close to the mean, and, therefore, this way you can estimate the range with some confidence. Of course, the more values you have the better your estimate will be, all other things being equal, but you can get a somewhat reasonable estimate even with a single value.

But suppose you are dealing not with a random number generator but with an iterator: instead of producing random numbers, this process produces numbers in a strict order, each next number larger than the previous by exactly 1. Can you estimate the range in which it produces numbers based on some outcome that you've received?

No, because now there is no Bell curve. It's not even clear whether there is any maximum value at all, before we take into account the physical limitations of the implementation of the iterator. As soon as you know that you are dealing with an iterator and not a random number generator, applying this method (appropriate for reasoning about random number generators, not iterators) would be completely ungrounded.
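
A small simulation may make the distinction concrete (a sketch with a made-up true range of 1,000): doubling a single uniform draw is a roughly unbiased estimate of the range, while doubling the current value of an iterator tells you nothing about it.

```python
import random

TRUE_N = 1000                      # hypothetical unknown maximum
trials = 100_000

# Random number generator case: one uniform draw, doubled.
# E[2X] = TRUE_N + 1, so on average the estimate is about right.
estimates = [2 * random.randint(1, TRUE_N) for _ in range(trials)]
print(sum(estimates) / trials)     # ~1001

# Iterator case: the process emits 1, 2, 3, ... deterministically, and we
# happen to observe it at step 137. The doubled "estimate" is 274 no matter
# what TRUE_N is; the observation carries no information about the range.
step = 137
print(2 * step)                    # 274, regardless of TRUE_N
```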

Do you agree with me so far? Do you understand how this is relevant to Doomsday Inference?

Kirby:

If I understand SIA correctly, it is connected to the idea that a sigmoid and an exponential distribution look the same until you reach the inflection point. By assuming that the distribution of observers looks like what we've seen historically, Carter privileges the sigmoid distribution of observers. Instead, if you consider the worlds in which population growth starts rising again for any number of reasons, you come to the conclusion that we can't normalize our distribution of observers. For example, if you simulated observers throughout history and asked them when doomsday would arise, none of them would have said the 21st century until at least the 1850s - that's a lot of observers that Carter doomsday misled terribly, often by millennia and orders of magnitude of population based on what we already know. Based on those observations, you could make a trollish reverse-Carter argument that we would expect Carter doomsday arguments to be off by millennia and orders of magnitude of population. And what the SIA paper seems to find is that the Carter and anti-Carter arguments exactly cancel out. That’s not as implausible as it sounds, given that we might expect to be able to draw no meaningful conclusions from total ignorance.
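
For illustration (a sketch with arbitrary parameters, not from the original comment): a logistic curve and an exponential matched to its early growth rate are nearly indistinguishable before the inflection point, then diverge sharply.

```python
import math

L, k, t0 = 1.0, 1.0, 10.0          # carrying capacity, growth rate, inflection time

def logistic(t):
    return L / (1 + math.exp(-k * (t - t0)))

def matched_exponential(t):
    # same value and growth rate as the logistic in the early regime
    return logistic(0) * math.exp(k * t)

for t in [2, 5, 8, 10, 12]:
    print(f"t={t:>2}: logistic {logistic(t):.4f}   exponential {matched_exponential(t):.4f}")
# Before t0 the two track closely; past t0 the exponential keeps climbing
# while the logistic flattens toward L.
```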

lorem_ipsum:

Their argument is so obviously bad in a completely uninteresting way that I wonder why Scott bothered to Contra it. Was it gaining traction or something?

Archibald Stein:

Because Freddie is a prominent enough blogger. To give an extreme example, if the prime minister of Australia (whoever that is) said something obviously wrong, it would be worth rebutting.

UK:

Anthony Albanese

etheric42:

No, not the UK.

Scott Alexander:

I think if I'm right about prior odds of 30% on a singularity in our lifetime, that's a novel and interesting result. And if I'm wrong I'll enjoy having hundreds of smart people read over it and cooperate on figuring out the real number.

lorem_ipsum:

Abusing footnote 8 somewhat, I don't know what the prior probability is that my lifetime would overlap the time interval that someone in the far future would assign to the singularity after the industrial revolution.

I'd argue that the posterior probability is 100% based on the scale of the Robin Hanson definition. You make a good case for it in the techno-economic section. In terms of economic growth and social impact, isn't the information age obviously a singularity? I'm connected to literally billions of people in a giant joint conversation. How preposterous would that statement be in a pre-information age world.

When the histories are written by our future descendants or creations it might make sense for them to date the singularity from the start of the information age to the next stable state.

Annette Maon:

I also think that the information age (aka Toffler's technological revolution) has already happened. The rate of major revolutions has been accelerating.

Fire was invented so far in prehistory that we have no records of that time.

Writing (which enabled history and science) and the wheel followed the agricultural revolution by only a few millennia.

The industrial revolution was less than 3 millennia after that.

The information age took only 0.4 millennia.

Each of these revolutions significantly increased the rate of innovation, and the singularity will accelerate it even more.

At this rate of acceleration the singularity (which may have already happened and is smart enough to hide from us primitive apes) may indeed cause another revolution beyond the information age. Whenever that happens, I wonder if we will be smart enough to recognize that it already did.

Anonymous:

So the singularity is people using apps to do the same things they were already doing?

Philo Vivero:

The expert consensus seems to be we're using apps to do things in a dramatically different way than we used to. If you want to go down that rabbit hole, search for dopamine related to social media.

There are a lot of weird things going on right now. It seems uncontroversial to say that society is acting very differently than it has in the past (even the recent past), and that it's a fairly reasonable assumption that's because we're being manipulated by algorithms.

That those algorithms are just dumb automatons in the service of Moloch as instantiated by humans is taken for granted. But that they might already be in the service of one or more AGIs is not out of the question.

Anonymous:

Social media seems to be a new flavor of social heroin. Humanity has developed many of these in recent years.

Garbage content being spewed out by AI would probably concern people more if it didn’t compare favorably to the garbage that was already being produced by human hands.

Mr. Doolittle:

I get that you're using Robin Hanson's definition and that changes things, but I don't think we refer to a Singularity that happens but nobody noticed. Based on the common definition, we will all know the Singularity happened if/when it does, without the need to pontificate about it.

Victualis:

This might be reasonable if the Internet doesn't continue to fragment. We already don't have a single giant conversation.

Melvin:

I don't think "singularity" is sufficiently well defined that it makes sense to try to figure out a number.

Using your definition of being at least as important as agriculture or the industrial revolution... well, I accept that 30% isn't an unreasonable number but don't think that the Industrial Revolution is as significant as the sort of thing I've heard called a "singularity" in the past.

John N-G:

It's an estimate, and it's perhaps more reasonable than others, but it has three key assumptions:

1. The singularity would be in the same class of consequentiality as the Industrial or Agricultural Revolution, no more, no less. This assumption puts a constraint on the sort of singularity we're talking about, one which doesn't involve mass extinction, which would obviously be much more consequential than the AR or IR.

2. There have been no other instances of similar consequentiality in all of human history. One might argue that the harnessing of fire was at least as economically consequential. Or the domestication of animals. Or the development of written languages.

3. The next event of consequentiality in this category will be the singularity. A few decades ago, most people would have bet on something related to nuclear energy or nuclear annihilation. An odd feature of the 30% calculation is that the more possibilities one can think of for the next great event, the lower the probability of it being the singularity. One can argue that the pace of AI technology is such that a singularity is very likely to be the one that happens soonest, but then you may as well dump the human history probability handwaving and just base the argument on technological progress.

Scott Alexander:

I mostly agree with you, but playing devil's advocate:

1. Lots of races went extinct during the hunter-gatherer to agricultural transition. We don't care much because we're not part of those races. If humans went extinct during the singularity (were replaced by AI), maybe future AIs would think of this as an event only a little more consequential than ancestral hunter-gatherers going extinct and getting replaced by agriculturalists. The apocalypse that hits *you* always feels like a bigger deal than the apocalypse that hits someone else!

2. Invention of fire is outside of human history as defined here. Domestication of animals is part of agriculture.

3. I think this is right, though I have trouble thinking about this.

Edmund:

Is anyone seriously proposing a singularity which cashes out in a plurality of (sentient?) AI minds? All the AI Doom scenarios I've ever seen propose a singular, implacable superintelligence which uses its first-mover advantage to nip all competition in the bud. Some, of course, predict worlds with large numbers of independent AIs, but in those worlds humanity also survives for the same reason all the other AIs survive.

John N-G:

Source for "lots of races went extinct"? I don't know how to interpret that. But it does suggest another criticism: the choice of class. What odds would we get for "large extinction event"? Probably shouldn't limit it to human extinction because we're biased about treating our own extinction as more consequential than others. Pliosaurs and trilobites may disagree. Limit it to humans and there have been near-extinctions and genetic choke points in our history. Which suggests that there may be enough flexibility in choice of category to get a wide range of answers. There's never been a singularity (yet).

TGGP:

The term used in ancient population genetics is "population replacement". But that's usually for relatively recent humans, and we also know that there used to be even more distantly related subspecies like Neandertals, varieties of Denisovans, branches of Homo Erectus in places like Africa we haven't named, Flores "hobbits" etc.

Ch Hi:

The entire basis of the argument is wrong. It's treating a causally determined result as the outcome of random chance. There is NO chance that someone will invent transistors before electricity (unless you alter the definition of transistor enough to include hydraulic systems or some similar alteration).

Amicus:

It has nothing to do with random chance, it's an argument about uncertainty. You can't invent transistors without electricity - but you have to know how a transistor works to know that. Someone who doesn't has to at least entertain the possibility that you can.

Walter Serner:

Am I the only one thinking that neither the Pavlov thing nor the CMC would have realistically caused anything worth calling the apocalypse?

Scott Alexander:

I agree that the seas would not have turned to blood, and that a beast with seven heads and an odd obsession with the number 666 wouldn't have been involved.

gregvp:

I think Mr. Serner is saying that neither event going the other way, resulting in global thermonuclear war, would have resulted in the extinction of humans. Not even close.

magic9mushroom:

You used the term "self-annihilation" with regard to humanity; "annihilation" means "to nothing", i.e. 100.00000000% of humans killed.

I'll go further and say that WWIII would probably have fallen short of even the Black Death in %humans killed (because nobody would be nuking Africa, India, SEA or South America), though it'd probably be #2 in recorded history (the ones I know i.e. the Columbian Exchange, the first plague pandemic, and the Fall of Rome seem to be somewhat less; note of course that in *pre*history there were probably bigger bottlenecks in relative terms) and obviously #1 in absolute number or in %humans killed per *day*.

Desertopa:

I don't think this is right. If we suppose Europe, America, and East Asia are all economically destroyed, I don't think Africa, South America and SEA would be able to support populations of billions of people while being disconnected from the existing world economy.

We'd be unlikely to see actual human extinction, but I think the populations of countries outside the First World would be very far from intact.

Michael Kelly:

Africa is only one generation removed from subsistence farming, so I'm pretty sure they'd be fine. South and Central Asia probably likewise. South and Central America probably untouched. Pacific Islands and most of Australia untouched. Most of Russia untouched. China is mostly vertical, thus most damage wouldn't propagate.

That's probably 3 billion people untouched. A massive drop in GDP, oil would stop flowing for some years and there would be big adjustments, probably mass migration out of cities and a return to subsistence farming and somewhat of a great reset. Discovered knowledge would be mostly intact, there's libraries after all. In about 20 years, GDP growth would likely resume.

Desertopa:

Most of Africa is a couple *very large* generations removed from subsistence farming. The population of Africa today is greater than the population of the world in 1800, and that relies heavily on technology and connection to global supply chains that they wouldn't have in this scenario. India likewise relies heavily on interconnection with a huge global economy and supply chain to sustain its population. Countries around the world (including India, and through much of the aforementioned Africa) are heavily plugged into China's economy. The human species would survive, but those 3 billion people would be very far from untouched. We don't actually have the capacity to support 3 billion people via subsistence farming without global supply networks moving resources around, which is why, throughout the history of subsistence farming, we never approached that sort of global population. Even subsistence farmers in the present day mostly buy from international agricultural corporations.

We could transition back to it if we had to, but it would be a major disruption to the systems which have allowed us to sustain global populations in excess of *one* billion people, and the survivors would disproportionately be concentrated where our knowledge and training bases right now are weakest.

Gullydwarf:

You can get some idea of how economic collapse and huge supply chain disruptions play out by looking at Russia/Ukraine/Belarus/Kazakhstan (largest ex-USSR republics) between 1990 and 2000.

TGGP:

Their populations might decline, but humanity has survived periods of population decline.

JamesLeng:

> (because nobody would be nuking Africa, India, SEA or South America)

Armies in the field, logistics hubs, and strategically significant factories or mineral resources, might be targeted even if the territory they're in was considered neutral before the war began. Radioactive ash can cause life-threatening problems for people exposed, even weeks after the bomb goes off and hundreds, maybe thousands of miles downwind.

Firestorms don't care much about politics either. If enough forests, oil wells, and coal seams ignite, while emergency-response infrastructure is already tapped out, all that carbon dioxide release puts climate change back on track toward multiple degrees of warming. Ocean chemistry shifts, atmosphere becomes unbreathable, game over.

Gullydwarf:

CO2 is great for plant growth (in commercial greenhouses, the guidelines are to supplement to 3x the ambient level), so - with fewer humans around - it can easily balance out in terms of temperature over a few years/decades.

JamesLeng:

I'm not saying CO2 by itself would be directly killing us all. To elaborate on what I meant by "ocean chemistry shifts": https://spacenews.com/global-warming-led-to-atmospheric-hydrogen-sulfide-and-permian-extinction/

magic9mushroom:

>Armies in the field, logistics hubs, and strategically significant factories or mineral resources, might be targeted even if the territory they're in was considered neutral before the war began.

Sure, but I'm not sure how a war between NATO/Free Asia and the Soviets/PRC would have significant operations in Africa or South America (there's *maybe* a case for India and SEA coming into it due to proximity to China, but if you're talking about followup invasions after a thorough nuclear bombardment, amphibious landings are quite feasible).

>If enough forests, oil wells, and coal seams ignite, while emergency-response infrastructure is already tapped out, all that carbon dioxide release puts climate change back on track toward multiple degrees of warming.

Note that firestorms actually cause cooling in the short-term (because of particulates - "nuclear winter"), and that in the medium term there'd be significant reforestation to counter this (and it's not like coal seam fires don't ever happen naturally, or like these things are prime nuclear targets).

Also, the Permian extinction was from over 10 degrees (!) of global warming. A few degrees clearly doesn't do the same thing (the Paleocene-Eocene Thermal Maximum, 5-8 degrees, did not cause a significant mass extinction).

JamesLeng:

Yes, based on current understanding several things would need to go wrong in semi-unrelated and individually unlikely ways for outright extinction of all humans to be the result, but there are a lot of unknowns - it's not like we can check the fossil record for results of previous nuclear wars - so it seems like it would be very bad news overall, compared to the alternative.

Wouldn't even do much to make an AI apocalypse less likely, since survivors trying to reestablish a tech base would presumably be more concerned with immediate functionality than theoretical long-term risks.

> (and it's not like coal seam fires don't ever happen naturally, or like these things are prime nuclear targets)

I understand there's brown coal still in the ground in Germany, within plausible ICBM targeting-error range of a lot of Europe's critical high-tech infrastructure.

magic9mushroom:

>Wouldn't even do much to make an AI apocalypse less likely, since survivors trying to reestablish a tech base would presumably be more concerned with immediate functionality than theoretical long-term risks.

There are two notable question marks there.

1) It would give us a reroll of "everyone is addicted to products provided by the big AI companies, and the public square is routed through them, and they have massive lobbying influence". It's not obvious to me that we would allow that to happen again, with foreknowledge.

2) There's the soft-error issue, where the global fallout could potentially make it much harder to build and operate highly-miniaturised computers. I'm unsure of the numbers, here, so I'm not sure that this would actually happen, but it's a big deal if it does.

Hector_St_Clare:

India would have either stayed neutral or sided with the Soviets in WWIII. They were officially neutral, but it was a "leaning pro-Soviet" neutrality. On the other hand, China would quite possibly *not* have sided with the Soviets, and might even have sided with America, at any rate after the Sino-Soviet split in 1969.

Ch Hi:

IIUC, there's a lot of overkill in the ICBM department, and "On the Beach" only had things end too quickly. But there would definitely be surviving microbes, probably nematodes, and likely even a few chordates. Mammals...perhaps. Primates, not likely.

OTOH, in the light of the wildlife around Chernobyl, perhaps I should revise my beliefs. Perhaps mammals with short generation times would be likely to survive. (But that doesn't include humans.) And perhaps oceanic mammals would have a decent chance, as water tends to absorb radiation over very short distances.

However, I haven't revised my beliefs yet, I'm just willing to consider whether I should. But that was a LOT of overkill in the ICBM department.

Michael Kelly:

There are people living in cities in Iran that are every bit as 'hot' as Chernobyl. There are beaches in Rio every bit as hot as Chernobyl. People there live long healthy lives. The risk of radiation is vastly overstated. Chernobyl only directly killed twenty people, with another twenty or so probably killed. 40 is a pretty small number.

Ch Hi:

It's my understanding that the dogs that live in the hottest part of Chernobyl have a greatly increased mutational load. They've got short generations, though, so mutations that are too deleterious tend to disappear. But if they were to accumulate, the dogs probably wouldn't survive. (How hot Chernobyl is depends on where you measure.) (Well, not really the hottest part, that's inside the reactor and a lot hotter than where the dogs are living.)

OTOH, I expect a full out WWIII with ICBMs would provide thousands of times overkill. (That's what's been publicly claimed, and I haven't heard it refuted.) Just exactly what that means isn't quite clear, but Chernobyl was not an intentional weapon, so I expect the south pole would end up hotter than Chernobyl was a decade ago.

magic9mushroom:

>IIUC, there's a lot of overkill in the ICBM department

To be precise, there *was*. There isn't so much anymore. But I was discussing the Cold War, so this is valid.

>But there would definitely be surviving microbes, probably nematodes, and likely even a few chordates. Mammals...perhaps. Primates, not likely.

>OTOH, in the light of the wildlife around Chernobyl, perhaps I should revise my beliefs. Perhaps mammals with short generation times would be likely to survive. (But that doesn't include humans.)

Okay, let's crunch some numbers. I'm going to count 90Sr and 137Cs, as this is the vast majority of "medium-lived" fallout beyond a year (I'd have preferred to include some short-lived fission products as well, to get a better read on 100-day numbers, but my favourite source NuDat is playing up so I don't have easy access to fission yields of those).

Let's take the 1980 arsenal numbers, which is 60,000 nukes. A bunch of those are going to be shot down, fail to detonate, or be destroyed on the ground, so let's say 20,000 detonations (this rubble is bouncing!), and let's say they're each half a megaton (this is probably too high because a bunch were tacnukes or ICBM cluster nukes, but whatever) and half fission, for 5,000 megatons fission.

5,000 megatons = 2.1*10^19 J = 1.3*10^38 eV.

A fission of 235U releases ~200 MeV (239Pu is more), so that's 6.5*10^29 individual fissions (255 tonnes of uranium).

Fission produces 90Sr about 4.5% of the time, and 137Cs about 6.3% of the time (this is 65% 235U + 35% 239Pu, because that's the numbers to which I have easy access on WP, but the numbers aren't that different). So that's 2.9*10^28 atoms of 90Sr (4.4 tonnes) and 4.1*10^28 atoms of 137Cs (9.4 tonnes).

Let's assume this is spread evenly throughout the biosphere (this is a bad estimate in two directions, because humans do concentrate strontium - although not caesium - but also a bunch of this is going to wind up in the ocean so there's a lot more mass it's spread through; I'm hoping this mostly cancels out). The biosphere is ~2 trillion tonnes, and a human is ~80kg, so that's 1.2*10^15 atoms (180 ng) of 90Sr and 1.7*10^15 atoms of 137Cs (380 ng) per human.

90Sr has a half-life of 28.9 years and 137Cs 30.2 years, so that's 890,000 decays of 90Sr per second per person and 1.2 million decays of 137Cs per second per person. 90Sr has decay energy 2.8 MeV (of which ~1/3 we care about, because the rest goes into antineutrinos that don't interact with matter) and 137Cs has decay energy 1.2 MeV (of which something like ~2/3 IIRC we care about, because of the same issue), so that's 134 nJ/s from 90Sr plus 154 nJ/s from 137Cs = 288 nJ/s total.

1 Sv = 1 J/kg, so that's 3.6 nSv/s, or 114 mSv/year. The lowest chronic radiation dose clearly associated with increased cancer risk is 100 mSv/year, so you're looking at a likely cancer uptick (remember, I've preferred to err on the side of overestimates here), but this is nowhere near enough to give radiation sickness (Albert Stevens got 3 Sv/year after the Manhattan Project secretly injected him with plutonium to see what would happen, and he died of old age 20 years later, although at that level he was lucky to not get cancer).

Now, this is the medium-lived fallout. The first year is going to be worse, significantly, but in the first year it's also not going to be everywhere; the "global fallout" takes months to years to come back down from the stratosphere (i.e. most of the super-hot stuff decays *before* coming back to earth), and the "local fallout" is, well, *local* i.e. not evenly distributed. With Cold War arsenals at this level of use, you *would* be seeing enough fallout to depopulate entire areas, because it AIUI wouldn't reach tolerable levels for longer than the month or so people can last without food. But a full-blown "On the Beach" scenario? No; while that's not *physically impossible* it was never a real possibility during the Cold War (nuclear autumn *was*, but note that I say "autumn" rather than "winter"; the "lol we all die" numbers were literally a hoax by anti-nuclear activists).
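
For anyone who wants to check the arithmetic, the whole back-of-envelope calculation fits in a few lines of Python (same assumptions as above: 5,000 Mt of fission, even spread through a 2-trillion-tonne biosphere, ~1/3 of the 90Sr and ~2/3 of the 137Cs decay energy absorbed):

```python
MT_TNT_J = 4.184e15              # joules per megaton of TNT
EV_J = 1.602e-19                 # joules per electronvolt
YEAR_S = 3.156e7                 # seconds per year

fission_energy_J = 5000 * MT_TNT_J               # ~2.1e19 J
fissions = fission_energy_J / (200e6 * EV_J)     # ~6.5e29 fissions at ~200 MeV each

human_kg, biosphere_kg = 80, 2e15                # 80 kg person, 2e12 tonnes
share = human_kg / biosphere_kg                  # fraction of the fallout per person

dose_J_per_s = 0.0
for fission_yield, half_life_y, decay_MeV, absorbed_frac in [
    (0.045, 28.9, 2.8, 1/3),     # 90Sr (including its 90Y daughter)
    (0.063, 30.2, 1.2, 2/3),     # 137Cs
]:
    atoms = fissions * fission_yield * share
    decays_per_s = atoms * 0.693 / (half_life_y * YEAR_S)
    dose_J_per_s += decays_per_s * decay_MeV * 1e6 * EV_J * absorbed_frac

# ~113 mSv/year, matching the ~114 mSv/year above to rounding
print(f"{dose_J_per_s / human_kg * YEAR_S * 1000:.0f} mSv/year")
```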

Ch Hi:

Well, you are clearly better informed than I am. But it wasn't just the anti-nuclear activists. Compton's Encyclopedia from the early 1950's contained a claim that 7 cobalt bombs would suffice to depopulate the earth. This probably shaped the way I read the rest of the data I encountered.

magic9mushroom:

Nuclear winter was a hoax by anti-nuclear activists that successfully convinced much of society (so yes, other people repeated the hoax, because they were fooled). Fallout is a completely-different thing, and I'm not aware of a *deliberate* hoax in that area.

Note that "X nuclear bombs" is nearly always a sign of total cluelessness about how variable the power of nuclear bombs really is. You can have really-small nukes (the Davy Crockett tactical nuke was equal in energy to about 20 tonnes of TNT), but also really-big ones (the Tsar Bomba was equal in energy to about 50,000,000 tonnes of TNT, it was originally designed to be 100,000,000 tonnes, and I suspect the true upper limit is around 1,000,000,000 tonnes). The amounts of fallout from a 20-tonne bomb and a 1,000,000,000-tonne bomb are very different!

John Schilling:

You do not understand correctly. In order to render the survival of primates "unlikely", you would need about two orders of magnitude more (or more powerful) nuclear weapons than have ever been built, or three orders of magnitude more than presently exist.

"Enough nuclear weapons to destroy the world X times over", in all of its variations, is a stupid, stupid meme invented and spread by gullible innumerates who for various reasons wanted it to be true. It isn't.

magic9mushroom:

I think part of the issue is definitions of "overkill"; Cold War arsenals really were big enough that there were issues with a lack of viable targets other than the other side's ICBM siloes, but that doesn't remotely mean they were big enough to kill literally everyone (due to the issue where nukes are basically useless against rural populations). There is legitimate room for confusion there for people who haven't thought about this a lot.

Michael Kelly:

These are analogies, not physical things. Remember, they were the dreams of a guy who lived 2000 years ago.

Did he see the seas turn to blood, or the seas polluted?

A beast with seven heads isn't a physical monster, it's an international organization.

666 — in earlier languages letters could be substituted for numbers. In modern Hebrew this is the case. Some names are lucky because they are somehow made from lucky numbers. I only know the concept, find a Jewish person to explain it.

Cal van Sant:

And where exactly do you expect Scott to find a Jewish person capable of interpreting theology against the modern world? I doubt he knows anyone like that

TGGP:

No, which is why I linked to https://www.navalgazing.net/Nuclear-Weapon-Destructiveness in another comment.

Glen Raphael:

>The biggest climate shock of the past 300,000 years is . . . also during Freddie’s lifetime

Can you clarify what shock you're talking about? If you mean the blade of "hockey stick"-type charts, that's the result of plotting some low-variance proxy reconstructions and then suddenly glomming onto the end of the proxy chart a modern temperature-measurement based chart. If you bring the proxies up to date and plot THAT to the current day there's no sudden jump at the end. If there had been big jumps or drops just like the current one in the past we wouldn't know it because they wouldn't show up in the proxy data any more than the current jump does - the way we're generating the older data in our chart has inherent swing-dampening properties.

Expand full comment
Xpym's avatar

But do you really believe that the current hockey stick is just random variance? Saying that AGW is overhyped and not actually a huge deal is one thing, but denying it altogether is intellectually indefensible these days I'd say.

Expand full comment
Michael Kelly's avatar

The hockey stick was debunked as the product of cherry-picked proxies, selected from a field of proxies with counter-evidence.

But noticing those problems is a career-limiting move, hence people don't do it.

Expand full comment
timunderwood9's avatar

Is that actually true? I mean, I was working through a statistics textbook that had an exercise based on the dates of Japanese cherry blossoms blooming since like AD 1000.

How many proxies have you directly checked to make sure they don't show sharply different behavior in the last hundred years?

Expand full comment
Jason's avatar

And temperature reconstructions are only one line of evidence for AGW. The basic finding that human activities are changing the climate is also supported by validated computer modelling and our understanding of the physical processes in play.

This can all be explored in detail in the relevant IPCC report https://www.ipcc.ch/report/ar6/wg1/

Expand full comment
Anonymous's avatar

But does one trust the IPCC when they say, “how much”?

Expand full comment
Ch Hi's avatar

Not really. The IPCC is known to discard studies that they consider too extreme. There are legitimate arguments as to why they shouldn't be really trusted, but you can make analogous arguments about every single prediction. It's only the ensemble that should be trusted (with reasonably large error bars).

So far, including some of the more extreme predictions in the ensemble would have improved the accuracy of the IPCC forecasts.

Expand full comment
Anonymous's avatar

For policy-relevant predictions such as sea level rise or desertification, the extreme predictions have not panned out in the slightest.

Expand full comment
Michael Kelly's avatar

Have you never read the Climategate files?

Currently the global temperature data set contains up to 50% 'estimated data.' There have been several (four I think) 'adjustments' of the past data, where every adjustment cools the past global temperature.

Expand full comment
timunderwood9's avatar

This is such a weird way of arguing. Again, which proxies have you actually looked at? Which temperature series? Have you made sure you are actually reading the series correctly?

Given the temperature history of just the last fifty years, which I very much doubt is constantly getting adjusted down, the default would be to think the world is warming. Given the causal logic of the greenhouse effect, the default is to think temperature is likely to be rising because of rising CO2.

Complaining about estimated data without attacking the established core facts of the rising temperature trend in detail seems to me a lot like the people who, on Covid vaccines, talk a whole lot about anecdotal cases of someone in perfect health dying after being vaxxed, but never make a serious effort to explain why Covid shows up very clearly in aggregate mortality statistics while deaths from vaccination do not.

Expand full comment
gjm's avatar

Comments from Xpym and Jason here seem like they're misunderstanding Glen's point, so I'll make it more explicit.

Glen isn't saying "maybe anthropogenic global warming isn't real".

He's saying "maybe anthropogenic global warming isn't as unprecedented as it looks from the usual hockey-stick graph, not because the head of the stick might not be the shape everyone says it is, but because the _shaft_ of the stick might have big deviations in it that aren't visible to the ways we estimate it.

(I make no comment on 1. how plausible that is or 2. whether making the argument suggests that Glen is _secretly_ some variety of global-warming denier. But he's _not saying_ that we aren't rapidly warming the planet right now. He even talks about "the current jump".)

(Having raised those questions I suppose I _should_ make some comment on them. 1: doesn't seem very plausible to me but I'm not a climate scientist. 2: probably not, but I would be only moderately surprised to find that Glen is opposed to regulations aimed at mitigating anthropogenic climate change. None of that is actually relevant to this discussion, though.)

Expand full comment
REF's avatar

On the other hand, this doesn't really change the argument that increasing numbers of people and technological acceleration make a uniform distribution of events across time a rather silly model for many types of events. And although AGW may not have been a perfect example, it should still be enough to bring the idea into focus (except for people whose brain explodes at the mention of AGW).

Expand full comment
Xpym's avatar

No, I think I understood his point. He essentially implies that there's no good reason to believe that AGW will be substantial enough to eventually result in a visible "hockey stick" in the long-term proxy graphs, which is IMO unreasonable.

Expand full comment
Mr. Doolittle's avatar

I don't think that's what he meant at all. If we're using the judgment that FDB has been alive for the biggest climate shock in the last 300,000 years, it would be helpful to know if there have been other climate shocks in that time period and to find out whether they were of similar, or greater, size. Glen is saying that the measurements we use would not show such a shock during that timeframe, because the proxies move slower than the actual temperatures (that we are measuring in real time right now). We wouldn't see a jump if it also went back to the baseline later.

Expand full comment
Jason's avatar

I think he’d have to clarify what time period he’s referring to because in terms of assessing whether or not we’re in the midst of a potential “shock” it seems to me that only the last 1000 years is relevant to human civilization.

Also, if past temperatures were more variable than the current consensus there’s an argument that that would mean that climate sensitivity was on the high end and that temperatures will end up even higher for any given magnitude of human-induced forcings (carbon dioxide, methane, land use change, etc).

Expand full comment
Ch Hi's avatar

FWIW, human influence arguably goes back to when rice paddies began emitting excess methane, which puts it at about 9,000 years ago. OTOH, that was a very small contribution at first.

Expand full comment
Michael Kelly's avatar

20,000 years ago, sea levels were 120m lower than today. Seattle and most of North America were under a mile of ice.

Do the quick cocktail-napkin math on sea level rise: 20k years and 120k mm of rise. You quickly find that the simple average is 6mm/yr. But any look at NOAA tide-gauge data for San Francisco, or Battery Park in New York, shows 2-3mm/yr for the past 150 years.

If you think 6mm/yr is less of a shock than 3mm/yr, you're seeing something much different than I am.
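
Here's the same napkin math in Python, using the round numbers from this thread (purely illustrative, not precise reconstructions):

```python
# Cocktail-napkin sea level arithmetic; round numbers, illustrative only.
total_rise_mm = 120_000    # ~120 m of rise since ~20k years ago
years = 20_000

avg_rate = total_rise_mm / years
print(avg_rate)                # 6.0 mm/yr long-run average

modern_rate = 2.5              # mm/yr, roughly what long tide-gauge records show
print(modern_rate / avg_rate)  # ~0.42: the modern rate is under half that average
```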

Expand full comment
timunderwood9's avatar

What claim are you arguing with?

Specifically, I think worrying about sea level rise mostly involves people worrying that we'll hit a tipping point that causes the big ice caps in Greenland and the Antarctic to get much smaller. I think what you just wrote actually confirms that rapid sea level rises are a thing that can happen and are worth worrying about.

Assuming I'm right that you think that worrying about that is deeply wrongheaded, what exactly in what you wrote disagrees with the worry I just described?

Expand full comment
Melvin's avatar

I think the claim being argued with is the claim in the article that the past few decades have seen the "biggest climate shock in the last 300,000 years".

I think this is a very reasonable quibble given the various ice ages and interglacial periods over the past 300,000 years, with swings of ten degrees or more over a fairly short (it's hard to say exactly how short) period.

https://en.wikipedia.org/wiki/Interglacial#/media/File:Ice_Age_Temperature.png

Even if the last fifty years does turn out to have been steeper than the nearly-vertical lines on this plot, it's clearly been much smaller in magnitude, so it's hard to call it the "biggest shock".

Expand full comment
agrajagagain's avatar

"Even if the last fifty years does turn out to have been steeper"

It is steeper. There's no "if" about it. Taking a low-end estimate of the temperature change since 1974 gives about 0.75 degrees total, or 0.015 degrees per year. The steepest of those "nearly-vertical" lines on the plot still clearly spans multiple thousands of years. If we estimate that the steepest such event involved 10 degrees of warming over 2000 years (which is being VERY generous: my principled estimate based on the graph would be at least 3x that), that's 0.005 degrees per year: only 1/3 of the speed! To credibly claim that it isn't steeper, you'd have to argue that either one of those 10 degree events occurred over a mere ~600 years (meaning the graph is just plain wrong) or that modern, industrial humans are SO BAD at temperature measurement that we got the data for the last 50 years wrong by at least a factor of 3. And again, those are using numbers that are generous to your claim, perhaps unreasonably so.

"Even if the last fifty years does turn out to have been steeper than the nearly-vertical lines on this plot, it's clearly been much smaller in magnitude, so it's hard to call it the "biggest shock."

This would be an excellent point if the Earth were not still warming. I suggest checking back in from the distant future and re-evaluating the validity of this argument then.
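
A sketch of that comparison in Python, using the same round numbers (both the 0.75 degrees and the 10-degrees-over-2,000-years event are assumptions from this comment, not measured values):

```python
# Rate comparison with the round numbers above (assumptions, not measurements).
modern_warming_c, modern_span_yr = 0.75, 50       # low-end estimate since ~1974
glacial_warming_c, glacial_span_yr = 10.0, 2_000  # generous read of the steepest swing

modern_rate = modern_warming_c / modern_span_yr     # 0.015 degrees/yr
glacial_rate = glacial_warming_c / glacial_span_yr  # 0.005 degrees/yr
print(modern_rate / glacial_rate)   # 3.0: modern warming is ~3x the steepest rate

# How fast would the 10-degree event have needed to be to match the modern rate?
print(glacial_warming_c / modern_rate)  # ~667 years (the "~600 years" above)
```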

Expand full comment
Michael Kelly's avatar

Climate tipping points are a completely political creation and don't exist in the realm of facts. You can invent a tipping point for anything if you are running with zero facts.

Expand full comment
agrajagagain's avatar

That's funny, because I remember learning about at least one potential climate tipping point mechanism in an astrophysics class a decade and a half ago. Being an astrophysics class it included no discussion of politics or politics-adjacent topics, and was almost wholly concerned with the physics behind the mechanism. Given that the mechanism in question was a. something that would cause global cooling rather than warming and b. last relevant millions of years ago (if ever), it seems like a pretty strange target for a political fabrication. Who are these shady political operatives sneaking their indecipherable agendas into our physics textbooks? Tell me, I would really like to know.

Expand full comment
Concavenator's avatar

That sea level rise did not happen gradually over 20k years, though; the vast majority of it took place between 12k and 10k years ago, at the Pleistocene-Holocene transition.

Expand full comment
Michael Kelly's avatar

Exactly - that was my point. Most of that 120m of rise happened in only about 2,000 years; if you average all the change out over 20k years it comes to 6mm/yr, and for some stretches it was several cm per year. OP is saying the current change of 3mm/yr is unprecedented, when the average across the past 20k years is 6mm/yr.

But sure, 2-3mm/yr is "unprecedented."

Expand full comment
Concavenator's avatar

Ah, then I completely misunderstood your post, I apologize.

Expand full comment
Glen Raphael's avatar

"gjm" is correct as to what I was saying.

The proxy evidence I'm the most familiar with is tree-ring series, eg Briffa's bristlecone pines. The way those proxies work is we identify still-living trees near the treeline which we think are temperature-limited - they grow better when it gets warmer - so you can tell from the size of the rings which years were warmer than others. One way to calibrate such a thing is to compare *recent* temperature trends (for which we have good data) to the tree-ring trends at the time you first core the tree, see similar movement, and assume the relationship was also similar in the distant past and will continue to be so in the future. The first problem I mentioned is that if you go back and sample that tree again 20 or 30 years later, you DON'T see that the relationship stayed similar. Pretty much all the tree series used in Mann's big Nature paper for which we have later data *didn't* maintain the relationship with temperature shown in their calibration period. Climate scientists call this "the divergence problem" - wikipedia's page on it is surprisingly not-terrible and has a good chart:

https://en.wikipedia.org/wiki/Divergence_problem

So the tree record, IN PRACTICE - when you look at it in full - doesn't appear to suggest that current levels of warmth are unusual, much less rising at an "unprecedented" rate.

One possible reason for the divergence is an issue with the underlying theory: the way we use "tree-mometer" series usually implies a *linear* temperature/growth relationship - the warmer it is, the more the tree grows - but a better model of tree growth with respect to temperature would be an upside-down "U" shape. Which is to say: for a tree in a particular circumstance there is some OPTIMAL growing temperature that maximizes growth, and it grows less than that if the local temperature is lower OR HIGHER than optimum. In that world, if there were big positive swings in temperature in the past just like today's, they might show up as brief negative movement - it could look to us today like it briefly got *colder* back then.

Anyway, that's one of a few reasons to think the shaft of the "hockey stick" in many of the usual sorts of reconstructions is artificially smoothed/dampened. I'm not saying it hasn't warmed recently, I am just saying we can't be sure how unusual it is to warm as much as it recently has.

(one of my influences on this is Craig Loehle, a tree scientist. Read his papers (and the responses to them, and his responses...) if you want to dig in further)
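
If it helps, here's a toy version of the upside-down-"U" point in Python (the growth curve and all numbers are invented for illustration, not fitted to any real trees):

```python
# Toy inverted-U growth model; all numbers invented for illustration.
def ring_width(temp_c, optimum=12.0, half_width=4.0):
    """Growth peaks at the optimum temperature and falls off on either side."""
    return max(0.0, 1.0 - ((temp_c - optimum) / half_width) ** 2)

for temp in [8, 10, 12, 14, 16]:
    print(temp, round(ring_width(temp), 2))
# 8 -> 0.0, 10 -> 0.75, 12 -> 1.0, 14 -> 0.75, 16 -> 0.0
# A big warm excursion (16) leaves the same narrow ring as a big cold one (8),
# so a linear "wider ring = warmer year" calibration reads the warm spike
# as a cold snap.
```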

Expand full comment
Michael Kelly's avatar

I'd say the biggest climate shock of the past 300,000 years is the recent ice age, which killed off Mastodons, Woolly Mammoths, Sabre-Toothed Cats, Dire Wolves, Short-Faced Bears, Giant Sloths ... countless smaller animals.

Expand full comment
agrajagagain's avatar

That's not remotely an apples-to-apples comparison. The full suite of ecological changes caused by AGW won't be known for centuries. Pointing at the large ecological changes caused by the ice age and insisting that it proves that was a bigger shock than AGW isn't something that can be done in a principled fashion (yet). You can either base your judgment on some criterion that we DO know, like the speed of the temperature change (which I assume is what Scott is doing) or you can reserve judgment. If you're reserving judgment you'll probably need to do so for several centuries at a minimum.

Expand full comment
Michael Kelly's avatar

Well, we pretty much know that sea level rose an average of 6mm/yr from 20k years ago until recently, and 2-3mm/yr from 1855 to today. So yes, we can definitely say that previous rates of warming must have been much greater than today's.

6mm/yr is the simple mean. We know sea level 20k years ago was 120m lower than today, and most of North America was under a mile or more of ice. If the seas were 120k mm lower 20k years ago, the simple cocktail math is 120k mm divided by 20k years, giving 6mm/yr as the average. We know that's not really how it happened - there were stable periods and periods where sea level rise was much, much faster. That level of rapid rise required much higher temperatures, a more drastic climate change than the relative stability we see today.

Expand full comment
agrajagagain's avatar

"So yes, we can definitely say that previous rates of warming must have been much greater than today."

WHAT? That doesn't REMOTELY follow. Why on Earth would you assume that the rate of sea level rise is a simple, linear function of the rate of warming? That's not even in the neighborhood of the ballpark of a reasonable assumption.

Heck, it wouldn't be reasonable even if the sea level rises were produced by the same mechanism, which they ARE NOT. Sea level rise (in populated places) today is a result of melting ice adding water to the ocean: you need a lot of water for a small increase. Sea level rise after the last ice age was partly that (but with much more ice to melt) but also partly post-glacial rebound, which isn't a factor today.

https://en.wikipedia.org/wiki/Post-glacial_rebound

Expand full comment
Philo Vivero's avatar

If I have a cup of water with ice, and the ice melts after 10min, and I measure for the next 5 hours, I'll see a 10mm/hr rise in the first hour, and 0mm/hr for every hour after that.

I'll measure the same even if the room temperature is rising.

You'll want to account for that in your calculations if you want to prove what I think you're trying to prove.

Specifically... what if there's only half as much ice left to melt now, and ice melts at a constant rate per volume?

Expand full comment
B Civil's avatar

“how likely is it that the most important technological advance in history thus far happens during my lifetime?”

Ask a Luddite…..

I did not read the entire essay because I was struck right at the opening with a huge problem: please define “important”. Particularly, define “important” in the context of history, which is a rolling wave on a very big ocean.

And while you're at it, let's include some events not attributable to human agency (meteors, ice ages, volcanic activity, etc.) in the definition of "important" insofar as that word weighs on this discussion.

My odds of something important happening in my lifetime are as good as anyone else’s.

As a matter of fact, I will throw one out; nuclear energy and nuclear WEAPONS were invented in my lifetime.

I propose that nuclear weapons fulfill the same function in the modern world that God used to fill in the “ancient” one. That is important.

EDIT: I lied but not intentionally; nuclear weapons were already around before I was born. My apologies. I can’t think of a single important thing that’s happened in my lifetime so far.

Expand full comment
Scott Alexander's avatar

You might want to read the essay.

Expand full comment
B Civil's avatar

Yes, I probably should.

Expand full comment
B Civil's avatar

> So: what are the odds that Harari's lifespan overlaps with the most *important* period in human history, as he believes, given those numbers?

This is where I got stuck, and it's Freddie's words, not yours. I was reading the essay and kept wondering where I had seen that word, because it hadn't come up and I was several paragraphs in. So I stopped, went back to the beginning, and started rereading the piece. It was in the quote of FdB.

I have to admit, it triggered me.

😆

Expand full comment
Scott Alexander's avatar

My impression is that this doesn't matter too much for his point. Even if we have some objective standard for "important" (eg the period with the highest temperature), the argument still applies. And if you tried hard enough, you could come up with some objective metric for whatever Freddie means here.

Expand full comment
B Civil's avatar

I have taken the time to read FdBs essay.

My first thought is that he is a bit reactionary, but more importantly (guilty… I used that word) he fools with time scale to his own purpose: first, how we don't comprehend large time scales properly, and so assign importance to a very particular century when really that is just when the change began to be noticed, in retrospect. I agree with him, if my interpretation is correct.

But then he seems to go hard on people who talk about things from ten or twenty or forty years ago that they claim to be important; specifically AI. AI is what seems to put a twist in his knickers (I am making an effort to be as indifferent to the sensibilities of a reader as he is); I get the sense he doesn't think it's a big deal, and he focuses on all the mundane, trivial exploitations of AI to date to make this point. But he is whistling past the graveyard, I think.

The transition from spoken to written language as the main method of communication (barring the intimate, I hope) was a huge HUGE transition, with enormous consequences for our development as a species; if you want an objective standard of importance, I offer this millennium-long transition. Every person alive for that 1000 years lived in an important time, because it was a process and everyone played their part. Some made out ok and others not so well.

I think we are at the dawn of something equally transformative now, and I think these two periods are bound to each other by a very fundamental necessity that people must have if they are to remain sane.

That necessity is trust. I trust what you tell me; I trust what is written down; I trust that I am really in touch with another human being. AI presents a real challenge to that simple, necessary thing, just like the development of words on paper did, and adapting to that is going to be transformative.

I do share his contempt for the writer he was taking to task, for what it's worth.

Expand full comment
Sylvilagus Rex's avatar

I quit following that guy recently. He seems like he's stopped thinking about what he posts and just has a few beers and then blasts word vomit out. Which would be ok except he also has a habit of attempting to dunk on very smart people and if you're going to do that, you best not miss

Expand full comment
TWC's avatar

He's a decent enough writer, and addresses some at least interesting topics. But, yes, he's fallen off a fair bit over the last coupla years. In addition, his pronounced assholery is a turn-off.

Expand full comment
Sylvilagus Rex's avatar

Yeah, it's the combo, assholery plus the slippage. Everyone has good spells and not so good spells. I actually suffer from a sleep disorder that causes me waves of cog fog that can sometimes last a month. So I'm pretty forgiving on mental slumps, but maybe don't @ everyone when you're at a low point.

Expand full comment
Whatever Happened to Anonymous's avatar

That's just the way he is; a more agreeable guy wouldn't have produced stuff like The Good White Man Roster, but then there's also whatever this is.

Expand full comment
Baron Aardvark's avatar

He cannot handle being disagreed with. The whole substack is just a cult of personality.

Expand full comment
FionnM's avatar

I cancelled my paid subscription a few months ago but still read his free articles on occasion.

Expand full comment
avalancheGenesis's avatar

He's...high-variance, more so than other writers I follow. The occasional absolutely brilliant transcendental gem, in the same vein as some of Scott's GOAT posts that people still reference years later (it's depressing how some classic SSCs are over a decade old now...), and on niches that I don't find well-served at that weight class usually. The beauty of his prose has made me cry on more than a few occasions! Then there are, uh, pretty bad posts like the one dissected here, which are extra frustrating because I always feel like he's got the chops to intellectually understand, say, AI, but bounces off for idiosyncratic non-mindkilled reasons. The beefs are likewise pretty good when they're good; skewering is an art. I am getting rather tired of the perpetual dunking on Matt Yglesias though, since as a subscriber to both blogs, it's just painfully obvious sometimes he didn't fully read/understand whatever MY post he's using as a convenient strawman foil. Hopefully this is just a one-off Scott diss and not a sign of starting a similar feud trend...since I haven't cared for Freddie's anti-EA posts either. At some point, chasing those diamond posts isn't worth slumming through more leftist agitprop (it's good to sample opposing views, but Weak Men Are Superweapons, etc), or putting up with the not-much-better-than-The-FP commentariat.

Expand full comment
Robert McKenzie Horn's avatar

Could you give examples of some of his brilliant/transcendental posts? I'm very unfamiliar with his work

Expand full comment
Jack Byrne's avatar

Not the OP but I've read a lot of FDB.

I wouldn't go so far as to say that it's transcendent but this is one of my favourite posts of his that shows how capable he can be as a sensitive and introspective writer. It's very different from his polemics.

https://freddiedeboer.substack.com/p/losing-it

But his bread-and-butter is the polemic, so here are some of the more effective ones:

https://freddiedeboer.substack.com/p/your-mental-illness-beliefs-are-incoherent

https://freddiedeboer.substack.com/p/i-regret-to-inform-you-that-we-will

Expand full comment
Xpym's avatar

https://freddiedeboer.substack.com/p/please-just-fucking-tell-me-what

I also wouldn't call this transcendent, but it's the best articulation of that particular issue I've seen.

Expand full comment
Michael Kelly's avatar

Freddie writes brilliant posts about mental health, the mental health crisis, boutique mental health claims, mental health meds, etc.

I subscribed to him, when he wrote a brilliant piece about how modern western Socialists are perpetually unserious and addicted to dunks, jokes, and memes; where no one is willing to sit down and write The Socialist Federalist Papers to describe how to form a good socialist government. Instead Western Socialists want to play today, and do the hard work after they win the revolution—pass me another Molotov Cocktail please—we'll just take the US Constitution and rip out all the private property parts.

Expand full comment
gdanning's avatar

Yeah, when Scott quoted him as saying, " Some people who routinely violate the Temporal Copernican Principle include Harari, Eliezer Yudkowsky, Sam Altman, Francis Fukuyama, Elon Musk, Clay Shirky, Tyler Cowen, Matt Yglesias, Tom Friedman, Scott Alexander, every tech company CEO, Ray Kurzweil, Robin Hanson, and many many more," I was thinking he might want to contemplate the possibility that, if all those people are supposedly making the same mistake, it might not be a mistake at all.

Expand full comment
Jay's avatar

I had a similar experience with his post a few years back on the war in Ukraine. He basically parroted Putin's line about Russia fearing NATO expansion, the West having provoked it, etc. I know people from Ukraine. Didn't care for it.

Expand full comment
OY's avatar

"The closest that humanity has come to annihilation in the past 300,000 years was probably the Petrov nuclear incident in 1983"

I'm sorry, but this is deeply unserious. Arguments that any nuclear crisis of the last century would have led to "annihilation" depend on an implausible nuclear winter. If you're going to make serious arguments about the possibility of an apocalypse, you can't just wave your hands around and pretend that the complete annihilation of humans was ever a possibility during the Cold War. Why should I take Scott seriously about this if he makes such an obvious misstep — and probably an intentional one for the sake of his argument — about something so easy to debunk with a little research?

Expand full comment
OY's avatar

There are a million excellent summaries of the nuclear winter debate on the Effective Altruism forum, for example, and even reading the most pessimistic ones, no honest reader could come away with the impression that a full-on nuclear war would lead to complete human extinction.

Expand full comment
Note Enjoyer's avatar

I’m sorry to be tediously literal, but he did say “the closest” to annihilation.

Is there another event you can think of that could have brought us closer than a full-on nuclear war would have done? My initial thought was the genetic bottleneck event, but that's well outside of the 300,000-year span.

Expand full comment
magic9mushroom's avatar

>I’m sorry to be tediously literal, but he did say “the closest” to annihilation.

Yes, but the obvious reading would seem to be that "closest" refers to how close we came to the event happening, not to what percentage of the population would have been killed (after all, the *actual* death toll was negligible: 1 for the Cuban Missile Crisis and 0 for Able Archer 83).

In percentage terms over the whole event, I doubt WWIII in the 60s or 80s would have exceeded the Black Death, though as noted above I can't think of anything else as bad.

Expand full comment
Jack's avatar

"I doubt WWIII in the 60s or 80s would have exceeded the Black Death, though as noted above I can't think of anything else as bad."

Reminds me of something I was confused about way back in middle school, or whenever it was I learned about the Black Death. Reading through the textbook while bored in class, I saw, in a section we didn't cover, a passing mention of the Plague of Justinian, which supposedly killed a comparable percentage of the Byzantine Empire's population as the Black Death did. It only got mentioned once, as opposed to a whole section on the Black Death.

Never understood why. Today I'd guess it's a historiography thing: people have more to say about the Black Death's role in the grand sweep of history, not just "a lot of people died, the end".

But it makes me wonder how many other mass death events happened over the years that nobody ever thinks about, but if you were going through it, would seem like the most important thing ever.

Expand full comment
Ch Hi's avatar

There've likely been several. IIUC, some of the blood types are selected for because they (partially) protect against one disease or another. And selected against because of different diseases. Some of that was geographic. (I think typhoid was evolved against in China...but I may have the wrong disease.)

More details at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7850852/ (which I didn't read, I'm working off memory).

Expand full comment
sohois's avatar

Freddie and Scott are using terms more loosely here. No one has insisted that "an apocalyptic event" must mean the end of existence for humanity. I think the vast majority of people would agree that a nuclear war would count as apocalyptic without having to strictly define how many people would be wiped out.

Expand full comment
magic9mushroom's avatar

If he'd used "apocalypse", I think there'd be less objection. OY is, however, responding to "[humanity's] self-annihilation", which literally means "humanity brought to nothing"; "no survivors".

This is particularly relevant since part of the context of FdB's essay is the anti-AI-X-risk movement, whose thesis is the possibility of humanity's self-annihilation due to building AI. We literally do mean "annihilation" there; there is no recovery from humanity being defeated by hostile AI. It's an apocalypse that cannot be survived and rebuilt from, because the AI doesn't go away and can actively hunt down survivors. "You win or you die; there is no middle ground."

Expand full comment
sohois's avatar

The previous sentence says "apocalyptic near misses". I think it's common sense what Scott meant here - given he has written many times on existential risk, he is definitely aware that nuclear winter and climate change don't reach that threshold.

Quibbling over words just seems like pure pedantry

Expand full comment
magic9mushroom's avatar

>Quibbling over words just seems like pure pedantry

1. In the X-risk field it's generally considered worth building a stigma about using words for the end of mankind incorrectly, because when we need those words we really need them (note Kamala Harris not appreciating the full scope of "existential" here https://www.politico.eu/article/existential-to-who-us-vp-kamala-harris-urges-focus-on-near-term-ai-risks/).

2. FdB's argument isn't fully dealt with by sub-existential apocalypses; the Black Death is *not* "the most important period in human history", and WWIII would also (probably) not have been.

Expand full comment
Mr. Doolittle's avatar

There are two ways to read "near miss" here, which isn't helping the conversation. We can nearly miss the event happening at all, which is what happened in 1983, or an event can nearly miss being an apocalypse because too many people survive to categorize it that way.

An event that didn't happen, and that wouldn't have been an apocalypse even if it had, doesn't meet my definition of "near miss" - it's too far removed from actuality. If we were trying to determine the frequency of a type of event, we would want actual instances of that event, not times when something similar-ish could theoretically have happened.

Expand full comment
beleester's avatar

If the AI apocalypse is "only" as bad as a full-scale nuclear war (not impossible - maybe the AI seizes power but it turns out nanotech superweapons aren't feasible and it has to conquer the world the old fashioned way) then that would still be bad enough to worry about!

Expand full comment
magic9mushroom's avatar

If it does conquer the world the old-fashioned way and is omnicidal, we're still all dead.

I do agree that there are scenarios where AI kills a bunch of people but not everyone, and that these scenarios are more likely if "lol nanotech" isn't on the table, but those scenarios are generally because we win at some cost, not because we lose - this is why I specified "defeated".

Expand full comment
Lenny DeFranco's avatar

The simplest rebuttal to Freddie’s post is that change is not a stochastic event but a continual process. He makes a good living (as he points out frequently) publishing on a platform that was invented well into his adulthood. Never been sure why he’s so insistent on denying the significance of technological change; it betrays his principal motive as being contrarian.

Expand full comment
gregvp's avatar

The most important technological discovery is the Haber-Bosch process, which allowed us to take the world population from about a billion to its current level and well beyond.

Closely followed by the invisible demon--er, excuse me: microbe--theory of infectious disease.

Expand full comment
Ch Hi's avatar

If you aren't going to quantify the time period, I'd be traditional and say the control of fire ... though admittedly that may be pre-human. After that I'd list language, though that wasn't exactly a discovery, and also has pre-human roots (but grammatical language is almost definitely human...almost). Next would be agriculture, even though that "discovery" took place over the span of multiple generations, and included such things as fertilizers. The Haber-Bosch process is only an elaboration of the original discovery of fertilizers.

When you're building something, the most important part is the foundation.

Expand full comment
Rothwed's avatar

I don't know if you are using the correct analysis. It underestimates the transformational nature of certain advances. The difference between natural fertilizer and synthetic is an upper bound on population of ~4 billion (probably more like 2 or 3 in actuality) vs... a limit placed by something other than food - at least 10 billion. If you follow the Robin Hanson school of thought, the amount of innovation this created by creating more people blows any other technology out of the water. Haber-Bosch technology also allowed the production of synthetic nitric acid - and hence gunpowder and explosives - which was a pretty big deal.

Similarly, the printing press was an elaboration of writing, but it was the difference between only a tiny minority of people laboriously writing by hand and entire societies becoming literate and exchanging ideas. Saying it's just part of writing really doesn't do it justice.

Expand full comment
Ch Hi's avatar

Yes, but those later advances could not have happened without the earlier ones. So the earlier ones were the more important.

Expand full comment
Rothwed's avatar

I understand that technologies require certain other technologies first in order to exist, but I still think assigning greater value on this basis is wrong. Electricity can't happen until people can work metal well enough to make dynamos and transmission wires, but metalworking had far less of an impact on progress than electricity. Or the Chinese guy who invented gunpowder being the most important part of the chain seems wrong. The people who figured out it could be used as a chemical energy source to propel projectiles had far more of an impact (hah) than pretty exploding lights, even though guns and cannons could never exist without the Chinese guy and his fireworks.

Expand full comment
Jeffrey Soreff's avatar

This sounds approximately isomorphous to the "mother of Napoleon" discussion in https://www.astralcodexten.com/p/how-do-we-rate-the-importance-of

Expand full comment
Rothwed's avatar

I would amend Haber-Bosch to a nitrogen fixation process in general. There were other successful options available around the same time. In Norway, they developed an electric arc process, but it was only economical because of the abundance of cheap hydroelectric locally. There was also another chemical process - using pyridine? - that was popular in the US for a while. Haber-Bosch was more practical than the other options, but it was not the only one.

Expand full comment
TTLX's avatar

Freddie is basically right.

Sure, the argument that we live in the end times has merit, but it always does. The problem is that people for all time have thought exactly the same thing, always led by a prophetic, self-appointed intelligentsia.

If now is different, a bit of blather about technology and AI is not going to do the trick for me. Deal with the underlying human psychology, at least. Show a bit of self-consciousness, at least.

Expand full comment
Melvin's avatar

That sounds like the "people have been wrong in the past therefore you're wrong now" argument. (This comes up often enough that it really deserves a better name.)

I'm not saying that the end is nigh, I'm just saying that the fact that people have been wrong in the past about the end being nigh isn't a particularly useful fact to bring to the discussion.

Expand full comment
TTLX's avatar

Well, when precisely 100% of people have been wrong that the end is nigh for the past several hundred thousand years, I think it's slightly useful to mention that, if not treat it as more significant than the precise factors that you think make this time different.

Expand full comment
Tom Hitchner's avatar

"Nothing major is going to happen" seems like one of those heuristics that almost always work: https://www.astralcodexten.com/p/heuristics-that-almost-always-work

Expand full comment
Petrel's avatar

I think you're very right to point to that essay, because both it and this one are obviously arguments in an (mostly-AI-)X-risk debate.

Personally, my take has been roughly the same since I read the essay you linked: yes, nothing is ever going to happen, until it does. So suppose you actually start taking these things seriously, and you hit the one time the heuristic doesn't work, and something significant happens. With the amount of resources you'll have wasted by that point on precautions against disasters that didn't happen - would survival even have been worth it?

Expand full comment
Tom Hitchner's avatar

That answer would depend on each individual’s preferences, but also on the measures one had taken. I haven’t come across any X-risk advocates arguing we should live in survivalist bunkers or any comparably disruptive step.

Expand full comment
Michael Kelly's avatar

Well of course it all ends someday. Someday, in a few billion years, our sun runs out of hydrogen in its core and starts fusing helium ... and swells out past Earth's orbit. Every human life will come to an end around 100 years after it begins. Every family group will eventually come to an end; every civilization eventually comes to an end. The average duration for an Earth species is about a million years before they're supplanted by another. This too is the most likely fate of homo sapiens: there will be a new homo species supplanting us.

But all the catastrophic 'world is going to end all at once' scenarios ... there's no historical precedent. Does it feel like the world is ending all at once if you are living in, say, Pompeii in AD 79? Yup. Or Rome, or Dresden, or Hiroshima, etc. Local catastrophic ends occur on a regular basis. Globally? I don't think it's very likely.

Expand full comment
Tom Hitchner's avatar

I don’t know who is predicting it will specifically end all at once. We seem to have moved the goalposts considerably from Freddie’s claim that nothing at all is even going to change, let alone end.

Expand full comment
magic9mushroom's avatar

Note that, looking at a worldline outside time, all people at all times will always observe that no end has yet occurred - there are no people at times after the end to observe that it has occurred. Thus, regardless of whether an end is in their near future, they will see that 100% of predictions of the end are either incorrect or not tested yet. This reduces the value of that evidence; it always points the same way regardless of the truth, so it's of very little Bayesian value.
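
In toy-Bayes form (priors invented purely for illustration):

```python
# The evidence "no apocalypse observed so far", conditioned on the observer
# being alive to make the observation. Priors are invented for illustration.
prior = {"end_is_near": 0.1, "end_is_far": 0.9}

# Living observers see "no end yet" with probability 1 under BOTH hypotheses,
# since no observers exist after the end.
likelihood = {"end_is_near": 1.0, "end_is_far": 1.0}

unnormalized = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalized.values())
posterior = {h: p / total for h, p in unnormalized.items()}
print(posterior)  # identical to the prior: the observation carries no update
```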

Expand full comment
JamesLeng's avatar

If the number of humans and/or their average quality of life was trending downward, extrapolating that curve to the point where it hits zero would at least make some kind of sense.

Expand full comment
Nicholas Moore's avatar

“Doomsaying dinosaurs have been wrong for 186 million years, so they can’t possibly be right now” says T-Rex, as asteroid ploughs into Earth’s atmosphere

Expand full comment
Mr. Doolittle's avatar

Yes, but a whole lot of 300,000-year periods fit into 186 million years. For the vast majority of that time, it would have been both accurate and important to say that the end wasn't nigh. The prediction would have been wrong only about 0.16% of the time (300,000 / 186,000,000). That's a really good prediction!

Expand full comment
Dan L's avatar

Hang on, are you replying to an ACX post, or your general impression of an argument? The fact that "precisely 100% of people have been wrong" is a mistake in a few different important ways is precisely what Scott's post is *about*.

Expand full comment
TTLX's avatar

Not this post, as far as I can tell. This post is about how Freddie's argument is just an anthropic error. But I'm interested in the ways it's actually about anthropocentrism, and the enormous bias that (especially) smart, proselytising people have towards the belief that they live in special times. Their prognostications need to be heavily discounted regardless of the merits, I think, precisely because they are the sort of people who *would* think this way.

Expand full comment
Dan L's avatar

Reread the part before anthropic reasoning is first mentioned: the '~7% of humans alive today' and other comparable measures are framed in terms of expectation over human lifetimes, but are easily recast in terms of 'expectation over the next 50 years' without loss of generality. At that point, it's practically vanilla Bayesian - you don't get near even 100:1 without sabotaging your reference class, and throwing out lines like "precisely 100%" should be setting off alarm bells.

The anthropocentrism strikes me as so thoroughly beaten to death as to be trite, but if that were the part that interested me I'd probably go a few posts further upstream.

Expand full comment
Melvin's avatar

Maybe. Then again maybe it's worth checking whether anyone has already mentioned it. If they have, you don't need to.

I think the problem with these sorts of conversations is that contributing to the actual meaningful conversation about the actual meaningful factors requires a lot of background knowledge, whereas the "people have been wrong about this sort of thing in the past" argument can be thrown in by every random passer-by.

The worst is when these random passers-by genuinely think they're being clever, like Freddie de Boer.

Expand full comment
spooked by ghosts's avatar

It's not just that people have been wrong in the past, it's that even when they had no "rational reason" to believe that we live in the end times, intelligent people believed it anyway! The reason the psychology behind beliefs like this is important is that smart people are really, *really* good at talking themselves and each other into believing the strangest things - whether or not they're true.

Expand full comment
Tom Hitchner's avatar

But isn't this tempered by the number of people who believed that nothing bad would happen—even when there was good reason to think something bad would happen—and then something bad happened? People talk themselves into believing stuff on both sides.

Expand full comment
Rob's avatar

It does seem notable that human beings have repeatedly built movements around predictions of total apocalypse, rather than around the much more likely prospect of smaller disasters, though really sorting this out would require a lot of empirical research.

Expand full comment
Tom Hitchner's avatar

Sure, we're fascinated by how it's all going to end. But note that Freddie (in the original post and in the comments here), when he says these are not special times, isn't just talking about extinction—he's saying that there will hardly be any discernible change in society at all.

Expand full comment
Melvin's avatar

I guess the question is how much we should take "people have been wrong about this sort of thing in the past" into account as an argument.

I don't think it's completely worthless as an argument. If you're talking to a hypochondriac who has a history of suspecting that his new vague symptom of the day is a sign of some fatal disease then it might be worth pointing out the fifteen times he's been wrong about this sort of thing in the last six months. But even hypochondriacs die eventually, and this sort of argument shouldn't override seeing a doctor when you're coughing up blood... or rather it shouldn't be interposed into the argument of how likely it is that a particular symptom is worthy of concern.

Expand full comment
Ch Hi's avatar

I think it's a variation on "post hoc ergo propter hoc". It's sort of "before this, therefore after this", without any attempt to establish a causal relation. (If you establish a causal relation, then it becomes a valid argument.)

OTOH, it's the kind of prediction that comes up frequently, because it's the kind of thing our minds are tuned to react to. This doesn't make it right or wrong, but one should be suspicious of believing it without specific good reasons. It's a warning of danger, so it tends to be given attention to a greater extent than its raw probability would warrant... but survival says "pay attention to warnings of danger, even if they're a bit unlikely". Unfortunately, given large populations and fast communication, we tend to be deluged by "things our minds are tuned to", and the question of "How much attention should we pay to this one?" doesn't fit the evolved responses... so we tend to tune them out.

TTLX feels "We're swamped by warnings without valid reasons behind them, so just ignore entire categories of claims". This will work almost all of the time. Almost. But if there *is* a valid causal mapping between the state and the prediction, this could be a bad mistake.

Statistical arguments aren't going to help here.

Expand full comment
Tom Hitchner's avatar

I remember when some religious group put up billboards around Southern California saying "THE WORLD WILL END ON MAY 4" (or whatever the date was). I remember being fascinated by how much I perked up at that and was aware of the date when it came, even though I knew the prediction had zero basis. I wasn't scared at all, but…interested, attuned.

Expand full comment
darwin's avatar

>The problem is people for all time have thought exactly the same thing.

Have they? Or are you thinking of 3-5 catchy anecdotes from across human history?

Expand full comment
Scott Alexander's avatar

If only someone had written an essay demonstrating that this line of reasoning was false.

Expand full comment
Blackjack's avatar

If you consider the probability of a lot of these larger tail events (singularity / tech advances) to be a function of both the scaffolding of accumulated knowledge (first I need a complex-enough chip to build that could encapsulate a human brain, etc.) and human coordination among highly-enough-skilled practitioners of a field, then you would expect a massive spike around the time when humans first build a nearly-real-time planetary knowledge and coordination network - i.e., the internet. The slope of the line at that point would likely be discontinuous, as humans now have the ability to work together at an unprecedented rate. (This probability might cap off after a generation or two, once we reach the pinnacle of human ability to coordinate.)

Any human living at that time (i.e., now) should see themselves living in an unprecedented time where the possibilities are fuzzy, as the new coordination engines of humanity are still being accelerated into full throttle, and we don't yet understand what progress / harm is possible in a generation with the engines at full throttle.

So of course we feel like we live in a time where these black swans are increasingly possible. The future is more murky and volatile and hopeful and doomed than it has ever been… it all just depends on the acceleration and velocity vectors of the coordination engines.

Expand full comment
Civilis's avatar

I was going to say something similar, but I see the slope of the line noticeably changing a bit earlier.

There are definitely a couple of points in human history where the rate of advancement increases significantly. While there's a correlation with changes in GDP, I'm not sure the causation isn't the other way around. And I think it supports your point that those inflection points are correlated with coordination between people.

Record keeping is one. While written records are the most obvious form this takes, it also includes things like using structures like Stonehenge to record and predict astronomical events. Getting technological progress to the point where there's sufficient excess production capacity to allow individuals to specialize in research and engineering development (and in similar knowledge-expansion professions, like teaching) is another.

I think the final one is getting society to the point where you can bring together researchers and engineers to work on projects that individuals couldn't handle. There's definitely a major discontinuity between major advancements coming from individuals working on their own (the Wright Brothers putting together the first practical aircraft in a home workshop) and advancements coming from massive groups working on specific projects (the Apollo program, 400,000 people at its peak). This sort of coordination may be responsible for, and a necessary prerequisite of, the internet. The first example of its kind I can see on a technological level is the Manhattan Project (though you could argue the V-2), but you see society-level industrial coordination stepping up through at least World War I. Am I missing any obvious examples?

Expand full comment
Jeffrey Soreff's avatar

Arguably, this was started slightly earlier, with Edison's lab:

https://www.nps.gov/articles/000/the-invention-factory-thomas-edison-s-laboratories.htm

Expand full comment
Civilis's avatar

Very good example.

There's obviously a transition period from 'scientific and technological advancement is done by individual discoveries' to 'scientific advancement is done by large coordinated groups' and the transition is relatively recent historically.

Expand full comment
Jeffrey Soreff's avatar

Many Thanks! And I agree about the transition and its timing.

Expand full comment
Anonymous's avatar

The internet does not function as a planetary knowledge and coordination network. Most compiled knowledge on the internet is behind paywalls.

Expand full comment
Joel Long's avatar

For technological apocalypse probability, I think the appropriate scale factor is "total energy the human species is able to manipulate".

If there were 100 billion humans but they hadn't had an industrial revolution (eg had very little access to chemical energy), it's still going to be very difficult to move enough energy to cause an apocalypse. Meanwhile access to nuclear energy means we *could* scour the earth (though as other comments note we'd still probably have to be trying).

If we ever get, say, matter/anti matter reactors going at scale, I'll figure the apocalypse is imminent.

You can make a case for scale of information manipulation posing similar risks, obviously, but I don't think it's as face-value clear.

Expand full comment
Victualis's avatar

In some models the antimatter reactor scenario has orders of magnitude more information processing than we have now.

Expand full comment
Civilis's avatar

I think a better scale factor ties into how much power each individual person can bring to bear, but it's harder to measure. Much like GDP, it's also a good offhand measure for the chance of a singularity. Anything that significantly increases an individual's ability to exercise power is very likely to generate significant social change.

Expand full comment
Bentham's Bulldog's avatar

Good post! Though I think we can show quite demonstrably that either anthropic reasoning will force you to abandon Bayesian updating in extremely egregious ways, think that Adam can get telekinesis, or think that you can be certain that the universe is infinite. https://benthams.substack.com/p/explaining-my-paper-arguing-for-the?utm_source=activity_item

(From there, the route to a bigger infinite and then God is fairly straightforward).

Expand full comment
Bentham's Bulldog's avatar

//Here’s a question I don’t know how to answer - the number above (7%) is about how surprised you should be if the apocalypse happens in your lifetime. But I don’t think it’s the overall chance that the apocalypse happens in your lifetime, because the apocalypse could be millions of years away, after there had been trillions of humans, and then retroactively it would seem much less likely that the apocalypse happened during the 21st century. So: is it possible to calculate this chance? I think there ought to be a way to leverage the Carter Doomsday Argument here, but I’m not quite sure of the details.//

SIA solves this problem (as it does pretty much all problems!). It's true that the odds of finding yourself in the most important century, given that you exist, are very low if there are millions of generations - but in a world with millions of generations it's also proportionally likelier that you'd exist at all. The two probabilities precisely cancel out, and you get no overall update: your credence in the world ending is exactly what it would be if you ignored anthropics!
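
A toy numerical version of that cancellation (the populations and priors are invented for illustration):

```python
# SIA cancellation with invented numbers. Two worlds: doom soon (1.2e11
# humans ever) vs doom late (1.2e13), with equal priors.
prior = {"doom_soon": 0.5, "doom_late": 0.5}
total_humans = {"doom_soon": 1.2e11, "doom_late": 1.2e13}

# I observe I'm roughly the 60-billionth human, which is possible in both
# worlds. SIA weights each world by its observer count, and the chance of
# being any *particular* birth rank is 1/N, so:
#   P(world | my rank) is proportional to prior * N * (1/N) = prior
unnormalized = {w: prior[w] * total_humans[w] * (1 / total_humans[w]) for w in prior}
total = sum(unnormalized.values())
posterior = {w: p / total for w, p in unnormalized.items()}
print(posterior)  # {'doom_soon': 0.5, 'doom_late': 0.5} - exactly the prior
```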

Expand full comment
Hank Wilbon's avatar

What is the math on the Shakespeare calculation? Looks like 1.5 billion people lived between 500 BC and 1500 AD, and about 15-20 billion since 1500 AD? So, given an equal number of writers across the population independent of when they lived, you would expect the odds of the greatest writer being born before 1500 AD to be about 1 in 20. But obviously we've had many more literary writers in recent centuries, so maybe you multiply the denominator by, what, 1,000? So now the odds the greatest writer was born before 1500 are 1 in 20,000. That doesn't sound so crazy. But it seems to me that we need to adjust the odds again, given that we already know Shakespeare was alive in the year 1500 and that many famous writers and literary critics consider Shakespeare to have been the greatest writer. So shouldn't we take that known fact and adjust the numerator by some factor also? Expert opinion isn't always correct, but it's worth a lot, so maybe we adjust the numerator by a factor of 500 and end up with odds of 1 in 40 that Shakespeare was the greatest writer ever? It doesn't get any more scientific than that, and it sounds about right: Shakespeare was probably at least in the top 40 writers ever.

Sorry - Shakespeare of course lived around the year 1600, not 1500, which makes the calculation more favorable for him. (I consider the population starting in the year 500 BC to include the lifetime of Sophocles but not Moses or Homer.)
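
The whole pipeline in Python, treating every input as my guess rather than an established figure:

```python
# Fermi pipeline from the comment above; every input is a guess.
odds = 1 / 20    # chance the greatest writer predates 1600, by headcount alone
odds /= 1_000    # guess: ~1,000x more literary writers per capita recently
print(1 / odds)  # 20000.0, i.e. 1 in 20,000

odds *= 500      # guess: weight of four centuries of expert consensus
print(1 / odds)  # 40.0, i.e. 1 in 40
```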

Expand full comment
1123581321's avatar

But isn’t this bit of information: “S. Is still widely read and considered to be a great writer, 4 centuries on” a very strong prior toward him being the GOAT (to be clear, in English - I consider comparing writers in different languages to be futile)?

Expand full comment
Xpym's avatar

Sure, but this definition of the GOAT favors early writers extremely highly. Both in terms of low hanging fruits that are no longer available, and the culture being shaped by their continued influence. Basically, a potential Shakespeare had to be born at the right time possessing some combination of raw talent and luck at applying it.

I'd say that this is even more obvious in music. Beethoven lived at exactly the right time to be making music at the human frontier of complexity, back when that frontier was not yet so high that non-specialists could no longer appreciate it. These days the chasm between the most advanced and the most popular music is vast, such that it's impossible for anybody to be both the most advanced composer and popular enough at the same time, which is plausibly necessary for a claim to GOAT status.

Expand full comment
1123581321's avatar

I think this is actually a... feature? rather than a bug?

Yes you'd have to be extremely lucky to be born at the right time in the right place to have a shot at becoming a GOAT (for composers, mine is J.S. Bach, and the same thing applies). It is highly likely that no one can have a shot at besting either Bach or Beethoven, that there will never be a composer with impact anywhere close to these two.

For "culture" GOATs the test of time is also necessary, which again excludes anyone recent.

Expand full comment
aqsalose's avatar

The Shakespeare prior estimate has a bit of a "spherical writers in a vacuum" problem to it, before even considering the evidence. Authors of great literature do not pop into existence as a result of college education. There are more WEIRD college graduates who wish to become great authors than there were in AD 1600, but how many of them have anything worthwhile to say?

One can plausibly argue that factors other than population size are in play. Consider a model in which becoming a "great author" requires (a) natural verbal intelligence/talent, (b) access to literary education, (c) both varied life experiences and other sources of meaningful stories, and (d) a competitive environment that provides feedback and a selection mechanism.

As the population grows, the number of talented people increases too, assuming the rate of (a) stays constant. But modern life could be less favorable to the other factors, (b) through (d).

Concerning factor (b): access to education is wider, but the quality of literary education might be less impressive. (How many Western kids are required to study Latin classics or poetry, or to recite long-form texts from memory?)

Concerning factor (c): life in Western democracies seems lower-stakes than in previous eras, for commoners and rulers alike. There is only a limited number of exciting stories to be told about suburban life, and Western political leaders are more likely to fight a legal battle in a courtroom than to personally lead their troops to Bosworth Field.

Concerning factor (d): the entertainment industry today has competition, but the market optimizes for a different kind of product than the playwright scene of Elizabethan London, where the competition was both to draw in an audience and to gain the approval of classically educated aristocratic and royal patrons.

_If_ each of factors (b) to (d) favors pre-1600 authors by a factor of ten, 1 in 20,000 becomes 1 in 20. But the exact size of such factors is not really the point. More important is that each has a sizeable impact on any crude Fermi estimate based on the size of the educated population, and that the effects are more likely to be multiplicative than additive. A sum of additive effects would yield a nice symmetric normal distribution around the mean; multiplicative random effects tend toward a log-normal, with an asymmetric long tail.
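
A quick simulation of that last point; the three order-of-magnitude factors are purely illustrative:

```python
import random
import statistics

# Compare summing three uncertain factors with multiplying them. Each factor
# is uniform over an order of magnitude; only the combination rule differs.
random.seed(0)
N = 100_000
additive = [sum(random.uniform(1, 10) for _ in range(3)) for _ in range(N)]
multiplicative = [
    random.uniform(1, 10) * random.uniform(1, 10) * random.uniform(1, 10)
    for _ in range(N)
]

for name, xs in (("additive", additive), ("multiplicative", multiplicative)):
    print(name, "mean:", round(statistics.mean(xs), 1),
          "median:", round(statistics.median(xs), 1))
# The additive mean and median roughly coincide (symmetric distribution);
# the multiplicative mean sits far above its median - the asymmetric long tail.
```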

Expand full comment
Odd anon's avatar

I feel like FdB is advocating the outside view / modest epistemology, rather than making an anthropic argument. "Things people often make wrong predictions about" can be a useful category to pay attention to, when removed from the object level.

Expand full comment
Nebu Pookins's avatar

I feel like a person who was advocating "modest epistemology" would not write something like (paraphrasing):

> Here is a list of people who do not understand a concept that I myself *do* understand: ...

Expand full comment
MicaiahC's avatar

The "modest" in "modest epistemology" doesn't refer to modesty, the virtue of being egalitarian and humble, but "modest" in the sense of "conservative and non-radical".

E.g. praising someone for doing traditionally good things would be immodest in the first definition but modest in the second

Expand full comment
Jeremy R Cole's avatar

It's frustratingly bad reasoning, when I think the point he was trying to make was as simple as this: our time period isn't cosmically important. That's probably true! We aren't necessarily living in some special era of humanity, and it's often a form of escapism to wax poetic about either apocalyptic or utopian end times.

Computing a probability uniformly across time periods while saying that, actually, the 1850s-1950s (i.e., the century immediately before this one) was more important, and pretending there's no correlation in importance across periods, is just... weird. It would stress the cosmological insignificance of our time period more to note that the universe is old and we don't know of any other civilizations that have, e.g., created superintelligence.

I actually am sympathetic to the argument that you (but not just you) over-analyze high-uncertainty / low-probability / severely catastrophic events, and sympathetic to the humanist argument that you should focus on solving real problems instead of purely hypothetical x-risks; he's written much better versions of the same argument that rely more on existentialism and less on probability.

Expand full comment
Bldysabba's avatar

'We aren't necessarily living in some special era of humanity, and it's often a form of escapism to wax poetic about either apocalyptic or utopian end times.'

I don't understand why so many people are saying this. The last 200 years have obviously been extremely special.

Expand full comment
Jeremy R Cole's avatar

It depends on what you compare to. If humans exist for another 300,000 years, how important will this specific century be?

But that's not really the point, which is what's meant by special. The present isn't special in the sense that normal society is about to collapse (apocalypse) or completely change (utopia), such that investment in normal society is no longer beneficial.

Expand full comment
Desertopa's avatar

> It depends on what you compare to. If humans exist for another 300,000 years, how important will this specific century be?

Plausibly, very important, since it could have a major determining influence on what that existence hundreds of thousands of years from now is like.

Anatomically modern humans have existed for hundreds of thousands of years, and could in theory have gone on existing in much the same way for hundreds of thousands more. But if we look back from the distant future, the invention of writing is still going to look very important. It was very much influenced by other preceding inventions, like agriculture leading to increasingly urban living, and the need to track increasing levels of trade, but those developments led to something that dramatically increased the rate of further development.

The idea that we could be living through another development of comparable significance isn't by any means absurd.

Expand full comment
Jeremy R Cole's avatar

Sure, the invention of writing was certainly very important. But I would say the argument is more that the invention of writing was by no means a point event. As a communication technology, it's now one that 87%-ish of people can use; i.e., its invention and adoption have been ongoing for over four millennia, and still continue. So which century should get the credit?

Expand full comment
Desertopa's avatar

We could credit the specific time of invention, or treat it as a process spread out over a longer period. Similarly, we could point to the "invention" of AI, which is clearly an ongoing process. The interval of its impact is clearly spread out at least somewhat. But society moves much faster, and changes propagate much more quickly now than they used to. The internet was invented in the late 20th century, and it's been adopted throughout most of the world, and had a dramatic impact on the operations of our civilization, within the span of a few decades.

Expand full comment
Bldysabba's avatar

'It depends on what you compare to. If humans exist for another 300,000 years, how important will this specific century be?'

If you construct hypotheticals in which this century is not important, then in those hypotheticals, yes this century is not important. In the world in which we live though, the last two centuries have been extremely unusual in very important ways.

Expand full comment
Cal van Sant's avatar

Notably so, in that 200 years ago the Fermi paradox wouldn't have made sense as a question. We've entered an era where we can be surprised that we don't have evidence of any other civilization in the universe having reached the same point.

Expand full comment
Scott Alexander's avatar

I think a friendly reinterpretation of your claim would be that it's possible for importance to be monotonically increasing (possibly exponentially), in which case it would be unsurprising for us to be the most important generation ever so far (because every generation is) but more surprising for us to be the most important generation even counting the future.

I sort of agree with this, but think it's compatible with (for example) humanity being wiped out during our lifetime, since this could still be less important (from the perspective of 100,000 AD) than the time in 10,000 AD when robots were wiped out by super-robots.

Expand full comment
Jeremy R Cole's avatar

I'm primarily gently rephrasing FdB's argument, since he hinted at it before, e.g., here:

"I believe that many who are passionate about artificial intelligence are so invested because they believe that AI will rescue them from the disappointment and mundanity of human life. That it will free them from the ordinary. One way or the other."

https://open.substack.com/pub/freddiedeboer/p/ai-or-the-eternal-recurrence-of-hubris?utm_source=share&utm_medium=android&r=hghp6

which is primarily, I think, about how humans should spend their energy -- doing the work of living a good life right now, rather than dreaming about apocalypses or utopias. That is, I think it's more of a humanist argument than an anthropic one.

I think the related point he was trying to get at is that, in the vastness of space and time, are the next fifty years really going to be cosmically important? Well, as you point out, they're relatively likely to be important to humans, but the original Copernican principle has more to do with philosophically downgrading humans as the center of the universe. I don't think he really squares the circle here, and mentioning undiscovered galactic empires or future synthetic superintelligences probably wouldn't have worked for what he was trying to get at.

Expand full comment
SnapDragon's avatar

Arrrrg, this is the second time in recent history Scott has mentioned the Carter Catastrophe as a real thing that isn't bullshit. In fact, it is not a real thing and is bullshit, and it's honestly not too hard to throw together some basic probability models to show it. For instance - note that I'm not being fully formal here, for brevity's sake - imagine you create two universes in your basement, one that you stop after 10 humans exist and one that you stop after 10 million humans exist. Pick a universe and ask human #5 which one they're in. If the doomsday argument were real and not bullshit, they would be all but certain (a million-to-one update) that they were in the 10-human universe. But, of course, their real probability is 50-50, which is the boring correct answer.
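
A minimal simulation of this basement setup; a second, random-sampling framing is included for contrast, since the Doomsday dispute is precisely over whether that framing applies:

```python
import random

random.seed(1)
SIZES = {"small": 10, "large": 10_000_000}
trials = 1_000_000

asked = {"small": 0, "large": 0}    # framing A: go ask human #5 directly
sampled = {"small": 0, "large": 0}  # framing B: sample a random inhabitant,
                                    #            condition on them being #5
for _ in range(trials):
    universe = random.choice(["small", "large"])
    asked[universe] += 1            # human #5 exists in both universes
    if random.randint(1, SIZES[universe]) == 5:
        sampled[universe] += 1

print({u: c / trials for u, c in asked.items()})             # ~50/50: no update
total = sum(sampled.values())
print({u: round(c / total, 4) for u, c in sampled.items()})  # ~all "small"
```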

The fact that you're experiencing qualia does not give you special weight among all other humans (sorry!) - there is no "sampling" process that has "selected" you out of all humans that will ever exist. Or, as the sadly credulous Wikipedia article phrases it, there is no physical process by which you "find yourself" in your body. What would it even look like for "you" to "find yourself" as someone else? All humans, tautologically, have just one body they could ever "find themselves" in.

Philosophy is full of problems that are tough to resolve. The Doomsday Argument is not one of them. Anthropic reasoning has lots of interesting uses, but it cannot be used to predict the future. It simply doesn't work that way.

(I have a very long, more rigorous effort-post about this that I've been sitting on for a long time, but I'm honestly a bit nervous about publishing it because some people really REALLY love believing in this thing. Like the ones who wrote the obfuscatory Wikipedia article...)

Expand full comment
Silverax's avatar

This is just a variation of the [sleeping beauty problem](https://en.wikipedia.org/wiki/Sleeping_Beauty_problem), which is famously controversial and doesn't have a consensus on the answer. You can't just say it's _obviously_ 50/50.

You might believe that it's correct. But saying this solution is obvious is _objectively_ wrong. If it were, you could just write up a paper in a couple of days and rack up the status.

Expand full comment
SnapDragon's avatar

Oof. I was totally on your side for the DA, but you're quite wrong here.

Every night you go to sleep, have a friend flip a coin once for the actual sleeping beauty problem, and once to decide (as a simulation aid) whether he'll simulate day 1 or 2 of the problem. He'll wake you up unless it's "day 2" and "heads". How often, upon waking, do you think you'll see the result of the first coin being heads? Just go and TRY IT. You of course don't have to go to sleep to do this experiment, thematic as it is. :)
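
For anyone who'd rather not recruit a friend, here is the same nightly procedure in a few lines of Python:

```python
import random

# One coin for the Sleeping Beauty flip, one (as a simulation aid) to pick
# which day to simulate; you are woken unless it's "day 2" and "heads".
random.seed(2)
wakings = heads_seen = 0
for _ in range(1_000_000):
    coin = random.choice(["heads", "tails"])
    day = random.choice([1, 2])
    if coin == "heads" and day == 2:
        continue                 # not woken tonight
    wakings += 1
    heads_seen += (coin == "heads")
print(heads_seen / wakings)      # -> ~0.333 for this setup
```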

Expand full comment
Ape in the coat's avatar

> Every night you go to sleep, have a friend flip a coin once for the actual sleeping beauty problem, and once to decide (as a simulation aid) whether he'll simulate day 1 or 2 of the problem. He'll wake you up unless it's "day 2" and "heads". How often, upon waking, do you think you'll see the result of the first coin being heads?

Here the probability is indeed 1/3. This is because day 1 and day 2 are randomly sampled, and just one random awakening happens in every iteration of the probability experiment.

Not so in the Sleeping Beauty problem, where awakenings do not happen at random. On Tails, both awakenings happen in the same iteration of the experiment.

This difference may appear subtle, but it has noticeable statistical properties. You can easily see it if you simulate the experiments multiple times. In your version, Monday&Heads, Monday&Tails, and Tuesday&Tails awakenings will happen in random order. In Sleeping Beauty as stated, a Monday&Tails awakening will always be followed by Tuesday&Tails.
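
A sketch of that ordering property (under the framing that awakenings within one iteration are listed in the order the Beauty experiences them):

```python
import random

# List awakenings across many iterations of standard Sleeping Beauty and
# check what follows a Tails&Monday awakening.
random.seed(3)
seq = []
for _ in range(100_000):
    if random.random() < 0.5:
        seq.append(("heads", "monday"))
    else:
        seq.append(("tails", "monday"))
        seq.append(("tails", "tuesday"))  # always follows within the iteration

followers = {seq[i + 1] for i, a in enumerate(seq[:-1]) if a == ("tails", "monday")}
print(followers)  # {('tails', 'tuesday')}: fully predictable, unlike a shuffled setup
```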

Just like in the DA you are not randomly sampled from among all possible humans throughout time, even though you don't know your birth rank "a priori" and can be in a state of ignorance about it, in SB you are not randomly sampled from among the different awakening states, even though you experience amnesia and are in a state of ignorance about the current day. I talk about this in more detail in the linked posts - feel free to engage.

Expand full comment
SnapDragon's avatar

Yes, I read the posts, and unfortunately, they're far too complex. The SB problem doesn't require dozens of pages of English argumentation (which is easy to mess up); it just requires a small amount of proper formalism. To pick one example: because of the lack of precision, when you say e.g. "P(Heads&Monday)=1/2", you need to be clear that this is the probability that Heads&Monday happen in an experiment; unfortunately, you also treat it as the probability that you see Heads&Monday upon waking - even though that's in a different probability space! (This isn't just pedantry. You literally have different knowledge of the state of the universe between when you start the experiment and when you're awoken. In real life, too.) And some of the things you say are just plain wrong, like: "Probability theory has an curious limitation. It is unable to talk about events which happen multiple times during the same iteration of an experiment." Expected values would like to have a word with you...

(On the positive side, I will say that I do really appreciate your python code. It disambiguates a lot of what you said, though it doesn't save your argument. More of that formalism would go far!)

Probability is just a measure of "# of positive events" / "universe of events". When you (or SB) awaken, the universe is "all times I awaken". The positive events are "all times I awaken with heads". I'm glad you agree that this comes to 1/3! That's just the answer, full stop. It doesn't require dozens of pages of argumentation!

It is completely irrelevant what order the events come in; this is information that is outside your universe and can never affect your probability estimate.

If you really do insist that you somehow get different aggregate probabilities from an experiment when you sample it than when you run it fully (...which would be news to, well, all scientists and pollsters), you can even run the experiment without sampling by splitting SB into two different people. For each experiment, secretly randomize who will be the "Monday guy" and who the "Tuesday guy", and wake them up as appropriate. This even preserves your ordering property (though it's still hidden from the participants, and still irrelevant). And, as the experiment is repeated, they will each observe that upon awakening the coin is Heads 1/3 of the time. They can even pool their (equivalent) observations together, and it would be indistinguishable from what SB would have observed, and of course the 1/3 result won't change.

Do you have another clever argument for why THIS real-world experiment also doesn't represent Sleeping Beauty?

Expand full comment
Ape in the coat's avatar

> To pick one example: because of the lack of precision, when you say e.g. "P(Heads&Monday)=1/2", you need to be clear that this is the probability that Heads&Monday happen in an experiment;

Oh, it's the exact opposite of a lack of precision. I'm very explicit about what I mean by my events.

"Monday" means "Monday awakening happens in this iteration of probability experiment".

"Heads" means "The coin is Heads in this iteration of probability experiment"

"Awake" means "The Beauty is awake in this iteration of probability experiment"

All these events are formally defined and in every iteration of the probability experiment they have coherent truth values. Their probabilities are:

P(Monday) = 1, P(Heads) = 1/2, P(Awake) = 1

P(Monday&Heads) = P(Monday&Heads|Awake) = 1/2

> unfortunately, you also treat it as the probability that you see Heads&Monday upon waking - even though that's in a different probability space!

Whether or not there is a different probability space is, in fact, the objective disagreement. I claim that when the Beauty is awake she observes the event:

"The Beauty is awake in this iteration of probability experiment"

while you believe that she observes a different event:

"The Beauty is awake *today*"

If such an event were formally definable - I'd agree with you!

The problem is that such an event, and any event that talks about *today* in the Sleeping Beauty problem, is ill-defined: it doesn't have a coherent truth value in every iteration of the probability experiment.

You can see it yourself. Suppose the coin is Heads. Then in this iteration of the experiment the Beauty is awake on Monday but not on Tuesday. The statement "The Beauty is awake *today*" would then be simultaneously true and false in the same iteration of the experiment.

Notice that your version, where you are awakened only on one random day, doesn't have this issue. There you can formally define today as "Day 1 xor Day 2", and indeed the statement "The Beauty is awake on Day 1 xor Day 2" is true in every iteration of the probability experiment and is therefore a coherent event.

You may want to read this comment thread, where a person started from complete disbelief but then actually tried to formalize *today* in the Sleeping Beauty setting and noticed how surprisingly hard it is.

https://www.lesswrong.com/posts/gwfgFwrrYnDpcF4JP/the-solution-to-sleeping-beauty?commentId=vkoi3mmJ8JxSy2Jyq

> Expected values would like to have a word with you...

Expected values are aggregated across multiple iterations of the probability experiment, not within one. More on this below.

> Probability is just a measure of "# of positive events" / "universe of events".

It has a strict mathematical definition. You require a set of mutually exclusive outcomes to define a probability space. The outcomes that thirders try to use are not mutually exclusive in the Sleeping Beauty setting; therefore what they call probability isn't actually probability but a different entity, which I explore in the last post.

> When you (or SB) awaken, the universe is "all times I awaken".

That's true for your version of the experiment, but false for Sleeping Beauty as stated, for the reasons above.

> It is completely irrelevant what order the events come in; this is information that is outside your universe and can never affect your probability estimate.

If the order didn't matter, then there would be no reason not to think that you are a random person from the whole human population for the sake of the DA.

But thankfully, this information is not "outside of the universe". The Beauty is aware of the setting of the experiment; she knows full well that her Tails awakenings are ordered. And this can obviously affect her probability estimate.

Suppose you participate in an iterated Sleeping Beauty experiment. Suppose also that you somehow learn that your previous awakening was the Tails&Monday one. This immediately makes you certain that you are now experiencing the Tails&Tuesday awakening.

Compare this with going through an iterated experiment according to your rules, where each iteration has only one awakening from the Sleeping Beauty routine. There, learning that your previous awakening was a Tails&Monday one *actually tells you nothing* about your current awakening. Clearly you can see the difference, can't you?

> which would be news to, well, all scientists and pollsters

This has nothing to do with the regular work of scientists and pollsters. It's specifically the combination of amnesia and repeated experience within the same iteration of the probability experiment, while being fully aware of its nature, that puts the Beauty in a "weird" epistemic state where she loses the ability to coherently conceptualize *today*. A pollster's work, on the other hand, can be represented as multiple iterations of a simple probability experiment:

randomly pick a person from a population

ask them a question

Then they aggregate results over all the iterations of the experiment. This is obviously different from trying to aggregate results within a single iteration, where you do not even have a coherently defined sample space.

> Do you have another clever argument for why THIS real-world experiment also doesn't represent Sleeping Beauty?

The core reason is still the same. In Sleeping Beauty your awakenings do not happen at random; here they do. Here the statement "I'm awake today" has a coherent truth value in every iteration of the experiment; in Sleeping Beauty it does not. There is also the additional nuance that in this version you are not certain to be awakened in every iteration of the experiment, while in Sleeping Beauty you are, but this is less crucial.

In general, you describe a different experimental setting and just assume that it has to be the same as Sleeping Beauty. The initial burden of proof is on you. Essentially, you are making the same type of faulty argument as people who assume the DA is just like paper picking, even though paper picking has random sampling and the DA does not.

Expand full comment
SnapDragon's avatar

Argumentum ad populum, eh? Funny you should mention Sleeping Beauty, which (while not quite the same as the DA) is also obvious and easy to answer. (The thirder, not halfer, position is correct. At least the Wikipedia article on Sleeping Beauty admits that the thirder position is how you should wager if you don't want to lose money, which makes it a lot closer to admitting there's a simple true answer than the article on the Doomsday Argument.)

Why's it easy to answer? Like Monty Hall, YOU CAN DO THE EXPERIMENT YOURSELF. Sure, you don't have access to mind-wiping drugs, but instead you just repeat the experiment, each time having the experimenter randomly (and secretly) decide whether to simulate day 1 or day 2. Iterate, add the results together, and you get the same probabilities as if you simulated day 1 and 2 successively.

If you really, honestly think that the halfer position has any legitimacy, then please, please, PLEASE go find a friend and just DO THIS EXPERIMENT YOURSELF. It's so easy! You'd be embarrassed to argue against the correct answer to the Monty Hall problem, right?

Now, maybe you have a definition of "obvious" that somehow excludes a problem where you can grab a friend and, in 30 minutes, decisively settle the question...?

Expand full comment
Nebu Pookins's avatar

> Argumentum ad populum, eh?

There are a couple of situations where argumentum ad populum is appropriate. For example, "What is the Schelling point for [basically any scenario whatsoever]?" Or, from a linguistic descriptivism viewpoint, "What is the definition of [any commonly known word]?"

> At least the Wikipedia article on Sleeping Beauty admits that the thirder position is how you should wager if you don't want to lose money, which makes it a lot closer to admitting there's a simple true answer

I think this is confounded by the fact that you're unaware of "how often" you're making the bet.

Like, let's say you and I play a game where you pick a rational number and then I flip a guaranteed-to-be-fair coin, and you have to guess whether it lands heads or tails. If it lands heads, you have to pay me the number you picked. If it lands tails, you have to pay me double the number you picked. Either way, if you guess correctly, you win 1 unit of money. What's the biggest number you can pick such that you won't lose money on average? Also, what do you think the probability is that the coin lands heads? Probably your answers to these two questions won't be the same.
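
The side game's arithmetic, spelled out (the game and its payoffs are as the comment describes):

```python
from fractions import Fraction

# Pick x; a fair coin is flipped; you pay x on heads, 2x on tails, and win
# 1 unit whenever your guess is right (which happens half the time).
def expected_value(x):
    expected_payment = Fraction(1, 2) * x + Fraction(1, 2) * 2 * x
    expected_winnings = Fraction(1, 2)
    return expected_winnings - expected_payment

print(expected_value(Fraction(1, 3)))  # 0: break-even exactly at x = 1/3
# So 1/3 is the right number to wager with, even though the coin is fair at 1/2.
```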

> Now, maybe you have a definition of "obvious" that somehow excludes a problem where you can grab a friend and, in 30 minutes, decisively settle the question...?

Yes. If lots of people get the answer wrong, then that problem is "non-obvious" *even if* it can be decisively settled in 10 seconds, never mind 30 minutes.

For example, "if a baseball bat and a ball together cost $1.10, and the baseball bat costs $1.00 more than the ball, how much does the ball cost?" Lots of people get this wrong, even though you can easily and decisively settle the question in 10 seconds. Thus this problem is non-obvious.

Expand full comment
SnapDragon's avatar

> I think this is confounded by the fact that you're unaware of "how often" you're making the bet.

Well, that's fundamentally why the fair coin turns into a 1/3 bet, sure. Nothing's being "confounded", though - if you can't distinguish between wakings (so you have to pick a consistent wager across all of them), there's a single correct answer for the odds to wager at.

> Probably your answer to these two questions won't be the same.

Yeah, this is another nice way of pointing out the difference between wagering at the time the experiment starts and wagering each time you wake.

As for "obviousness" ... all right, fair enough. I'm not super attached to my subjective definition of "obvious". How about I equivocate and just say that it's "as obvious as the Monty Hall problem"? There's not really a good reason for Wikipedia to spell out the correct answer to the Monty Hall problem, but (mostly) fail to do so for Sleeping Beauty.

Expand full comment
Nebu Pookins's avatar

I think you're assuming that "the correct answer for the odds to wager at" is the same as "to what degree ought you believe that the outcome of the coin toss is Heads", which is... "not obvious".

Like in the hypothetical game I proposed, I can believe that a guaranteed-to-be-fair coin has 1/2 odds of coming up heads, while simultaneously believing that 1/3 is the correct odds to wager at, given the "weird" rules of the game being proposed.

So it seems plausible that I could say "I think the answer to the Sleeping beauty question is 1/2, even though I'd bet at 1/3 odds" and still have that be a reasonable and coherent set of beliefs.

Expand full comment
SnapDragon's avatar

Nope. This is not ambiguous. The answer to the Sleeping Beauty problem isn't "I don't know what you're asking and there's nothing you can say to clear up the confusion, la la la." There is a correct answer and it's not subjective. When you wake up, and I ask you "what is your confidence that the coin flip was heads?", your answer is 1/3, the only answer that matches the aggregate results you'll observe, the only answer that lets you bet on the outcome without losing money.

Let me ask you this... if I just SHOWED YOU THE RESULT and it was tails, and asked what the probability it was heads was, would you still say "well, I think the answer is still 1/2, even though I'd bet at 0 odds"? You think it's somehow reasonable to throw out an observation in one case but not the other? Does it feel good to insist that the true "schmrobability" is 1/2 despite the fact that this "schmrobability" has no predictive power?

Expand full comment
MicaiahC's avatar

Not going to derail the thread, but I think this entire exchange is terrific. If every comments thread was like this, I'd subscribe to the comments section.

And while I don't think the tone in the parent post is perfect, it does focus on the object level far more than on insulting people or appealing to meta-level reasons without engaging on the object level.

In summary: +2

Expand full comment
Bugmaster's avatar

> I think if you use anthropic reasoning correctly, you end up with a prior probability of something like 30% that the singularity (defined as a technological revolution as momentous as agriculture or industry) happens during your lifetime...

What, is that all? In this case, I'd say that 30% is possibly a bit low, because momentous changes in agriculture or industry happen all the time. They happen on the global scale, as with e.g. the Internet, the electronic age, the Industrial Revolution, etc.; as well as on the local scale, such as e.g. your friendly neighbourhood village coot discovering oil on his property, or your country's government collapsing in chaos. Something momentous is always happening somewhere.

But that's not what most people mean when they talk about capital-S Singularity. Instead, they mean a final apocalyptic event after which humanity ceases to exist in any recognizable form (due to either transcendence or extermination); and to use *this* meaning of the word interchangeably with the previous one smacks of deliberate equivocation.

Expand full comment
REF's avatar

I didn't think that it was "cease to exist in recognizable form." I thought it was that the changes would be so significant that it was impossible to predict what things would look like on the other side. Or not that humans would necessarily be unrecognizable, but that civilization/technology would be unrecognizable.

Expand full comment
Bugmaster's avatar

> Or not that necessarily humans would be unrecognizable but civilization/technology would be unrecognizable.

Firstly, I think this is pretty similar to my formulation, although I understand what you're saying: our biology (or lack thereof) could radically change while our civilization remains the same. I think this would still count as "*humanity* ceasing to exist in recognizable form".

But secondly, this had not been the case with the Industrial Revolution (etc.) either, nor with any other commonly brought-up example of rapid technological change. Yes, technology had changed in unpredictable ways, but people are still people and our civilization is still largely recognizable. So, once again, the Singularity is supposed to be a hitherto totally unprecedented event.

Expand full comment
Amicus's avatar

Our civilization is recognizable to us, but of course it is, it's ours. Would it be recognizable to a hunter-gatherer in 20,000 BC?

Expand full comment
Bugmaster's avatar

No, arguably not; but the change from then to now was rather gradual -- a 20,000-year gradient rather than a pinpoint Singularity. A better question might be something like "would a pre-industrial British farmer recognize London at the peak of the Industrial Revolution?", and I think the answer is "mostly yes". People still have families, play politics, work for a living (even if some of the jobs are different), live in cities, sit on chairs, drink beer, etc. This sounds nothing like the Singularity as it is commonly depicted.

Expand full comment
William H Stoddard's avatar

This definition of the Singularity as the equivalent of the Agricultural Revolution or the Industrial Revolution seems to operationalize it in a way that deprives it of its specific meaning. Both Vinge's original essay and Kurzweil's book seem to present, not so much a more cataclysmic event, as one whose form and consequences cannot meaningfully be anticipated, because our predictive methods fail under the conditions it creates. It's rather like the old physics cartoon where one of the steps is "A Miracle Happens!"—except that the miracle happens not in the middle step but at the very end.

Vinge's definition, at least, has always struck me as something of a science fiction writer's lament: "I don't know how to write a story set in the future, because any extrapolation I do seems to fail too few years from now to give me a meaningful sense of futurity!"

Expand full comment
FionnM's avatar

Freddie has written dismissively about AI and how it won't lead either to a singularitarian utopia OR dystopia on several occasions, and I've found his arguments so weak. His argument always seems to ultimately boil down to "people who believe in the Singularity only do so because they desperately want to escape the drudgery and mundanity of their quotidian lives". No attempt to engage with the relevant ideas on their own merits, just textbook Bulverism.

Expand full comment
Tom Hitchner's avatar

He also seems to assess AI's potential based on where it is at any given moment in time, when of course the *rate* of growth/development is the most relevant consideration. For example, this post is right that AI isn't very good at drawing John Candy. But how good was it 15 years ago? How about 2 years ago? 6 months ago? https://freddiedeboer.substack.com/p/does-ai-just-suck

Expand full comment
Anonymous's avatar

The better it gets at drawing John Candy, the more low-effort John Candy memes can be spammed on TikTok.

Expand full comment
REF's avatar

If you're telling me that the unimaginable future on the other side of the singularity is a preponderance of John Candy memes, I think I'd prefer the apocalypse.

Expand full comment
Anonymous's avatar

So eager to think AI would end the world, few realized that AI instead would make them want to end the world.

Expand full comment
Nebu Pookins's avatar

I disagree with Freddie, but to be fair to him, he did attempt to engage with the relevant ideas on their own merit and isn't *just* doing textbook Bulverism.

The structure of Freddie's post is something like:

1. Harari is bad and dumb. (Subjective opinion, maybe leaning towards Bulverism, but not quite, because he doesn't really rely on this to prove his point).

2. Here's a probabilistic argument for why Harari is wrong. (A legitimate, although ultimately wrong, attempt to argue against the singularity happening in the near future.)

3. Here's a list of more people who are bad and dumb. (Again, technically not ad hominem, because he doesn't rely on people being bad and dumb to prove his point.)

Expand full comment
FionnM's avatar

When I described his arguments as Bulverist, I wasn't referring to his recent temporal Copernicanism but to a much earlier post: https://open.substack.com/pub/freddiedeboer/p/ai-or-the-eternal-recurrence-of-hubris

Expand full comment
Hoopdawg's avatar

So, in the post you cite, Freddie:

1) Presents a litany of arguments for A(G)I being much more difficult than often believed.

2) Asserts AI advocates dismiss them without engaging on merits.

3) Theorizes why it might be so.

Look, this is important. When it comes to "dismissing AI", #3 is not an argument. #1 is the argument.

#3 is something else entirely, and feel free to dislike it. But by complaining about it while pretending #1 does not exist, you're merely justifying the need to engage in #3-like theorizing by demonstrating #2 to be true.

Expand full comment
FionnM's avatar

Well, my argument is that #3 is completely irrelevant, because it's a fully general counter-argument which can be deployed to dismiss literally any opinion about anything. If you think the facts as they stand don't support breathless AI hype, that's all the argument you need to make. Idle armchair speculation about why people act like breathless AI hype is justified adds nothing to the argument (and arguably weakens it, as it suggests you have such little confidence in your own arguments and analysis that you have no choice but to engage in ad hominem mudslinging).

Expand full comment
FeepingCreature's avatar

If the apocalypse has an even chance of being caused by any human, you should still expect it to happen while you're alive; so long as there are times when fewer humans are alive and times when more humans are alive, clustering alone should cause you to expect to be alive concurrent with it. That's borderline tautological.

(With the Singularity, there are reasons why you should expect it to happen in times of high human clustering *even harder* than that, but they're not even needed.)
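
A toy version of the clustering point (the era sizes below are made up; only the proportionality matters):

```python
# If every person ever born has the same tiny chance of triggering the
# apocalypse, then both "which era triggers it" and "which era a random
# person lives in" are distributed in proportion to births per era.
eras = {"ancient": 5e9, "medieval": 15e9, "modern": 90e9}  # births (made up)
total = sum(eras.values())
for era, births in eras.items():
    share = births / total
    print(f"{era}: P(trigger here) = P(random person lives here) = {share:.2f}")
# Matching distributions: a randomly chosen person is likely to share the
# trigger's era, simply because both probabilities concentrate in the same place.
```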

Expand full comment
Chris's avatar

This all reminds me of one of the arguments against intelligent design, half-remembered now. Something along the lines of identifying a license plate's uniqueness as a sign of intelligent design of the universe. For some reason I keep thinking that Bertrand Russell made the argument? "Half-remembered" might be an overestimation.

I don't have an argument behind this. Fascinating read, but I'm not in a place in my life where I can entertain academic arguments for or against how interesting we should think our lives are. It's humbling enough just living.

Expand full comment
Alex Potts's avatar

So I'm one minute in and I'm already like "DeBoer is clearly unfamiliar with the doomsday argument."

Expand full comment
Steve Cheung's avatar

Wait, but you're limiting yourself to events that have occurred up to today. You might even say "ok, this is the biggest period that's happened until now"….but what's to say it will remain so after considering the forward-looking 50k years, or however much longer humans have left?

And when you use “climate”….aren’t the biggest impending potential disasters….still “impending” and “potential”? Isn’t that the usual refrain? We want to make changes today “for our children’s generation, and for their children”? If it’s going to be worse for future generations….then by definition we aren’t seeing it in our lifetime.

As for near misses….the bubonic plague wiped out 20-25% of the human population that existed at that time (approx 100-120 million out of approx 475 million). That’s unmatched as far as I know. By a “massive culling of human population” metric, nothing in this lifetime comes close…yet.

I have no counterpoint to your use of log GDP growth, except that (1) it is also on a continual upward trajectory, so it seems bizarre to assume this is the peak in ALL of human history (though it might work if you use the "up till now" qualifier); and (2) I would like to see a value adjusted for inflation and computed per capita. But is GDP the best way to measure the "importance" of our lifetime? I'd say just offhand that insulin, or penicillin (though both before my time), or the internet, or space travel, or some manner of scientific breakthrough would rank higher than economics….in which case I would also consider the da Vinci, Newton, Galileo, or Einstein eras in the conversation….not to mention those still to come.

Finally, with your 7% argument….yes, it’s not an infinitesimally small likelihood….but it’s also not a “like my chances” ballpark either.

Obviously, I tend to agree with Freddie on this. It seems to require a certain solipsism to think that you’re somehow part of an important (insert thing here)….when it’s more likely that it’s just another thing.

Expand full comment
Alastair Williams's avatar

>As for near misses….the bubonic plague wiped out 20-25% of the human population that existed at that time (approx 100-120 million out of approx 475 million). That’s unmatched as far as I know. By a “massive culling of human population” metric, nothing in this lifetime comes close…yet.

On a more local scale, the post-Columbian wave of epidemics in the New World killed between 40% and 90% of the population in some regions of the Americas. I don't think anyone has an accurate idea of the actual death toll, in part because so many people died that many of the civilizations there either collapsed or were left on the verge of collapse.

Expand full comment
Annette Maon's avatar

In terms of great inventions, fire, the wheel, and writing may seem trivial to us, but they could have been as important to those generations as the agricultural and industrial revolutions recorded in our histories. Fire enabled cooking, and wheels enhanced the utility of roads; each would have been followed by other new technologies within a generation or two (one lifetime). Could we be discounting these technologies simply because they were invented so long ago that we have no recorded history of them?

What makes the Eemian more notable than the climate change that melted the North American glaciers, which in turn opened the St. Lawrence River, raised sea levels, and disrupted the Gulf Stream?

On a human life timescale, I wonder why it is so easy to discount:

* The extinction of Neanderthals (Goliath)?

* The end of the last glacial period (Holocene).

* The Black Sea deluge (Noah's flood?)

* The explosion of Santorini (Atlantis?)

* Whatever it was that triggered the "Sea Peoples" invasion, which led to the Bronze Age collapse.

* The deadlier wars enabled by iron after the recovery from the Bronze Age collapse.

* The decade of catastrophic events (volcanoes and plagues) that triggered the Middle Ages.

Is it just because we did not live through them? Humans living through those events would have been just as sure as Harari that these were the most significant apocalypses in known human history. If we account for the limitations of the priors about "known human history" at those times, their views may have been more justified than Harari's, who imagines that the history he writes is more complete than theirs because, after all, it is all he knows.

What about the genetic indications of several other prehistoric bottlenecks in the size of human populations? Maybe apocalypses are not as rare as anthropic reasoning leads us to believe?

Expand full comment
Devaraj Sandberg's avatar

Yeah, given that linear time is just a very modern human concept, and hardly intrinsic to existence, Freddie's rebuttal seems to me like super cope. Freddie, face it, man, we're training data for AI.

Expand full comment
Annette Maon's avatar

People who lived in centuries, communities, or social classes where the median age was 40 or less had fewer opportunities to observe even minor revolutions or "apocalyptic" events. It is possible that some ancient humans lived hundreds of years before carcinogens and plagues became so common and deadly. Those of us who live longer are much more likely to die in an apocalyptic extinction event that causes a human population bottleneck.

Maybe there is something to learn from the myth of Methuselah, who lived long enough to die in the flood that only Noah and his family survived.

Even if the singularity makes it possible for some humans to live forever, it is unlikely that all humans would choose that option. Some of those who do choose to live forever will only die when the final extinction of the human race happens.

Expand full comment
Scott Aaronson's avatar

I started reading Freddie’s post when it came out, but couldn’t continue because the wrong hurt too badly. Just like the Constitution is not a suicide pact, so too the Copernican principle is not a pact to refuse to believe you might be at a turning point of history, once you’re born and find yourself on a planet of billions of people that just invented nuclear weapons etc etc, rather than somewhere in the hundreds of thousands of years of hunter-gatherer stasis before that. I hope Freddie will do the honorable thing and retract.

Expand full comment
luciaphile's avatar

I feel like if Madison were around now, he'd wonder what the hell? - he'd be like, the people for whom I wrote it must have all committed suicide, or just died or whatever - why are you still using this very situational Constitution I wrote?!

Expand full comment
Gunflint's avatar

“Would it be wonderful if, under the pressure of all these difficulties, the convention should have been forced into some deviations from that artificial structure and regular symmetry which an abstract view of the subject might lead an ingenious theorist to bestow on a Constitution planned in his closet or in his imagination?”

James Madison - Federalist Number 37

January 11, 1788

Expand full comment
luciaphile's avatar

No, I don't think he'd tweak it. I think he'd look around and be utterly baffled and be like, good luck with that. Then he'd go walk in his beautiful old-growth woods and think, now this - this was enduring. But then when told that the only reason his woods endured was because he was famous for writing the Constitution, I think he'd go crazy. Time travel is not for sissies.

Expand full comment
thaliabertvart's avatar

Ah, yes, the well-known provisional and ephemeral document, The Constitution. I am sure The Founders certainly intended The Constitution to be immediately scrapped once some political party found its freedoms inconvenient.

Expand full comment
luciaphile's avatar

Really? I shouldn’t think so. I wouldn’t have imagined that they thought parties were going to have such primacy.

Expand full comment
Anonymous's avatar

They believed that parties were a great threat to the constitution, but also failed to design the constitution in any way that accounts for said threat. Sad!

Expand full comment
luciaphile's avatar

I expect they would come up with something very different in the way of a constitution for a polyglot empire of urban-dwellers. Or they might just say, hey, we're not magicians!

It's weird how they are both hated and yet have magical powers imputed to them.

Expand full comment
Anonymous's avatar

I don’t believe they are magic, but I think they were terribly misguided. It only took 40 or 50 years for US politics to focus almost entirely on who could manage to import more votes. They wanted to copy the Greeks; they should have paid more attention to Plato laying out why democracy was a sham.

Expand full comment
Arrk Mindmaster's avatar

Dinosaurs existed 200 million years ago. Ten million years later, dinosaurs still existed, flourishing better than ever. Ten million years after THAT, dinosaurs still existed. I conclude, by induction, that dinosaurs will exist forever.

OK, LARGE dinosaurs. Birds don't count for this purpose.

Expand full comment
JamesLeng's avatar

So, some utahraptor invents a time machine, travels seventy-some million years forward, sees the Jurassic Park franchise, skeletons venerated in museums, and factory-farmed chicken, goes back and tries to warn everybody that training mammals to be friendly to saurian interests is harder than they assume, maximizing poorly-chosen traits could have horrific consequences?

Then gets crushed by a completely unrelated space rock.

Expand full comment
Strawman's avatar

I laughed at the footnote for Sam Bankman-Fried. Not sure if it's a deliberate reference, but I believe I once saw a tumblr post by someone whose friend had once cited a maths paper by Ted Kaczynski with a similar (the same?) comment.

Expand full comment
Jason's avatar

What would be a decent threshold of percent of lives lost to count as “apocalyptic”? I think perhaps it might also make sense to quantify it by lives lost per unit area. I have the intuition that half a billion lives lost in North America is more apocalyptic than half a billion evenly distributed over the current 8 billion people. There’s also the matter of what we would consider apocalyptic today versus what an event would look like 1000 years from now.

Expand full comment
Richard Weinberg's avatar

I love this exchange, but I'm more skeptical than usual about your viewpoint. Starting at the beginning, the "Petrov incident" is indeed horrifying, but I think your claim is wrong. The very topic gives me goose bumps, and I shall go to my grave grateful that war was averted, but arguing that it's the closest we've come to species extinction in the past 300,000 years sounds bonkers. Generally you think about what you write pretty carefully, so perhaps I'm missing something?

In my view, if Petrov hadn't existed, the chance of a nuclear exchange would have been idk maybe 10%? and conditioned upon that, of a general/massive exchange maybe 10% (total of 1%). But the notion that even that catastrophe, which conceivably (with a lot of implausible conditionals) might extinguish Western civilization, leads to possible species extinction seems an extraordinary stretch. Our world would be very different, but surely within 500 years humanity would have restored a form of civilization. Globally-distributed species are just not that easy to annihilate, notwithstanding the vast body of literature on the Nuclear Apocalypse.

In contrast, the possibility that Mr. Ooog, in the year 150,000 BC, might have had a bad day and bashed in the heads of the entire Gloog family, at a bottleneck time when there were 50 Homo Sapiens, would seem vastly more consequential. On this issue, I vote for Freddie.

Expand full comment
REF's avatar

The last part is a bit too, "what about the dozens of times where a virus nearly evolved that would have wiped us all out." As for the Petrov incident, I think Scott was thinking that the event consisted of avoiding a nuclear exchange (rather than Petrov's personal involvement). Had a major exchange occurred, it seems reasonable to say that having giant swaths of the planet irradiated for the foreseeable future counts as a massive impact on the future of civilization (though obviously not species ending). I guess I am saying that (in more cases than this) Scott spoke a bit hyperbolically but at least for me, the examples were adequately persuasive.

Expand full comment
Richard Weinberg's avatar

I appreciate that hyperbole can be very effective both for reading enjoyment and for persuasion, but if one detoxifies Freddie's rant into an even-tempered argument, it could be taken as a full-on critique of hyperbole. I'm hardly a fan of nuclear war (as a kid I didn't expect to live to adulthood). But while a massive though unlikely exchange of strategic nuclear weapons in the 1980s would have killed at least tens of millions, perhaps even >100,000,000 people - leaving aside the notion of species extinction - the claim that "giant swaths of the planet" would be irradiated for the foreseeable future is true only in the most technical sense. The Chernobyl exclusion zone is perhaps the ecologically healthiest site in the entire former USSR, and that was only 40 years ago.

Expand full comment
REF's avatar

I meant not that it was persuasive because of the hyperbole but that it was persuasive despite it. Scott sort of vacillated between apocalypse, extinction and hyper-important event. Had he stuck with "hyper-important event," I think he would have gotten much less push back (not mentioning global warming would have avoided another avenue of objection).

Expand full comment
Richard Weinberg's avatar

I understand; perhaps my reply to you was unfair. One trait of Scott's essays is their sheer magnitude. So far I've hardly gone past the start, so maybe my issues are beside the point. But I think I already know where he's going, based on my priors. We'll see.

I find the writings of both Freddie and Scott fascinating. It's ironic that in this case I think they both err by overstating their position.

Expand full comment
luciaphile's avatar

>ecologically healthiest zone

That surely owes more to the absence of people than to the safety of the place as regards radiation and cancer risk for humans?

Expand full comment
Dennis P Waters's avatar

Shouldn’t discuss this topic without reference to J Richard Gott’s Nature paper of 30+ years ago: https://www.nature.com/articles/363315a0.pdf

Expand full comment
Peter Defeel's avatar

> No part of anthropics should be able to prevent you from updating on your observations about the world around you, and on your common sense.

In this case the assumption that there's a 7% chance of the apocalypse, because roughly 7% of all the people who have ever lived are alive today, seems to defy common sense unless the size of the population itself causes the apocalypse - i.e. climate change.

This clearly doesn't apply to an asteroid hit. Sure, more people would be killed if an asteroid hit today compared to the past, but that doesn't mean the Earth is at greater risk now than in the Middle Ages, the Bronze Age, or prehistoric times.

When it comes to AI, the risk is likewise independent of population size but dependent on technological advances (to state the obvious), so while the 7% figure is wrong, Freddie is also wrong, because of the actual facts of the era we are living in.

Certainly the people living in the Cold War era - when the arms race led to there being 70,000 active warheads - would have been right to believe they were closer to the end of days than anyone before.

Expand full comment
Nebu Pookins's avatar

> the assumption that there's a 7% chance of the apocalypse, because roughly 7% of all the people who have ever lived are alive today, seems to defy common sense

Yeah, I think Scott got a little bit mixed up here, although I think "technically/pedantically" he did not commit the error that I infer you're accusing him of.

The 7% figure originally came up when calculating "What are the odds that the most extreme event X (for any given metric of 'extreme') to ever happen to humans, happens during your lifetime?" For example, "what's the probability that the greatest playwright in human history is born during your lifetime?" Or "what's the probability that the event that comes the closest to wiping out all humans occurs during your lifetime?"

From there, we can conclude that 7% is therefore also the probability for the almost equivalent question "Conditional on the apocalypse happening, what is the probability that it happens during your lifetime?" (which is NOT the same as "What is the probability that the apocalypse happens during your lifetime?")
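
In symbols, the distinction is just the chain rule. With an illustrative (entirely made-up) 50% chance that the apocalypse ever happens:

```python
# P(apocalypse in your lifetime)
#   = P(apocalypse ever) * P(it lands in your lifetime | it happens)
p_lifetime_given_apocalypse = 0.07  # the 7% figure discussed above
p_apocalypse_ever = 0.5             # made-up number, purely for illustration

print(p_apocalypse_ever * p_lifetime_given_apocalypse)  # -> 0.035
```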

Scott explicitly says that this 7% figure that we calculated is NOT the probability that the apocalypse happens. His exact wording is "I don’t think it’s the overall chance that the apocalypse happens in your lifetime", and that maybe the Carter Doomsday Argument can be used to calculate that probability, but he's not sure of the details.

Later on, Scott says "a smaller percent that I'm not sure about (maybe 7%?) that the apocalypse happens during your lifetime", which muddles the water a bit about what exactly Scott's beliefs are. It's possible, and maybe even likely, that Scott lost track of what the 7% figure he was carrying around represented. But from a strictly pedantic point of view, he doesn't assert 7% as the odds of the apocalypse happening in your lifetime -- he merely says that *maybe* it's 7%, and that really we should look at the Carter Doomsday Argument to get the right figure.

Expand full comment
Liam's avatar

The rate of technological advance depends on the population size, though. A population of a million will have many fewer scientists than one of 8 billion.

Expand full comment
JustAnOgre's avatar

But this isn't Copernicanism. Violating Copernicanism would be claiming that we live in a special time, different not only from the past but also from the future. Some technological fuckup is obviously more possible now than 100 years ago, and 100 years later it will be even more possible.

Expand full comment
Al Quinn's avatar

Is some of this reasoning related to the fine tuning of physical constants? If you have many dimensions over which you select a value randomly within some range compatible with you existing, then the more such values you select, the more likely that *some* of them will fall at one extreme end of their distributions. Looking at temporal distribution ignores potentially many other factors that are perhaps far more likely to prevail than being absurdly early among all humans to ever exist.

Expand full comment
Malcolm Storey's avatar

The apocalypse happens quite frequently, but part of the time-line always survives.

Scott, you already covered this as Mike in the Thiel interviews.

“You can condition probabilities on the fact that you have to exist to see them. So for example, if someone planned to kill you unless a d20 landed on 20, and it landed on 20 so you survive, that wasn’t lucky - it’s just that of twenty world-branches, the only one you could possibly be (alive) in was the one where it landed 20, so of course that’s what you observe.”

The problem with the fine-tuning/multiple-universes view is that it falls under the inverse gambler's fallacy: the existence of multiple universes doesn't increase the probability that this one is good for consciousness.

This of course assumes that "god" is assigning consciousnesses to universes "top down", presumably first to a multiverse, then to each nested multiverse and finally to the individual universes (step by step cos you have to allow for the "weight" of each universe.)

I think the way round it is to assign consciousnesses bottom-up. AIUI, this is how the Bundle Theory of consciousness works, and this leads to Mike's view.

Expand full comment
Nancy Lebovitz's avatar

Maybe even distributions are a fair starting point when you don't have more information.

The idea that Earth is a very ordinary planet was probably a reasonable starting point, but we have a very-low-probability moon; there doesn't seem to be a typical planetary distribution, and if there is, ours isn't it; element proportions aren't evenly distributed...

Expand full comment
Hilarius Bookbinder's avatar

It’s not even clear that any assessment of probabilities is the right way to reason here. For example, the conviction rate of accused criminals is about 90% (varies by jurisdiction, but this is reasonably close). Suppose I’m on a jury and decide that there’s a 90% prior probability that the accused is guilty before I hear any evidence at all. I’m pretty sure this opinion would have disqualified me during voir dire. Instead jurors are supposed to assume innocence, or, at the very least, withhold any judgment prior to hearing the state’s case and the defense.

Also, Scott is wrong that "your probability of being born in the final generation is about the same as (eg) your probability of being born in North America." The probability that I was born in North America is very close to 1. My parents are US citizens and North American residents who had done no foreign travel outside North America before my birth - and I could only ever have had the parents I have. There is no guf of pre-born souls that Gabriel reaches into and randomly sprinkles across the world (in which case Scott would have been right).

Expand full comment
JamesLeng's avatar

Conviction rate for baseless accusations - that is, those unsupported by relevant evidence - is, I would hope, much lower. So until relevant evidence is presented, 90% is the wrong prior. Lots of trials get as far as a jury being assembled before being settled by some out-of-court plea agreement, or dismissed on a technicality, so you can't even say with confidence that such evidence is likely to be presented until it actually has been.

Expand full comment
Hilarius Bookbinder's avatar

My point was more that given most of the accused are guilty, I should go ahead and assume this defendant is guilty too, even before hearing any evidence. If that's not a good approach, then maybe pure probabilistic reasoning isn't always the way to go.

Expand full comment
JamesLeng's avatar

Even in pure probabilistic reasoning, though, "most of the accused are guilty" doesn't hold water. Pretty sure the number of people accused of being Nazis (by e.g. strangers on the internet) exceeds the number actually convicted of equivalent crimes (e.g. at the International Criminal Court) by several orders of magnitude.

If some judicial system produces nine convictions, one acquittal, and 990 "prosecutor drops the charges (e.g. as part of a plea bargain)" or "thrown out on a technicality" or similar, there's a 90% conviction rate *among trials which reach a verdict,* but only 1% of trials ever get to that point, so if you assume at the *start* of a trial that the accused will be found guilty as charged, you'll be wrong more than 99% of the time.
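(To make the base-rate gap explicit, here's a toy calculation using the hypothetical 9/1/990 split from the paragraph above - assumed numbers, not real data:)

```python
# Toy numbers from the hypothetical above: out of 1000 accusations,
# 9 end in conviction, 1 in acquittal, and 990 never reach a verdict.
convictions = 9
acquittals = 1
no_verdict = 990  # dropped, plea-bargained, or dismissed on a technicality
total = convictions + acquittals + no_verdict

rate_among_verdicts = convictions / (convictions + acquittals)
rate_among_accusations = convictions / total

print(f"Conviction rate among verdicts:    {rate_among_verdicts:.0%}")    # 90%
print(f"Conviction rate among accusations: {rate_among_accusations:.1%}") # 0.9%
# Assuming "guilty as charged" at the start of every trial would be wrong
# in 991 of 1000 cases, i.e. more than 99% of the time.
```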

Expand full comment
anomie's avatar

> I’m pretty sure this opinion would have disqualified me during voir dire.

If that was the case, we would never be able to select any jurors. No one's telling the truth during jury selection, whether they know that themselves or not.

Expand full comment
Jeffrey Soreff's avatar

On a peripherally related note, I wonder if anyone has reacted to "The truth, the whole truth, and nothing but the truth." by noting that "the whole truth" is beyond human ability (and perhaps arguably beyond reciting in historical time), and, if so, what happened.

Expand full comment
Liam's avatar

"Thank you for your time, Professor Gödel, you're dismissed"

Expand full comment
Jeffrey Soreff's avatar

LOL! Many Thanks!

Expand full comment
Bardo Bill's avatar

So to follow DeBoer's reasoning:

1. The particular time we live in is unlikely to be a significant inflection point in human history.

2. Therefore the future will mostly resemble the past.

3. The vast majority of the human past consisted of our living in hunter-gatherer societies.

4. Therefore in the future we will probably mostly live in hunter-gatherer societies.

5. It would require an apocalyptic event to reduce the complexity of modern civilization to hunter-gatherer societies.

6. Therefore the apocalypse is nigh.

Expand full comment
Philo Vivero's avatar

So. Perfect.

Expand full comment
Rootless Cosmopolitan's avatar

There's also the effect of survivorship bias here: we discount historical instances of large-scale collapse because things seem to have worked out fine by now. E.g. we don't especially mourn the Mongol conquest and its destruction of great places like Baghdad, because those places now seem... fine. We miss the counterfactual: what would those areas look like had the disaster not occurred?

Expand full comment
MM's avatar

"The biggest climate shock of the past 300,000 years was . . . also during Freddie’s lifetime"

I think this would be more convincing with more detail. A climate shock that almost no one notices without being told is not much of a shock.

I can see "Someone almost launched a nuclear missile" as something that people would not notice without being told. But "Things got really hot", or whatever your definition ends up being, should be noticeable.

Expand full comment
REF's avatar

Of course, nobody noticed the last ice age either, since the transition took about 10,000 years. Does that make it less of a climate shock?

Expand full comment
MM's avatar

What exactly happened "during Freddie's lifetime" that's a climate shock?

Expand full comment
Gnoment's avatar

It's hard to take any of this seriously when anyone can observe that humans have some weird cognitive bias toward believing an apocalyptic or rapturous event will occur in their lifetime. And we live in a culture in which we believe Things Are Always Getting Worse.

Christians have been ready for Jesus' return for 2000 years. It's all the same instinct.

There is the additional problem that this too often turns into an excuse to not take responsibility; it just becomes a different version of climate catastrophism.

Expand full comment
Tom Hitchner's avatar

But many apocalyptic events, variously defined, have actually happened throughout history. And for every one there was someone explaining why nothing big was going to happen and everything was going to keep going the way it always had been.

Expand full comment
Gnoment's avatar

What do you define as an apocalyptic event?

Expand full comment
Tom Hitchner's avatar

Something like: the collapse or transformation of a civilization or society, such that many people's ways of life are permanently altered. Some examples could be the collapse of Amerindian civilizations, the Industrial Revolution…

Expand full comment
Gnoment's avatar

I don't think those are apocalypses.

Expand full comment
Tom Hitchner's avatar

Fair enough. However it’s my understanding that it’s exactly the kind of thing Freddie is saying is not going to happen in our time, and my point is that others have said that before, and been right sometimes and wrong other times.

Expand full comment
Cosimo Giusti's avatar

It is fun, though, to watch and listen to people who imagine they will somehow have the unique tools needed to transcend the Event -- information, skills, technology, etc. -- while the rest of us are gormless. (We used to be gormless; now we are wise.)

Only the most highly 'evolved' or intelligent or informed can be Winners! But somehow, discerning consumption or profound thinking may not be enough. The battery backup for our household may not get us through the Event -- even if we think we know what it's about.

Expand full comment
Freddie deBoer's avatar

I think you really want time machines and warp drive and an android buddy, and while those are all understandable things to want, they are not things that an adult should expect. You live in a boring, mundane world of asphalt and taxes, Scott, a ceaselessly unimaginative post-industrial capitalist system that's about spreadsheets for the lucky and making venti lattes for the unlucky. I'm trying to convince people that their understandable desire to live in a different kind of world is how you get to absurd places like today, where people are insisting that because probabilistic text generators have become fairly convincing, that means we are imminently (as in, any day now) going to see a godlike AI rise up and rescue them from the mundane - maybe through doom, maybe through deliverance. But it'll be the end of all of this boring, grinding, same-shit-different-day reality that is adult existence.

I don't think nurturing those hopes is compassionate, and I certainly don't think basing public policy or enormous economic decisions on them makes sense. And I will bet every dime I have that you will live out the rest of your life in a world that looks almost exactly like the one we live in now. Which for you will be fine, because you live a largely contented life, or so it would seem. But it's just gonna be life. You're still gonna have to take out the trash, and if you get some robot that takes out the trash for you tomorrow, there will be a new boring and thankless task for you to grumble about. Because that's what human life is.

Expand full comment
Joseph's avatar

I desperately hope that the AI fizzle people are correct, but I don't think that's likely.

Generally, Freddie's argument proves too much, if it ignores technological developments.

- If you ignore technological change, what are the odds that the greatest fast-moving climate catastrophe of all time is happening in your lifetime? Almost nothing! (Or 7%, I guess.) There's no reason to decarbonize.

- If you ignore technological change, what were (or are) the odds that the human race would be beset by a nuclear war in our parents' lifetime? Almost nothing! The nuclear disarmament movement was a waste of time that could have been spent more profitably playing Ultima III!

(To be fair, I guess you could argue that in the nuclear disarmament case, the world stockpile of nuclear weapons is a fact in being. Climate alarmism, OTOH, like AI doomerism, depends on identifying current trends and projecting them into the future.)

Expand full comment
Anonymous's avatar

The nuclear disarmament “movement” did almost nothing. The collapse of the Soviet Union was the largest factor in stockpiles being scaled back. And now China is building more…

Expand full comment
Tom Hitchner's avatar

It's on-brand that the first sentence here is about Scott's motivations for being wrong, but none of the sentences are about why you think Scott is wrong.

Expand full comment
Freddie deBoer's avatar

Tom your constant efforts to get my attention, seemingly in an effort to get your career going, are tiresome. Please find someone else to fixate on in an effort to get attention for your writing.

I wrote my post. It's still there. The argument is what it is. And, yes, I'm questioning Scott's motivations, because like a lot of deeply analytical people he's congenitally unable to ask himself if perhaps he's motivated by forces inside of himself that are NOT analytical. Scott is an animal who does not understand that he's an animal, as is true of almost everyone in his intellectual milieu. And it's perfectly appropriate to say to someone, "you have an immense overconfidence in the imminence of a vastly-different future that you desperately hunger for; couldn't it be that you are not arriving at your predictions of the future entirely analytically, but instead because of an inability to live in a mundane world?"

Nobody here is going to ride the teleporter with Mr. Spock. I'm sorry. You're going to spend the rest of your life stuck in traffic and scrolling Amazon. That's human life. You have to have the courage to stop lying to yourself and to face it.

Expand full comment
Tom Hitchner's avatar

Since my Substack so far has zero posts, getting your attention here would not really do anything for my career. But I'll stop replying to you if it's what you want.

Expand full comment
Amicus's avatar

> And it's perfectly appropriate to say to someone, "you have an immense overconfidence in the imminence of a vastly-different future that you desperately hunger for; couldn't it be that you are not arriving at your predictions of the future entirely analytically, but instead because of an inability to live in a mundane world?"

If you're their therapist, sure. If you're debating them, no, not at all. All sorts of people are overconfident about all sorts of things. A few of them, through no fault of their own, get it right.

The correct response to a gambling addict who just knows that they should bet it all on black 13 might be "you're irrational" - it is not "and that's why I believe roulette tables are literally impossible to rig".

Expand full comment
anomie's avatar

...Why does Scott read this guy again?

Expand full comment
JQXVN's avatar

You know, I was just about to suggest Freddie should receive some praise for incremental progress, for responding to Scott without resorting to verbal abuse and paranoid fantasy, even though I knew it was just because Scott is too big a target to safely hit. Then he replied to you.

Expand full comment
luciaphile's avatar

I'm not interested in AI or robots. I don't even care about venti lattes. I just want to live in the world I grew up in. I just want Windows 95 back!

Expand full comment
luciaphile's avatar

(This is not me. I enjoyed Freddie's reply so turned to my husband: "The Marxist gets off some good lines sometimes." This was his genuine response, which he realizes is pathetic. But he's been having a lot of word-processing/maps/photo-editing difficulties lately. Also, I think he wants back his eyesight from 1995. And a pre-LED screen.)

Expand full comment
Brandon Fishback's avatar

This isn’t even close to anything resembling an argument. If you’re trying to convince anyone of anything, you have to actually make arguments. People who read this comment are just going to come away with the impression that your pessimism is driving this idea rather than anything based in reason.

Expand full comment
Anonymous's avatar

It's not based in reason. Maturity comes with the realization that there are strong limits to reason.

Expand full comment
Stygian Nutclap's avatar

I don't think it's fair to assume that strong safety concerns about near-term AI are strictly informed by positive excitement and grandiose visions of AI - and notwithstanding that, this doesn't directly address or make a case against those concerns. Without getting bogged down in semantics, the danger is not contingent on there being a "real" conscious artificial intelligence, but on a "boring"-yet-effective tool that will surely be employed for military and policy purposes if there is an advantage in doing so.

I don't think the world right now is boring. There's meat to the idea that people may entertain certain ideas because those ideas are exciting, whether or not their lives are monotonous. That just seems like human nature to me.

If anything people overconsume information junk-food qua the news cycle and become more anxious for it. In that sense, fear of AI would not be a deviation from monotony, but more of the same. But it checks a point in favor of "something about day-to-day life is wrong".

The critical point and crux of your text seems to be the "desire to live in a different world" - in other words, people need Communism. It just makes me think of this meme - https://i.redd.it/lym1aovbkjk81.png . I expect most people don't envision a world drastically different from their own - not because they can't, but because of their values and desires.

Expand full comment
Ian Sherman's avatar

Yeah, as great as Scott is (as are you, btw!), it seems like he missed the point of your post. Your thesis seems to be mostly: Harari and others are silly for arguing that "[AI] systems primarily used to generate B- essays for lazy college students and logos for fantasy football teams are, somehow, going to orchestrate the most consequential revolution in the history of our planet, and soon," followed by some back-of-the-envelope calculations.

Scott (and most commenters here) didn't bring up AI expectations at all, but rather talked about philosophical estimates-of-doom in general. Those arguments may be reasonable, so far as they go, but they don't address the main point, which is (AFAICT) that it seems silly, bordering on absurd, to think that, "because probabilistic text generators have become fairly convincing, that means we are imminently (as in, any day now) going to see a godlike AI rise up..."

IMHO, current AI models are cool, but not the apocalypse. And in case anybody cares, I have a post from ~18 months ago on my Substack explaining why I think so (apologies if this is gauche; I'm really not trying for self-promotion. I just think I have a relatively unique perspective on this that may be relevant here).

Expand full comment
Tom Hitchner's avatar

I think this is actually different from the main point of the Copernican post Scott is discussing, though it is kind of its converse (and it's a point that Freddie has certainly made in other posts). The Copernican argument takes the long view: looks at the whole scope of human history and says, what are the odds that *right now* would be an especially important time in history? The AI skeptical argument favored by you and Freddie takes the short view: it says "*right now* AI is just a parlor trick, so why should we think it would ever be something more impressive?" The conclusion both times is stasis, even though it gets there in opposite ways. It's analogous to looking for a missing object (say a ring) first by peering through a microscope in one spot, then by looking through a telescope from a great distance, and concluding that the ring can't be there because you didn't find it either time.

Expand full comment
beowulf888's avatar

> And I will bet every dime I have that you will live out the rest of your life in a world that looks almost exactly like the one we live in now.

I won't take that bet! As someone who has lived through more than 3/4 of his probable lifespan, I can attest to Gibson's dictum that the future is already here, it's just not evenly distributed. The clothes I wear, the way I go about my daily life, and the little luxuries I enjoy are largely indistinguishable from the way I led my daily life 50 years ago. If we put a person born in 2004 down in 1974, they might be shocked at how much more time it took to do things (payment systems especially). But if they were wearing jeans and a sweatshirt, no one would look at them twice.

Expand full comment
Deiseach's avatar

We do have roombas (well, I don't) so I think that there well may be trash-emptying domestic robots because that actually would be helpful. But we'll still have to remember to put the correct rubbish in the correct bin for them to empty.

Some people will be living the Jetsons life, but the majority of us will still be plodding along, just with maybe fancier, shinier toys. I take the view of the economic future along the same broad lines as you do, Freddie: even people fairly far down on the socio-economic ladder have access to material goods that would have been only for nobility and royalty in the past; we are all much richer than our forebears. But there are still billionaires and oligarchs, and a class system, just as in the days of serfs and emperors. The profits of AI will go to a small set of people; the rest of us will have to adapt to working with/alongside AI, or be replaced when our jobs get automated away.

Expand full comment
Scott Alexander's avatar

I think either you should disagree with my technical point about how your math is flagrantly wrong and defend your reasoning - or you should at least admit that you got your math flagrantly wrong and post a correction before shifting your focus back to psychoanalyzing me.

Expand full comment
le raz's avatar

Your comment is obnoxious and frankly ignorant.

The age we live in is special in the context of the overwhelming majority of the past, as Scott well articulates.

Imagine someone one hundred years ago writing that planes, colour TV and automatic translation algorithms were mere fantasy; that there was no unprecedented rate of technological change; etc.

Expand full comment
The Ancient Geek's avatar

I still don't see the actual argument. Are you saying certain specific technologies are never going to occur? But the examples you give bracket a highly unlikely one, time travel, with one that's almost here. To make an android, you have to couple a state-of-the-art AI, which we have, to a humanoid robot, which we also have. It's like the year 1909, when you had both fixed-wing gliders and internal combustion engines.

Are you saying progress will be incremental, and there will be no singularity? Maybe, but anthropics is the wrong way to argue that.

Are you saying that if it's possible to arrive at conclusion Y from bias X, then Y is wrong? That's not how it works. The conclusion of an invalid argument is not necessarily true, but it's not necessarily false either. Valid reasoning could have led to the same conclusion.

Are you saying that the future will feel mundane, subjectively? Maybe, but that doesn't follow from the anthropic argument or the bias argument.

Are you saying the problem is capitalism, that it will always force drudgery on people, even if it also supplies them with shiny toys?

Expand full comment
Liam's avatar

Here are some unalterable, since-time-immemorial aspects of the human condition:

o) Burying your small children, 50% of whom don’t live to adulthood

o) Spending your whole life in one small, insular village

o) The subordination of women to men

Infant mortality, sexism, and poverty have not entirely vanished, but by the standards of only a few hundred years ago, pretty close.

I don't like the psychologizing you're doing, so I'll turn it around: Are you as a Marxist overgeneralizing from the total failure of the Communist revolution to show up in our actual real world? That was a huge flop, and life has indeed not moved past the daily indignities of capitalism. I'd urge you not to let an understandable but fundamentally immature resentment and disappointment at this color your view of the rest of the world.

(See how incredibly insulting it is to be spoken to this way? Can you stop speaking to the rest of us this way?)

Expand full comment
Shankar Sivarajan's avatar

Nuclear winter is also over-rated. Unlikely to cause extinction.

Expand full comment
Presto's avatar

Oh yeah nothing human makes it out of the near-future.

Expand full comment
Freddie deBoer's avatar

this is the kind of attitude that you have to answer for, Scott - how many of your readers are absolutely certain that the probability of near-term total human destruction is 100%?

Expand full comment
Joseph's avatar

Freddie, just to confirm, your theory is that we believe the human race is facing likely extinction because we wish there were Star Trek transporters or something?

I'd argue it's that we're extrapolating from current trends on computer and AI capability, and also extrapolating from societal ability to recognize and respond to threats.

If you're arguing that doomers should shut up because it stresses people out and is unlikely to help, then yeah, I could see that.

Expand full comment
Deiseach's avatar

I'm not going to claim I can read Freddie's mind and know precisely what he thinks, but I see where he's coming from: there have been so many promises of "and by the year X, we will have tourism on the Moon, colonies on Mars, and be jaunting around in our flying cars while our robot maids do the housework as we toil at our four hours a day, three days a week work life!"

The shiny SF future was definitely going to come by 1980. Or the 21st century. Or - as they say, fusion is always "five years away".

We *may* be coming up, finally, to genuine artificial intelligence. We just have to figure out what the hell intelligence is in the first place. But I don't think the world of AI art and customer service chatbots, which is our near-term future, is going to mean "and then one day Skynet woke up and started implementing its nefarious goals" any time soon.

If any of you are still around in a hundred years time, get the AI to invent time travel and come back to now and tell me if that happened or not.

Expand full comment
The Ancient Geek's avatar

Some things have gone slower than expected, some things have gone faster.

Expand full comment
Joseph's avatar

I understand the inductive fizzle case, and I hope it turns out to be correct. There was a similar argument about global warming, and for all I know, it will turn out to be correct as well.

Freddie has deployed two arguments that I disagree with much more:

- His "temporal Copernicium" case originally looked like it was a straight Bayesian case: if you don't include any evidence about current technology in your analysis, then what are the odds that an AI apocalypse is going to happen in the next 25 years?

(That argument strikes me as deeply flawed. Let's limit the question to whether an AI will escape alignment and containment. Does Freddie seriously believe that the odds of that happening between 5000 BC and 1500 AD are higher than they are for 2025 - 2050 AD? Surely the fact that *we have computers now* increases the odds for the next twenty-five years relative to the 6,500 years I mentioned. Similarly, you can't really estimate the likelihood of a nuclear holocaust occurring between 1950 and 2050 by pointing out that no nuclear holocausts occurred in the 5,000 years before 1950.)

- It looks to me like Freddie is refining, or maybe restating, his argument by making it all about the motivations of the doomers. Steelmanning his argument: we're recentists who think our time is special because we're in it, like people who think Jason Isbell is the greatest Americana singer without realizing they think that mostly because he's the one they listen to all the time.

Again, this ignores the facts of the case to focus on doomers' motivations. Surely the odds of both nuclear holocaust and AI doom have increased now that we have nuclear weapons and computers, relative to when we didn't? Is it impossible to have a discussion about the facts once Freddie decides that doomers are like Taylor Swift fans?

Expand full comment
Deiseach's avatar

Yes, having nuclear weapons does make it more likely that we will have a nuclear holocaust.

But it turns out we didn't, and even the crazy rogue states have not (as yet) started a hot shooting war with exchange of nukes. The Doomsday Clock was wrong.

Now, a lot of that is down to people making damn sure we didn't get into a hot shooting war with exchange of nukes, but that means that we *can* avoid the doom, if we put our minds to it. Just having the capacity doesn't mean it is inevitable.

Expand full comment
Deiseach's avatar

In the long run, we're *all* dead, Freddie 😀

Expand full comment
Scott Alexander's avatar

If we define that as giving a probability of 100% on a survey question about human extinction before 2100, then the answer is 0.06% of readers, as of 2022.

If we lower the threshold to 99% or above, it's 0.51% of readers.

If we lower the threshold further to 90%, it's 1.4% of readers.

Expand full comment
Alex Zavoluk's avatar

> 3 Better known for other work.

Hilarious, thank you!

edit to add the reference: https://imgur.com/better-known-other-work-HO7CC

Expand full comment
walruss's avatar

I usually find him to be a strong, convincing writer but this one was just absolutely wackadoo for the reasons you point out.

I can try to steelman it by pointing to one of my recurring objections to transhumanism - the assumption that technological progress is eternally compounding and self-bootstrapping, when history shows a tendency for periods of huge technological growth to be followed by long periods of stagnation once all the low-hanging fruit has been harvested. Making stuff better is hard - much harder than anyone ever assumes (hence the 80/20 rule for personal productivity, for instance).

Thus the idea that these tools in particular will not hit a productivity wall when all other tools have *may* be temporal chauvinism. But even then, I'm reaching a lot to tie this argument to something substantial.

Expand full comment
Stygian Nutclap's avatar

On the next technological revolution and your 30% odds: it's worth pointing out that AI (as it is most often colloquially referred to, in different forms) is already seeing rapid enthusiastic adoption, with real-world impact.

If changing industry and the economy is the qualifying factor for a revolution, then how is the *present* meaningfully distinguished? Because it seems boring? For that matter, why not mention the Internet Revolution alongside the IR and AR? It irrevocably changed the world and the economy in a very short period of time, but it seems conspicuously left out; why must AI's impact be more similar to the IR than to the most recent revolution? And if revolutions are supposed to arrive ever more rapidly, we should already be part-way through this one.

Notwithstanding the power of AI, energy itself is not projected to be near-infinite or free in the near future. Costs are steadily falling, but we're still in a position where fossil fuel consumption is growing (and will continue to grow) because of global demand. Tangibles (created from raw resources) do not scale up as quickly as intangible digital goods, even with innovation. We might get our surveillance cities with drone fleets and digital identification soon, but that is a far cry from moving mountains, building whole cities, exploring the stars.

In that sense we could forecast two coming technological revolutions: AI in the near term, and trivially cheap energy after that. That does not mean the AI revolution can't be dangerous, but the changes it brings will be of a certain scope. I think undermining democracy and freedom and empowering authoritarianism is a pressing concern that is given less credence. There may also be some decisive, delicate political changes that determine the outcome of a cheap-energy future, which ought to be a post-scarcity future. This is where the game of musical chairs might stop and the class divide is either entrenched or mitigated: the Diamond Age vs. {insert more optimistic sci-fi future here}.

Not to diminish the significance of AI danger in the physical sense, but I think population control and tyranny are a stronger risk, one that gets shrugged off because it's less sexy.

Expand full comment
JamesLeng's avatar

If we're talking about hundreds of thousands of years, there might plausibly have been some crisis which threatened the collective survival of the then-much-smaller human population even more credibly than anything during the Cold War, and which we don't know about today because forms of recordkeeping durable enough to leave remnants we're able to decipher hadn't been invented yet.

Expand full comment
Deiseach's avatar

Look at the Black Death. At one point it seemed like the literal end of the world, if you were living in Europe. The long-term effects were huge - the high death rate meant a labour shortage, which ended up making life better for the peasants and labourers, since now wages had to rise and working conditions had to improve to attract and retain workers.

https://history.wustl.edu/news/how-black-death-made-life-better

But the world didn't end then; it ended up 'only' killing off a third of the population of Europe, and a few centuries later we had Merry Malthus prognosticating that we'd all end up having so many babies that we'd starve to death, due to the inability to grow enough food to support the population.

That didn't happen either.

Look, there's an Irish proverb that "Long threatening comes at last", and I agree we can't shrug off the possibility of Something Bad with "Well it didn't happen before now". But neither can we take "This thing now is so unique, it definitely will kill us all!" as a rule either, because of how many unique bad things have happened in human history, and we're still here.

The middle course between mindless optimism and mindless doom-saying is what I think Freddie is getting at.

Expand full comment
Brandon Fishback's avatar

It's crazy to start your argument with the idea that we don't live in a unique time. Of course we do. There is a vast discontinuity between the last 200 years and everything that came before. We are having this conversation on a medium that was invented only a few decades ago. It's also strange for a Marxist to make that argument, when "the future is going to be vastly different" is the whole point of their system.

This doesn’t mean that the “apocalypse” is going to happen but it does mean we can already dismiss his argument before getting to the other premises. And it also makes it more reasonable to expect unique events in the future.

Expand full comment
Thegnskald's avatar

As others observe, this hinges very heavily on what exactly constitutes an "apocalypse" - there have been quite a few incidents which would qualify in human history!

Part of the issue here is locality. From your perspective it appears that the correct scale at which to measure an apocalypse is "global", but that's because your context is global. Some future member of a multi-stellar civilization comprising dozens of home worlds and hundreds of colonies may take much the same view of your perspective that you take of the perspective of an islander wondering whether a volcanic eruption will destroy the island (and, given the islander's presumed limited knowledge, therefore all humanity). "Well, all of humanity is concentrated on Earth" is just an insistence that your local perspective is correct.

From that potential future multi-stellar perspective, the extinction of all life on Earth is exactly as parochial a conceptualization of existential apocalypse as the extinction of all life on some particular island on Earth. "All of humanity" is potentially just "all of Pompeii" from a different level of remove.

Another commenter suggested "available energy", which is almost correct, except you need to correct for the number of people - if there are only two people on Earth, you don't need much available energy at all to wipe humanity out; a lucky tiger could accomplish it. The necessary energy scale increases with population, among other variables.

The locality problem hints at a major part of the problem here: any given observer is going to define an apocalypse in terms of the scope that makes sense to them. You think "global apocalypse" is the correct way to evaluate an apocalypse - but to an islander watching the eruption of a volcano that will destroy their entire civilization and all intelligent life as they know it, "island apocalypse" is the correct frame of reference. You see the end of human life as the critical threshold, but somebody in Pompeii may have regarded the end of their people as the critical threshold - and again, how would a member of some hypothetical future multi-stellar civilization see things? I imagine they might regard the critical threshold as "the end of sapient life", and might regard anything more specific than that in much the same manner you might regard the idea that the end of Pompeii is equivalent to the end of humanity as a whole.

The same applies to a wide variety of different things. Global climate change has less direct effect on your life than a river drying up has on the valley of a hunter-gatherer tribe, rendering the world as they know it into a Mad Max-style apocalyptic wasteland, except with fewer cars.

So, speaking as a member of a hypothetical multi-stellar alliance: Even if you all go extinct, there are other islands in the ocean, even if you don't yet know about them. If a volcano-equivalent wipes you all out, well, that is very sad, but really, if you just had the right perspective, you'd see that, like Pompeii, it is in a certain sense not all that important. Pompeii was a world-ending apocalypse for those for whom Pompeii was their entire world.

History is just one long ongoing apocalypse, once you see it properly. The odds of us living through the apocalypse, therefore, are 100%. It's just a question of -which- apocalypse - and why that apocalypse is the relevant apocalypse to consider.

Expand full comment
Patrick Stephansen's avatar

The problem with trying to work out odds with anthropic reasoning is that the population under consideration is arbitrary. "You could just as easily have been any other person in history" is meaningless. Why are we only considering people and not any conscious being in any possible universe? Or maybe I can only imagine myself as having the same genetic sequence so the odds are much better. Pure bullshit.

Expand full comment
Cjw's avatar

I like FdB, he's the only substack I pay for, and I've been reading him for 10 years or so. He is very skeptical that AI will bring about any substantial societal changes, and drops little asides on that point into columns frequently. I believe this is coming not from the argument presented here, but from these other assumptions he holds pretty strongly.

1. AI capabilities are mostly marketing puffery.

2. Human existence will always involve a substantial amount of boring labors and interactions.

2A. People are desperate to believe that something will alter the mundanity of human existence, and care more about that than they do about the positive/negative valence of that change.

Perhaps, as he's a true Marxist, it's a sort of updated "religion is the opiate of the masses" thing. FdB has been out there doing the actual work, showing up at rent control meetings and other real meatspace organizing. Meanwhile a ton of people complain on the internet about capitalist institutions exploiting this and that, while sitting around getting high, playing CoD, and waiting for AGI to take away the need to work and give them an AI waifu.

I happen to agree with Scott that these are dangerous times, and that we probably have a much better than 7% chance of seeing some crazy paradigm-altering technology that either kills us all or re-orders society in a way that makes it virtually unrecognizable to prior generations of humans. But FdB's approach is probably healthier for a self-reflecting human to take: maybe it's important that you ACT as if that isn't happening - to accept that suffering and labor are part of the human condition, confront that realistically, and decide how you are going to deal with them in your lifetime.

Expand full comment
Bob Eno's avatar

A couple of things disturb me about the reasoning in this post. One is the vagueness of the connection of human lifespans to the Agricultural Revolution and Industrial Revolution (which is "waved away" in notes 6 and 7). The second is what seems to me a fuzziness in the concept of the apocalypse (human annihilation), allowing the Petrov Incident, where nobody died, to be the nearest point to the apocalypse in 300K years. I just want to address the second.

A few points: (1) Over 90+% of that 300K timespan, including long eras when the human population was very small in absolute terms, we know virtually nothing about threats mortal to the species that existed but, like the Petrov Incident, failed to have an impact. What potential bacterial or viral pandemics did humans in Africa fail to encounter c. 300K-100K years ago? How close did the last Ice Age come to tipping toward a climate-change extinction event ~50K years ago?

(2) If Petrov had ordered the button to be pushed, would that have resulted in an annihilation event? Would a pair of button-pushes have done so during the Cuban Missile Crisis? Enormous destruction and centuries of impact - very probably. But if some humans survived and a 10K-year dark age resulted, it would still have been a blip on the full scale of species history. The Black Plague wiped out on the order of half the population of Europe and was a Eurasian pandemic stretching through China, but a century or so later Europeans were out plundering Africa, South Asia and the Americas, and China was in a position to decide it preferred to plunder Central Asia instead.

(3) Near-mortal threats generate compensating responses. The threats of devastating pandemics produce medical counters; climate threats induce new technology; WMDs produce treaties and the sorts of protocols that allowed Petrov space to intervene. And we're seeing the apocalyptic visions of Singularity prophets prompt widespread reflections on how to inoculate us against the theoretical threat of AI, which are now reaching towards government regulation.

Expand full comment
Neurology For You's avatar

I feel like Freddie lacks the tragic sense of life which an educated person is supposed to acquire. The 20th century is replete with examples of people going through life not suspecting that within a year they and everybody they'd ever known would be killed, persecuted, or driven into exile.

Expand full comment
Deiseach's avatar

"The 20th Century is replete with examples of people going through life, not suspecting that with a year they and everybody they’ve ever known is going to be killed, persecuted, or driven into exile."

*Right now* is replete with people going through life not suspecting that they won't be alive this time next year. An example from work, a couple of years ago: yesterday a colleague's daughter was fine, and everyone was going about their normal lives; today her mother gets a call at eleven o'clock in the morning that her daughter has been killed in a car accident.

'You know not the day nor the hour'.

Expand full comment
Jake R's avatar

While it would obviously have been very bad, I don't think the Petrov incident was at any risk of wiping out all of humanity. If that's what DeBoer means by an apocalypse, then that's a point in his favor. I still think your statistical argument holds up though.

Expand full comment
Deiseach's avatar

Well, let me stick up for Freddie against all the stick he's getting in the comments. I don't always agree with what he says - in fact I probably disagree with a lot of it - but on the bits where we do agree, I feel I can trust him.

So, as another person who doesn't have maths in the blood, I think I agree with Freddie about the Chicken Little Problem: yes, something hit you on the head. No, the sky is not falling.

As I (and some other old farts on here) have said previously, we've lived through a selection of these kinds of flaps. Maybe right now it *is* bad, and the Thing will be the thing that finally gets us. I think climate change may be a good contender for that. But AI is not, and that's another example where Doomerism seems (to an outside view) rather Chicken-Little-like.

And can we please criticise people without resorting to "ugh, they're so shitty"? This is one of the very, very few places where I do try to be a better person. I have lots of options to go elsewhere and say "Joey is a shit head". I like it here because it's "Joey's reasoning is shitty, and here's why", and that does not automatically translate into "Joey is a shit head".

I think he's wrong about Scott, but that is not licence to start ripping his character asunder.

Expand full comment
Roger R's avatar

This doesn't touch on the core of the discussion, but it's about an important part of it that I've been curious about for awhile.

For people who fear that AI might eradicate all of humanity: why do you think Asimov's three laws of robotics wouldn't be sufficient to protect us from that? Just replace the words "a robot" with "an artificial intelligence". Maybe this was talked about a long time ago, long before I started reading this blog and other similar blogs, but I'm curious, since I've never seen it brought up. And Asimov's three laws of robotics always struck me as sound, with a decent shot at working in real life if we ever created advanced robots.

Asimov's three laws are quoted here for easy reference:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Expand full comment
Cjw's avatar

That was in fact talked about a zillion years ago - or rather, around 2008 over on the LessWrong forums, which were kind of the predecessor to the intellectual movement Scott is part of. I'm not the guy to answer it, but there are a ton of posts on the topic.

Also I would remind you that even in that fictional universe those rules didn’t work, and resulted in the robot Daneel forcing the entire galaxy into a collective consciousness so that he could protect humanity writ large at the expense of individual human liberty. And that’s just one of the many points where that sort of simplistic alignment attempt could fail.

Expand full comment
walruss's avatar

As an ex-lawyer/current programmer who knows a bit about machine learning and who is extremely skeptical of the case for AI doomerism, I might be able to answer:

First off, these laws don't mean anything. Like literally, they mean nothing. We have a lot of cultural knowledge and background that allows us to interpret the plain meaning of the words. You can tell your friend, "Don't injure me or through inaction let me come to harm" in a specific circumstance (he's the DD when you're out drinking maybe) and he probably knows what you mean, at least in that specific circumstance.

But if this were a contract appearing in court, where it's interpreted by humans with that cultural knowledge, it would still be too vague to be enforceable. Is it "injuring" a human to hurt their feelings by telling them something true? Is it injuring a human to give them an EpiPen injection when they are in anaphylaxis? Is it "by inaction allowing a human to come to harm" when I play video games? Someone somewhere is dying and I'm doing nothing. There's no bounding of time, place, or context. There's no weighing of concerns. Just a simple blanket prohibition on any action that causes any human to come to harm, as well as a blanket prohibition on through inaction not allowing a human to come to harm. In fact, these two requests are sometimes directly contradictory to one another - like in the anaphylaxis example. And that's just one aspect of the first law - what is injury or harm? We could do this with almost every word in every one of those laws (For the purposes of robot laws, for instance, we'd need to once and for all settle the question of whether a fetus is a human being).

Now imagine that instead of your audience being a rigorous-minded, but basically fairness-seeking judge with lots of cultural context, it's a computer. Its decision-making is entirely binary and it lacks all of that cultural context. It does not inherently know, for instance, that removing the skin from a human being kills it. The definition of "harm" that the computer will use must be millions of times more rigorous than that we're using for the hypothetical contract. It needs a complete mapping of every circumstance that would be considered "harm" to a person. Making the rule specific enough that a computer can follow it would necessitate "solving all ethical questions that could possibly arise" as a *first step* before the truly hard work of teaching it those rules even began.

Fortunately(?!) none of this matters! Because the specific technology we're dealing with - neural networks with matrix modeling of potential outputs - doesn't "think" in rules. Instead it works in probabilities, guessing which output is most likely to fulfill a reward function. So no programmer is feeding it a set of rules for how it's going to behave. Instead it's creating its own rules by observing whether its output matches the output expected by its training data. We don't necessarily even have a good way to figure out what those rules are - they're numerical and probability-based, and interpreting them is a hard problem with a lot of research going into it. We have no reliable way to even know if the program will consistently provide the same output given an input. And when a situation that does not exist in its training data shows up, it is *extremely* difficult to tell how it will react. Even feeding it a hundred cases where the correct response is "do not harm this person or through inaction allow them to come to harm" does not guarantee that it will generalize that as a rule.
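(To make that last point concrete, here's a minimal toy sketch - invented features and labels, no real alignment setup - of how a learned "rule" is just fitted numbers rather than anything an engineer wrote down:)

```python
# Toy perceptron trained to flag "harmful" situations from two invented
# features. No rule is programmed anywhere; the model only sees examples.

# (feature1, feature2) -> label: 1 = harmful, 0 = not harmful
train = [((1.0, 0.0), 1), ((0.9, 0.1), 1), ((0.0, 1.0), 0), ((0.1, 0.9), 0)]

w = [0.0, 0.0]
b = 0.0
for _ in range(20):  # a few passes of standard perceptron updates
    for (x1, x2), y in train:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = y - pred
        w[0] += err * x1
        w[1] += err * x2
        b += err

# The learned "rule" is just these numbers - nothing human-readable:
print("weights:", w, "bias:", b)

# An input unlike anything in training: the verdict is whatever the
# numbers happen to say, not the result of consulting an explicit law.
x_novel = (5.0, 5.0)
print("novel input flagged harmful?", w[0] * x_novel[0] + w[1] * x_novel[1] + b > 0)
```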

Expand full comment
Cjw's avatar

I can give you a slightly more helpful answer than I did, although this is pushing the limits of what I know about this as a lawyer with no more than a few semesters of programming in the 90s, but basically the problems fall into one of these groups of categories, which would be problems even if we assume that the 3 laws of robotics were sufficient. This is not exhaustive and doesn't address every "but what if" complaint you might think of, just a rough overview of what I understand it to be.

1. Actually training it to follow those rules is hard, because (as walruss alluded to in his reply) you aren't really programming it with rules that exist as some hard-coded prime directive; you're training it by saying that its output was good or bad and hoping it approximates those rules by inference.

2. Even if you get it to behave as if it followed those rules at a certain power level, you cannot trust that this will scale up when you increase its power level, because in many cases there is no way to train it not to do X without it first having the ability to do X. If you're training a dog not to bite, "reward not biting" is meaningless until it realizes it CAN bite, and "punish biting" assumes that it bit something - which in the case of a powerful AI could mean it killed everyone and there's nobody left to say "bad dog!" The reliability of training it in fake controlled scenarios (like having the dog confront a dude in a rubber suit for safety) is believed to be dubious, and once it actually has a lethal bite it could go off at any second and you won't have time for this.

3. What it learns when you try to teach it those rules may be totally different from those rules, in a weird unforeseen way, once it grows in capability. The common example is that humans aren't really optimized to maximize genetic fitness; we're optimized to have sex and to consume all the calories we can get our hands on, so that we survive and have more sex. This looks the same as having "optimize genetic fitness" as your rule when you're hunter-gatherers - if you were training early humans to perpetuate the species, you wouldn't know you'd failed at that point. But put humans into an environment where we've invented the birth control pill and $0.99 double cheeseburgers on demand 24/7, and this no longer looks very much like maximizing our ability to propagate our genes.

Expand full comment
anomie's avatar

The point of the three laws is that they *don't* work. The whole point of Asimov's short stories involving those laws is seeing what unintended consequences they bring. From Wikipedia:

> "—That Thou Art Mindful of Him", which Asimov intended to be the "ultimate" probe into the Laws' subtleties, finally uses the Three Laws to conjure up the very "Frankenstein" scenario they were invented to prevent. It takes as its concept the growing development of robots that mimic non-human living things and given programs that mimic simple animal behaviours which do not require the Three Laws. The presence of a whole range of robotic life that serves the same purpose as organic life ends with two humanoid robots, George Nine and George Ten, concluding that organic life is an unnecessary requirement for a truly logical and self-consistent definition of "humanity", and that since they are the most advanced thinking beings on the planet, they are therefore the only two true humans alive and the Three Laws only apply to themselves. The story ends on a sinister note as the two robots enter hibernation and await a time when they will conquer the Earth and subjugate biological humans to themselves, an outcome they consider an inevitable result of the "Three Laws of Humanics".

Expand full comment
Roger R's avatar

Thanks to everyone for their replies.

Expand full comment
MicaiahC's avatar

I just want to thank you for being open-minded about this and being polite. Every now and then there's a post which asks the same question but is clearly written by a guy compulsively rolling his eyes and groaning about how he has to lower himself to even discuss this.

I *believe* the other logical follow-up questions are: well, if it can't understand what humans mean, why would it be a threat? And: but wait, doesn't getting smarter mean it can better understand what humans mean?

And I think these questions have much harder sticking points than what you asked, and I don't think there's a generally satisfactory explanation out there. (I was convinced by "the genie knows but does not care", but in practice this seems to convince basically no one, and it doesn't have much explanatory power.)

Expand full comment
Roger R's avatar

Thanks!

I really was mostly curious, not having a strong position on this. anomie, Cjw, and walruss all did a good job of pointing out ways Asimov's three laws could conceivably be overcome. I still think Asimov's three laws might be worth trying, but I now wouldn't be surprised if they didn't work.

I'm mildly futurist and I have a friend who's a hardcore futurist that believes that AI will drastically improve the world. Part of the reason I read this blog is to "get the other side" as it were.

Expand full comment
beowulf888's avatar

Would the opposite of Temporal Copernicanism be called Historical Velikovskianism? - i.e., the idea that history has become, and will keep becoming, more risky and disaster-prone as we proceed through it? While I don't doubt that civilization and humanity have a wider risk profile going forward - wider than we had looking backward - we also have greater resources and systems in place to deal with disruptive scenarios. Looking back, even in their own fields, experts have been very poor at risk assessment. But of course, most experts have never taken any formal training in risk assessment and mitigation strategies. So I find it hard to take the predictions of the Hararis, Yudkowskys, Altmans, Fukuyamas, et al. of the world seriously. It's unlikely that Freddie has taken any formal risk-assessment training either (so it's ignorant optimists arguing with ignorant pessimists), but the historical pattern has been that we've vastly overstated the risks of particular scenarios happening. I'd have to side with Temporal Copernicanism being the null hypothesis, because Historical Velikovskianism has been proven wrong so many times.

Expand full comment
Lars Doucet's avatar

Please forgive my pedantry, but I couldn't resist when I read this quote from FDB:

> In cosmology, the Copernican Principle states that “humans, on the Earth or in the Solar System, are not privileged observers of the universe.” This principle is named after Copernicus because he was one of the first and most influential to challenge the millennia-old assumption that the Earth was the center of the universe. This assumption was primarily religious in character but also drew on a basic bias of human psychology: because our consciousness is the mechanism through which we understand everything, and thus our selves are always foremost in our perception, we are therefore naturally inclined to think that we must be special.

Which is at best half-right: there's a lot of modern projection in equating "centrality" in medieval cosmology with "specialness", into which "rank" and "importance" are implicitly packed.

Allow me to illustrate with a simple question: In medieval cosmology, what land is located in the very center of the universe, and which creature inhabits it?

Earth and humans, right?

Nope: it's Hell and Satan.

Recall that in medieval cosmology, Hell is located underground, in the center of the Earth itself. So yes Earth is in the center, but Hell is even FURTHER in the center.

And who is the most important agent in the medieval worldview? The prime agent, the unmoved mover, the big cheese? That's right, it's God. Not humans.

Where is God? In heaven, far beyond the outer reaches, the celestial spheres and all that. In the medieval worldview the whole point is to *get away from the center* -- the center is literally the *worst place there is*. Earth isn't as central as Hell, but it's still a pretty sucky place -- what with the fall and all that; fallen Earth is a place of *exile*.

So this is what I mean by "half-right." By virtue of being in the "center", Earth (and its contents) definitely takes on a kind of "center stage" effect in the theater of existence. However, assuming that this ALSO means that medieval people thought being in the center meant they were "the most important" (implying the highest rank) is projection. It leaves a lot of the medieval mindset out.

Yeah there's the medieval great chain of being with humans at the top above all the animals, but a fallen man is still far below the lowest unfallen angel. And to rise above that status -- to be redeemed and to go to heaven, to become higher than the angels -- requires you to *leave the earth*, to go *away* from the center, and *towards* the outer reaches.

Expand full comment
Richard Weinberg's avatar

Next, I see

"Maybe you’re more worried about environmental devastation than nuclear war? The biggest climate shock of the past 300,000 years was . . . also during Freddie’s lifetime2. Man, these three-in-a-thousand coincidences keep adding up!"

Scott's writing is superb, as usual. But gosh... what about the Quaternary ice ages? See (e.g.) https://en.wikipedia.org/wiki/Ice_age#/media/File:Ice_Age_Temperature.png

To be clear, I am not a supporter of thermonuclear war, nor a climate change denier. But at least in these two cases, I feel that Scott is falling into the "Copernican" error Freddie warns about. We see what's close to us very clearly, and if we're responsible citizens, we want to consider possible Bad Stuff. But this is "possible," not yet here-and-now.

Expand full comment
R.B. Griggs's avatar

As long as our technological power is expanding then "today" will always be the most important moment in history. Today is the moment when our future is actively on the line, and our responsibility for that future increases as our power to affect it expands into time and space. Regardless of how tenuous certain historical moments may appear, they were singular links in one vast chain of contingency that made today possible.

Expand full comment
Chris Willis's avatar

Scott seems to shift the goalposts near the start of his argument. Freddie is talking about the whole span of human existence in the past *and* projected into the future, proposing 300,000 years as a very conservative total. Scott switches this to 300,000 years counting backwards from now.

Couldn’t we go a bit Will MacAskill and entertain the idea that humanity could endure for millions of years?

Expand full comment
JQXVN's avatar

I can't help but think back to very early 2020. People who heard grumblings about a new disease and reasoned that every time that had happened in the past it wasn't a big deal made much worse predictions than people who extrapolated the exponential curve from the extant data. Not a perfect analogy to predicting a singularity or apocalypse or whatever, because pandemics are precedented, but close enough that I'd think people who are comfortable ignoring object-level data in favor of predicting based on the success of other people's past predictions would have gone meta and evaluated that strategy based on COVID prediction performance.

Expand full comment
Joshua Hedlund's avatar

I don't understand the value of calculations based on values like GDP growth, etc., that grow exponentially. Sure, if you look at history up until *today*, 20% (or 15% or whatever) of it has happened during my lifetime. But after 100 more years of the same trend, 20% would have happened during *that* generation's lifetime, and my old "20%" number will actually look (a lot) smaller.

In other words, if your cut-off date for calculating the percentage of GDP that happened during your life is *today*, how is that not temporal copernicanism?
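
(A minimal sketch of this point, assuming pure exponential growth; the growth rate and lifetime below are illustrative numbers of mine, not anything from the post:)

```python
import math

def share_of_recent_output(today, lifetime, growth):
    """Fraction of all output up to `today` produced in the last `lifetime` years,
    modeling output as exp(growth * t) integrated from the distant past."""
    total = math.exp(growth * today) / growth
    recent = (math.exp(growth * today) - math.exp(growth * (today - lifetime))) / growth
    return recent / total

g, L = 0.003, 80  # assumed slow long-run growth rate and an 80-year lifetime
for today in (0, 100, 1000):
    print(today, round(share_of_recent_output(today, L, g), 3))
# Every value of "today" prints the same 0.213 (= 1 - exp(-g*L)): under a pure
# exponential, each generation can truthfully say "~20% of everything happened
# during my lifetime".
```

Under a pure exponential, the 20% figure is a property of the growth rate, not of our particular moment; only deviations from exponentiality (speed-ups or slow-downs) can make "today" special.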

Expand full comment
Stephen Pimentel's avatar

I disagree with deBoer's principle of "temporal Copernicanism" because it makes uniformity assumptions that do not hold.

But I also believe much of the thinking around existential risk makes errors in the opposite direction by adopting variations of "Pascal's mugging."

Expand full comment
David J Keown's avatar

Say what you want about Sam Bankman-Fried, he's smart enough not to be dead.

Expand full comment
MathWizard's avatar

>how the Biblical Adam could use his reproductive decisions as a shortcut to supercomputation

Nah, this article just makes a really simple mistake: it gets the causation ordering wrong. Even if you did have the weird and wacky prior of a 50-50 chance of there being 2 humans versus 100 billion, instead of some nice continuous distribution, you can't exploit the anthropic principle to make a deer fall dead, because you making decisions affects the posterior probability. These are priors before you know more about the world, not fixed immutable probabilities. If you pre-commit to having children if and only if a deer falls dead (1 in a billion chance), then this is 1-in-a-billion evidence against the future with 100 billion humans happening. You now know that you live in a world where the first two humans in existence are willing to throw away the entirety of the human race just for a deer! That should be a massive update making the future of the human race way less likely (because clearly it wasn't assumed in your priors). After updating, you no longer have this gigantic prior forcing deer to fall over dead.

The author offers their own solution to the apparent paradox, but IMO it isn't necessary: you just have to apply the existing anthropic principle correctly instead of incorrectly.
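
(To put rough numbers on this argument, a minimal odds-form sketch: the 50-50 prior and the 2-vs-100-billion populations come from the comment above; treating the two updates as independent likelihood ratios is my simplification.)

```python
N_SMALL, N_BIG = 2, 1e11   # humans ever, under the two hypotheses
P_DEER = 1e-9              # chance the deer drops dead on its own

odds_big = 0.5 / 0.5       # prior odds BIG:SMALL, from the 50-50 prior

# Anthropic (self-sampling) update on "I am one of the first two humans":
# P(first two | SMALL) = 1, while P(first two | BIG) = 2 / N_BIG.
odds_big *= (2 / N_BIG) / 1.0   # factor of 2e-11 against BIG

# The comment's point: given the "children iff the deer dies" precommitment,
# the BIG future now *requires* the 1-in-a-billion deer death, so the policy
# itself is ~1e-9 further evidence against BIG:
odds_big *= P_DEER / 1.0

print(f"{odds_big:.0e}")   # ~2e-20: nothing forces the deer to die; the big
                           # future just becomes even less likely once you
                           # condition on the reckless policy
```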

Expand full comment
quiet_NaN's avatar

Much of it boils down to what good reference classes are.

As an analogy to FDB's position, consider a man woken up by an emergency broadcast warning of a hurricane which will hit his area. He reasons: 'There have been tens of thousands of days on which I have been conscious before. On none of them did I die. Therefore, it is very unlikely that I will die today, so I should just make breakfast.'

Expand full comment
agrajagagain's avatar

I’m suspicious of anthropic reasoning in general and EXTREMELY suspicious of the doomsday argument in particular, though I’ve often had trouble articulating exactly why. I’m going to give it a try, but it’s going to take some effort.

First, I'll take a Bayesian approach to interpreting probabilities: probabilities are expressing the limits of your information about the world. They exist in the map, not the territory. That said, even very simple probabilities like saying “there’s a one-in-six chance of rolling each of the different numbers on the die” ultimately encode a LOT of information about the territory into a very compact map, and are necessarily the product of a lot of (mostly hidden) reasoning and inference. Let’s dig very deeply into everything we’re positing when we describe a die roll that way:

1. The microstates don’t matter, only the macrostates (hello, statistical physics students). The precise arrangements and energies of atoms in the die are of no interest whatsoever, nor are minute deformations and the like.

2. A large number of macrostates are assumed inaccessible: rolling the die won’t cause it to shatter, seriously deform, become embedded in the tabletop, melt, catch fire or undergo spontaneous nuclear fusion.

3. Many states of the die are not stable on the order of seconds. Dice can and do land on corners and edges, they just don’t STAY there for the lengths of time we care about.

4. Of the stable, accessible states there are huge (maybe infinite) numbers that we’re considering equivalent for this question. We don’t consider the position of the die on the tabletop as important, nor the angular position of the die around the axis normal to the table’s surface. This, combined with 3, is what lets us reduce the staggeringly huge outcome space into only six classes of interest.

5. All of the six stable, accessible states are accessible by the die-rolling procedures we’re using.

6. For any given throw, we won’t know the initial conditions with great precision. This is the step where we switch from talking about the information we DO have to the information we DON’T have. If we knew the initial conditions with sufficient precision, there would be no need for probabilities: we’d know what face the die would land on. The probabilistic nature of our answer is a way of expressing that we don’t have this information.

To quickly sum up, when we say “there’s a one-in-six chance of rolling each number on the die” we’re saying that for the dice and die-rolling procedures we’re considering, we DO have all the information necessary to posit 1-5 with confidence, but don’t have the info discussed in 6. We recognize that these die-rolling procedures are consistent with any of the six outcomes (and no others) but don’t expect them to further narrow it down. We’ve also nailed down the theory with piles and piles of empirical evidence suggesting that yes, dice do indeed work like we think they work. We have all of the following:

A. A detailed model of how dice and the rolling of dice works in practice.

B. A clear understanding of the outcome space: what results are allowed and what results aren’t allowed.

C. A clear, well-understood selection step in which a particular outcome is picked and the others are discarded. (We understand VERY well how this works, even if we don’t find it practical to predict with more precision than “one of these six classes of outcomes.”)

D. A large amount of empirical evidence demonstrating the validity of the theory.

Note that one doesn’t need ALL of these to talk in-detail about probabilities. For dice rolling we took a long time building up A and C, but B and D were enough for humans to answer the question well for thousands of years regardless. For quantum mechanics we have A, B and D but completely lack C: it’s still very useful and reasonable to talk about probabilities of quantum events. Even for some sorts of X-risk we can get most of this: for asteroids we have A, B and C, but our empirical evidence is (understandably) pretty limited: it’s still perfectly reasonable to talk about probabilities of asteroid strikes.

The noteworthy thing about the doomsday argument isn’t that we are somewhat lacking in one or two of these four areas. It is that we are almost COMPLETELY lacking in ALL of them. We have no empirical evidence. We have no clear understanding of the outcome space. We have some detailed models of how some human-life-ending events work, but the doomsday argument doesn’t actually use any of these: it tries to route entirely around them. And most egregiously, we have not a whisper of a hint of a ghost of a sliver of an iota of a faint notion of an idea of how the selection could possibly work.

The doomsday argument seems to assume that some outside-the-universe process started with full knowledge that exactly N humans would ever exist (and who they were), lined them up chronologically (somehow) and then rolled a platonic, perfectly random N-sided die to determine which of them YOU dear reader would be. And having implicitly posited this mysterious, mystic, outside-the-universe process based on no actual empirical evidence, it then attempts to extract information from it. WHY ON EARTH would you think you can pull meaningful information out of thin air like that?

It’s also extremely underspecified. I can decide to make doomsday imminent just by positing that everyone born before 1920 was a P-Zombie. It’s not at all clear why this would be any LESS valid than the vanilla doomsday argument[1]. If you prefer your arguments zombie-free, one can push the timeline in the other direction by including earlier primates, or even earlier mammals, in the count of “people I could have been.” Clearly you’re not an Australopithecus, but you’re not an ancient Sumerian either, so it’s not clear why the number of ancient Sumerians should inform your position relative to Doomsday, but the number of Australopithecus should not. The only not-totally-arbitrary reference class you can draw is the solipsistic one, but it doesn’t tell you much: clearly you’re the only “you” that exists, and when you die the You-Doomsday will have arrived just as predicted. Even here you can run into trouble: if one considers one’s moment-to-moment consciousness sufficiently distinct, one can apply the Doomsday Argument internally. Such logic should reasonably convince a 90-year-old man that he’s got (on average) 90 more years to live. But of course if he does live to age 180, the same argument applies. Hold on a minute, who flipped Zeno’s Paradox from “forward” to “reverse?”

[1] I think the whole notion of P-zombies is nonsense, but it’s a VERY BAD SIGN when your argument about supposedly-empirical facts is highly vulnerable to zombie bites.
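
(To make the reference-class complaint concrete, here is a minimal sketch, entirely my own construction: the standard Bayesian doomsday update P(rank | N) = 1/N for rank <= N, fed three different guesses at "your" birth rank. The rank figures and the flat prior are illustrative assumptions.)

```python
def doom_posterior_median(rank, candidates, prior):
    """Posterior median of total-ever population N, given birth rank `rank`
    and the standard doomsday likelihood P(rank | N) = 1/N for rank <= N."""
    weights = {N: prior(N) / N for N in candidates if N >= rank}
    total = sum(weights.values())
    acc = 0.0
    for N in sorted(weights):
        acc += weights[N] / total
        if acc >= 0.5:
            return N

candidates = [10 ** k for k in range(1, 16)]  # candidate totals: 10 .. 10**15
flat = lambda N: 1.0                          # arbitrary flat prior over candidates

# Same person, three reference classes (all rank figures are rough guesses):
print(doom_posterior_median(6e10, candidates, flat))  # "all Homo sapiens"       -> 10**11
print(doom_posterior_median(1e10, candidates, flat))  # "everyone born post-1920" -> 10**10
print(doom_posterior_median(1e12, candidates, flat))  # "all hominids ever"       -> 10**12
# The predicted scale of doomsday simply tracks whichever rank you assigned
# yourself; the argument supplies no principled way to pick.
```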

Expand full comment
David Melville's avatar

Frankly, I've read enough Freddie deBoer to be astounded that one with his portentous ego _doesn't_ consider this the most important epoch in human history purely by virtue of the fact that he's existing in it.

Expand full comment
s_e_t_h's avatar

lol. Upvote ✅

Expand full comment
Eremolalos's avatar

I agree completely with Scott’s rebuttal of DeBoer’s argument. And yet DeBoer is right about the human tendency towards temporal Copernicanism. So we people of the present era are in the odd position of being right — or at least right-ish — about something people in many other eras also believed, but wrongly. We are like someone who is at a gathering, uneasily imagining that everybody is busy judging them instead of fretting about their own life and how they are coming across — except that everybody at this gathering really *is* thinking about the person’s clothes, mannerisms, how the last thing he said sounded a bit odd, wondering how he does with women . . . We are the guy with Capgras Syndrome when Invasion of the Body Snatchers really happens.

It’s an interesting fate. Sometimes when I’m fretting about AI doom I am somewhat comforted by the thought that I got to live in the era when it happened, and learn how it plays out.

Expand full comment
Roger R's avatar

...What if you're simply wrong about AI doom? If you and other AI doomers end up wrong about that, then how were you 'right-ish' about it? I mean, if we continue to see major advancements in AI, but nothing like AI doom happens over the next 20 years say... at some point, shouldn't the AI doomers reconsider their ideas?

I'm not saying that AI doom is impossible, but most of the arguments for it sound pretty speculative to me. I mean, it might be possible, but it certainly doesn't seem certain. Even Scott himself places the chance of it at 30%, IIRC.

I think AI is likely to be very important in the future, and it already is pretty important. But from reading a lot of comments from both AI optimists and AI doomers, my sense is that the AI optimists have easier-to-understand and more intuitive ideas informing them. At least, that's how it feels to me.

So we may indeed be living in very special times, but if so, I think it's more likely for positive reasons than negative ones.

Expand full comment
Jim Birch's avatar

That kind of argument, assuming that unknowns are randomly distributed, is just weak and confused. It mistakes lack of knowledge for actual randomness. The world is not a series of dice rolls. The evidence is pretty clear that it's causal. We just don't know enough about what's happening and what's causing what to make reliable predictions on everything.

Expand full comment
The Ancient Geek's avatar

Causality and dice rolls aren't the only options. You can also have uneven probability distributions that approximate causality.

Expand full comment
Jim Birch's avatar

True. The universe might be probability distributions - that's an open question - but they are distributions. It still doesn't make lack of knowledge the same as randomness or mean that every unknown is equally likely.

Expand full comment
le raz's avatar

Excellent post! I really liked the breakdown of the three types of events.

Expand full comment
John Schilling's avatar

Sigh. As others have pointed out, and as we have repeatedly discussed here and on SSC, neither the Petrov Incident nor the Cuban Missile Crisis plausibly threatened "self-annihilation". There was zero possibility of literal human extinction due to nuclear war, at any time in the past, and not in the foreseeable future either.

The closest the human race has come to literal extinction was probably during one of the Pleistocene ice ages. There was definitely a severe population bottleneck ~850,000 years ago, if the hominids of that era count as "human". If not, the ice ages experienced by unambiguously-modern-humans were somewhat milder but still plausibly in the "one more bad break and we wouldn't be here" class in a way that nuclear war isn't.

And if you're going to count the Petrov or Cuban incidents on the grounds that they were possibly civilization-destroying, then the Bronze Age Collapse was definitely civilization-destroying. Relative to the scale of the civilizations in question, I'm thinking the Bronze Age Collapse was worse than a game of Global Thermonuclear War would have been in 1982 or (especially) 1962, and it actually happened.

I do think Freddie's article is weak, but this rebuttal is also weaker than it should be.

Expand full comment
Robert G.'s avatar

Could you explain more about how the effects of nuclear war are inflated?

Expand full comment
Anonymous's avatar

Hiroshima and Nagasaki did not kill the majority of their occupants. Bigger bombs would still leave survivors. Nuclear war would have tremendous casualties, but extinction theorists need to create implausible scenarios of nuclear winter and deathly inescapable radiation to reach that point.

Expand full comment
John Schilling's avatar

Fair enough. All the nuclear weapons in the world in 1982 would have sufficed to devastate by blast and fire about 2.2% of the Earth's land area. Given realistic targeting and deployment, probably only about 1% in reality. Most cities would have been untouched, because there are a *lot* of cities, and some very large cities would have survived because everybody with nukes had higher priorities. For the median human, nuclear war would have been something experienced as maybe a rumbling over the horizon, and the local television station broadcasting that they'd lost the network feed but the news is not good.

All of the nuclear weapons that existed in the world in 1982 would not have produced enough *global* fallout to give anyone even a mildly noticeable case of radiation sickness, and maybe 20-30 million total premature deaths due to e.g. cancer. And no pervasive mutant babies, either.

There would have been intensely dangerous *local* fallout in areas immediately downwind of blast sites, but 99.9% of that would have decayed in the first two months, and fairly simple precautions like hunkering down in a basement or internal room would have provided a high degree of protection even during that period. Or, getting in your car and driving to someplace not downwind of a nuclear target.

For nuclear winter, I'm going to default to https://www.navalgazing.net/Nuclear-Winter

TL;DR: there was a brief period during the 1980s when we knew what "nuclear winter" was, didn't have the data to put real numbers on it, and "worst case, this could be an extinction-level threat" was a reasonable take. Then we started getting better data and better models, and "nuclear winter" turned out to be a fairly mild "nuclear autumn". But the original take was sensational and highly newsworthy, whereas the successive refinements weren't, so the initial take still dominates (and has its die-hard defenders).

Nuclear war would be very very bad. If done on a global scale, a few hundred million people would probably die during the war or its immediate aftermath. Disruption of global supply chains would probably kill a billion or two in subsequent years. Civilization would not be bombed back to the stone age, but perhaps back to the 19th century.

None of which even remotely approaches the level of human extinction. In a thousand years, it would be remembered the way we remember the Fall of Rome or the Bronze Age Collapse.

ETA: Also note that the world's nuclear arsenals today have been reduced by more than 80% in raw megatonnage since 1982.
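
(For what it's worth, the ~2% figure is easy to sanity-check with the standard blast-scaling rule that severely damaged area grows like yield^(2/3). All the constants below are my own rough assumptions, not the commenter's sources:)

```python
import math

EARTH_LAND_KM2 = 1.49e8     # Earth's total land area
TOTAL_MEGATONS = 12_000     # assumed global arsenal yield circa 1982
AVG_WARHEAD_MT = 0.5        # assumed average warhead yield
R_5PSI_1MT_KM  = 6.5        # assumed 5-psi blast radius of a 1 Mt airburst

# Blast radius scales as yield^(1/3), so damaged area scales as yield^(2/3).
n_warheads = TOTAL_MEGATONS / AVG_WARHEAD_MT
area_each  = math.pi * (R_5PSI_1MT_KM * AVG_WARHEAD_MT ** (1 / 3)) ** 2
print(f"{n_warheads * area_each / EARTH_LAND_KM2:.1%}")
# ~1.3%, ignoring overlapping bursts, misses, and targeting geometry:
# the same ballpark as the 1-2.2% quoted above.
```

Note that the yield^(2/3) scaling means the same total megatonnage split into smaller warheads covers *more* area, which is part of why estimates like this swing between about 1% and a few percent.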

Expand full comment
ashoka's avatar

That's not a very nice thing to say about Pleistocene hominids. Maybe in thousands of years, future people will wonder if cold war-era hominids count as "human."

Expand full comment
Mo Nastri's avatar

With the caveat that I haven't yet read FdB's temporal copernicanism piece -- I was reminded of Holden Karnofsky's summary of human history through the lens of empowerment and well-being in one table, which seemed to contradict FdB right away: https://docs.google.com/spreadsheets/d/1KiefojYkNRJfTtDbXXqwpzswWpI0MLB-iOTm1iacZRM/edit?usp=sharing

Bit more on his methodology, quoting him:

"To do this, I didn't need to be (and am not!) an expert historian. What I did need to do, and what I found very worth doing, was:

- Decide on a lens for the summary: a definition of "what matters to me," a way of distinguishing unimportant from important.

- Try to find the most important historical events for that lens, and put them all in one table.

The lens I chose was empowerment and well-being. That is, I consider historical people and events significant to the degree that they influenced the average person's (a) options and capabilities (empowerment) - including the advance of science and technology; (b) health, safety, fulfillment, etc. (well-being). (I'm not saying these are the same thing! It's possible that greater empowerment could mean lower well-being.)"

And his takeaways:

"History through this lens seems very different from the history presented in e.g. textbooks. For example:

- Many wars and power struggles barely matter. My summary doesn't so much as mention Charlemagne or William of Orange, and the fall of the Roman Empire doesn't seem like a clearly watershed event. My summary thinks the development of lenses (leading to spectacles, microscopes and telescopes) is far more important.

- Every twist and turn in gender relations, treatment of homosexuality, and the quality of maternal care and contraception is significant (as these likely mattered greatly, and systematically, for many people's quality of life). I've had trouble finding good broad histories on all of these. The development of most natural sciences and hard sciences is also important (much easier to read about, though the best reading material generally does not come from the "history field")."

And his most important takeaway:

"This project is what originally started to make me feel that we live in a special time, and that our place in history is more like "Sitting in a rocket ship that just took off" than like "Playing our small part in the huge unbroken chain of generations."

My table devotes:

- One column to the hundreds of thousands of years of prehistory.

- Three columns to the first ~6000 years of civilization.

- Two columns to the next 300 years.

- 6 columns to the ~200 years since.

That implies that more has happened in the last 200 years than in the previous million-plus. I think that's right, not recency bias. It seems very hard to summarize history (with my lens) without devoting massively more attention to these recent periods.

I've made this point before, and you'll see it showing up in pretty much any chart you look at of something important - population, economic growth, rate of significant scientific discoveries, child mortality, human height, etc. My summary gives a qualitative way of seeing it across many domains at once."

I think of Scott's log GDP summary as a very rough quantitative proxy for Holden's summary table.

Expand full comment
Deiseach's avatar

Pardon me while I roll my eyes here.

"This project is what originally started to make me feel that we live in a special time, and that our place in history is more like "Sitting in a rocket ship that just took off" than like "Playing our small part in the huge unbroken chain of generations."

Well, duh, if your metric is "gay rights, advancement of". What was the position of Homo habilis on same-sex attraction? We just don't know! Did they even gay marriage, the bigots? But we gay marriage'd about a decade ago, so we're on the rocket ship!

"Every twist and turn in gender relations, treatment of homosexuality, and the quality of maternal care and contraception is significant (as these likely mattered greatly, and systematically, for many people's quality of life)."

Yeah, sure, when our ancestors were terrified they would die of the Black Death, the most burning question on their mind was "but gender equality?" as the buboes burst and it looked like The End Of The World As We Know It, For Reals.

"Many wars and power struggles barely matter. My summary doesn't so much as mention Charlemagne or William of Orange"

William of Orange and the political shenanigans around him fucked up my country for a decent while, and that did matter more to us than "contraception - yea or nay?"

Two can play that game, I am declaring the Era of the Jaffa Cake to be the peak of all human history; though I was not alive at the time of their invention, truly I am blessed to be riding the rocketship of the specific period in which I can stuff my face with orangey-flavoured jam on mini sponge cakes with chocolate-flavoured coating! What matters the Treaty of Versailles next to that?

Expand full comment
Rajeev Lunkad's avatar

I find wrongly framed contextualisation to be one of the biggest content creators - this conversation thread has that in its underbelly. But at the same time, addressing something that's active in our life in the moment gets our attention faster than dormant thoughts or ones outside the framing context. Add to that the instant nature of digital communication, and we can build a conversation storm literally within an hour (that's a joke 🤣) - but what's not a joke is that there is always an underlying reason for any storm, digital or physical, and a lot of the time it's not visible to the eye. We all know this - so what's the foundational 'fear' behind this conversation thread?

- Is it a personal realisation? One that includes 7% of humanity simultaneously vibing with the fear?

- Is it a collective fear of the times we are in?

- Is it a thought prevalent in the moment - like the news?

What is it!!

Expand full comment
Robert Leigh's avatar

The de Boer claim comes across as caricature frequentism. Clusters happen. The odds against Aristotle being Plato's pupil and Alexander's tutor are so ridiculously long that it obviously didn't happen. The Singularity, whatever it turns out to be, is not a slam dunk to be a bigger thing than atomic technology and space travel, which were a couple of decades apart (and in genesis much closer).

Expand full comment
TGGP's avatar

Neither nuclear scare was an existential risk to humanity:

https://www.navalgazing.net/Nuclear-Weapon-Destructiveness

Robin Hanson doesn't belong in Freddie's list because he doesn't claim the singularity will happen before he reaches the end of his natural life and freezes his brain.

Expand full comment
Philo Vivero's avatar

I've read a few FdB posts and was largely unimpressed. This essay characterised him in a way I'd say didn't improve my impression of him, but I was willing to hold back on making judgement.

Then he showed up in the comments section and sealed the deal. Not gonna bother reading any more of that guy.

To Scott's credit, he replied to FdB TWO TIMES in ways that made me laugh, and in both cases, the replies were not only hilarious, they were constructive. I literally cannot imagine having the patience, intelligence, and humour Scott has. It's inhuman. It's transhuman. It's divine.

Expand full comment
s_e_t_h's avatar

FdB is also famously thin skinned, so consider yourself banned. I was thusly banned. I can no longer see FdB’s posts so I can only live vicariously through his critics. Sounds like he’s struggling, poor guy.

Expand full comment
Doug S.'s avatar

The Singularity occurred in 1876, when Thomas Edison invented the industrial research laboratory. We've been riding the asymptote ever since...

Expand full comment
Daniel Böttger's avatar

Ok but we observers are not humans but (conscious) thoughts inside humans. We thoughts anthropomorphize ourselves in order to better relate to the bodies that host us, but that doesn't mean our assumed human identities are solid enough to support anthropic reasoning.

Expand full comment
Eremolalos's avatar

There’s a metric that seems relevant, though not central to this argument: what fraction of the population could replicate the technology that runs society, if it all disappeared? If all the tech we had was the wheel, it seems like anyone who had seen one could understand how to make another, and anyone who had skill with the material used for a wheel could make one under the direction of the first person, and the skill was probably not extraordinarily hard to acquire. But tech gets harder and harder to replicate as you move through human history, and at this point most people do not understand even their microwave oven. Not only could they not build another, they could not describe even the basics of how it works.

So you could call this metric something like the Tech Replicability Index, or TRI. (I’m sure other people have already thought of this, and given it a better name.) So obviously the TRI has been getting lower and lower. And it seems to me that tech based on deep learning is a further step, and pushes the TRI down to zero: While a tiny fraction of the population knows how to produce one of these deep-learning based systems, nobody understands how it works. It’s a black box.

So I was wondering whether people had any thoughts about using the TRI in discussions of the chances of various outcomes in the near future.

Expand full comment
Mr. Doolittle's avatar

I've heard it said that there isn't a single person on earth who can make a #2 pencil on their own. From mining the graphite, to cutting the trees, to putting the pieces together, nobody knows the steps or has the skills to complete it. Obviously this applies even more to more complicated things, especially when we get up to electronics, computers, etc.

I think we would struggle with fire and wheels, even knowing what those are. No tech means no tools, so we're trying to make stone tools from scratch. A few people still have the skill and knowledge to do that, and a lot more could learn it quickly (provided they lived long enough), but even those basics would not be a simple process right now.

Expand full comment
Eremolalos's avatar

Yeah, I agree about the pencil, though I think it would be possible for somebody to produce a crude version of one. Take a dowel or even a straightish stick, and hollow out the core. Crumble up some burnt wood, mix it with something that works as a glue — maybe milk, which I believe was used to glue together furniture joints at one point. Stuff that into the core and let it dry. Use a knife to sharpen it enough to write with.

Expand full comment
None of the Above's avatar

Even if you understand how a microwave oven, flash memory, vaccine, or jet engine work, that doesn't mean you could build one, and especially doesn't mean you could build one without all the precursor industries being around to buy tools and supplies from.

Expand full comment
None of the Above's avatar

Some technology is fairly easy to keep: The germ theory of disease, Hindu-Arabic numerals and basic arithmetic, phonetic writing, those are all things that a small community ought to be able to maintain/replicate via a one-room schoolhouse or some such thing.

Expand full comment
beowulf888's avatar

Charlie Stross has argued that the dystopian and utopian tropes of bad 1950s science fiction have infected the minds of our tech leadership. SF MacGuffins, like AI-induced extinction scenarios and technological singularities, are regarded as probable scenarios — and they are creating a feedback loop where tech leaders invest in projects to feed their utopian dreams or mitigate their dystopian fears — but these projects distort our culture and lead to dystopian outcomes. Basically, much of our belief in how the future will play out is just credulous nonsense, but our credulousness is distorting priorities away from real-world problems.

https://www.antipope.org/charlie/blog-static/2023/11/dont-create-the-torment-nexus.html

Expand full comment
IguanaBowtie's avatar

Oh, hey! I hate anthropic reasoning too!

Expand full comment
Daniel Cohrs's avatar

This piece speaks to something that always seemed intuitively obvious to me but also apparently isn't always applied, i.e. here by FDB: that one should use the most *specific* evidence available when assessing a hypothesis. It obviously doesn't make sense to assume homogeneity across time when we *already know* that's not the case! There is a term, the Requirement of Total Evidence (RTE), that says this exactly and is used in fine-tuning arguments.

Expand full comment
ΟΡΦΕΥΣ's avatar

loved the indirect diss of BB inserted here lol 😜

Expand full comment
ostbender's avatar

It's so tiresome when people assume uniform probabilities (like a coin toss) for processes that are obviously, shockingly path-dependent.

Expand full comment
Richard Weinberg's avatar

At the rate I'm going, I doubt I'll make it to the end. As usual, Scott's essay is thoughtful and massive, and I have only very limited time to comment. I suppose it would be more useful to me (and to anyone who might wish to read my comments) to put it all together into a single document, but perhaps the thread engine groups comments from me together? Anyway, next (in sequence of Scott's rebuttal):

"What’s Freddie doing wrong, and how can we do better? The following argument is loosely based on one by Toby Ord."

These are interesting and important points, questioning whether temporal Copernicanism is legitimate. The crucial point is that the rate of technical progress/change is behaving in a semi-exponential manner, so a linear perspective is misguided. However, there are other ways in which our knowledge of history and prehistory point in the opposite direction (though also contrary to temporal Copernicanism).

Most obviously, as the total number of now-living individuals of a species decreases, its likelihood of extinction increases rapidly. Another less obvious factor is that humans are uniquely cultural. As the number of humans has increased, the number of distinct cultures has decreased (note the extraordinary abundance of completely distinct languages in Papua New Guinea). Accordingly, the diversity of worldviews used to be far greater. One glance at Hitler (or Jim Jones) helps us to recognize that among a vast number of radically different cultures, a few are likely to be extraordinarily malignant and dangerous. Of course, remote tribal peoples lack nuclear weapons, but despite recent strife, my understanding is that the weapon that has killed the most people since WWII is the machete.

Expand full comment
Martin Blank's avatar

Isn't the simple answer to the doomsday argument: you are just ignoring useful information, big dummy?

I understand the examples with the 10 versus 100 balls in a bag. Sure, great. But that isn't the world we live in. Humanity isn't a bag of balls (shocking insight, I know).

You can have the abstract claim "say I give you a number from a set, what conclusions can you make about the size of that set just based on the ordinal value of this number". And sure that will give you the doomsday scenario. Great. Wonderful for you and your abstract world where you only have a few bare pieces of information (basically just about math and the question itself).

Except we have SOOOOOOOOOOOOOOOO much more data/information than just this bare abstract thought experiment, about what is going on with the world. Billions of pieces of information. And it is important to include any additional data you have when making estimates!

Say I give you a sample of 10 bacteria, hand you a little petri dish where they will thrive, and ask you to estimate how many bacteria you will have in 100 hours:

(1) Well, if you are a totally ignorant moron with no additional information other than that they are living things and living things die... well then you would probably guess between 0 and 10, honestly.

(2) Add in the knowledge that living things reproduce, you might guess lots of things depending on your knowledge of microbe biology. Assuming you know very little, a Doomsday Scenario-like guess of say 10 or 20 or some low number does seem like a semi-reasonable guess.

(3) But if you are a generally smart person, or better yet an actual biologist, well then you have quite a bit of knowledge about petri dishes and bacteria and what the reproduction rates are, and suddenly you have a much more informed guess that is very likely much more specific: maybe millions, maybe zero. How warm is the room, etc.? All of that matters.

The doomsday scenario seems like people intentionally choosing case 2 because the problem is hard and they would rather make pithy abstract comments than be a man and make some actual guesses about what the giant mess of data you could bring to bear on this problem means.

It feels like the petulant 6th grader asked to solve what 87,436/91 is... and they answer "a number". Which technically speaking isn't false. It just isn't really informative or helpful at all and should be graded as a non-answer. Not "wow so profound".
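
(The three estimators above, as a minimal sketch; all the biology numbers are illustrative assumptions of mine, not the commenter's:)

```python
N0, HOURS = 10, 100

# (1) "Living things die", nothing else known: guess somewhere in [0, N0].

# (2) "Living things reproduce", rates unknown: some low multiple of N0.

# (3) Actual mechanism: exponential doubling (assume ~30 min per division,
# roughly E. coli in rich media) capped by the dish's carrying capacity.
DOUBLING_H = 0.5
CAPACITY = 1e10             # assumed carrying capacity of a small petri dish

unbounded = N0 * 2 ** (HOURS / DOUBLING_H)  # ~1.6e61 cells: physically absurd
estimate = min(unbounded, CAPACITY)         # the dish, not the math, sets the answer
print(f"{estimate:.0e}")                    # 1e+10
```

Each extra piece of mechanism moves the estimate by dozens of orders of magnitude, which is exactly the information the bare bag-of-balls framing throws away.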

Expand full comment
Richard Weinberg's avatar

Next:

"Robin Hanson cashes out “the singularity” as an economic phase change of equal magnitude to the Agricultural or Industrial Revolutions. If we stick to that definition, we can do a little better at predicting it: it’s a change of a size such that it’s happened twice before. "

OK, I don't know the people or the literature Scott cites. I'm an outsider to the entire worldview (but in some ways that could be a good thing). I think the claim is that in the past 300,000 years, the two most significant advances have been the Agricultural and the Industrial revolutions? Uhh, what about "Language"? Seems like a useful tool to me.

Expand full comment
Richard Weinberg's avatar

Regarding the rest of the text (I made it to the end!) I think Scott's arguments are stronger than Freddie's. Scott has put more thought into anthropic-related issues, and likely has a better understanding of quantitative reasoning. However, while I lack the time and energy (and probably the smarts) to understand the details of Scott's quantitative estimates, I consider myself a sophisticated assessor of scientific BS, and I think the precision of Scott's estimates is misleading, except insofar as it demonstrates that he's willing to put his opinions out there on the line.

Expand full comment
Long disc's avatar

"The closest that humanity has come to self-annihilation in the past 300,000 years was probably the Petrov nuclear incident in 1983"

Not sure why you focus on "self" here. It is quite likely that humanity went through a bottleneck event with as few as 2,000 reproducing individuals at least once. There are different theories on the timing of the bottleneck, but the three standard theories place it at 70k, 300k, and 900k years ago. It looks to me that "everyone except for one small town dying" is much more dangerous for the survival of the species than the Petrov incident, even adjusting for population sizes.

"The biggest climate shock of the past 300,000 years was . . . also during Freddie’s lifetime" I think we need to be careful with such literal interpretation of the climate emergency propaganda. It seems to me that e.g. all of Northern Europe being covered by thick ice (as it was 25k-19k years ago) is a much bigger climate shock than 2-3 degrees warming we are observing now. I do not know about you, but, personally, I definitely prefer attending slightly warmer beaches to being under 500m of ice!

Expand full comment
Long disc's avatar

As to the three types of events, the leap of logic here is to assume that there are no other types of events that threaten our species. That's incorrect: at the very least, there are extinction events whose intensity quickly declines with population size, geographical spread, and technological advancement. For example, it is much harder to get a bottleneck event down to below the minimum reproduction size from 7Bn individuals than from 100k. Also, we won the expansion game over the Neanderthals and possibly other close relatives, and somehow that made them all extinct. It may not have been much of a pre-ordained outcome; we could have lost. Currently, there is no species on Earth that subjects us to such a risk.

Expand full comment
Misha Ramendik's avatar

It seems there might be a misunderstanding between the sides on the definition of "apocalypse". An event similar to World War I, something uprooting much of human civilization but not necessarily leading to anywhere near extinction, is an "apocalypse" for some but not others. While this is not a serious source, I'd suggest https://tvtropes.org/pmwiki/pmwiki.php/Main/ApocalypseHow as at least a shot at classifying real and hypothetical "apocalypse" events.

Class 0 is factually happening on a smaller scale, and unfortunately can happen on a wider scale; I don't think that is contested. Class 1 sadly appears to have a significant probability - from much more than statistics. It seems to me that your arguments from statistics are more geared towards Class 1 and Class 2, while Freddie's arguments are geared towards Class 3 and higher.

Expand full comment
None of the Above's avatar

It's also not extinction-level, but the rise of HIV is probably a good example of something on the right scale. If HIV spread like measles, I'm thinking the 80s would have seen a terrifying mass die-off of people, maybe 90%+ of the population. And while there are reasons HIV doesn't spread through airborne droplets, there's no principled reason I know of why some similarly nasty disease, with a long latent contagious phase followed by a terrible collapse and death, couldn't arise that spread that way.

[added]

The Americas saw a mass die-off at the scale we might be talking about when they had the whole Eurasian disease package imported all at once.

Expand full comment
Liam's avatar

I find it annoying to be psychologized, so I'd turn it right back at Freddie and suggest that this thing he does frequently -- telling the imagined naif reader that this is it, there's nothing more, you have to reconcile yourself to disappointment -- stems from his experience as a Marxist watching Marxism fail to work. At all. The revolution indeed was not, is not, and will not be, coming.

He has rather a lot more certainty that the rest of the world is like this than I think the facts justify.

Expand full comment
George's avatar

I'm very confused about the Cuban missile crisis / Petrov incident position.

An all-out nuclear war would kill 10-20% of the global population (but annihilate most important cities and force the remaining survivors into a harder problem-solving environment).

These never happened; like, we don't actually *know* how a nuclear war would pan out, and there's some % chance it would be 2-3 nukes, or 20 nukes, or 100 nukes but each side only nukes the other's nuclear facilities and waits with submarines and planes to confirm that the other side is nuking cities before doing the same... or nukes industrial bases without directly nuking population centers. My point is that the 10-20% could in practice be much lower; that's a worst-case scenario.

And nuclear winters are kinda overrated (long argument as to why; see e.g. the number of nukes we detonated outside the oceans with no such thing happening).

The Mongols, on the other hand, are *known* to have razed over half of the important cities in the world and killed 10% of the population... and caused mass suffering (slavery, starvation, etc.) for a lot more people.

So I'd say that's by far the worst of it.

I'm unconvinced modern humans could be as destructive as a Mongol horde.

Expand full comment
George's avatar

To be clear, they could do it in a <spherical cow> sort of way, but even e.g. Soviets and Nazis and "greatest generation" Americans had to have some amount of ethical realization in order to reach their technological level, and in practice were not nearly as brutal. Dare I say "couldn't have been", since "let us kill every human that is not me and my brothers" is simply a position we can inoculate against.

Similarly, during Mongol times, you could have had an even more bloodthirsty empire (e.g. the Chinese decide they want skulls for the skull throne instead of tea and poetry)

But tech and ethical consideration must match even if not 1-1.

Our level of violence is somewhat matched by the understanding needed to exert it.

Similarly for all other catastrophes that are caused by man, which are all of Scott's examples.

Expand full comment