437 Comments

Yeah I like some of FDB's stuff but this was particularly off the mark. I immediately wanted to correct him in the comments but annoyingly his comment section is limited to paid subscribers.

To be fair to Freddie, he writes exactly the sort of thing that makes public comments go to shit - the community here is mostly rationalists, while the community there is heavy on right-wingers enjoying Freddie "goring the right ox" (as he puts it), with Marxists and people interested in the same subject matter a seeming minority; even the paid comments can be pretty bad.

Yeah I feel like at some level FDB's super inflammatory rhetoric (while fun to read) naturally draws in more toxic commenters. This community is mostly rationalists but Scott also refrains from needless name-calling and assumes best intentions.

Except this post. If it does meet that bar, it's only by a technicality. But usually he's quite good at that.

Freddie has been putting out bad takes on AI for a long time now. He's so confidently wrong that it's grating. I lost patience with him a year ago, I'm not surprised Scott eventually snapped and wrote an aggressive article. We're only human.

He hates his comments section, which I understand - it's full of obsessive unsophisticated antiwoke types who are not necessarily representative of his overall readership but (as always) dominate the conversation because they are obsessed.

At the same time... look, I'm a pretty strong believer that bloggers tend to get the commentariat they deserve. I don't mean that as a moral judgment - I really like Freddie and his writing, and I am a paid subscriber myself, but he is to a significant extent a "takedown" blogger, with a tone and approach that tends toward the hyperbolic and is sometimes more style than substance, insofar as he's not so much saying anything new as he is saying it very well. That is not a bad thing - some things deserve to be taken down, and Freddie does it better than anyone - but it is inevitably going to attract a type of reader that is less interested in ideas than in seeing their own preconceptions validated ("goring the right ox"). There's a reason the one-note antiwoke crowd hangs out at FdB rather than (say) ACX, even though this community is more sympathetic to a lot of the antiwoke talking points (eg trans-skepticism) than Freddie himself.

The paid comments are also "shit", as you put it. In general, I'm not convinced yet that restricting comments to paid improves the discourse beyond turning it more into an echo chamber; Nate Silver is an even better example where the articles are not even that inflammatory but the (paid) comment section is almost unreadable.

Half the time he closes the comments to paid subscribers as well when he doesn't get the reaction he wanted.

Based.

I've seen him outright ban people for criticizing his articles in ways that were far more civil and respectful than what was within the articles themselves. Weird to make your career on antagonistic writing while not permitting antagonism from others.

You’re better off. Freddie is constitutionally unable to handle reasoned disagreement. It always devolves into some version of an aggrieved and misunderstanding Freddie shouting, over and over, “you didn’t understand what I wrote you idiot.”

More annoying than Freddie's comment section being limited to paid subscribers (which I get) is that he turns comments off when he finds people questioning trans issues, or when he suspects a thread will turn contra trans. All this despite declaring himself a free speech zealot ... but he's also a Marxist free speech zealot. Censorship may well be the defining trait of a Marxist free speech zealot.

That's the game: restrict the comments section to paid users, occasionally make glaring errors that everyone feels a need to correct you on in the comments.

If I only had a nickel for every time someone was upset that I was wrong on the Internet...

I feel bad for DeBoer because he's bipolar, and had a manic episode a while back where he spewed a bunch of indefensible stuff. And he's hypersensitive. Now Scott's rebutted his argument, which is fair enough, and a bunch of ACX readers are criticizing him in a smirky kind of way, which sux. Jeez, let him be. He has his own kind of smarts.

Yeah. I don't like the pile-ons. I think Scott has the right of this argument, but abstractly, my main goal besides identifying the truth would be to convince Freddie of it. And that's best done by someone he knows and respects, like perhaps Scott. And probably best done without pile-ons or derogatory comments or anything that would get Freddie's hackles up, because that's just going to make it less likely that he change his mind.

I suppose "playing for an audience" and "increasing personal status" and "building political coalitions" are goals lots of people have, which might be served by mocking someone. I don't like it when I have those goals, and I don't like it when they shape my actions, but de-prioritizing those things is probably the cause of a number of misfortunes in my life. :-/ Mostly, it just seems cruel to sacrifice a random person due to a quirk of fate.

> de-prioritizing those things is probably the cause of a number of misfortunes in my life. :-/

Yeah, I understand about that. I seem to be wired to be unable to stick with playing for an audience, etc. I could make a case that's because I've got lotsa integrity etc etc., but I really don't think that's what it is. I am just not wired to be affiliative. I can be present in groups and participate in group activities and like many of the individuals, but I never have the feeling that these are my people, we're puppies from the same litter etc.

It's definitely cost me a lot. One cost, one I really don't mind that much, is money. If you're a psychologist who got Ivy League training and did a postdoc at a famous hospital that treats the rich and famous, you're in a great position to network your way into a private practice where you charge $300/hour, and see people to whom that sum is not a big deal. But I didn't do it. One reason I didn't is that it seems -- I don't know, kinda piggy -- to get good training then only give the benefits of it to Ivy Leaguers. But also, and this accounts for it more, I simply could not stand to do the required networking. Everybody knew me from work, but to really network I needed to like, accept Carol's suggestion to join her high end health club with flowers in the dressing room, and then have a little something with Carol in the club bar afterwards, and sometimes she'd bring along somebody she'd met on the stairmasters, and I realized I was supposed to bring along somebody I'd met now and then, and I just couldn't stand it. I hate talking to people while I'm working out. I like to close my eyes, put on headphones and try to get some self-hypnosis going where I have the feeling that the music is the power that's moving my legs, then sprint til I'm a sweaty red-faced mess.

Has it caused you misfortunes beyond the one big disaster?

He also tends to yell at people who disagree with him in the comments.

I used to subscribe to Freddie. Don't really recommend it.

Speaking of Anthropics and Doomsday, Sean Carroll addressed it, yet again, in his last AMA

https://www.preposterousuniverse.com/podcast/2024/09/02/ama-september-2024/

> The doomsday argument, for those who haven't heard of it, is an argument that doomsday for the human race is not that far off in the future, in some way of measuring, based on statistics and the fact that the past of the human race is not that far in the past. It would be unlikely to find ourselves in the first 10^-5, or even the first 10^-3, of the whole history of humanity. So therefore, probably the whole history of humanity is not stretched into the future very far. That's the doomsday argument. So you say the doomsday argument fails because you are not typical, but consider the chronological list of the n humans who will ever live. Almost all the humans have fractional position encoded by a string of about log2(n) bits.

> This implies their fractional position has a uniform probability density function on the interval zero to one, so the doomsday argument proceeds. Surely it is likely that you are one of those humans. No, I can't agree with any of this, really, to be honest. Sure, you can encode the fractional position with a string of a certain length, okay? Great. Sorry, log2(n) is the length of the string. Yes, that is true. There's absolutely no justification to go from that to a uniform probability density function. In fact, I am absolutely sure that I am not randomly selected from a uniform probability distribution on the set of all human beings who ever existed, because most of those human beings don't have the first name Sean. There you go, I am atypical in this ensemble. But where did this probability distribution purportedly come from? And why does it get set on human beings?

> Why not living creatures? Or why not people with an IQ above or below a certain threshold? Or why not people in technologically advanced societies? Or multi-celled organisms? You get wildly different answers if you use different... what are they called? Reference classes - the sets of people in which you are purportedly typical. So that's why it's pretty easy to see that this kind of argument can't be universally correct, because there's just no good way to decide the reference class. People try; Nick Bostrom, former Mindscape guest, has put a lot of work into this, wrote a book on it, and we talked about it in our conversation. But I find all the efforts to put that distribution on a sound footing completely unsatisfying. The one possible counterexample would be if we were somehow in equilibrium.

> If somehow there was some feature of humanity where every generation was more or less indistinguishable from the previous generation, then within that equilibrium era, if there was a finite number of people, you might have some justification for choosing that as your reference class. But we are clearly not in equilibrium, things are changing around us very, very rapidly. So no era in modern human history is the same as the next era, no generation is the same, there's no reason to treat them similarly in some typicality calculation.

I actually think you can do Carter Doomsday with anything you want - humans, living creatures, people with IQ above some threshold that your IQ is also over, even something silly like Californians. You'll get somewhat different answers, but only in the same sense that if you tried to estimate the average temperature of the US by taking one sample measurement in a randomly selected place, you would get different answers depending on which place you took your sample measurement (also, you're asking different questions - when will humanity be destroyed vs. when will California be destroyed).

I think the stronger objection to Carter Doomsday is https://en.wikipedia.org/wiki/Self-indication_assumption_doomsday_argument_rebuttal ; I'm not sure it's true, but it at least makes me confused enough that it turns me away from thinking about the subject at all, which is a sort of victory.
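
Here's a toy Monte Carlo sketch of that point (my own illustration, with made-up class sizes): if your rank really were a uniform draw from some reference class, the Carter-style "double your rank" estimate brackets the true class size within a factor of 2 about 75% of the time, whatever class you pick - the choice of class changes the question being asked, not the reliability of the method.

```python
import random

# Toy check: with "you" drawn uniformly from a class of unknown size N,
# the estimate N_hat = 2 * rank lands within a factor of 2 of the true N
# about 75% of the time, regardless of what N actually is.
random.seed(0)

def coverage(true_n, trials=100_000):
    hits = 0
    for _ in range(trials):
        rank = random.randint(1, true_n)  # your position in the class
        n_hat = 2 * rank                  # doubling estimate
        hits += (true_n / 2 <= n_hat <= true_n * 2)
    return hits / trials

for n in (10**4, 10**8, 10**12):          # Californians, humans, mammals...
    print(n, coverage(n))                 # ~0.75 for every class size
```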

I think Sean's point is that this argument is useless because your reference class is unstable or something. Hence the last paragraph, where he steelmans it.

I do not understand the point of SSA vs SIA; neither seems to have any predictive power.

Man does not live by prediction alone. SSA and SIA are trying to do something more like abduction.

Well, yes, you can do the Carter Doomsday argument with any reference class you want because it's wrong, and bad math can "prove" anything. The fact that it's so flexible, and that it's making predictions of the future that swing by orders of magnitude just based on what today's philosophy counts as consciousness, should warn you off of it!

The SIA at least has the property that you get the correct answer out of it (ie, you can't predict the future based on observing literally nothing). But the real problem is the idea that there's any "sampling" going on at all. Both the SSA and SIA have that bad assumption baked into them. There's one observer to a human, no more and no less. If you believe in solipsism - that you're the one true soul who is somehow picked (timelessly) out of a deterministic universe populated by p-zombies - THEN the Doomsday Argument applies. But I doubt that you do.

Note that the correct boring answer - that your existence doesn't inherently count as evidence for or against a future apocalypse - does NOT depend on fuzzy questions like what reference class you decide you count as. Good math doesn't depend on your frame of reference. :)

I think SIA shows exactly why Doomsday goes wrong, but that it's not hard to see that Doomsday does go wrong. Like, if Doomsday is right, then if the world would continue for a bunch of generations unless you get 10 royal flushes in a row in poker, you should be confident that you'll get 10 royal flushes in a row--not to mention that Adam and Eve stuff https://link.springer.com/article/10.1007/s11229-024-04686-w

> I'm not sure it's true, but it at least makes me confused enough that it turns me away from thinking about the subject at all, which is a sort of victory.

Oh, come on, Scott, you know better than that!

Here is a common sense way to deal with the Doomsday Argument, which, I believe, resolves most of the confusion around the matter, without accepting the bizarreness of either SSA or SIA.

https://www.lesswrong.com/posts/YgSKfAG2iY5Sxw7Xd/doomsday-argument-and-the-false-dilemma-of-anthropic

Ape, I was so glad to read your post. For years, I've felt like I'm taking crazy pills whenever I see supposedly (or even actually) smart people discuss the DA. Like you said, "it's not a difficult problem to begin with" - well, ok, maybe it does take more than "a few minutes" to really get a solid grasp on it. But surely the rationality community should have collectively been able to make the correct answer (that the Doomsday Argument packs in a hidden assumption of a sampling process that doesn't exist) into common knowledge...? Correct ideas are supposed to be able to beat out pseudoscience in the marketplace of discourse, right? Sigh.

You are most welcome. It's a rare pleasure to meet another person who reasons sanely about anthropics. And I extremely empathize with all the struggles you had to endure while doing so.

I have a whole series of posts about anthropic reasoning on LessWrong, which culminates in a resolution of the Sleeping Beauty paradox - feel free to check it out.

> But surely the rationality community should have collectively been able to make the correct answer (that the Doomsday Argument packs in a hidden assumption of a sampling process that doesn't exist) into common knowledge...? Correct ideas are supposed to be able to beat out pseudoscience in the marketplace of discourse, right? Sigh.

I know, right! I originally couldn't understand why otherwise reasonable people are so eager to accept nonsense as soon as they start talking about anthropics. I currently believe that the source of this confusion lies in a huge gap in understanding of the fundamentals of probability theory, namely the concept of a probability experiment, how to assign a sound sample space to a problem, and where the uniform prior even comes from. Sadly, Bayesians can be very susceptible to it, because all the talk about probability experiments has a "frequentist vibe".

Fewer words better. "The mutually exclusive events in the space are just two, whatever names you give them. They have equal probability. Ignore the flim-flam."

This seems so confused that it's hard to even rebut. The fact that you are a specific person doesn't counteract the fact that you are a person.

Here's a cute counterexample: I was born at 11:30 AM on November 7, 1984. Suppose that I wanted to use this to determine the length of various units of time, something which for some reason I didn't know.

Knowing only that I was born 11.5 hours into the day, I should predict that a day is about 23 hours long (hey, that's pretty good!)

Knowing only that I was born 7 days into a month, I should predict that a month is 14 days long (not as good, but order of magnitude right, and probably the 95% error bars include 30).

Knowing only that I was born on the 311th day of the year, I should predict that a year is 622 days long (again, not as good, but right order of magnitude - compare to eg accidentally flipping the day-of-the-month and day-of-the-year and predicting a year is 14 days long!)

Knowing only that I was born on the 4th year of the decade, I should predict that a decade is 8 years long (again, pretty good).

Notice that by doing this I have a pretty low chance of getting anything wrong by many orders of magnitude. For example, to accidentally end up believing that a year lasted 10,000 days, I would have to have been born (by unlucky coincidence) very early in the morning of January 1.

Here I think anthropics just works. And here I think it's really obvious that "but your parents, who birthed you, are specific people" has no bearing whatsoever on any of these calculations. You can just plug in the percent of the way through the interval that you were born. I think this works the same way when you use "the length of time humans exist" as the interval.

(I'm slightly eliding time and population, but if I knew how many babies were born on each day of the 1980s, I could do these same calculations and they would also be right).
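
A minimal sketch of the doubling estimator applied to those same observations (my illustration; only the numbers are from the comment above):

```python
# If a moment is drawn uniformly from an interval, doubling the elapsed
# part is the median-unbiased guess for the interval's full length.

def estimate_total(elapsed):
    return 2 * elapsed

observations = {
    "hours in a day":    (11.5, 24),
    "days in a month":   (7, 30),
    "days in a year":    (311, 365),
    "years in a decade": (4, 10),
}

for name, (elapsed, true_value) in observations.items():
    print(f"{name}: estimated {estimate_total(elapsed)}, actual {true_value}")
```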

But wait ... "the length of time humans exist" is NOT the measure that Doomsday Argument proponents use! Continuing your analysis, since you're born roughly 20,000 years into homo sapiens' history, you should expect homo sapiens to go for 20,000 years more. Which is much more than an order of magnitude off from what Doomsday Argument proponents predict (because population growth has been exponential, and they use "birth order index" instead of date). You picked a slightly different way to slice the data and got an irreconcilably different result.

This should tell you that something is going badly wrong here, but what? All you've really shown here is that you can sample dates, from a large enough set that, say, "day of year" will be nice and uniformly distributed. Then the Copernican Principle can be used. (It's a mistake to call this "anthropics", though, as what you're measuring is uncorrelated with the fact of your existence. If, say, babies were more likely to be stillborn later in the year, then you could predict ahead of time that your day of birth would probably be on an early date. That's an anthropic argument.)

But how will you sample "birth order index"? Everybody you know will have roughly the same one, regardless of whether there are billions or quadrillions of humans in our future. You're not an eldritch being existing out of time, and can't pick this property uniformly at random.

To be honest, I'm not sure I'm doing a good job of explaining what's going wrong here. I also feel like this discussion is "so confusing it's hard to rebut". The real way to understand why the DA is wrong is just to create a (simple) formal model with a few populated toy universes, and analyze it. Not much English to obfuscate with... just math.

Thanks for the reply! Let's clear up the confusion, whether it's mine or yours.

> Here's a cute counterexample: I was born at 11:30 AM on November 7, 1984. Suppose that I wanted to use this to determine the length of various units of time, something which for some reason I didn't know.

First of all, let's be very clear that this isn't our crux. The reason why I claim Doomsday Inference is wrong is because you couldn't have been born in distant past or distant future, according to the rules of the causal process that produced you. Therefore your birth rank is necessarily among the 6th ten-billion group of people, and therefore you do not get to make the update in favor of a short timeline.

Whether you could've been born in any other hour of the day or in any other day of the month or in any other month of the year is, strictly speaking, irrelevant to this core claim. We can imagine a universe where the birth of a child happens truly randomly throughout a year after their parents had sex. In such a universe the Doomsday Inference would still be false.

That said, our universe isn't like that. And your reasoning here doesn't systematically produce correct answers in a way that would allow us to distinguish between things that are and are not randomly sampled, which we can see via a couple of simple sanity checks.

> Knowing only that I was born 11.5 hours into the day, I should predict that a day is about 23 hours long (hey, that's pretty good!)

Sanity check number one. Would your reasoning produce a different result if your birth hour was definitely not random?

Imagine a world where everyone is born 11.5 hours into the day. In such a world you would also make the exact same inference: notice that your predicted value is very close to the correct value and therefore assume that you were born at a random time throughout the day, even though it would be completely wrong. Sanity check one failed.

> Knowing only that I was born 7 days into a month, I should predict that a month is 14 days long (not as good, but order of magnitude right, and probably the 95% error bars include 30).

Sanity check number two. Would some other, clearly wrong, approach fail to produce a similarly good estimate?

I've generated a random english word, using this site: https://randomwordgenerator.com/

The word happened to be "inflation". It has 9 letters. Suppose for some reason I believe that the length of this word is randomly sampled from one up to the number of days in a month. That would give me an estimate of 18 days in a month. Even better than yours!

If I believed the same for the number of days in a year I'd be one order of magnitude wrong. On one hand, that's not good, but on the other, just one order of magnitude! If I generated a random number from -infinity to infinity, with all likelihood it would be many more orders of magnitude worse! Sanity check two also failed.

Now, there is, of course, a pretty obvious reason why both your and my methods of estimating all these things seem to work well enough, which has little to do with random sampling or anthropics, but I think we do not need to go off on this tangent now.

>Imagine a world where everyone is born 11.5 hours into the day. In such a world you would also make the exact same inference

The randomness in this example is no longer the randomness of when in the day you are born, but the randomness of which hypothetical world you picked. You could have picked a hypothetical world where births are fixed to be anywhere from 0 to 24 hours into the day. So it's essentially the same deduction, randomly choosing fixed-birth-time worlds instead of randomly choosing birth times.

> You could have picked a hypothetical world where births are fixed to be anywhere from 0 to 24 hours into the day.

And in the majority of such worlds Scott would still make a similar inference, see that the estimate is good enough, and therefore wrongly conclude that the birth dates of people happen at random.

The point I'm making here is that his reasoning method clearly doesn't allow him to distinguish between worlds where things actually happen randomly and worlds where there is only one deterministic outcome.

> First of all, let's be very clear that this isn't our crux. The reason why I claim Doomsday Inference is wrong is because you couldn't have been born in distant past or distant future, according to the rules of the causal process that produced you.

But every human who has ever and will ever live also has a causal process that produces them. My possession of a causal process therefore does nothing to distinguish me, in a temporal-Copernican sense, from all other possible observers.

I have read your LessWrong post all the way through twice, and, well, maybe I'm a brainlet, but I don't understand your line of argumentation at all. I'm not a random sample because... I have parents and grandparents? How does that make me non-random?

Humans had parents and grandparents in 202,024BC, and (pardon my speculation) humans will have parents and grandparents in 202,044AD (unless DOOM=T), therefore the fact that I have parents and grandparents in 2024 doesn't make any commentary on the sensible-ness of regarding myself as a random sample.

This is not about your ability to distinguish things. This is about your knowledge of the nature of the probability experiment you are reasoning about.

Let's start from the beginning. When we are dealing with a random number generator, how come we can estimate the range in which it produces numbers just from one outcome?

We can do it due to the properties of the normal distribution. Most of the generated values will be closer to the mean value than not - see the famous bell curve picture for an illustration. So whatever value you received is likely to be close to the mean, and therefore you can estimate the range with some confidence. Of course, the more values you have the better your estimate will be, all other things being equal, but you can get a somewhat reasonable estimate even with a single value.

But consider that you are dealing not with a random number generator but with an iterator: instead of producing random numbers, this process produces numbers in a strict order, every next number larger than the previous by exactly 1. Can you estimate the range in which it produces numbers based on some outcome that you've received?

No, because now there is no bell curve. It's not even clear whether there is any maximum value at all, before we take into account the physical limitations of the implementation of the iterator. As soon as you know that you are dealing with an iterator and not a random number generator, applying this method, which is appropriate for reasoning about random number generators and not iterators, would be completely ungrounded.

Do you agree with me so far? Do you understand how this is relevant to Doomsday Inference?
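
Here's a toy sketch of the contrast (mine, and it uses a uniform generator rather than a normal one for simplicity): a single draw from a random number generator supports a range estimate, while a single reading from an iterator supports none.

```python
import random

# One draw from an RNG carries information about the range; one reading
# from an iterator reflects only how long the process has been running.
random.seed(1)
TRUE_MAX = 1000

# RNG case: doubling a single uniform draw recovers TRUE_MAX on average.
draws = [random.randint(1, TRUE_MAX) for _ in range(10_000)]
print(sum(2 * d for d in draws) / len(draws))  # ~1000

# Iterator case: every output is just the previous value plus 1, so there
# is no bell curve and nothing to estimate.
def iterator_value(steps_run):
    value = 0
    for _ in range(steps_run):
        value += 1
    return value

print(2 * iterator_value(37))  # 74, which tells you nothing about a "range"
```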

If I understand SIA correctly, it is connected to the idea that a sigmoid and an exponential distribution look the same until you reach the inflection point. By assuming that the distribution of observers looks like what we've seen historically, Carter privileges the sigmoid distribution of observers. Instead, if you consider the worlds in which population growth starts rising again for any number of reasons, you come to the conclusion that we can't normalize our distribution of observers. For example, if you simulated observers throughout history and asked them when doomsday would arise, none of them would have said the 21st century until at least the 1850s - that's a lot of observers that Carter doomsday misled terribly, often by millennia and orders of magnitude of population based on what we already know. Based on those observations, you could make a trollish reverse-Carter argument that we would expect Carter doomsday arguments to be off by millennia and orders of magnitude of population. And what the SIA paper seems to find is that the Carter and anti-Carter arguments exactly cancel out. That’s not as implausible as it sounds, given that we might expect to be able to draw no meaningful conclusions from total ignorance.
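
A quick numeric sketch of the sigmoid/exponential point (toy Python, my own): with the same growth rate, the two curves are nearly indistinguishable until the logistic approaches its inflection point, which is exactly why observers on the early curve can't tell which one they're on.

```python
import math

# Logistic vs. pure exponential with the same rate r: the logistic's early
# tail matches the exponential, then flattens at the inflection (t = 0).
K, r = 1.0, 1.0  # carrying capacity, growth rate

for t in range(-8, 3):
    logistic = K / (1 + math.exp(-r * t))
    exponential = K * math.exp(r * t)
    print(t, round(logistic, 4), round(exponential, 4))
```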

Their argument is so obviously bad in a completely uninteresting way that I wonder why Scott bothered to Contra it. Was it gaining traction or something?

Because Freddie is a prominent enough blogger. To give an extreme example, if the prime minister of Australia (whoever that is) said something obviously wrong, it would be worth rebutting.

Anthony Albanese

No, not the UK.

I think if I'm right about prior odds of 30% on a singularity in our lifetime, that's a novel and interesting result. And if I'm wrong I'll enjoy having hundreds of smart people read over it and cooperate on figuring out the real number.

Abusing footnote 8 somewhat, I don't know what the prior probability is that my lifetime would overlap the time interval that someone in the far future would assign to the singularity after the industrial revolution.

I'd argue that the posterior probability is 100% based on the scale of the Robin Hanson definition. You make a good case for it in the techno-economic section. In terms of economic growth and social impact, isn't the information age obviously a singularity? I'm connected to literally billions of people in a giant joint conversation. How preposterous would that statement be in a pre-information-age world?

When the histories are written by our future descendants or creations it might make sense for them to date the singularity from the start of the information age to the next stable state.

I also think that the information age (aka Toffler's technological revolution) has already happened. The rate of major revolutions has been accelerating.

Fire was invented so far in prehistory that we have no records of that time.

Writing (which enabled history and science) and the wheel followed the agricultural revolution by only a few millennia.

The industrial revolution was less than 3 millennia after that.

The information age took only 0.4 millennia.

Each of these revolutions significantly increased the rate of innovation, and the singularity will accelerate it even more.

At this rate of acceleration the singularity (which may have already happened and is smart enough to hide from us primitive apes) may indeed cause another revolution beyond the information age. Whenever that happens, I wonder if we will be smart enough to recognize that it already did.

So the singularity is people using apps to do the same things they were already doing?

The expert consensus seems to be we're using apps to do things in a dramatically different way than we used to. If you want to go down that rabbit hole, search for dopamine related to social media.

There are a lot of weird things going on right now. It seems uncontroversial to say that society is acting very differently than it has in the past (even the recent past), and that it's a fairly reasonable assumption that's because we're being manipulated by algorithms.

That those algorithms are just dumb automatons in the service of Moloch as instantiated by humans is taken for granted. But that they might already be in the service of one or more AGIs is not out of the question.

Social media seems to be a new flavor of social heroin. Humanity has developed many of these in recent years.

Garbage content being spewed out by AI would probably concern people more if it didn’t compare favorably to the garbage that was already being produced by human hands.

I get that you're using Robin Hanson's definition and that changes things, but I don't think we'd refer to a Singularity that happened without anybody noticing. Based on the common definition, we will all know the Singularity happened if/when it does, without the need to pontificate about it.

This might be reasonable if the Internet doesn't continue to fragment. We already don't have a single giant conversation.

I don't think "singularity" is sufficiently well defined that it makes sense to try to figure out a number.

Using your definition of being at least as important as agriculture or the industrial revolution... well, I accept that 30% isn't an unreasonable number but don't think that the Industrial Revolution is as significant as the sort of thing I've heard called a "singularity" in the past.

It's an estimate, and it's perhaps more reasonable than others, but it has three key assumptions:

1. The singularity would be in the same class of consequentiality as the Industrial or Agricultural Revolution, no more, no less. This assumption puts a constraint on the sort of singularity we're talking about, one which doesn't involve mass extinction, since that would obviously be much more consequential than the AR or IR.

2. There have been no other instances of similar consequentiality in all of human history. One might argue that the harnessing of fire was at least as economically consequential. Or the domestication of animals. Or the development of written languages.

3. The next event of consequentiality in this category will be the singularity. A few decades ago, most people would have bet on something related to nuclear energy or nuclear annihilation. An odd feature of the 30% calculation is that the more possibilities one can think of for the next great event, the lower the probability of it being the singularity. One can argue that the pace of AI technology is such that a singularity is very likely to be the one that happens soonest, but then you may as well dump the human history probability handwaving and just base the argument on technological progress.

I mostly agree with you, but playing devil's advocate:

1. Lots of races went extinct during the hunter-gatherer to agricultural transition. We don't care much because we're not part of those races. If humans went extinct during the singularity (were replaced by AI), maybe future AIs would think of this as an event only a little more consequential than ancestral hunter-gatherers going extinct and getting replaced by agriculturalists. The apocalypse that hits *you* always feels like a bigger deal than the apocalypse that hits someone else!

2. Invention of fire is outside of human history as defined here. Domestication of animals is part of agriculture.

3. I think this is right, though I have trouble thinking about this.

Is anyone seriously proposing a singularity which cashes out in a plurality of (sentient?) AI minds? All the AI Doom scenarios I've ever seen propose a singular, implacable superintelligence which uses its first-mover advantage to nip all competition in the bud. Some, of course, predict worlds with large numbers of independent AIs, but in those worlds humanity also survives, for the same reason all the other AIs survive.

Source for "lots of races went extinct"? I don't know how to interpret that. But it does suggest another criticism: the choice of class. What odds would we get for "large extinction event"? Probably shouldn't limit it to human extinction because we're biased about treating our own extinction as more consequential than others. Pliosaurs and trilobites may disagree. Limit it to humans and there have been near-extinctions and genetic choke points in our history. Which suggests that there may be enough flexibility in choice of category to get a wide range of answers. There's never been a singularity (yet).

The term used in ancient population genetics is "population replacement". But that's usually for relatively recent humans, and we also know that there used to be even more distantly related subspecies like Neandertals, varieties of Denisovans, branches of Homo Erectus in places like Africa we haven't named, Flores "hobbits" etc.

The entire basis of the argument is wrong. It's treating a causally determined result as the outcome of random chance. There is NO chance that someone will invent transistors before electricity (unless you alter the definition of transistor enough to include hydraulic systems or some similar alteration).

It has nothing to do with random chance, it's an argument about uncertainty. You can't invent transistors without electricity - but you have to know how a transistor works to know that. Someone who doesn't has to at least entertain the possibility that you can.

Am I the only one thinking that neither the Pavlov thing nor the CMC would have realistically caused anything worth calling the apocalypse?

I agree that the seas would not have turned to blood, and that a beast with seven heads and an odd obsession with the number 666 wouldn't have been involved.

I think Mr. Serner is saying that neither event going the other way, resulting in global thermonuclear war, would have resulted in the extinction of humans. Not even close.

You used the term "self-annihilation" with regard to humanity; "annihilation" means "to nothing", i.e. 100.00000000% of humans killed.

I'll go further and say that WWIII would probably have fallen short of even the Black Death in %humans killed (because nobody would be nuking Africa, India, SEA or South America), though it'd probably be #2 in recorded history (the worst ones I know of, i.e. the Columbian Exchange, the first plague pandemic, and the Fall of Rome, seem to be somewhat less; note of course that in *pre*history there were probably bigger bottlenecks in relative terms) and obviously #1 in absolute numbers or in %humans killed per *day*.

I don't think this is right. If we suppose Europe, America, and East Asia are all economically destroyed, I don't think Africa, South America and SEA would be able to support populations of billions of people while being disconnected from the existing world economy.

We'd be unlikely to see actual human extinction, but I think the populations of countries outside the First World would be very far from intact.

Africa is only one generation removed from subsistence farming, so I'm pretty sure they'd be fine. South and Central Asia probably likewise. South and Central America probably untouched. Pacific Islands and most of Australia untouched. Most of Russia untouched. China is mostly vertical, thus most damage wouldn't propagate.

That's probably 3 billion people untouched. A massive drop in GDP, oil would stop flowing for some years and there would be big adjustments, probably mass migration out of cities and a return to subsistence farming and somewhat of a great reset. Discovered knowledge would be mostly intact, there's libraries after all. In about 20 years, GDP growth would likely resume.

Most of Africa is a couple *very large* generations removed from subsistence farming. The population of Africa today is greater than the population of the world in 1800, and that relies heavily on technology and connection to global supply chains that they wouldn't have in this scenario. India likewise relies heavily on interconnection with a huge global economy and supply chain to sustain its population. Countries around the world (including India, and through much of the aforementioned Africa) are heavily plugged into China's economy. The human species would survive, but those 3 billion people would be very far from untouched. We don't actually have the capacity to support 3 billion people via subsistence farming without global supply networks moving resources around, which is why, throughout the history of subsistence farming, we never approached that sort of global population. Even subsistence farmers in the present day mostly buy from international agricultural corporations.

We could transition back to it if we had to, but it would be a major disruption to the systems which have allowed us to sustain global populations in excess of *one* billion people, and the survivors would disproportionately be concentrated where our knowledge and training bases right now are weakest.

You can get some idea of how economic collapse and huge supply chain disruptions play out by looking at Russia/Ukraine/Belarus/Kazakhstan (largest ex-USSR republics) between 1990 and 2000.

Their populations might decline, but humanity has survived periods of population decline.

> (because nobody would be nuking Africa, India, SEA or South America)

Armies in the field, logistics hubs, and strategically significant factories or mineral resources, might be targeted even if the territory they're in was considered neutral before the war began. Radioactive ash can cause life-threatening problems for people exposed, even weeks after the bomb goes off and hundreds, maybe thousands of miles downwind.

Firestorms don't care much about politics either. If enough forests, oil wells, and coal seams ignite, while emergency-response infrastructure is already tapped out, all that carbon dioxide release puts climate change back on track toward multiple degrees of warming. Ocean chemistry shifts, atmosphere becomes unbreathable, game over.

CO2 is great for plant growth (in commercial greenhouses, the guideline is to supplement to 3x the ambient level), so - with fewer humans around - it can easily balance out in terms of temperature over a few years/decades.

I'm not saying CO2 by itself would be directly killing us all. To elaborate on what I meant by "ocean chemistry shifts": https://spacenews.com/global-warming-led-to-atmospheric-hydrogen-sulfide-and-permian-extinction/

>Armies in the field, logistics hubs, and strategically significant factories or mineral resources, might be targeted even if the territory they're in was considered neutral before the war began.

Sure, but I'm not sure how a war between NATO/Free Asia and the Soviets/PRC would have significant operations in Africa or South America (there's *maybe* a case for India and SEA coming into it due to proximity to China, but if you're talking about followup invasions after a thorough nuclear bombardment, amphibious landings are quite feasible).

>If enough forests, oil wells, and coal seams ignite, while emergency-response infrastructure is already tapped out, all that carbon dioxide release puts climate change back on track toward multiple degrees of warming.

Note that firestorms actually cause cooling in the short-term (because of particulates - "nuclear winter"), and that in the medium term there'd be significant reforestation to counter this (and it's not like coal seam fires don't ever happen naturally, or like these things are prime nuclear targets).

Also, the Permian extinction was from over 10 degrees (!) of global warming. A few degrees clearly doesn't do the same thing (the Paleocene-Eocene Thermal Maximum, 5-8 degrees, did not cause a significant mass extinction).

Yes, based on current understanding several things would need to go wrong in semi-unrelated and individually unlikely ways for outright extinction of all humans to be the result, but there are a lot of unknowns - it's not like we can check the fossil record for results of previous nuclear wars - so it seems like it would be very bad news overall, compared to the alternative.

Wouldn't even do much to make an AI apocalypse less likely, since survivors trying to reestablish a tech base would presumably be more concerned with immediate functionality than theoretical long-term risks.

> (and it's not like coal seam fires don't ever happen naturally, or like these things are prime nuclear targets)

I understand there's brown coal still in the ground in Germany, within plausible ICBM targeting-error range of a lot of Europe's critical high-tech infrastructure.

>Wouldn't even do much to make an AI apocalypse less likely, since survivors trying to reestablish a tech base would presumably be more concerned with immediate functionality than theoretical long-term risks.

There are two notable question marks there.

1) It would give us a reroll of "everyone is addicted to products provided by the big AI companies, and the public square is routed through them, and they have massive lobbying influence". It's not obvious to me that we would allow that to happen again, with foreknowledge.

2) There's the soft-error issue, where the global fallout could potentially make it much harder to build and operate highly-miniaturised computers. I'm unsure of the numbers, here, so I'm not sure that this would actually happen, but it's a big deal if it does.

India would have either stayed neutral or sided with the Soviets in WWIII. They were officially neutral, but it was a "leaning pro-Soviet" neutrality. On the other hand, China would quite possibly *not* have sided with the Soviets, and might even have sided with America, at any rate after the Sino-Soviet split in 1969.

IIUC, there's a lot of overkill in the ICBM department, and "On the Beach" only got things over too quickly. But there would definitely be surviving microbes, probably nematodes, and likely even a few chordates. Mammals...perhaps. Primates, not likely.

OTOH, in the light of the wildlife around Chernobyl, perhaps I should revise my beliefs. Perhaps mammals with short generation times would be likely to survive. (But that doesn't include humans.) And perhaps oceanic mammals would have a decent chance, as water tends to absorb radiation over very short distances.

However, I haven't revised my beliefs yet, I'm just willing to consider whether I should. But that was a LOT of overkill in the ICBM department.

There are people living in cities in Iran that are every bit as 'hot' as Chernobyl. There are beaches in Rio every bit as hot as Chernobyl. People there live long, healthy lives. The risk of radiation is vastly overstated. Chernobyl only directly killed twenty people, and probably killed another twenty or so. 40 is a pretty small number.

It's my understanding that the dogs that live in the hottest part of Chernobyl have a greatly increased mutational load. They've got short generations, though, so mutations that are too deleterious tend to disappear. But if they were to accumulate, the dogs probably wouldn't survive. (How hot Chernobyl is depends on where you measure.) (Well, not really the hottest part, that's inside the reactor and a lot hotter than where the dogs are living.)

OTOH, I expect a full out WWIII with ICBMs would provide thousands of times overkill. (That's what's been publicly claimed, and I haven't heard it refuted.) Just exactly what that means isn't quite clear, but Chernobyl was not an intentional weapon, so I expect the south pole would end up hotter than Chernobyl was a decade ago.

>IIUC, there's a lot of overkill in the ICBM department

To be precise, there *was*. There isn't so much anymore. But I was discussing the Cold War, so this is valid.

>But there would definitely be surviving microbes, probably nematodes, and likely even a few chordates. Mammals...perhaps. Primates, not likely.

>OTOH, in the light of the wildlife around Chernobyl, perhaps I should revise my beliefs. Perhaps mammals with short generation times would be likely to survive. (But that doesn't include humans.)

Okay, let's crunch some numbers. I'm going to count 90Sr and 137Cs, as this is the vast majority of "medium-lived" fallout beyond a year (I'd have preferred to include some short-lived fission products as well, to get a better read on 100-day numbers, but my favourite source NuDat is playing up so I don't have easy access to fission yields of those).

Let's take the 1980 arsenal numbers, which is 60,000 nukes. A bunch of those are going to be shot down, fail to detonate, or be destroyed on the ground, so let's say 20,000 detonations (this rubble is bouncing!), and let's say they're each half a megaton (this is probably too high because a bunch were tacnukes or ICBM cluster nukes, but whatever) and half fission, for 5,000 megatons fission.

5,000 megatons = 2.1*10^19 J = 1.3*10^38 eV.

A fission of 235U releases ~200 MeV (239Pu is more), so that's 6.5*10^29 individual fissions (255 tonnes of uranium).

Fission produces 90Sr about 4.5% of the time, and 137Cs about 6.3% of the time (this is 65% 235U + 35% 239Pu, because that's the numbers to which I have easy access on WP, but the numbers aren't that different). So that's 2.9*10^28 atoms of 90Sr (4.4 tonnes) and 4.1*10^28 atoms of 137Cs (9.4 tonnes).

Let's assume this is spread evenly throughout the biosphere (this is a bad estimate in two directions, because humans do concentrate strontium - although not caesium - but also a bunch of this is going to wind up in the ocean so there's a lot more mass it's spread through; I'm hoping this mostly cancels out). The biosphere is ~2 trillion tonnes, and a human is ~80kg, so that's 1.2*10^15 atoms (180 ng) of 90Sr and 1.7*10^15 atoms of 137Cs (380 ng) per human.

90Sr has a half-life of 28.9 years and 137Cs 30.2 years, so that's 890,000 decays of 90Sr per second per person and 1.2 million decays of 137Cs per second per person. 90Sr has decay energy 2.8 MeV (of which ~1/3 we care about, because the rest goes into antineutrinos that don't interact with matter) and 137Cs has decay energy 1.2 MeV (of which something like ~2/3 IIRC we care about, because of the same issue), so that's 134 nJ/s from 90Sr plus 154 nJ/s from 137Cs = 288 nJ/s total.

1 Sv = 1 J/kg, so that's 3.6 nSv/s, or 114 mSv/year. The lowest chronic radiation dose clearly associated with increased cancer risk is 100 mSv/year, so you're looking at a likely cancer uptick (remember, I've preferred to err on the side of overestimates here), but this is nowhere near enough to give radiation sickness (Albert Stevens got 3 Sv/year after the Manhattan Project secretly injected him with plutonium to see what would happen, and he died of old age 20 years later, although at that level he was lucky to not get cancer).

Now, this is the medium-lived fallout. The first year is going to be worse, significantly, but in the first year it's also not going to be everywhere; the "global fallout" takes months to years to come back down from the stratosphere (i.e. most of the super-hot stuff decays *before* coming back to earth), and the "local fallout" is, well, *local* i.e. not evenly distributed. With Cold War arsenals at this level of use, you *would* be seeing enough fallout to depopulate entire areas, because it AIUI wouldn't reach tolerable levels for longer than the month or so people can last without food. But a full-blown "On the Beach" scenario? No; while that's not *physically impossible* it was never a real possibility during the Cold War (nuclear autumn *was*, but note that I say "autumn" rather than "winter"; the "lol we all die" numbers were literally a hoax by anti-nuclear activists).
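
For anyone who wants to audit the arithmetic, here is the same back-of-envelope chain as a script (every input is an assumption stated above; nothing new added):

```python
import math

MT_TNT_J = 4.184e15   # joules per megaton of TNT
EV_TO_J  = 1.602e-19  # joules per electronvolt
YEAR_S   = 3.156e7    # seconds per year

fission_mt  = 5_000                            # 20,000 x 0.5 Mt x 50% fission
energy_ev   = fission_mt * MT_TNT_J / EV_TO_J  # ~1.3e38 eV
fissions    = energy_ev / 200e6                # ~200 MeV per fission

sr90_atoms  = fissions * 0.045                 # ~2.9e28
cs137_atoms = fissions * 0.063                 # ~4.1e28

share = 8e4 / 2e18                             # 80 kg human / ~2 trillion tonne biosphere
sr90_pp  = sr90_atoms * share                  # ~1.2e15 atoms per person
cs137_pp = cs137_atoms * share                 # ~1.7e15 atoms per person

def decays_per_second(atoms, half_life_years):
    return atoms * math.log(2) / (half_life_years * YEAR_S)

sr_rate = decays_per_second(sr90_pp, 28.9)     # ~890,000 per second
cs_rate = decays_per_second(cs137_pp, 30.2)    # ~1.2 million per second

# Absorbed fractions: ~1/3 of Sr-90's 2.8 MeV, ~2/3 of Cs-137's 1.2 MeV.
power_w = (sr_rate * 2.8e6 / 3 + cs_rate * 1.2e6 * 2 / 3) * EV_TO_J
print(f"{power_w / 80 * YEAR_S * 1000:.0f} mSv/year")  # ~113, i.e. the ~114 above
```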

Well, you are clearly better informed than I am. But it wasn't just the anti-nuclear activists. Compton's Encyclopedia from the early 1950's contained a claim that 7 cobalt bombs would suffice to depopulate the earth. This probably shaped the way I read the rest of the data I encountered.

Nuclear winter was a hoax by anti-nuclear activists that successfully convinced much of society (so yes, other people repeated the hoax, because they were fooled). Fallout is a completely-different thing, and I'm not aware of a *deliberate* hoax in that area.

Note that "X nuclear bombs" is nearly always a sign of total cluelessness about how variable the power of nuclear bombs really is. You can have really-small nukes (the Davy Crockett tactical nuke was equal in energy to about 20 tonnes of TNT), but also really-big ones (the Tsar Bomba was equal in energy to about 50,000,000 tonnes of TNT, it was originally designed to be 100,000,000 tonnes, and I suspect the true upper limit is around 1,000,000,000 tonnes). The amounts of fallout from a 20-tonne bomb and a 1,000,000,000-tonne bomb are very different!

You do not understand correctly. In order to render the survival of primates "unlikely", you would need about two orders of magnitude more (or more powerful) nuclear weapons than have ever been built, or three orders of magnitude more than presently exist.

"Enough nuclear weapons to destroy the world X times over", in all of its variations, is a stupid, stupid meme invented and spread by gullible innumerates who for various reasons wanted it to be true. It isn't.

I think part of the issue is definitions of "overkill"; Cold War arsenals really were big enough that there were issues with a lack of viable targets other than the other side's ICBM siloes, but that doesn't remotely mean they were big enough to kill literally everyone (due to the issue where nukes are basically useless against rural populations). There is legitimate room for confusion there for people who haven't thought about this a lot.

These are analogies, not physical things. Remember, they were the dreams of a guy who lived 2000 years ago.

Did he see the seas turn to blood, or the seas polluted?

A beast with seven heads isn't a physical monster, it's an international organization.

666 — in earlier languages letters could be substituted for numbers. In modern Hebrew this is the case. Some names are lucky because they are somehow made from lucky numbers. I only know the concept; find a Jewish person to explain it.

And where exactly do you expect Scott to find a Jewish person capable of interpreting theology against the modern world? I doubt he knows anyone like that

No, which is why I linked to https://www.navalgazing.net/Nuclear-Weapon-Destructiveness in another comment.

>The biggest climate shock of the past 300,000 years is . . . also during Freddie’s lifetime

Can you clarify what shock you're talking about? If you mean the blade of "hockey stick"-type charts, that's the result of plotting some low-variance proxy reconstructions and then suddenly glomming onto the end of the proxy chart a modern temperature-measurement based chart. If you bring the proxies up to date and plot THAT to the current day there's no sudden jump at the end. If there had been big jumps or drops just like the current one in the past we wouldn't know it because they wouldn't show up in the proxy data any more than the current jump does - the way we're generating the older data in our chart has inherent swing-dampening properties.
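
Here's a toy sketch of that swing-dampening claim (my own illustration with invented numbers, not a claim about any real reconstruction): run a sharp past excursion through a slow proxy, modeled here as a 200-year moving average, and most of it disappears.

```python
import random

# A 1-degree, 50-year spike in the "true" record, viewed through a proxy
# modeled as a 200-year centered moving average.
random.seed(2)
years = 3000
true_temp = [random.gauss(0.0, 0.1) for _ in range(years)]
for y in range(1400, 1450):
    true_temp[y] += 1.0  # the hypothetical past spike

def proxy_view(series, window=200):
    half = window // 2
    out = []
    for i in range(len(series)):
        chunk = series[max(0, i - half):i + half]
        out.append(sum(chunk) / len(chunk))
    return out

smoothed = proxy_view(true_temp)
print(max(true_temp[1400:1450]))  # plainly visible in the raw series (~1+)
print(max(smoothed[1300:1550]))   # ~0.25: mostly dampened away
```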

But do you really believe that the current hockey stick is just random variance? Saying that AGW is overhyped and not actually a huge deal is one thing, but denying it altogether is intellectually indefensible these days I'd say.

The hockey stick was debunked as a splicing of cherry-picked proxies from a field of proxies with counter-evidence.

But to notice those problems is a career-limiting move, hence people don't do it.

Is that actually true? I mean, I was working through a statistics textbook that had an exercise based on the date of Japanese cherry blossoms blooming since like 1000 AD.

How many proxies have you directly checked to make sure they don't show sharply different behavior in the last hundred years?

And temperature reconstructions are only one line of evidence for AGW. The basic finding that human activities are changing the climate is also supported by validated computer modelling and our understanding of the physical processes in play.

This can all be explored in detail in the relevant IPCC report https://www.ipcc.ch/report/ar6/wg1/

But does one trust the IPCC when they say, “how much”?

Not really. The IPCC is known to discard studies that it considers too extreme. There are legitimate arguments as to why it shouldn't really be trusted, but you can make analogous arguments about every single prediction. It's only the ensemble that should be trusted (with reasonably large error bars).

So far, including some of the more extreme predictions in the ensemble would have improved the accuracy of the IPCC forecasts.

Expand full comment

For policy relevant predictions such as seawater rise or desertification, the extreme predictions have not panned out in the slightest.

Expand full comment

Have you never read the Climategate files?

Currently the global temperature data set contains up to 50% 'estimated data.' There have been several (four, I think) 'adjustments' of the past data, and every adjustment cools the past global temperatures.

Expand full comment

This is such a weird way of arguing. Again, which proxies have you actually looked at? Which temperature series? Have you made sure you are actually reading the series correctly?

Given the temperature history of just the last fifty years, which I very much doubt is constantly being adjusted downward, the default would be to think the world is warming. Given the causal logic of the greenhouse effect, the default is to think temperature is likely rising because of rising CO2.

Complaining about estimated data, without attacking the established core facts of the rising temperature trend in detail, reminds me of the people who, with Covid vaccines, recount plenty of anecdotes about someone in perfect health dying after being vaxxed, but never make a serious effort to explain why Covid shows up very clearly in aggregate mortality statistics while deaths from vaccination do not.

Expand full comment

Comments from Xpym and Jason here seem like they're misunderstanding Glen's point, so I'll make it more explicit.

Glen isn't saying "maybe anthropogenic global warming isn't real".

He's saying "maybe anthropogenic global warming isn't as unprecedented as it looks from the usual hockey-stick graph, not because the head of the stick might not be the shape everyone says it is, but because the _shaft_ of the stick might have big deviations in it that aren't visible to the ways we estimate it."

(I make no comment on 1. how plausible that is or 2. whether making the argument suggests that Glen is _secretly_ some variety of global-warming denier. But he's _not saying_ that we aren't rapidly warming the planet right now. He even talks about "the current jump".)

(Having raised those questions I suppose I _should_ make some comment on them. 1: doesn't seem very plausible to me but I'm not a climate scientist. 2: probably not, but I would be only moderately surprised to find that Glen is opposed to regulations aimed at mitigating anthropogenic climate change. None of that is actually relevant to this discussion, though.)

Expand full comment

On the other hand, this doesn't really change the argument that increasing numbers of people and technological acceleration make a uniform distribution of events across time a rather silly model for many types of events. And although AGW may not have been a perfect example, it should still be enough to bring the idea into focus (except for people whose brain explodes at the mention of AGW).

Expand full comment

No, I think I understood his point. He essentially implies that there's no good reason to believe that AGW will be substantial enough to eventually result in a visible "hockey stick" in the long-term proxy graphs, which is IMO unreasonable.

Expand full comment

I don't think that's what he meant at all. If we're using the judgment that FDB has been alive for the biggest climate shock in the last 300,000 years, it would be helpful to know if there have been other climate shocks in that time period and to find out whether they were of similar, or greater, size. Glen is saying that the measurements we use would not show such a shock during that timeframe, because the proxies move slower than the actual temperatures (that we are measuring in real time right now). We wouldn't see a jump if it also went back to the baseline later.

Expand full comment

I think he’d have to clarify what time period he’s referring to, because in terms of assessing whether or not we’re in the midst of a potential “shock,” it seems to me that only the last 1,000 years is relevant to human civilization.

Also, if past temperatures were more variable than the current consensus holds, there’s an argument that this would mean climate sensitivity is on the high end, and that temperatures will end up even higher for any given magnitude of human-induced forcings (carbon dioxide, methane, land use change, etc.).

Expand full comment

FWIW, rice paddies began emitting excess methane when rice cultivation started, which puts the beginning of human influence at about 9,000 years ago. OTOH, that was a very small contribution at first.

Expand full comment

20,000 years ago, sea levels were 120m lower than today. Seattle and most of North America were under a mile of ice.

Do the quick cocktail-napkin math on sea level rise: 20k years and 120k mm of rise. You quickly find that the simple average is 6mm/yr. But any look at NOAA data for San Francisco or Battery Park in New York shows 2-3mm/yr for the past 150 years.

If you think 6mm/yr is less of a shock than 3mm/yr, you're seeing something much different than I am.
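For concreteness, that napkin math as a one-liner (figures as stated above):

```python
# Cocktail-napkin average using the figures above.
rise_mm = 120_000       # ~120 m of sea level rise since ~20k years ago
years = 20_000
print(rise_mm / years)  # 6.0 mm/yr simple average,
                        # vs. ~2-3 mm/yr in long tide-gauge records
```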

Expand full comment

What claim are you arguing with?

Specifically, I think worrying about sea level rise mostly involves people worrying that we'll hit a tipping point that causes the big ice caps in Greenland and Antarctica to get much smaller. I think what you just wrote actually confirms that rapid sea level rises are a thing that can happen and are worth worrying about.

Assuming I'm right that you think that worry is deeply wrongheaded, what exactly in what you wrote disagrees with the worry I just described?

Expand full comment

I think the claim being argued with is the claim in the article that the past few decades have seen the "biggest climate shock in the last 300,000 years".

I think this is a very reasonable quibble given the various ice ages and interglacial periods over the past 300,000 years, with swings of ten degrees or more over a fairly short (it's hard to say exactly how short) period.

https://en.wikipedia.org/wiki/Interglacial#/media/File:Ice_Age_Temperature.png

Even if the last fifty years does turn out to have been steeper than the nearly-vertical lines on this plot, it's clearly been much smaller in magnitude, so it's hard to call it the "biggest shock".

Expand full comment

"Even if the last fifty years does turn out to have been steeper"

It is steeper. There's no "if" about it. Taking a low-end estimate of the temperature change since 1974 gives about 0.75 degrees total, or 0.015 degrees per year. The steepest of those "nearly-vertical" lines on the plot still clearly spans multiple thousands of years. If we estimate that the steepest such event involved 10 degrees of warming over 2000 years (which is being VERY generous: my principled estimate based on the graph would be at least 3x that), that's 0.005 degrees per year: only 1/3 of the speed! To credibly claim that it isn't steeper, you'd have to argue that either one of those 10 degree events occurred over a mere ~600 years (meaning the graph is just plain wrong) or that modern, industrial humans are SO BAD at temperature measurement that we got the data for the last 50 years wrong by at least a factor of 3. And again, those are using numbers that are generous to your claim, perhaps unreasonably so.
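Spelling out that comparison with the numbers as stated (generous assumptions and all):

```python
# Warming-rate comparison, figures from the comment above.
modern_rate = 0.75 / 50     # ~0.75 C over the last 50 years, in C/yr
glacial_rate = 10.0 / 2000  # generous: 10 C over 2,000 years, in C/yr
print(modern_rate)                 # 0.015
print(glacial_rate)                # 0.005
print(modern_rate / glacial_rate)  # 3.0: modern warming ~3x faster
```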

"Even if the last fifty years does turn out to have been steeper than the nearly-vertical lines on this plot, it's clearly been much smaller in magnitude, so it's hard to call it the "biggest shock."

This would be an excellent point if the Earth were not still warming. I suggest checking back in from the distant future and re-evaluating the validity of this argument then.

Expand full comment

Climate tipping points are a completely political creation and don't exist in the realm of facts. You can invent a tipping point for anything if you are running with zero facts.

Expand full comment

That's funny, because I remember learning about at least one potential climate tipping point mechanism in an astrophysics class a decade and a half ago. Being an astrophysics class it included no discussion of politics or politics-adjacent topics, and was almost wholly concerned with the physics behind the mechanism. Given that the mechanism in question was a. something that would cause global cooling rather than warming and b. last relevant millions of years ago (if ever), it seems like a pretty strange target for a political fabrication. Who are these shady political operatives sneaking their indecipherable agendas into our physics textbooks? Tell me, I would really like to know.

Expand full comment

That sea level rise did not happen gradually over 20k years, though; the vast majority of it took place between 12k and 10k years ago, at the Pleistocene-Holocene transition.

Expand full comment

Exactly, that was my point. The majority of the change, most of the 120m, happened in only about 2,000 years. If you averaged all the change out over 20k years it would be 6mm/yr... OP is saying the current change of 3mm/yr is unprecedented, when the average across the past 20k years is 6mm/yr and it was several cm per year for some periods.

But 2-3mm/yr is "unprecedented."
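Taking those figures at face value, the implied rate during the pulse:

```python
# If most of the ~120 m came in roughly 2,000 years (as stated above),
# the implied rate during the pulse is
print(120_000 / 2_000)  # 60.0 mm/yr, i.e. several cm per year
```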

Expand full comment

Ah, then I completely misunderstood your post, I apologize.

Expand full comment

"gjm" is correct as to what I was saying.

The proxy evidence I'm most familiar with is tree-ring series, e.g. Briffa's bristlecone pines. The way those proxies work is that we identify still-living trees near the treeline which we think are temperature-limited - they grow better when it gets warmer - so you can tell from the size of the rings which years were warmer than others. One way to calibrate such a thing is to compare *recent* temperature trends (for which we have good data) to the tree-ring trends at the time you first core the tree, see similar movement, and assume the relationship was also similar in the distant past and will continue to be so in the future. The first problem I mentioned is that if you go back and sample that tree again 20 or 30 years later, you DON'T see that the relationship stayed similar. Pretty much all the tree series used in Mann's big Nature paper for which we have later data *didn't* maintain the relationship with temperature shown in their calibration period. Climate scientists call this "the divergence problem" - Wikipedia's page on it is surprisingly not-terrible and has a good chart:

https://en.wikipedia.org/wiki/Divergence_problem

So the tree record doesn't IN PRACTICE - when you look at it in full - appear to suggest that current levels of warmth are unusual, much less rising at an "unprecedented" rate.

One possible reason for the divergence is an issue with the underlying theory: the way we use "tree-mometer" series usually implies a *linear* temperature/growth relationship - the warmer it is, the more the tree grows - but a better model of tree growth with respect to temperature would be an upside-down "U" shape. Which is to say: for a tree in a particular circumstance there is some OPTIMAL growing temperature that maximizes growth, and it grows less than that if the local temperature is lower OR HIGHER than the optimum. In that world, big positive temperature swings in the past, just like today's, might show up as brief negative movements - to us it could look like it briefly got *colder* back then.
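As a toy sketch of that inverted-U point (hypothetical response curve and temperatures, not a real calibration): with growth peaking at an optimum, a naive linear reading of ring widths turns the warmest year into the apparent coldest one.

```python
import numpy as np

# Hypothetical inverted-U growth response: ring width peaks at an
# optimal temperature and falls off on either side of it.
def ring_width(temp, t_opt=12.0, width=4.0):
    return np.exp(-((temp - t_opt) / width) ** 2)

temps = np.array([10.0, 11.0, 12.0, 16.0, 11.0])  # 16.0 is a warm spike
rings = ring_width(temps)
print(rings.round(2))  # [0.78 0.94 1.   0.37 0.94]

# A naive linear calibration ("wider ring => warmer year") would read
# the spike year, which has the narrowest ring, as the coldest year.
```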

Anyway, that's one of a few reasons to think the shaft of the "hockey stick" in many of the usual sorts of reconstructions is artificially smoothed/dampened. I'm not saying it hasn't warmed recently, I am just saying we can't be sure how unusual it is to warm as much as it recently has.

(one of my influences on this is Craig Loehle, a tree scientist. Read his papers (and the responses to them, and his responses...) if you want to dig in further)

Expand full comment

I'd say the biggest climate shock of the past 300,000 years is the most recent ice age, which killed off mastodons, woolly mammoths, sabre-toothed cats, dire wolves, short-faced bears, giant sloths, and countless smaller animals.

Expand full comment

That's not remotely an apples-to-apples comparison. The full suite of ecological changes caused by AGW won't be known for centuries. Pointing at the large ecological changes caused by the ice age and insisting that it proves that was a bigger shock than AGW isn't something that can be done in a principled fashion (yet). You can either base your judgment on some criterion that we DO know, like the speed of the temperature change (which I assume is what Scott is doing) or you can reserve judgment. If you're reserving judgment you'll probably need to do so for several centuries at a minimum.

Expand full comment

Well, we pretty much know that sea level rose about 6mm/yr on average from 20k years ago until recently, and 2-3mm/yr from 1855 to today. So yes, we can definitely say that previous rates of warming must have been much greater than today's.

6mm/yr is the simple mean. We know sea level was 120m (120k mm) lower 20k years ago, when most of North America was under a mile or more of ice; divide 120k mm by 20k years and you get 6mm/yr as the simple average. We also know that's not really how it happened, as there were stable periods and periods where sea level rise was much, much faster. So that level of rapid rise required much faster warming - a more drastic climate change than the relative stability we see today.

Expand full comment