437 Comments

Yeah I like some of FDB's stuff but this was particularly off the mark. I immediately wanted to correct him in the comments but annoyingly his comment section is limited to paid subscribers.


To be fair to Freddie, he writes exactly the sort of thing that makes public comments go to shit - the community here is mostly rationalists, while the community there is largely right-wingers enjoying Freddie "goring the right ox" (as he puts it), with Marxists and people interested in the same subject matter a seeming minority; even the paid comments can be pretty bad.


Yeah I feel like at some level FDB's super inflammatory rhetoric (while fun to read) naturally draws in more toxic commenters. This community is mostly rationalists but Scott also refrains from needless name-calling and assumes best intentions.


Except this post. If it does meet that bar, it's only by a technicality. But usually he's quite good at that.


Freddie has been putting out bad takes on AI for a long time now. He's so confidently wrong that it's grating. I lost patience with him a year ago, I'm not surprised Scott eventually snapped and wrote an aggressive article. We're only human.


He hates his comments section, which I understand - it's full of obsessive unsophisticated antiwoke types who are not necessarily representative of his overall readership but (as always) dominate the conversation because they are obsessed.

At the same time... look, I'm a pretty strong believer that bloggers tend to get the commentariat they deserve. I don't mean that as a moral judgment - I really like Freddie and his writing, and I am a paid subscriber myself, but he is to a significant extent a "takedown" blogger, with a tone and approach that tends toward the hyperbolic and is sometimes more style than substance, insofar as he's not so much saying anything new as he is saying it very well. That is not a bad thing - some things deserve to be taken down, and Freddie does it better than anyone - but it is inevitably going to attract a type of reader that is less interested in ideas than in seeing their own preconceptions validated ("goring the right ox"). There's a reason the one-note antiwoke crowd hangs out at FdB rather than (say) ACX, even though this community is more sympathetic to a lot of the antiwoke talking points (eg trans-skepticism) than Freddie himself.


The paid comments are also "shit", as you put it. In general, I'm not convinced yet that restricting comments to paid improves the discourse beyond turning it more into an echo chamber; Nate Silver is an even better example where the articles are not even that inflammatory but the (paid) comment section is almost unreadable.


Half the time he closes the comments to paid subscribers as well when he doesn't get the reaction he wanted.


Based.


I've seen him outright ban people for criticizing his articles in ways that were far more civil and respectful than the articles themselves. Weird to make your career on antagonistic writing while not permitting antagonism from others.


You’re better off. Freddie is constitutionally unable to handle reasoned disagreement. It always devolves into some version of an aggrieved and misunderstanding Freddie shouting, over and over, “you didn’t understand what I wrote you idiot.”


More annoying than Freddie's comment section being limited to paid subscribers (which I get) is that he turns comments off when he finds people questioning trans issues, or when he suspects a thread will turn contra trans. All this despite declaring himself a free speech zealot ... but he's also a Marxist free speech zealot, and censorship is most likely the defining term for a Marxist free speech zealot.


That's the game: restrict the comments section to paid users, then occasionally make glaring errors that everyone feels a need to correct you on in the comments.


If I only had a nickel for every time someone was upset that I was wrong on the Internet...


I feel bad for DeBoer because he's bipolar, and had a manic episode a while back where he spewed a bunch of indefensible stuff. And he's hypersensitive. Now Scott's rebutted his argument, which is fair enough, and a bunch of ACX readers are criticizing him in a smirky kind of way, which sux. Jeez, let him be. He has his own kind of smarts.


Yeah. I don't like the pile-ons. I think Scott has the right of this argument, but abstractly, my main goal besides identifying the truth would be to convince Freddie of it. And that's best done by someone he knows and respects, like perhaps Scott. And probably best done without pile-ons or derogatory comments or anything that would get Freddie's hackles up, because that's just going to make it less likely that he changes his mind.

I suppose "playing for an audience" and "increasing personal status" and "building political coalitions" are goals lots of people have, which might be served by mocking someone. I don't like it when I have those goals, and I don't like it when they shape my actions, but de-prioritizing those things is probably the cause of a number of misfortunes in my life. :-/ Mostly, it just seems cruel to sacrifice a random person due to a quirk of fate.


> de-prioritizing those things is probably the cause of a number of misfortunes in my life. :-/

Yeah, I understand about that. I seem to be wired to be unable to stick with playing for an audience, etc. I could make a case that's because I've got lotsa integrity etc etc., but I really don't think that's what it is. I am just not wired to be affiliative. I can be present in groups and participate in group activities and like many of the individuals, but I never have the feeling that these are my people, we're puppies from the same litter etc.

It's definitely cost me a lot. One cost, one I really don't mind that much, is money. If you're a psychologist who got Ivy League training and did a postdoc at a famous hospital that treats the rich and famous, you're in a great position to network your way into a private practice where you charge $300/hour, and see people to whom that sum is not a big deal. But I didn't do it. One reason I didn't is that it seems -- I don't know, kinda piggy -- to get good training then only give the benefits of it to Ivy Leaguers. But also, and this accounts for it more, I simply could not stand to do the required networking. Everybody knew me from work, but to really network I needed to like, accept Carol's suggestion to join her high end health club with flowers in the dressing room, and then have a little something with Carol in the club bar afterwards, and sometimes she'd bring along somebody she'd met on the stairmasters, and I realized I was supposed to bring along somebody I'd met now and then, and I just couldn't stand it. I hate talking to people while I'm working out. I like to close my eyes, put on headphones and try to get some self-hypnosis going where I have the feeling that the music is the power that's moving my legs, then sprint til I'm a sweaty red-faced mess.

Has it caused you misfortunes beyond the one big disaster?


He also tends to yell at people who disagree with him in the comments.


I used to subscribe to Freddie. Don't really recommend it.


Speaking of Anthropics and Doomsday, Sean Carroll addressed it, yet again, in his last AMA

https://www.preposterousuniverse.com/podcast/2024/09/02/ama-september-2024/

> The doomsday argument, for those who haven't heard of it, is an argument that doomsday for the human race is not that far off in the future, in some way of measuring, based on statistics and the fact that the past of the human race is not that far in the past. It would be unlikely to find ourselves in the first 10^-5, or even the first 10^-3, of the whole history of humanity. So therefore, probably the whole history of humanity does not stretch into the future very far. That's the doomsday argument. So you say the doomsday argument fails because you are not typical, but consider the chronological list of the n humans who will ever live. Almost all the humans have fractional position encoded by a string of length log2(n) bits.

> This implies their fractional position has a uniform probability density function on the interval zero to one, so the doomsday argument proceeds. Surely it is likely that you are one of those humans. No, I can't agree with any of this, really, to be honest. Sure, you can encode the fractional position with a string of length log2(n) bits, okay? Great. Yes, that is true. There's absolutely no justification to go from that to a uniform probability density function. In fact, I am absolutely sure that I am not randomly selected from a uniform probability distribution on the set of all human beings who ever existed, because most of those human beings don't have the first name Sean. There you go, I am atypical in this ensemble. But where did this probability distribution purportedly come from? And why does it get set on human beings?

> Why not living creatures? Or why not people with an IQ above or below a certain threshold? Or why not people in technologically advanced societies? You get wildly different answers if you use different reference classes for the set of people in which you are purportedly typical - multi-celled organisms, say. So that's why it's pretty easy to see that this kind of argument can't be universally correct, because there's just no good way to decide the reference class. People try; Nick Bostrom, former Mindscape guest, has put a lot of work into this, wrote a book on it, and we talked about it in our conversation. But I find all the efforts to pin that distribution down completely unsatisfying. The one possible counterexample would be if we were somehow in equilibrium.

> If somehow there was some feature of humanity where every generation was more or less indistinguishable from the previous generation, then within that equilibrium era, if there was a finite number of people, you might have some justification for choosing that as your reference class. But we are clearly not in equilibrium; things are changing around us very, very rapidly. No era in modern human history is the same as the next era, no generation is the same as the next, so there's no reason to treat them similarly in some typicality calculation.
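For what it's worth, the arithmetic of the argument is easy to check even if (as Carroll argues) the sampling premise is the weak point. A minimal simulation, with a made-up total population, of the premise the argument needs: if your birth rank r really were drawn uniformly from all N humans who will ever live, the Gott-style interval (r/0.975, r/0.025) would cover the true N 95% of the time.

```python
import random

# Sketch of the doomsday argument's sampling premise (illustrative numbers,
# not from Carroll's AMA): IF birth rank r is uniform over all N humans,
# the 95% interval for N derived from r has the advertised coverage.
random.seed(0)
N = 1_000_000           # hypothetical true number of humans who will ever live
trials = 100_000
covered = 0
for _ in range(trials):
    r = random.randint(1, N)         # "your" birth rank, sampled uniformly
    lo, hi = r / 0.975, r / 0.025    # interval implied by r/N ~ Uniform(0, 1)
    if lo <= N <= hi:
        covered += 1
print(f"coverage: {covered / trials:.3f}")  # ~0.950; the fight is over the premise
```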

author

I actually think you can do Carter Doomsday with anything you want - humans, living creatures, people with IQ above some threshold that your IQ is also over, even something silly like Californians. You'll get somewhat different answers, but only in the same sense that if you tried to estimate the average temperature of the US by taking one sample measurement in a randomly selected place, you would get different answers depending on which place you took your sample measurement (also, you're asking different questions - when will humanity be destroyed vs. when will California be destroyed).

I think the stronger objection to Carter Doomsday is https://en.wikipedia.org/wiki/Self-indication_assumption_doomsday_argument_rebuttal ; I'm not sure it's true, but it at least makes me confused enough that it turns me away from thinking about the subject at all, which is a sort of victory.


I think Sean's point is that this argument is useless because your reference class is unstable or something. Hence the last paragraph, where he steelmans it.

I do not understand the point of SSA vs SIA; neither seems to have any predictive power.


Man does not live by prediction alone. SSA and SIA are trying to do something more like abduction.


Well, yes, you can do the Carter Doomsday argument with any reference class you want because it's wrong, and bad math can "prove" anything. The fact that it's so flexible, and that it's making predictions of the future that swing by orders of magnitude just based on what today's philosophy counts as consciousness, should warn you off of it!

The SIA at least has the property that you get the correct answer out of it (ie, you can't predict the future based on observing literally nothing). But the real problem is the idea that there's any "sampling" going on at all. Both the SSA and SIA have that bad assumption baked into them. There's one observer to a human, no more and no less. If you believe in solipsism - that you're the one true soul who is somehow picked (timelessly) out of a deterministic universe populated by p-zombies - THEN the Doomsday Argument applies. But I doubt that you do.

Note that the correct boring answer - that your existence doesn't inherently count as evidence for or against a future apocalypse - does NOT depend on fuzzy questions like what reference class you decide you count as. Good math doesn't depend on your frame of reference. :)


I think SIA shows exactly why Doomsday goes wrong, but that it's not hard to see that Doomsday does go wrong. Like, if Doomsday is right, then if the world would continue for a bunch of generations unless you get 10 royal flushes in a row in poker, you should be confident that you'll get 10 royal flushes in a row--not to mention that Adam and Eve stuff https://link.springer.com/article/10.1007/s11229-024-04686-w


> I'm not sure it's true, but it at least makes me confused enough that it turns me away from thinking about the subject at all, which is a sort of victory.

Oh, come on, Scott, you know better than that!

Here is a common sense way to deal with the Doomsday Argument which, I believe, resolves most of the confusion around the matter, without accepting the bizarreness of either SSA or SIA.

https://www.lesswrong.com/posts/YgSKfAG2iY5Sxw7Xd/doomsday-argument-and-the-false-dilemma-of-anthropic


Ape, I was so glad to read your post. For years, I've felt like I'm taking crazy pills whenever I see supposedly (or even actually) smart people discuss the DA. Like you said, "it's not a difficult problem to begin with" - well, ok, maybe it does take more than "a few minutes" to really get a solid grasp on it. But surely the rationality community should have collectively been able to make the correct answer (that the Doomsday Argument packs in a hidden assumption of a sampling process that doesn't exist) into common knowledge...? Correct ideas are supposed to be able to beat out pseudoscience in the marketplace of discourse, right? Sigh.


You are most welcome. It's a rare pleasure to meet another person who reasons sanely about anthropics. And I deeply empathize with all the struggles you had to endure while doing so.

I have a whole series of posts about anthropic reasoning on LessWrong, which culminates in a resolution of the Sleeping Beauty paradox - feel free to check it out.

> But surely the rationality community should have collectively been able to make the correct answer (that the Doomsday Argument packs in a hidden assumption of a sampling process that doesn't exist) into common knowledge...? Correct ideas are supposed to be able to beat out pseudoscience in the marketplace of discourse, right? Sigh.

I know, right! I originally couldn't understand why otherwise reasonable people are so eager to accept nonsense as soon as they start talking about anthropics. I currently believe that the source of this confusion lies in a huge gap in understanding of the fundamentals of probability theory, namely the concept of a probability experiment, how to assign a sound sample space to a problem, and where the uniform prior even comes from. Sadly, Bayesians can be very susceptible to it, because all the talk about probability experiments has a "frequentist vibe".


Fewer words better. "The mutually exclusive events in the space are just two, whatever names you give them. They have equal probability. Ignore the flim-flam."

author

This seems so confused that it's hard to even rebut. The fact that you are a specific person doesn't counteract the fact that you are a person.

Here's a cute counterexample: I was born at 11:30 AM on November 7, 1984. Suppose that I wanted to use this to determine the length of various units of time, something which for some reason I didn't know.

Knowing only that I was born 11.5 hours into the day, I should predict that a day is about 23 hours long (hey, that's pretty good!)

Knowing only that I was born 7 days into a month, I should predict that a month is 14 days long (not as good, but order of magnitude right, and probably the 95% error bars include 30).

Knowing only that I was born on the 311th day of the year, I should predict that a year is 622 days long (again, not as good, but right order of magnitude - compare to eg accidentally flipping the day-of-the-month and day-of-the-year and predicting a year is 14 days long!)

Knowing only that I was born on the 4th year of the decade, I should predict that a decade is 8 years long (again, pretty good).

Notice that by doing this I have a pretty low chance of getting anything wrong by many orders of magnitude. For example, to accidentally end up believing that a year lasted 10,000 days, I would have to have been born (by unlucky coincidence) very early in the morning of January 1.

Here I think anthropics just works. And here I think it's really obvious that "but your parents, who birthed you, are specific people" has no bearing whatsoever on any of these calculations. You can just plug in the percent of the way through the interval that you were born. I think this works the same way when you use "the length of time humans exist" as the interval.

(I'm slightly eliding time and population, but if I knew how many babies were born on each day of the 1980s, I could do these same calculations and they would also be right).
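A quick Monte Carlo check of the estimator being used here (my sketch with a hypothetical interval, not code from the post): if your birth moment is uniform over an interval of unknown length T, then doubling the elapsed time x gives an estimate that is almost never off by an order of magnitude.

```python
import random

# "Double the elapsed fraction" estimator from the comment above, simulated.
# T and the trial count are arbitrary choices of mine.
random.seed(0)
T = 365.0               # true length of the interval (days in a "year", say)
trials = 100_000
within_10x = 0
for _ in range(trials):
    x = random.uniform(0, T)  # uniformly drawn birth moment within the interval
    estimate = 2 * x          # median-style guess at T
    if T / 10 <= estimate <= T * 10:
        within_10x += 1
print(f"within 10x of the truth: {within_10x / trials:.3f}")  # ~0.95
# The estimate can never exceed 2*T, and misses low only when x < T/20.
```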


But wait ... "the length of time humans exist" is NOT the measure that Doomsday Argument proponents use! Continuing your analysis, since you're born roughly 20,000 years into homo sapiens' history, you should expect homo sapiens to go for 20,000 years more. Which is much more than an order of magnitude off from what Doomsday Argument proponents predict (because population growth has been exponential, and they use "birth order index" instead of date). You picked a slightly different way to slice the data and got an irreconcilably different result.

This should tell you that something is going badly wrong here, but what? All you've really shown here is that you can sample dates, from a large enough set that, say, "day of year" will be nice and uniformly distributed. Then the Copernican Principle can be used. (It's a mistake to call this "anthropics", though, as what you're measuring is uncorrelated with the fact of your existence. If, say, babies were more likely to be stillborn later in the year, then you could predict ahead of time that your day of birth would probably be on an early date. That's an anthropic argument.)

But how will you sample "birth order index"? Everybody you know will have roughly the same one, regardless of whether there are billions or quadrillions of humans in our future. You're not an eldritch being existing out of time, and can't pick this property uniformly at random.

To be honest, I'm not sure I'm doing a good job of explaining what's going wrong here. I also feel like this discussion is "so confusing it's hard to rebut". The real way to understand why the DA is wrong is just to create a (simple) formal model with a few populated toy universes, and analyze it. Not much English to obfuscate with... just math.
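Taking that suggestion literally, here is a toy two-universe model (my construction, arbitrary numbers): world SHORT contains 3 humans in total, world LONG contains 10, with equal priors, and you observe that your birth rank is 2. SSA and SIA then give different posteriors, which is exactly the sampling-assumption sensitivity under dispute.

```python
# Toy model of the kind the parent comment asks for (hypothetical worlds).
prior = {"SHORT": 0.5, "LONG": 0.5}   # equal prior credence in each world
size  = {"SHORT": 3,   "LONG": 10}    # total humans ever born in each world
rank  = 2                             # your observed birth rank

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

# SSA: you are a uniform draw from the observers in your own world.
ssa = normalize({w: prior[w] / size[w] if rank <= size[w] else 0.0
                 for w in prior})

# SIA: worlds are first reweighted by how many observers they contain,
# which exactly cancels the 1/size term.
sia = normalize({w: prior[w] * size[w] * (1 / size[w]) if rank <= size[w] else 0.0
                 for w in prior})

print("SSA:", ssa)  # {'SHORT': ~0.77, 'LONG': ~0.23} - the doomsday-style shift
print("SIA:", sia)  # {'SHORT': 0.5, 'LONG': 0.5} - the shift cancels
```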


Thanks for the reply! Let's clear up the confusion, whether it's mine or yours.

> Here's a cute counterexample: I was born at 11:30 AM on November 7, 1984. Suppose that I wanted to use this to determine the length of various units of time, something which for some reason I didn't know.

First of all, let's be very clear that this isn't our crux. The reason why I claim the Doomsday Inference is wrong is that you couldn't have been born in the distant past or the distant future, according to the rules of the causal process that produced you. Therefore your birth rank is necessarily among the sixth ten-billion group of people, and therefore you do not get to make the update in favor of a short timeline.

Whether you could've been born in any other hour of the day, any other day of the month, or any other month of the year is, strictly speaking, irrelevant to this core claim. We can imagine a universe where the birth of a child happens truly randomly throughout a year after their parents had sex. In such a universe the Doomsday Inference would still be false.

That said, our universe isn't like that. And your reasoning here doesn't systematically produce correct answers in a way that would let us distinguish between things that are and are not randomly sampled, as we can see via a couple of simple sanity checks.

> Knowing only that I was born 11.5 hours into the day, I should predict that a day is about 23 hours long (hey, that's pretty good!)

Sanity check number one. Would your reasoning produce a different result if your birth hour was definitely not random?

Imagine a world where everyone is born 11.5 hours into the day. In such a world you would make the exact same inference: notice that your predicted value is very close to the correct value, and therefore assume that you were born at a random time throughout the day, even though that would be completely wrong. Sanity check one failed.

> Knowing only that I was born 7 days into a month, I should predict that a month is 14 days long (not as good, but order of magnitude right, and probably the 95% error bars include 30).

Sanity check number two. Would some other, clearly wrong, approach fail to produce a similarly good estimate?

I've generated a random english word, using this site: https://randomwordgenerator.com/

The word happened to be "inflation". It has 9 letters. Suppose for some reason I believe that the length of this word is randomly selected from the number of days in a month. That would give me an estimate of 18 days in a month. Even better than yours!

If I believed the same for the number of days in a year I'd be one order of magnitude wrong. On one hand, that's not good, but on the other, just one order of magnitude! If I generated a random number from -infinity to infinity, with all likelihood it would be many more orders of magnitude worse! Sanity check two also failed.

Now, there is, of course, a pretty obvious reason why both your method and mine seem to work well enough for estimating all these things, one which has little to do with random sampling or anthropics, but I think we don't need to go off on that tangent now.


>Imagine a world where everyone is born 11.5 hours into the day. In such world you would also make the exact same inference

The randomness in this example is no longer the randomness of when in the day you are born, but the randomness of which hypothetical world you picked. You could have picked a hypothetical world where births are fixed to be anywhere from 0 to 24 hours into the day. So it's essentially the same deduction, randomly choosing fixed-birth-time worlds instead of randomly choosing birth times.


> You could have picked a hypothetical world where births are fixed to be anywhere from 0 to 24 hours into the day.

And in the majority of such worlds Scott would still make a similar inference, see that the estimate is good enough, and therefore wrongly conclude that people's birth times happen at random.

The point I'm making here is that his reasoning method clearly doesn't allow him to distinguish between worlds where things actually happen randomly and worlds where there is only one deterministic outcome.


> First of all, let's be very clear that this isn't our crux. The reason why I claim the Doomsday Inference is wrong is that you couldn't have been born in the distant past or the distant future, according to the rules of the causal process that produced you.

But every human who has ever lived or will ever live also has a causal process that produces them. My possession of a causal process therefore does nothing to distinguish me, in a temporal-Copernican sense, from all other possible observers.

I have read your LessWrong post all the way through twice, and, well, maybe I'm a brainlet, but I don't understand your line of argumentation at all. I'm not a random sample because... I have parents and grandparents? How does that make me non-random?

Humans had parents and grandparents in 202,024BC, and (pardon my speculation) humans will have parents and grandparents in 202,044AD (unless DOOM=T), therefore the fact that I have parents and grandparents in 2024 doesn't make any commentary on the sensible-ness of regarding myself as a random sample.


This is not about your ability to distinguish things. This is about your knowledge of the nature of the probability experiment you are reasoning about.

Let's start from the beginning. When we are dealing with a random number generator, how come we can estimate the range in which it produces numbers from just one outcome?

We can do it due to the properties of the Normal distribution. Most of the generated values will be closer to the mean than not - see the famous bell curve picture for an illustration. So whatever value you received is likely to be close to the mean, and therefore you can estimate the range with some confidence. Of course, the more values you have the better your estimate will be, all other things being equal, but you can get a somewhat reasonable estimate even from a single value.

But suppose you are dealing not with a random number generator but with an iterator: instead of producing random numbers, this process produces numbers in strict order, each number exactly 1 larger than the previous. Can you estimate the range in which it produces numbers based on some outcome that you've received?

No, because now there is no bell curve. It's not even clear whether there is any maximum value at all, until we take into account the physical limitations of the iterator's implementation. As soon as you know that you are dealing with an iterator and not a random number generator, applying this method - appropriate for reasoning about random number generators, not iterators - would be completely ungrounded.

Do you agree with me so far? Do you understand how this is relevant to Doomsday Inference?
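A minimal sketch of this contrast (my numbers, purely illustrative): one draw from a bell-shaped generator genuinely constrains the mean, while one reading from an iterator constrains nothing about where it stops.

```python
import random

# Generator vs. iterator, per the parent comment (illustrative parameters).
random.seed(0)
mu, sigma = 500, 50

draw = random.gauss(mu, sigma)   # one sample from the bell-curve generator
print(f"one draw: {draw:.0f} - within 2*sigma of the mean ~95% of the time, "
      "so it supports a rough range estimate")

def iterator(steps):
    """Deterministic process: 1, 2, 3, ... - no bell curve anywhere."""
    return steps

value = iterator(137)
print(f"iterator reads {value} - equally consistent with stopping at 138 "
      "or running to 10**9, so no range estimate follows")
```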


If I understand SIA correctly, it is connected to the idea that a sigmoid and an exponential distribution look the same until you reach the inflection point. By assuming that the distribution of observers looks like what we've seen historically, Carter privileges the sigmoid distribution of observers. Instead, if you consider the worlds in which population growth starts rising again for any number of reasons, you come to the conclusion that we can't normalize our distribution of observers. For example, if you simulated observers throughout history and asked them when doomsday would arise, none of them would have said the 21st century until at least the 1850s - that's a lot of observers that Carter doomsday misled terribly, often by millennia and orders of magnitude of population based on what we already know. Based on those observations, you could make a trollish reverse-Carter argument that we would expect Carter doomsday arguments to be off by millennia and orders of magnitude of population. And what the SIA paper seems to find is that the Carter and anti-Carter arguments exactly cancel out. That’s not as implausible as it sounds, given that we might expect to be able to draw no meaningful conclusions from total ignorance.
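The sigmoid-vs-exponential point is easy to see numerically. A small sketch (my parameters, nothing from the SIA paper): a logistic curve with carrying capacity K tracks a pure exponential closely until it nears the inflection point at K/2, so observers on the rising part of the curve can't tell which world they're in.

```python
import math

# Logistic growth vs. pure exponential with the same rate (illustrative values).
K, r = 1_000_000, 0.05   # carrying capacity and growth rate, both made up
for t in range(0, 301, 50):
    exponential = math.exp(r * t)
    logistic = K / (1 + (K - 1) * math.exp(-r * t))   # same curve, capped at K
    print(f"t={t:3d}  exp={exponential:12.1f}  logistic={logistic:12.1f}")
# The two columns agree until t nears the inflection (~t=276 here),
# then diverge sharply - just as the comment describes.
```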


Their argument is so obviously bad in a completely uninteresting way that I wonder why Scott bothered to Contra it. Was it gaining traction or something?


Because Freddie is a prominent enough blogger. To give an extreme example, if the prime minister of Australia (whoever that is) said something obviously wrong, it would be worth rebutting.


Anthony Albanese


No, not the UK.

author

I think if I'm right about prior odds of 30% on a singularity in our lifetime, that's a novel and interesting result. And if I'm wrong I'll enjoy having hundreds of smart people read over it and cooperate on figuring out the real number.


Abusing footnote 8 somewhat, I don't know what the prior probability is that my lifetime would overlap the time interval that someone in the far future would assign to the singularity after the industrial revolution.

I'd argue that the posterior probability is 100% based on the scale of the Robin Hanson definition. You make a good case for it in the techno-economic section. In terms of economic growth and social impact, isn't the information age obviously a singularity? I'm connected to literally billions of people in a giant joint conversation. How preposterous would that statement have been in a pre-information-age world?

When the histories are written by our future descendants or creations it might make sense for them to date the singularity from the start of the information age to the next stable state.


I also think that the information age (aka Toffler's technological revolution) has already happened. The rate of major revolutions has been accelerating.

Fire was invented so far in prehistory that we have no records of that time.

Writing (which enabled history and science) and the wheel followed the agricultural revolution by only a few millennia.

The industrial revolution was less than 3 millennia after that.

The information age took only 0.4 millennia.

Each of these revolutions significantly increased the rate of innovation, and the singularity will accelerate it even more.

At this rate of acceleration the singularity (which may have already happened and is smart enough to hide from us primitive apes) may indeed cause another revolution beyond the information age. Whenever that happens, I wonder if we will be smart enough to recognize that it already did.


So the singularity is people using apps to do the same things they were already doing?


The expert consensus seems to be we're using apps to do things in a dramatically different way than we used to. If you want to go down that rabbit hole, search for dopamine related to social media.

There are a lot of weird things going on right now. It seems uncontroversial to say that society is acting very differently than it has in the past (even the recent past), and that it's a fairly reasonable assumption that's because we're being manipulated by algorithms.

That those algorithms are just dumb automatons in the service of Moloch as instantiated by humans is taken for granted. But that they might already be in the service of one or more AGIs is not out of the question.


Social media seems to be a new flavor of social heroin. Humanity has developed many of these in recent years.

Garbage content being spewed out by AI would probably concern people more if it didn’t compare favorably to the garbage that was already being produced by human hands.


I get that you're using Robin Hanson's definition and that changes things, but I don't think we'd refer to a Singularity that happened but nobody noticed. Based on the common definition, we will all know the Singularity happened if/when it does, without the need to pontificate about it.


This might be reasonable if the Internet doesn't continue to fragment. We already don't have a single giant conversation.


I don't think "singularity" is sufficiently well defined that it makes sense to try to figure out a number.

Using your definition of being at least as important as agriculture or the industrial revolution... well, I accept that 30% isn't an unreasonable number but don't think that the Industrial Revolution is as significant as the sort of thing I've heard called a "singularity" in the past.


It's an estimate, and it's perhaps more reasonable than others, but it has three key assumptions:

1. The singularity would be in the same class of consequentiality as the Industrial or Agricultural Revolution, no more, no less. This assumption puts a constraint on the sort of singularity we're talking about, one which doesn't involve mass extinction, since that would obviously be much more consequential than the AR or IR.

2. There have been no other instances of similar consequentiality in all of human history. One might argue that the harnessing of fire was at least as economically consequential. Or the domestication of animals. Or the development of written languages.

3. The next event of consequentiality in this category will be the singularity. A few decades ago, most people would have bet on something related to nuclear energy or nuclear annihilation. An odd feature of the 30% calculation is that the more possibilities one can think of for the next great event, the lower the probability of it being the singularity. One can argue that the pace of AI technology is such that a singularity is very likely to be the one that happens soonest, but then you may as well dump the human history probability handwaving and just base the argument on technological progress.

author

I mostly agree with you, but playing devil's advocate:

1. Lots of races went extinct during the hunter-gatherer to agricultural transition. We don't care much because we're not part of those races. If humans went extinct during the singularity (were replaced by AI), maybe future AIs would think of this as an event only a little more consequential than ancestral hunter-gatherers going extinct and getting replaced by agriculturalists. The apocalypse that hits *you* always feels like a bigger deal than the apocalypse that hits someone else!

2. Invention of fire is outside of human history as defined here. Domestication of animals is part of agriculture.

3. I think this is right, though I have trouble thinking about this.


Is anyone seriously proposing a singularity which cashes out in a plurality of (sentient?) AI minds? All the AI Doom scenarios I've ever seen propose a singular, implacable superintelligence which uses its first-mover advantage to nip all competition in the bud. Some, of course, predict worlds with large numbers of independent AIs, but in those worlds humanity also survives, for the same reason all the other AIs survive.


Source for "lots of races went extinct"? I don't know how to interpret that. But it does suggest another criticism: the choice of class. What odds would we get for "large extinction event"? Probably shouldn't limit it to human extinction because we're biased about treating our own extinction as more consequential than others. Pliosaurs and trilobites may disagree. Limit it to humans and there have been near-extinctions and genetic choke points in our history. Which suggests that there may be enough flexibility in choice of category to get a wide range of answers. There's never been a singularity (yet).


The term used in ancient population genetics is "population replacement". But that's usually for relatively recent humans, and we also know that there used to be even more distantly related subspecies like Neandertals, varieties of Denisovans, branches of Homo Erectus in places like Africa we haven't named, Flores "hobbits" etc.


The entire basis of the argument is wrong. It's treating a causally determined result as the outcome of random chance. There is NO chance that someone will invent transistors before electricity (unless you alter the definition of transistor enough to include hydraulic systems or some similar alteration).


It has nothing to do with random chance, it's an argument about uncertainty. You can't invent transistors without electricity - but you have to know how a transistor works to know that. Someone who doesn't has to at least entertain the possibility that you can.


Am I the only one thinking that neither the Pavlov thing nor the CMC would have realistically caused anything worth calling the apocalypse?

author

I agree that the seas would not have turned to blood, and that a beast with seven heads and an odd obsession with the number 666 wouldn't have been involved.


I think Mr. Serner is saying that neither event going the other way, resulting in global thermonuclear war, would have resulted in the extinction of humans. Not even close.


You used the term "self-annihilation" with regard to humanity; "annihilation" means "to nothing", i.e. 100.00000000% of humans killed.

I'll go further and say that WWIII would probably have fallen short of even the Black Death in % of humans killed (because nobody would be nuking Africa, India, SEA or South America), though it'd probably be #2 in recorded history (the ones I know of, i.e. the Columbian Exchange, the first plague pandemic, and the Fall of Rome, seem to be somewhat less; note of course that in *pre*history there were probably bigger bottlenecks in relative terms) and obviously #1 in absolute numbers or in % of humans killed per *day*.


I don't think this is right. If we suppose Europe, America, and East Asia are all economically destroyed, I don't think Africa, South America and SEA would be able to support populations of billions of people while being disconnected from the existing world economy.

We'd be unlikely to see actual human extinction, but I think the populations of countries outside the First World would be very far from intact.


Africa is only one generation removed from subsistence farming, so I'm pretty sure they'd be fine. South and Central Asia probably likewise. South and Central America probably untouched. Pacific Islands and most of Australia untouched. Most of Russia untouched. China is mostly vertical, thus most damage wouldn't propagate.

That's probably 3 billion people untouched. A massive drop in GDP, oil would stop flowing for some years and there would be big adjustments, probably mass migration out of cities and a return to subsistence farming and somewhat of a great reset. Discovered knowledge would be mostly intact, there's libraries after all. In about 20 years, GDP growth would likely resume.


Most of Africa is a couple *very large* generations removed from subsistence farming. The population of Africa today is greater than the population of the world in 1800, and that relies heavily on technology and connection to global supply chains that they wouldn't have in this scenario. India likewise relies heavily on interconnection with a huge global economy and supply chain to sustain its population. Countries around the world (including India, and through much of the aforementioned Africa) are heavily plugged into China's economy. The human species would survive, but those 3 billion people would be very far from untouched. We don't actually have the capacity to support 3 billion people via subsistence farming without global supply networks moving resources around, which is why, throughout the history of subsistence farming, we never approached that sort of global population. Even subsistence farmers in the present day mostly buy from international agricultural corporations.

We could transition back to it if we had to, but it would be a major disruption to the systems which have allowed us to sustain global populations in excess of *one* billion people, and the survivors would disproportionately be concentrated where our knowledge and training bases right now are weakest.


You can get some idea of how economic collapse and huge supply chain disruptions play out by looking at Russia/Ukraine/Belarus/Kazakhstan (largest ex-USSR republics) between 1990 and 2000.


Their populations might decline, but humanity has survived periods of population decline.


> (because nobody would be nuking Africa, India, SEA or South America)

Armies in the field, logistics hubs, and strategically significant factories or mineral resources, might be targeted even if the territory they're in was considered neutral before the war began. Radioactive ash can cause life-threatening problems for people exposed, even weeks after the bomb goes off and hundreds, maybe thousands of miles downwind.

Firestorms don't care much about politics either. If enough forests, oil wells, and coal seams ignite, while emergency-response infrastructure is already tapped out, all that carbon dioxide release puts climate change back on track toward multiple degrees of warming. Ocean chemistry shifts, atmosphere becomes unbreathable, game over.


CO2 is great for plant growth (in commercial greenhouses, the guideline is to supplement to 3x the ambient level), so - with fewer humans around - it can easily balance out in terms of temperature over a few years/decades.


I'm not saying CO2 by itself would be directly killing us all. To elaborate on what I meant by "ocean chemistry shifts": https://spacenews.com/global-warming-led-to-atmospheric-hydrogen-sulfide-and-permian-extinction/


>Armies in the field, logistics hubs, and strategically significant factories or mineral resources, might be targeted even if the territory they're in was considered neutral before the war began.

Sure, but I'm not sure how a war between NATO/Free Asia and the Soviets/PRC would have significant operations in Africa or South America (there's *maybe* a case for India and SEA coming into it due to proximity to China, but if you're talking about followup invasions after a thorough nuclear bombardment, amphibious landings are quite feasible).

>If enough forests, oil wells, and coal seams ignite, while emergency-response infrastructure is already tapped out, all that carbon dioxide release puts climate change back on track toward multiple degrees of warming.

Note that firestorms actually cause cooling in the short-term (because of particulates - "nuclear winter"), and that in the medium term there'd be significant reforestation to counter this (and it's not like coal seam fires don't ever happen naturally, or like these things are prime nuclear targets).

Also, the Permian extinction was from over 10 degrees (!) of global warming. A few degrees clearly doesn't do the same thing (the Paleocene-Eocene Thermal Maximum, 5-8 degrees, did not cause a significant mass extinction).


Yes, based on current understanding several things would need to go wrong in semi-unrelated and individually unlikely ways for outright extinction of all humans to be the result, but there are a lot of unknowns - it's not like we can check the fossil record for results of previous nuclear wars - so it seems like it would be very bad news overall, compared to the alternative.

Wouldn't even do much to make an AI apocalypse less likely, since survivors trying to reestablish a tech base would presumably be more concerned with immediate functionality than theoretical long-term risks.

> (and it's not like coal seam fires don't ever happen naturally, or like these things are prime nuclear targets)

I understand there's brown coal still in the ground in Germany, within plausible ICBM targeting-error range of a lot of Europe's critical high-tech infrastructure.


>Wouldn't even do much to make an AI apocalypse less likely, since survivors trying to reestablish a tech base would presumably be more concerned with immediate functionality than theoretical long-term risks.

There are two notable question marks there.

1) It would give us a reroll of "everyone is addicted to products provided by the big AI companies, and the public square is routed through them, and they have massive lobbying influence". It's not obvious to me that we would allow that to happen again, with foreknowledge.

2) There's the soft-error issue, where the global fallout could potentially make it much harder to build and operate highly-miniaturised computers. I'm unsure of the numbers, here, so I'm not sure that this would actually happen, but it's a big deal if it does.


India would have either stayed neutral or sided with the Soviets in WWIII. They were officially neutral, but it was a "leaning pro-Soviet" neutrality. On the other hand, China would quite possibly *not* have sided with the Soviets, and might even have sided with America, at any rate after the Sino-Soviet split in 1969.


IIUC, there's a lot of overkill in the ICBM department, and "On the Beach" merely had things end too quickly. But there would definitely be surviving microbes, probably nematodes, and likely even a few chordates. Mammals...perhaps. Primates, not likely.

OTOH, in the light of the wildlife around Chernobyl, perhaps I should revise my beliefs. Perhaps mammals with short generation times would be likely to survive. (But that doesn't include humans.) And perhaps oceanic mammals would have a decent chance, as water tends to absorb radiation over very short distances.

However, I haven't revised my beliefs yet, I'm just willing to consider whether I should. But that was a LOT of overkill in the ICBM department.


There are people living in cities in Iran that are every bit as 'hot' as Chernobyl. There are beaches in Rio every bit as hot as Chernobyl. People there live long healthy lives. The risk of radiation is vastly overstated. Chernobyl only directly killed twenty people, with another twenty or so probably killed. 40 is a pretty small number.


It's my understanding that the dogs that live in the hottest part of Chernobyl have a greatly increased mutational load. They've got short generations, though, so mutations that are too deleterious tend to disappear. But if they were to accumulate, the dogs probably wouldn't survive. (How hot Chernobyl is depends on where you measure.) (Well, not really the hottest part, that's inside the reactor and a lot hotter than where the dogs are living.)

OTOH, I expect a full-out WWIII with ICBMs would provide thousands of times overkill. (That's what's been publicly claimed, and I haven't heard it refuted.) Just exactly what that means isn't quite clear, but Chernobyl was not an intentional weapon, so I expect the south pole would end up hotter than Chernobyl was a decade ago.


>IIUC, there's a lot of overkill in the ICBM department

To be precise, there *was*. There isn't so much anymore. But I was discussing the Cold War, so this is valid.

>But there would definitely be surviving microbes, probably nematodes, and likely even a few chordates. Mammals...perhaps. Primates, not likely.

>OTOH, in the light of the wildlife around Chernobyl, perhaps I should revise my beliefs. Perhaps mammals with short generation times would be likely to survive. (But that doesn't include humans.)

Okay, let's crunch some numbers. I'm going to count 90Sr and 137Cs, as this is the vast majority of "medium-lived" fallout beyond a year (I'd have preferred to include some short-lived fission products as well, to get a better read on 100-day numbers, but my favourite source NuDat is playing up so I don't have easy access to fission yields of those).

Let's take the 1980 arsenal numbers, which is 60,000 nukes. A bunch of those are going to be shot down, fail to detonate, or be destroyed on the ground, so let's say 20,000 detonations (this rubble is bouncing!), and let's say they're each half a megaton (this is probably too high because a bunch were tacnukes or ICBM cluster nukes, but whatever) and half fission, for 5,000 megatons fission.

5,000 megatons = 2.1*10^19 J = 1.3*10^38 eV.

A fission of 235U releases ~200 MeV (239Pu is more), so that's 6.5*10^29 individual fissions (255 tonnes of uranium).

Fission produces 90Sr about 4.5% of the time, and 137Cs about 6.3% of the time (this is 65% 235U + 35% 239Pu, because that's the numbers to which I have easy access on WP, but the numbers aren't that different). So that's 2.9*10^28 atoms of 90Sr (4.4 tonnes) and 4.1*10^28 atoms of 137Cs (9.4 tonnes).

Let's assume this is spread evenly throughout the biosphere (this is a bad estimate in two directions, because humans do concentrate strontium - although not caesium - but also a bunch of this is going to wind up in the ocean so there's a lot more mass it's spread through; I'm hoping this mostly cancels out). The biosphere is ~2 trillion tonnes, and a human is ~80kg, so that's 1.2*10^15 atoms (180 ng) of 90Sr and 1.7*10^15 atoms of 137Cs (380 ng) per human.

90Sr has a half-life of 28.9 years and 137Cs 30.2 years, so that's 890,000 decays of 90Sr per second per person and 1.2 million decays of 137Cs per second per person. 90Sr has decay energy 2.8 MeV (of which ~1/3 we care about, because the rest goes into antineutrinos that don't interact with matter) and 137Cs has decay energy 1.2 MeV (of which something like ~2/3 IIRC we care about, because of the same issue), so that's 134 nJ/s from 90Sr plus 154 nJ/s from 137Cs = 288 nJ/s total.

1 Sv = 1 J/kg, so that's 3.6 nSv/s, or 114 mSv/year. The lowest chronic radiation dose clearly associated with increased cancer risk is 100 mSv/year, so you're looking at a likely cancer uptick (remember, I've preferred to err on the side of overestimates here), but this is nowhere near enough to give radiation sickness (Albert Stevens got 3 Sv/year after the Manhattan Project secretly injected him with plutonium to see what would happen, and he died of old age 20 years later, although at that level he was lucky to not get cancer).

Now, this is the medium-lived fallout. The first year is going to be worse, significantly, but in the first year it's also not going to be everywhere; the "global fallout" takes months to years to come back down from the stratosphere (i.e. most of the super-hot stuff decays *before* coming back to earth), and the "local fallout" is, well, *local* i.e. not evenly distributed. With Cold War arsenals at this level of use, you *would* be seeing enough fallout to depopulate entire areas, because it AIUI wouldn't reach tolerable levels for longer than the month or so people can last without food. But a full-blown "On the Beach" scenario? No; while that's not *physically impossible* it was never a real possibility during the Cold War (nuclear autumn *was*, but note that I say "autumn" rather than "winter"; the "lol we all die" numbers were literally a hoax by anti-nuclear activists).
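For anyone who wants to check the arithmetic, here is the same calculation in code (my transcription of the comment's inputs, with rounded physical constants, so treat the output as approximate):

```python
import math

# Reproducing the parent comment's 90Sr/137Cs dose estimate.
MT_TNT_J = 4.184e15      # joules per megaton of TNT
EV_J     = 1.602e-19     # joules per electronvolt
YEAR_S   = 3.156e7       # seconds per year

fission_mt = 5_000       # assumed megatons of pure fission yield
fissions = fission_mt * MT_TNT_J / EV_J / 200e6   # ~200 MeV per fission

share = 80 / 2e15        # 80 kg human out of a ~2e12 tonne biosphere
atoms = {"Sr-90": fissions * 0.045 * share,       # cumulative fission yields
         "Cs-137": fissions * 0.063 * share}
half_life_s = {"Sr-90": 28.9 * YEAR_S, "Cs-137": 30.2 * YEAR_S}
# decay energy deposited in tissue (MeV), after discarding antineutrinos
deposited_mev = {"Sr-90": 2.8 / 3, "Cs-137": 1.2 * 2 / 3}

watts = sum(atoms[n] * math.log(2) / half_life_s[n]   # decays per second
            * deposited_mev[n] * 1e6 * EV_J           # joules per decay
            for n in atoms)
print(f"dose rate: {watts / 80 * YEAR_S * 1000:.0f} mSv/year")  # ~113
```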


Well, you are clearly better informed than I am. But it wasn't just the anti-nuclear activists. Compton's Encyclopedia from the early 1950's contained a claim that 7 cobalt bombs would suffice to depopulate the earth. This probably shaped the way I read the rest of the data I encountered.


Nuclear winter was a hoax by anti-nuclear activists that successfully convinced much of society (so yes, other people repeated the hoax, because they were fooled). Fallout is a completely-different thing, and I'm not aware of a *deliberate* hoax in that area.

Note that "X nuclear bombs" is nearly always a sign of total cluelessness about how variable the power of nuclear bombs really is. You can have really-small nukes (the Davy Crockett tactical nuke was equal in energy to about 20 tonnes of TNT), but also really-big ones (the Tsar Bomba was equal in energy to about 50,000,000 tonnes of TNT, it was originally designed to be 100,000,000 tonnes, and I suspect the true upper limit is around 1,000,000,000 tonnes). The amounts of fallout from a 20-tonne bomb and a 1,000,000,000-tonne bomb are very different!

founding

You do not understand correctly. In order to render the survival of primates "unlikely", you would need about two orders of magnitude more (or more powerful) nuclear weapons than have ever been built, or three orders of magnitude more than presently exist.

"Enough nuclear weapons to destroy the world X times over", in all of its variations, is a stupid, stupid meme invented and spread by gullible innumerates who for various reasons wanted it to be true. It isn't.


I think part of the issue is definitions of "overkill"; Cold War arsenals really were big enough that there were issues with a lack of viable targets other than the other side's ICBM siloes, but that doesn't remotely mean they were big enough to kill literally everyone (due to the issue where nukes are basically useless against rural populations). There is legitimate room for confusion there for people who haven't thought about this a lot.


These are analogies, not physical things. Remember, they were the dreams of a guy who lived 2000 years ago.

Did he see the seas turn to blood, or the seas polluted?

A beast with seven heads isn't a physical monster, it's an international organization.

666 — in earlier languages letters could be substituted for numbers. In modern Hebrew this is the case. Some names are lucky because they are somehow made from lucky numbers. I only know the concept, find a Jewish person to explain it.


And where exactly do you expect Scott to find a Jewish person capable of interpreting theology against the modern world? I doubt he knows anyone like that.


No, which is why I linked to https://www.navalgazing.net/Nuclear-Weapon-Destructiveness in another comment.


>The biggest climate shock of the past 300,000 years is . . . also during Freddie’s lifetime

Can you clarify what shock you're talking about? If you mean the blade of "hockey stick"-type charts, that's the result of plotting some low-variance proxy reconstructions and then suddenly glomming onto the end of the proxy chart a modern temperature-measurement based chart. If you bring the proxies up to date and plot THAT to the current day there's no sudden jump at the end. If there had been big jumps or drops just like the current one in the past we wouldn't know it because they wouldn't show up in the proxy data any more than the current jump does - the way we're generating the older data in our chart has inherent swing-dampening properties.
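A toy illustration of the swing-dampening claim (entirely my construction, with made-up numbers; not a model of any real proxy): a long moving average, standing in for a low-resolution proxy, flattens a sharp century-long spike that direct measurement would capture.

```python
import statistics

# Hypothetical temperature anomaly series, one value per decade.
years = list(range(0, 2000, 10))
temps = [1.5 if 900 <= y < 1000 else 0.0 for y in years]  # a past 100-year spike

window = 20  # ~200-year moving average, mimicking proxy resolution
proxy = [statistics.mean(temps[max(0, i - window):i + 1])
         for i in range(len(temps))]

print(f"true spike peak: {max(temps):.2f}, smoothed proxy peak: {max(proxy):.2f}")
# The proxy records the spike at less than half its true height.
```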


But do you really believe that the current hockey stick is just random variance? Saying that AGW is overhyped and not actually a huge deal is one thing, but denying it altogether is intellectually indefensible these days I'd say.


The hockey stick was debunked as a splicing-together of cherry-picked proxies drawn from a field of proxies with counter-evidence.

But to notice those problems is a career-limiting move, hence people don't do it.


Is that actually true? I mean, I was working through a statistics textbook that had an exercise based on the date of Japanese cherry blossoms blooming since like 1000 AD.

How many proxies have you directly checked to make sure they don't show sharply different behavior in the last hundred years?


And temperature reconstructions are only one line of evidence for AGW. The basic finding that human activities are changing the climate is also supported by validated computer modelling and our understanding of the physical processes in play.

This can all be explored in detail in the relevant IPCC report https://www.ipcc.ch/report/ar6/wg1/


But does one trust the IPCC when they say, “how much”?


Not really. The IPCC is known to discard studies that they consider too extreme. There are legitimate arguments as to why they shouldn't be really trusted, but you can make analogous arguments about every single prediction. It's only the ensemble that should be trusted (with reasonably large error bars).

So far, including some of the more extreme predictions in the ensemble would have improved the accuracy of the IPCC forecasts.

Expand full comment

For policy relevant predictions such as seawater rise or desertification, the extreme predictions have not panned out in the slightest.

Expand full comment

Have you never read the Climategate files?

Currently the global temperature data set contains up to 50% 'estimated data.' There have been several (four, I think) 'adjustments' of the past data, where every adjustment cools the past global temperature.

Expand full comment

This is such a weird way of arguing. Again, which proxies have you actually looked at? Which temperature series? Have you made sure you are actually reading the series correctly?

Given the temperature history of just the last fifty years, which I very much doubt is constantly getting adjusted down, the default would be to think the world is warming. Given the causal logic of the greenhouse effect, the default is to think temperature is likely to be rising because of rising CO2.

Complaining about estimated data without attacking the established core facts of the rising temperature trend in detail seems to me a lot like the people who, with Covid vaccines, talk a whole lot about anecdotal cases of someone in perfect health dying after being vaxxed, but never make a serious effort to explain why Covid shows up very clearly in aggregate mortality statistics while deaths from vaccination do not.

Expand full comment

Comments from Xpym and Jason here seem like they're misunderstanding Glen's point, so I'll make it more explicit.

Glen isn't saying "maybe anthropogenic global warming isn't real".

He's saying "maybe anthropogenic global warming isn't as unprecedented as it looks from the usual hockey-stick graph, not because the head of the stick might not be the shape everyone says it is, but because the _shaft_ of the stick might have big deviations in it that aren't visible to the ways we estimate it."

(I make no comment on 1. how plausible that is or 2. whether making the argument suggests that Glen is _secretly_ some variety of global-warming denier. But he's _not saying_ that we aren't rapidly warming the planet right now. He even talks about "the current jump".)

(Having raised those questions I suppose I _should_ make some comment on them. 1: doesn't seem very plausible to me but I'm not a climate scientist. 2: probably not, but I would be only moderately surprised to find that Glen is opposed to regulations aimed at mitigating anthropogenic climate change. None of that is actually relevant to this discussion, though.)

Expand full comment

On the other hand, this doesn't really change the argument that increasing numbers of people and technological acceleration are making a uniform distribution of events across time a rather silly model for many types of events. And although AGW may not have been a perfect example, it should still be enough to bring the idea into focus (except for people whose brain explodes at the mention of AGW).

Expand full comment

No, I think I understood his point. He essentially implies that there's no good reason to believe that AGW will be substantial enough to eventually result in a visible "hockey stick" in the long-term proxy graphs, which is IMO unreasonable.

Expand full comment

I don't think that's what he meant at all. If we're using the judgment that FDB has been alive for the biggest climate shock in the last 300,000 years, it would be helpful to know if there have been other climate shocks in that time period and to find out whether they were of similar, or greater, size. Glen is saying that the measurements we use would not show such a shock during that timeframe, because the proxies move slower than the actual temperatures (that we are measuring in real time right now). We wouldn't see a jump if it also went back to the baseline later.

Expand full comment
Sep 10·edited Sep 10

I think he’d have to clarify what time period he’s referring to because in terms of assessing whether or not we’re in the midst of a potential “shock” it seems to me that only the last 1000 years is relevant to human civilization.

Also, if past temperatures were more variable than the current consensus there’s an argument that that would mean that climate sensitivity was on the high end and that temperatures will end up even higher for any given magnitude of human-induced forcings (carbon dioxide, methane, land use change, etc).

Expand full comment

FWIW, human influence arguably began when rice paddies started emitting excess methane, which puts it at about 9,000 years ago. OTOH, that was a very small contribution at first.

Expand full comment

20,000 years ago, sea levels were 120m lower than today. Seattle and most of North America were under a mile of ice.

Do the quick cocktail napkin math on sea level rise with 20k years and 120k mm of sea level rise. You quickly find that the simple average is 6mm/yr. But any look at NOAA data for San Francisco, or Battery Park in New York shows 2-3mm/yr for the past 150 years.
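
A minimal sketch of that napkin math in Python, for anyone who wants to check it (the 2.5 mm/yr modern figure is just the middle of the 2-3 mm/yr range cited above):

    # Long-run average sea level rise since the last glacial maximum
    total_rise_mm = 120_000   # ~120 m lower 20k years ago
    elapsed_years = 20_000
    print(total_rise_mm / elapsed_years)  # 6.0 mm/yr simple average

    # vs. roughly what the long tide-gauge records show today
    modern_rate = 2.5  # mm/yr, assumed middle of the cited range
    print((total_rise_mm / elapsed_years) / modern_rate)  # 2.4x the modern rate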

If you think 6mm/yr is less shock than 3mm/yr you're seeing something much different than I.

Expand full comment

What claim are you arguing with?

Specifically, I think worrying about sea level rise mostly involves people worrying that we'll hit a tipping point that causes the big ice caps in greenland and the antarctic to get much smaller. I think what you just wrote actually confirms that rapid sea level rises are a thing that can happen and that is worth worrying about.

Assuming I'm right that you think that worrying about that is deeply wrongheaded, what exactly in what you wrote disagrees with the worry I just described.

Expand full comment

I think the claim being argued with is the claim in the article that the past few decades have seen the "biggest climate shock in the last 300,000 years".

I think this is a very reasonable quibble given the various ice ages and interglacial periods over the past 300,000 years, with swings of ten degrees or more over a fairly short (it's hard to say exactly how short) period.

https://en.wikipedia.org/wiki/Interglacial#/media/File:Ice_Age_Temperature.png

Even if the last fifty years does turn out to have been steeper than the nearly-vertical lines on this plot, it's clearly been much smaller in magnitude, so it's hard to call it the "biggest shock".

Expand full comment

"Even if the last fifty years does turn out to have been steeper"

It is steeper. There's no "if" about it. Taking a low-end estimate of the temperature change since 1974 gives about 0.75 degrees total, or 0.015 degrees per year. The steepest of those "nearly-vertical" lines on the plot still clearly spans multiple thousands of years. If we estimate that the steepest such event involved 10 degrees of warming over 2000 years (which is being VERY generous: my principled estimate based on the graph would be at least 3x that), that's 0.005 degrees per year: only 1/3 of the speed! To credibly claim that it isn't steeper, you'd have to argue that either one of those 10 degree events occurred over a mere ~600 years (meaning the graph is just plain wrong) or that modern, industrial humans are SO BAD at temperature measurement that we got the data for the last 50 years wrong by at least a factor of 3. And again, those are using numbers that are generous to your claim, perhaps unreasonably so.
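
Spelling out that arithmetic, using the same deliberately generous assumptions as above:

    modern_warming_c = 0.75   # deg C since 1974, a low-end estimate
    modern_years = 50
    past_warming_c = 10.0     # assumed size of the steepest glacial transition
    past_years = 2_000        # deliberately generous; the graph suggests longer

    modern_rate = modern_warming_c / modern_years  # 0.015 deg C/yr
    past_rate = past_warming_c / past_years        # 0.005 deg C/yr
    print(modern_rate / past_rate)                 # 3.0 -- modern is ~3x faster

    # How short the past event would have to be to merely match the modern rate:
    print(past_warming_c / modern_rate)            # ~667 years, the "~600" ballpark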

"Even if the last fifty years does turn out to have been steeper than the nearly-vertical lines on this plot, it's clearly been much smaller in magnitude, so it's hard to call it the "biggest shock."

This would be an excellent point if the Earth were not still warming. I suggest checking back from the distant future and re-evaluating the validity of this argument then.

Expand full comment

Climate tipping points are a completely political creation and don't exist in the realm of facts. You can invent a tipping point for anything if you are running with zero facts.

Expand full comment

That's funny, because I remember learning about at least one potential climate tipping point mechanism in an astrophysics class a decade and a half ago. Being an astrophysics class it included no discussion of politics or politics-adjacent topics, and was almost wholly concerned with the physics behind the mechanism. Given that the mechanism in question was a. something that would cause global cooling rather than warming and b. last relevant millions of years ago (if ever), it seems like a pretty strange target for a political fabrication. Who are these shady political operatives sneaking their indecipherable agendas into our physics textbooks? Tell me, I would really like to know.

Expand full comment

That sea level rise did not happen gradually over 20k years, though; the vast majority of it took place between 12k and 10k years ago, at the Pleistocene-Holocene transition.

Expand full comment

Exactly, that was my point. The majority of the change, most of the 120m, came in only about 2,000 years. If you averaged all the change out over 20k years it would be 6mm/yr. OP is saying the current change of 3mm/yr is unprecedented, when the average across the past 20k years is 6mm/yr, and it was several cm per year for some periods.

But 2-3mm/yr is "unprecedented."

Expand full comment

Ah, then I completely misunderstood your post, I apologize.

Expand full comment
Sep 10·edited Sep 10

"gjm" is correct as to what I was saying.

The proxy evidence I'm the most familiar with is tree ring series, eg Briffa's bristlecone pines. The way those proxies work is we identify still-living trees near the treeline which we think are temperature limited - they grow better when it gets warmer - so you can tell from the size of the rings which years were warmer than others. One way to calibrate such a thing is to compare *recent* temperature trends (for which we have good data) to the tree-ring trends at the time you first core the tree and see similar movement and assume it was also similar in the distant past and will continue to be so in the future. The first problem I mentioned is that if you go back and sample that tree again 20 or 30 years later you DON'T see that the relationship stayed similar. Pretty much all the tree series used in Mann's big Nature paper for which we have later data *didn't* maintain the relationship with temperature shown in their calibration period. Climate scientists call this "the divergence problem" - wikipedia's page on it is surprisingly not-terrible and has a good chart:

https://en.wikipedia.org/wiki/Divergence_problem

So the tree record doesn't IN PRACTICE - when you look at it in full - appear to suggest current levels of warmth are unusual much less rising at an "unprecedented" rate.

One possible reason for the divergence is an issue with the underlying theory: the way we use "tree-mometer" series usually implies a *linear* temperature/growth relationship - the warmer it is, the more the tree grows - but a better model of tree growth with respect to temperature would be an upside-down "U" shape. Which is to say: for a tree in a particular circumstance there is some OPTIMAL growing temperature that maximizes growth, and it grows less than that if the local temperature is lower OR HIGHER than optimum. In that world, if there were big positive swings in temperature in the past just like today, they might show up as brief negative movement - it could look to us today like it briefly got *colder* back then.
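
A toy illustration of that aliasing effect, with entirely made-up numbers:

    # Toy model: ring width peaks at an optimal temperature (inverted U),
    # while the reconstruction implicitly assumes "warmer = wider rings"
    def ring_width(temp_c, optimum=12.0):
        # growth peaks at the optimum and falls off on either side (invented curve)
        return max(0.0, 10.0 - 0.5 * (temp_c - optimum) ** 2)

    # Within a modest calibration range, warmer really does mean wider,
    # so a linear temperature/width fit looks fine:
    for t in [8, 9, 10, 11]:
        print(t, ring_width(t))   # 2.0, 5.5, 8.0, 9.5 -- monotonic

    # But a big warm excursion past the optimum gives the SAME width as a
    # cold year, so a linear reading records the spike as cooling:
    print(ring_width(16.0))       # 2.0, identical to the 8-degree year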

Anyway, that's one of a few reasons to think the shaft of the "hockey stick" in many of the usual sorts of reconstructions is artificially smoothed/dampened. I'm not saying it hasn't warmed recently, I am just saying we can't be sure how unusual it is to warm as much as it recently has.

(one of my influences on this is Craig Loehle, a tree scientist. Read his papers (and the responses to them, and his responses...) if you want to dig in further)

Expand full comment

I'd say the biggest climate shock of the past 300,000 years is the recent ice age, which killed off Mastodons, Woolly Mammoths, Sabre-Tooth Cats, Dire Wolves, Short-Faced Bears, Giant Sloths, ... and countless smaller animals.

Expand full comment

That's not remotely an apples-to-apples comparison. The full suite of ecological changes caused by AGW won't be known for centuries. Pointing at the large ecological changes caused by the ice age and insisting that it proves that was a bigger shock than AGW isn't something that can be done in a principled fashion (yet). You can either base your judgment on some criterion that we DO know, like the speed of the temperature change (which I assume is what Scott is doing) or you can reserve judgment. If you're reserving judgment you'll probably need to do so for several centuries at a minimum.

Expand full comment

Well, we pretty much know that sea level rose an average of 6mm/yr from 20k years ago till recently, and 2-3mm/yr from 1855 to today. So yes, we can definitely say that previous rates of warming must have been much greater than today's.

6mm/yr is the simple mean. We know that 20k years ago sea level was 120m lower than today, and most of North America was under a mile or more of ice. If seas were 120k mm lower 20k years ago, the simple cocktail math is 120k mm over 20k years, giving 6mm/yr as the average. We know that's not really how it played out, as there were stable periods and periods where sea level rise was much, much faster. So that level of rapid rise required a much more drastic climate change than the relative stability we see today.

Expand full comment

"So yes, we can definitely say that previous rates of warming must have been much greater than today."

WHAT? That doesn't REMOTELY follow. Why on Earth would you assume that the rate of sea level rise is a simple, linear function of the rate of warming? That's not even in the neighborhood of the ballpark of a reasonable assumption.

Heck, it wouldn't be reasonable even if the sea level rises were produced by the same mechanism, which they ARE NOT. Sea level rise (in populated places) today is a result of melting ice adding water to the ocean: you need a lot of water for a small increase. Sea level rise after the last ice age was partly that (but with much more ice to melt) but also partly post-glacial rebound, which isn't a factor today.

https://en.wikipedia.org/wiki/Post-glacial_rebound

Expand full comment

If I have a cup of water with ice, and the ice melts after 10min, and I measure for the next 5 hours, I'll see a 10mm/hr rise in the first hour, and 0mm/hr for every hour after that.

I'll measure the same even if the room temperature is rising.

You'll want to account for that in your calculations if you want to prove what I think you're trying to prove.

Specifically... what if there's only half as much ice left to melt now, and ice melts at a constant rate per volume?

Expand full comment

“how likely is it that the most important technological advance in history thus far happens during my lifetime?”

Ask a Luddite…..

I did not read the entire essay because I was struck right at the opening with a huge problem: please define “important”. Particularly, define “important” in the context of history, which is a rolling wave on a very big ocean.

And while you're at it, let's include some events not attributable to human agency (meteors, ice ages, volcanic activity, etc.) in the definition of "important" insofar as that word weighs on this discussion.

My odds of something important happening in my lifetime are as good as anyone else’s.

As a matter of fact, I will throw one out; nuclear energy and nuclear WEAPONS were invented in my lifetime.

I propose that nuclear weapons fulfill the same function in the modern world that God used to fill in the “ancient” one. That is important.

EDIT: I lied but not intentionally; nuclear weapons were already around before I was born. My apologies. I can’t think of a single important thing that’s happened in my lifetime so far.

Expand full comment
author

You might want to read the essay.

Expand full comment

Yes, I probably should.

Expand full comment

> So: what are the odds that Harari’s lifespan overlaps with the most #important# period in human history, as he believes, given those numbers?

This is where I got stuck, and they're Freddie's words, not yours. I was reading the essay and kept wondering where I had seen that word, because it hadn't come up and I was several paragraphs in. So I stopped, went back to the beginning, and started rereading the piece. It was in the quote of FdB.

I have to admit, it triggered me.

😆

Expand full comment
author

My impression is that this doesn't matter too much for his point. Even if we have some objective standard for "important" (eg the period with the highest temperature), the argument still applies. And if you tried hard enough, you could come up with some objective metric for whatever Freddie means here.

Expand full comment

I have taken the time to read FdBs essay.

My first thought is that he is a bit reactionary, but more importantly (guilty… I used that word) he fools with time scale to his own purpose: first, how we don't comprehend large time scales properly and so assign importance to a very particular century, when really that is just when the change began to be noticed, in retrospect. I agree with him, if my interpretation is correct.

But then he seems to go hard on people who talk about things twenty or forty or ten years out that they claim to be important; specifically AI. AI is what seems to put a twist in his knickers (I am making an effort to be as indifferent to the sensibilities of a reader as he is); I get the sense he doesn't think it's a big deal, and he focuses on all the mundane, trivial exploitations of AI to date to make this point. But he is whistling past the graveyard, I think.

The transition from spoken to written language as the main method of communication (barring the intimate, I hope) was a huge HUGE transition, with enormous consequences for our development as a species; if you want an objective standard of importance, I offer this millennium-long transition. Every person alive for that 1000 years lived in an important time, because it was a process and everyone played their part. Some made out ok and others not so well.

I think we are at the dawn of something equally transformative now, and I think these two periods are bound to each other by a very fundamental necessity that people must have if they are to remain sane.

That necessity is trust. I trust what you tell me; I trust what is written down; I trust that I am really in touch with another human being. AI presents a real challenge to that simple, necessary thing, just like the development of words on paper did, and adapting to that is going to be transformative.

I do share his contempt for the writer he was taking to task, for what it's worth.

Expand full comment

I quit following that guy recently. He seems like he's stopped thinking about what he posts and just has a few beers and then blasts out word vomit. Which would be OK, except he also has a habit of attempting to dunk on very smart people, and if you're going to do that, you best not miss.

Expand full comment

He's a decent enough writer, and addresses some at least interesting topics. But, yes, he's fallen off a fair bit over the last coupla years. In addition, his pronounced assholery is a turn-off.

Expand full comment

Yeah, it's the combo, assholery plus the slippage. Everyone has good spells and not so good spells. I actually suffer from a sleep disorder that causes me waves of cog fog that can sometimes last a month. So I'm pretty forgiving on mental slumps, but maybe don't @ everyone when you're at a low point.

Expand full comment

That's just the way he is, a more agreeable guy would've not produced stuff like The Good White Man Roster, but then there's also whatever this is.

Expand full comment

He cannot handle being disagreed with. The whole substack is just a cult of personality.

Expand full comment

I cancelled my paid subscription a few months ago but still read his free articles on occasion.

Expand full comment

He's...high-variance, moreso than other writers I follow. The occasional absolutely brilliant transcendental gem, in the same vein as some of Scott's GOAT posts that people still reference years later (it's depressing how some classic SSCs are over a decade old now...), and on niches that I don't find well-served at that weight class usually. The beauty of his prose has made me cry on more than a few occasions! Then there's, uh, pretty bad posts like the one dissected here, which are extra frustrating cause I always feel like he's got the chops to intellectually understand, say, AI, but bounces off for idiosyncratic non-mindkilled reasons. The beefs are likewise pretty good when they're good, skewering is an art. I am getting rather tired of the perpetual dunking on Matt Yglesias though, since as a subscriber to both blogs, it's just painfully obvious sometimes he didn't fully read/understand whatever MY post he's using as a convenient strawman foil. Hopefully this is just a one-off Scott diss and not a sign of starting a similar feud trend...since I haven't cared for Freddie's anti-EA posts either. At some point, chasing those diamond posts isn't worth slumming through more leftist agitprop (it's good to sample opposing views, but Weak Men Are Superweapons, etc), or putting up with the not-much-better-than-The FP commentariat.

Expand full comment

Could you give examples of some of his brilliant/transcendental posts? I'm very unfamiliar with his work

Expand full comment

Not the OP but I've read a lot of FDB.

I wouldn't go so far as to say that it's transcendent but this is one of my favourite posts of his that shows how capable he can be as a sensitive and introspective writer. It's very different from his polemics.

https://freddiedeboer.substack.com/p/losing-it

But his bread-and-butter is the polemic, so here's some of the more effective ones:

https://freddiedeboer.substack.com/p/your-mental-illness-beliefs-are-incoherent

https://freddiedeboer.substack.com/p/i-regret-to-inform-you-that-we-will

Expand full comment

https://freddiedeboer.substack.com/p/please-just-fucking-tell-me-what

I also wouldn't call this transcendent, but it's the best articulation of that particular issue I've seen.

Expand full comment

Freddie writes brilliant posts about mental health, the mental health crisis, boutique mental health claims, mental health meds, etc.

I subscribed to him when he wrote a brilliant piece about how modern western Socialists are perpetually unserious and addicted to dunks, jokes, and memes; where no one is willing to sit down and write The Socialist Federalist Papers describing how to form a good socialist government. Instead, Western Socialists want to play today and do the hard work after they win the revolution—pass me another Molotov Cocktail please—we'll just take the US Constitution and rip out all the private property parts.

Expand full comment

Yeah, when Scott quoted him as saying, " Some people who routinely violate the Temporal Copernican Principle include Harari, Eliezer Yudkowsky, Sam Altman, Francis Fukuyama, Elon Musk, Clay Shirky, Tyler Cowen, Matt Yglesias, Tom Friedman, Scott Alexander, every tech company CEO, Ray Kurzweil, Robin Hanson, and many many more," I was thinking he might want to contemplate the possibility that, if all those people are supposedly making the same mistake, it might not be a mistake at all.

Expand full comment

I had a similar experience with his post a few years back on the war in Ukraine. He basically parroted Putin's line about Russia fearing Nato expansion and the west provoked it etc. I know people from Ukraine. Didn't care for it.

Expand full comment

"The closest that humanity has come to annihilation in the past 300,000 years was probably the Petrov nuclear incident in 1983"

I'm sorry, but this is deeply unserious. Arguments that any nuclear crisis of the last century would have led to "annihilation" depend on an implausible nuclear winter. If you're going to make serious arguments about the possibility of an apocalypse, you can't just wave your hands around and pretend the complete annihilation of humans was ever a possibility during the Cold War. Why should I take Scott seriously about this if he makes such an obvious misstep — and probably an intentional one for the sake of his argument — about something so easy to debunk with a little research?

Expand full comment

There are a million excellent summaries of the nuclear winter debate on the Effective Altruism Forum, for example, and even reading the most pessimistic ones, no honest reader could come away with the impression that a full-on nuclear war would lead to complete human extinction.

Expand full comment

I’m sorry to be tediously literal, but he did say “the closest” to annihilation.

Is there another event you can think of that could have brought us closer than a full-on nuclear war would have done? My initial thought was the genetic bottleneck event, but that's well outside of the 300,000-year span.

Expand full comment

>I’m sorry to be tediously literal, but he did say “the closest” to annihilation.

Yes, but the obvious reading would seem to be that "closest" refers to how close we came to the event happening, not in what percentage of the population would have been killed (after all, the *actual* death toll was negligible at 1 for the Cuban Missile Crisis and 0 for Able Archer 83).

In percentage terms over the whole event, I doubt WWIII in the 60s or 80s would have exceeded the Black Death, though as noted above I can't think of anything else as bad.

Expand full comment

"I doubt WWIII in the 60s or 80s would have exceeded the Black Death, though as noted above I can't think of anything else as bad."

Reminds me of something I was confused about back in middle school, or whenever it was I learned about the Black Death. Reading through the textbook while bored in class, I saw, in a section we didn't cover, a passing mention of the Plague of Justinian, which supposedly killed a comparable percentage of the population of the Byzantine Empire as the Black Death did. It only got mentioned once, as opposed to a whole section on the Black Death.

Never understood why. Today I'd guess it's a historiography thing: people have more to say about the Black Death's role in the grand sweep of history, not just "a lot of people died, the end".

But it makes me wonder how many other mass death events happened over the years that nobody ever thinks about, but if you were going through it, would seem like the most important thing ever.

Expand full comment

There've likely been several. IIUC, some of the blood types are selected for because they (partially) protect against one disease or another, and selected against because of different diseases. Some of that was geographic. (I think typhoid was the disease evolved against in China... but I may have the wrong disease.)

More details at https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7850852/ (which I didn't read, I'm working off memory).

Expand full comment

Freddie and Scott are using terms more loosely here. No one has insisted that "an apocalyptic event" must mean the end of existence for humanity. I think the vast majority of people would agree that a nuclear war would count as apocalyptic without having to strictly define how many people would be wiped out.

Expand full comment
Sep 10·edited Sep 10

If he'd used "apocalypse", I think there'd be less objection. OY is, however, responding to "[humanity's] self-annihilation", which literally means "humanity brought to nothing"; "no survivors".

This is particularly relevant since part of the context of FdB's essay is the anti-AI-X-risk movement, whose thesis is the possibility of humanity's self-annihilation due to building AI. We literally do mean "annihilation" there; there is no recovery from humanity being defeated by hostile AI. It's an apocalypse that cannot be survived and rebuilt from, because the AI doesn't go away and can actively hunt down survivors. "You win or you die; there is no middle ground."

Expand full comment

The previous sentence says "apocalyptic near misses". I think it's common sense what Scott meant here - given he has written many times on existential risk, he is definitely aware that nuclear winter and climate change don't reach that threshold.

Quibbling over words just seems like pure pedantry.

Expand full comment
Sep 10·edited Sep 10

>Quibbling over words just seems like pure pedantry

1. In the X-risk field it's generally considered worth building a stigma about using words for the end of mankind incorrectly, because when we need those words we really need them (note Kamala Harris not appreciating the full scope of "existential" here https://www.politico.eu/article/existential-to-who-us-vp-kamala-harris-urges-focus-on-near-term-ai-risks/).

2. FdB's argument isn't fully answered by sub-existential apocalypses; the Black Death is *not* "the most important period in human history", and WWIII would also (probably) not have been.

Expand full comment

There are two ways to read "near miss" here, which isn't helping the conversation. We can nearly miss the event, which is what happened in 1983, or we can nearly miss an apocalypse because too many people survive for it to be categorized that way.

An event that wouldn't have been an apocalypse even if it had happened, also not happening, doesn't meet my definition of "near miss", because it's too far removed from actuality. If we were trying to determine the frequency of a type of event, we would want actual instances of that event, not times when something similar-ish could theoretically have happened.

Expand full comment

If the AI apocalypse is "only" as bad as a full-scale nuclear war (not impossible - maybe the AI seizes power but it turns out nanotech superweapons aren't feasible and it has to conquer the world the old-fashioned way) then that would still be bad enough to worry about!

Expand full comment

If it does conquer the world the old-fashioned way and is omnicidal, we're still all dead.

I do agree that there are scenarios where AI kills a bunch of people but not everyone, and that these scenarios are more likely if "lol nanotech" isn't on the table, but those scenarios are generally because we win at some cost, not because we lose - this is why I specified "defeated".

Expand full comment

The simplest rebuttal to Freddie’s post is that change is not a stochastic event but a continual process. He makes a good living (as he points out frequently) publishing on a platform that was invented well into his adulthood. Never been sure why he’s so insistent on denying the significance of technological change; it betrays his principal motive as being contrarian.

Expand full comment

The most important technological discovery is the Haber-Bosch process, which allowed us to take the world population from about a billion to its current level and well beyond.

Closely followed by the invisible demon--er, excuse me: microbe--theory of infectious disease.

Expand full comment

If you aren't going to quantify the time period, I'd be traditional and say the control of fire ... though admittedly that may be pre-human. After that I'd list language, though that wasn't exactly a discovery, and also has pre-human roots (but grammatical language is almost definitely human... almost). Next would be agriculture, even though that "discovery" took place over the span of multiple generations, and included such things as fertilizers. The Haber-Bosch process is only an elaboration of the original discovery of fertilizers.

When you're building something, the most important part is the foundation.

Expand full comment

I don't know if you are using the right analysis; it underestimates the transformational nature of certain advances. The difference between natural and synthetic fertilizer is an upper bound on population of ~4 billion (probably more like 2 or 3 in actuality) versus a limit placed by something other than food - at least 10 billion. If you follow the Robin Hanson school of thought, the amount of innovation this created, by creating more people, blows any other technology out of the water. Haber-Bosch technology also allowed the production of synthetic nitric acid, and hence gunpowder and explosives, which was a pretty big deal.

Similarly, the printing press was an elaboration of writing, but it was the difference between only a tiny minority of people laboriously writing by hand and entire societies becoming literate and exchanging ideas. Saying it's just part of writing really doesn't do it justice.

Expand full comment

Yes, but those later advances could not have happened without the earlier ones. So the earlier ones were the more important.

Expand full comment

I understand that technologies require certain other technologies first in order to exist, but I still think assigning greater value on this basis is wrong. Electricity can't happen until people can work metal well enough to make dynamos and transmission wires, but metalworking had far less of an impact on progress than electricity. Or the Chinese guy who invented gunpowder being the most important part of the chain seems wrong. The people who figured out it could be used as a chemical energy source to propel projectiles had far more of an impact (hah) than pretty exploding lights, even though guns and cannons could never exist without the Chinese guy and his fireworks.

Expand full comment

This sounds approximately isomorphic to the "mother of Napoleon" discussion in https://www.astralcodexten.com/p/how-do-we-rate-the-importance-of

Expand full comment

I would amend Haber-Bosch to a nitrogen fixation process in general. There were other successful options available around the same time. In Norway, they developed an electric arc process, but it was only economical because of the abundance of cheap hydroelectric locally. There was also another chemical process - using pyridine? - that was popular in the US for a while. Haber-Bosch was more practical than the other options, but it was not the only one.

Expand full comment

Freddie is basically right.

Sure, the argument that we live in the end times has merit, but it always does. The problem is that people for all time have thought exactly the same thing, always led by a prophetic, self-appointed intelligentsia.

If now is different, a bit of blather about technology and AI is not going to do the trick for me. Deal with the underlying human psychology, at least. Show a bit of self-consciousness, at least.

Expand full comment

That sounds like the "people have been wrong in the past therefore you're wrong now" argument. (This comes up often enough that it really deserves a better name.)

I'm not saying that the end is nigh, I'm just saying that the fact that people have been wrong in the past about the end being nigh isn't a particularly useful fact to bring to the discussion.

Expand full comment

Well, when precisely 100% of people have been wrong that the end is nigh for the past several hundred thousand years, I think it's slightly useful to mention that, if not to treat it as more significant than the precise factors that you think make this time different.

Expand full comment

"Nothing major is going to happen" seems like one of those heuristics that almost always work: https://www.astralcodexten.com/p/heuristics-that-almost-always-work

Expand full comment

I think you're very right to point to that essay, because both it and this one are obviously arguments in an (mostly-AI-)X-risk debate.

Personally, my take has been roughly the same since I read the essay you linked: yes, nothing is ever going to happen, until it does. So, suppose you actually start taking these things seriously, and you hit the one time the heuristic doesn't work, and something significant happens. With the amount of resources you'll have wasted by that point on precautions against disasters that didn't happen - would survival even have been worth it?

Expand full comment

That answer would depend on each individual's preferences, but also on the measures one had taken. I haven't come across any X-risk advocates arguing we should live in survivalist bunkers or take any comparably disruptive step.

Expand full comment

Well, of course it all ends someday. Someday, a few billion years from now, our sun runs out of hydrogen, starts fusing helium, and expands to engulf the inner planets. Every human life will come to an end around 100 years after it begins. Every family group will eventually come to an end; every civilization eventually comes to an end. The average duration for an Earth species is about a million years before they're supplanted by another. This too is the most likely fate of homo sapiens: there will be a new homo species supplanting us.

But all the catastrophic 'world is going to end all at once' scenarios... there's no historical precedent. Does it feel like the world is ending all at once if you are living in, say, Pompeii in AD 79? Yup. Or Rome, or Dresden, or Hiroshima, etc. Local catastrophic ends occur on a regular basis. Globally? I don't think it's very likely.

Expand full comment

I don’t know who is predicting it will specifically end all at once. We seem to have moved the goalposts considerably from Freddie’s claim that nothing at all is even going to change, let alone end.

Expand full comment

Note that, looking at a worldline outside time, all people at all times will always observe that no end has yet occurred - there are no people at times after the end to observe that it has occurred. Thus, regardless of whether an end is in their near future, they will see that 100% of predictions of the end are either incorrect or not tested yet. This reduces the value of that evidence; it always points the same way regardless of the truth, so it's of very little Bayesian value.
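
A toy version of that selection effect, with invented parameters:

    # Two kinds of toy worlds: one ends at century 50, one never ends.
    # What share of past doom predictions has a living observer seen come true?
    def observed_hit_rate(ends_at, century):
        assert ends_at is None or century < ends_at  # no observers afterwards
        return 0.0  # by construction: if you're observing, it hasn't happened

    print(observed_hit_rate(ends_at=None, century=49))  # 0.0 in the safe world
    print(observed_hit_rate(ends_at=50, century=49))    # 0.0 on doomsday's eve

    # The observed record is identical in both worlds, so it carries
    # essentially no information about which world you are in.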

Expand full comment

If the number of humans and/or their average quality of life was trending downward, extrapolating that curve to the point where it hits zero would at least make some kind of sense.

Expand full comment

“Doomsaying dinosaurs have been wrong for 186 million years, so they can’t possibly be right now” says T-Rex, as asteroid ploughs into Earth’s atmosphere

Expand full comment

Yes, but a whole lot of 300,000-year periods fit into 186 million years. For the vast majority of that time, it would have been both accurate and important to say that the end wasn't nigh. The prediction would have been wrong only about 0.16% of the time. That's a really good prediction!
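
Checking that figure:

    window_years = 300_000        # the "end is nigh" horizon used above
    tenure_years = 186_000_000    # the dinosaurs' run in the joke
    print(window_years / tenure_years)  # ~0.0016, i.e. wrong ~0.16% of the time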

Expand full comment

Hang on, are you replying to an ACX post, or your general impression of an argument? The fact that "precisely 100% of people have been wrong" is a mistake in a few different important ways is precisely what Scott's post is *about*.

Expand full comment
Sep 10·edited Sep 10

Not this post, as far as I can tell. This post is about how Freddie's argument is just an anthropic error. But I'm interested in the ways it's actually about anthropocentrism, and the enormous bias that (especially) smart, proselytising people have towards the belief that they live in special times. Their prognostications need to be heavily discounted regardless of the merits, I think, precisely because they are the sort of people who *would* think this way.

Expand full comment

Reread the part before anthropic reasoning is first mentioned: the '~7% of humans alive today' and other comparable measures are framed in terms of expectation over human lifetimes, but are easily recast in terms of 'expectation over the next 50 years' without loss of generality. At that point, it's practically vanilla Bayesian - you don't get near even 100:1 without sabotaging your reference class, and throwing out lines like "precisely 100%" should be setting off alarm bells.
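
A toy version of that recasting, using the round figures usually cited (roughly 117 billion humans ever and 8 billion alive now; both are assumptions on my part, not numbers from the post):

    humans_ever = 117e9   # rough, commonly cited demographic estimate
    alive_now = 8e9
    print(alive_now / humans_ever)  # ~0.068, the "~7% of humans alive today"

    # If "most important era" is a priori uniform over person-lives rather than
    # over calendar years, the prior odds against a living person overlapping
    # it are only about 14:1 -- nowhere near 100:1:
    print((humans_ever - alive_now) / alive_now)  # ~13.6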

The anthropocentrism strikes me as so thoroughly beaten to death as to be trite, but if that was the part that interested me I'd probably go a few posts further upstream.

Expand full comment

Maybe. Then again maybe it's worth checking whether anyone has already mentioned it. If they have, you don't need to.

I think the problem with these sorts of conversations is that contributing to the actual meaningful conversation about the actual meaningful factors requires a lot of background knowledge, whereas the "people have been wrong about this sort of thing in the past" argument can be thrown in by every random passer-by.

The worst is when these random passers-by genuinely think they're being clever, like Freddie de Boer.

Expand full comment

It's not just that people have been wrong in the past, it's that even when they had no "rational reason" to believe that we live in the end times, intelligent people believed it anyway! The reason the psychology behind beliefs like this is important is that smart people are really, *really* good at talking themselves and each other into believing the strangest things - whether or not they're true.

Expand full comment

But isn't this tempered by the number of people who believed that nothing bad would happen—even when there was good reason to think something bad would happen—and then something bad happened? People talk themselves into believing stuff on both sides.

Expand full comment

It does seem notable that human beings have repeatedly built movements around predictions of total apocalypse, rather than around the much more likely prospect of smaller disasters, though really sorting this out would require a lot of empirical research.

Expand full comment

Sure, we're fascinated by how it's all going to end. But note that Freddie (in the original post and in the comments here), when he says these are not special times, isn't just talking about extinction—he's saying that there will hardly be any discernible change in society at all.

Expand full comment
Sep 10·edited Sep 10

I guess the question is how much we should take "people have been wrong about this sort of thing in the past" into account as an argument.

I don't think it's completely worthless as an argument. If you're talking to a hypochondriac who has a history of suspecting that his new vague symptom of the day is a sign of some fatal disease then it might be worth pointing out the fifteen times he's been wrong about this sort of thing in the last six months. But even hypochondriacs die eventually, and this sort of argument shouldn't override seeing a doctor when you're coughing up blood... or rather it shouldn't be interposed into the argument of how likely it is that a particular symptom is worthy of concern.

Expand full comment

Isn't that called "the Outside View", or its extreme application?

Expand full comment

I think it's a variation on "post hoc ergo propter hoc". It's sort of "before this, therefore after this" without any attempt to establish a causal relation. (If you establish a causal relation, then that becomes a valid argument.)

OTOH, it's the kind of prediction that comes up frequently, because it's the kind of thing our minds are tuned to react to. This doesn't make it right or wrong, but one should be suspicious of believing it without specific good reasons. It's a warning of danger, so it tends to be given attention to a greater extent than its raw probability would warrant... but survival says "pay attention to warnings of danger, even if they're a bit unlikely". Unfortunately, given large populations and fast communication, we tend to be deluged by "things our minds are tuned to", and the question of "How much attention should we pay to this one?" doesn't fit the evolved responses... so we tend to tune them out.

TTLX feels "We're swamped by warnings without valid reasons behind them, so just ignore entire categories of claims". This will work almost all of the time. Almost. But if there *is* a valid causal mapping between the state and the prediction, this could be a bad mistake.

Statistical arguments aren't going to help here.

Expand full comment

I remember when some religious group put up billboards around Southern California saying "THE WORLD WILL END ON MAY 4" (or whatever the date was). I remember being fascinated by how much I perked up at that and was aware of the date when it came, even though I knew the prediction had zero basis. I wasn't scared at all, but…interested, attuned.

Expand full comment

>The problem is people for all time have thought exactly the same thing.

Have they? Or are you thinking of 3-5 catchy anecdotes from across human history?

Expand full comment
author

If only someone had written an essay demonstrating that this line of reasoning was false.

Expand full comment

If you consider the probability of a lot of these larger tail events (singularity, tech advances) a function of both the scaffolding of accumulated knowledge (first I need a complex enough chip that could encapsulate a human brain, etc.) and human coordination among highly skilled practitioners of a field of study, then you would expect a massive spike around the time when humans first build the first near-real-time planetary knowledge and coordination network, i.e. the internet. The slope of the line at this time would likely be discontinuous, as humans now have the ability to work together at an unprecedented rate; the probability might cap off after a generation or two upon reaching the pinnacle of human ability to coordinate.

Any human living at that time (i.e. now) should see themselves living in an unprecedented time where the possibilities are fuzzy, as the new coordination engines of humanity are still being accelerated to full throttle, and we don't yet understand what progress or harm is possible in a generation with the engines at full throttle.

So of course we feel like we live in a time where these black swans are increasingly possible. The future is more murky and volatile and hopeful and doomed than it has ever been… it all just depends on the acceleration and velocity vectors of the coordination engines.

Expand full comment

I was going to say something similar, but I see the slope of the line noticeably changing a bit earlier.

There are definitely a couple of points in human history where advancement increases significantly. While there's a correlation with changes in GDP, I'm not sure the causation doesn't run the other way. And I think you're right that those points are correlated with coordination between people.

Record keeping is one. While written records are the most obvious form this takes, it also includes things like using structures like Stonehenge to record and predict astronomical events. Getting technological progress to the point where there's sufficient excess production capacity to allow individuals to specialize in research and engineering development (and in similar knowledge-expansion professions, like teaching) is another.

I think the final one is getting society to the point where you can bring together researchers and engineers working on projects that individuals couldn't handle. There's definitely a major discontinuity between major advancements coming from individuals working on their own (the Wright Brothers putting together the first practical aircraft in a home workshop) and advancements coming from massive groups working on specific projects (the Apollo program, 400,000 people at its peak). This sort of coordination may have been responsible for, and a necessary prerequisite of, the internet. The first example of its kind I can see on a technological level is the Manhattan Project (though you could argue the V-2), but you see society-level industrial coordination stepping up through at least World War I. Am I missing any obvious examples?

Expand full comment

Arguably, this was started slightly earlier, with Edison's lab:

https://www.nps.gov/articles/000/the-invention-factory-thomas-edison-s-laboratories.htm

Expand full comment

Very good example.

There's obviously a transition period from 'scientific and technological advancement is done by individual discoveries' to 'scientific advancement is done by large coordinated groups' and the transition is relatively recent historically.

Expand full comment

Many Thanks! And I agree about the transition and its timing.

Expand full comment

The internet does not function as a planetary knowledge and coordination network. Most compiled knowledge on the internet is behind paywalls.

Expand full comment

For technological apocalypse probability, I think the appropriate scale factor is "total energy the human species is able to manipulate".

If there were 100 billion humans but they hadn't had an industrial revolution (eg had very little access to chemical energy), it's still going to be very difficult to move enough energy to cause an apocalypse. Meanwhile access to nuclear energy means we *could* scour the earth (though as other comments note we'd still probably have to be trying).

If we ever get, say, matter/anti matter reactors going at scale, I'll figure the apocalypse is imminent.

You can make a case for scale of information manipulation posing similar risks, obviously, but I don't think it's as face-value clear.

Expand full comment

In some models the antimatter reactor scenario has orders of magnitude more information processing than we have now.

Expand full comment

I think a better scale factor ties into how much power each individual person can bring to bear, but it's harder to measure. Much like GDP, it's also a good offhand measure for the chance of a singularity. Anything that significantly increases an individual's ability to exercise power is very likely to generate significant social change.

Expand full comment

Good post! Though I think we can show quite demonstrably that anthropic reasoning will force you either to abandon Bayesian updating in extremely egregious ways, to think that Adam can get telekinesis, or to think that you can be certain that the universe is infinite. https://benthams.substack.com/p/explaining-my-paper-arguing-for-the?utm_source=activity_item

(From there, the route to a bigger infinite and then God is fairly straightforward).

Expand full comment