638 Comments

"Mass shooting" is just so not the liberal rationalist style! The observed tail is "limited stabbing"


"Does it matter if COVID was a lab leak?"

Not terribly relevant to the point of your post, but what *I* found most interesting about the Lab Leak hypothesis was how vehement a lot of people were (very early on) about eliminating it as a possibility. Clearly, the chances of this being the case were greater than 0%. And the location of the outbreak being near the lab wasn't exactly evidence AGAINST a lab leak. Still ...

What I find interesting about the Harvey Weinstein affair is that the response from Hollywood seems to be, "Yes, *everyone* knew about it. And it was bad." And my question is, "So who is doing this NOW such that in 20 - 30 years we will be hearing (again), yes, everyone knew it ...?" We'll have to wait 20 - 30 years, though.

About SBF ... high trust groups (e.g. Mormons) are known to be at higher risk of this sort of thing than low trust groups. Because obviously ... It happens.


At first I thought "of course it matters whether Covid was a lab leak! The media has been confidently insisting that it isn't, and this factors into whether you should trust them".

Then I realised the same argument applies to that: the media has been confidently wrong about many things, so also being confidently wrong about this isn't much of a surprise.


This wasn't the first decade of observations. (Nor the first (possibly) lab-leak-caused pandemic; that was the 1977 Russian flu.)


y'all I'm not going to update my prior that scott is a great writer just because when he has newborn twins he publishes some meandering thing full of reheated Taleb. ;)

Scott, prioritise mate. You don't need to feed the substack maw right now.


> This is part of why I think we should be much more worried about a nuclear attack by terrorists.

Agreed, but we should worry even more if nukes were known to be accessible.

We did worry about nukes post 9/11 - it was both implied and stated openly. Remember the WMD claims and Iraq. Iraq had nothing to do with 9/11, of course, but the general population was, I think, genuinely concerned that if something like 9/11 could happen, then an even bigger attack could happen.

A bigger attack could have been one with weapons of mass destruction of course, and even though Saddam had nothing to do with any of this, and even though the neo-conservatives were using the fear post 9/11 to start a war that they had planned anyway, it is somewhat understandable that people’s priors about terrorists using WMD had changed.


"You can think of this as a common knowledge problem. Everyone knew that there were sexual abusers in Hollywood. Maybe everyone even knew that everyone knew this. But everyone didn’t know that everyone knew that everyone knew […] that everyone knew, until the Weinstein allegations made it common knowledge."

My SWAG is that everyone knew that everyone knew that everyone knew that there was a generalized problem, but not everyone knew specifics.

The Casting Couch is not exactly a secret, but it's a jump to go from "directors have been known to use unorthodox methods to audition starlets" to "Director X told Starlet Y on this date in that place the specifics of how she could best secure her big break."


"Harvey Weinstein abusing people in Hollywood didn’t cause / shouldn’t have caused much of an epistemic update. All the insiders knew..."

But to non-insiders this was a huge moment - for them, it was an example of #5 in your list of exceptions, teaching them something horrifying and hitherto unknown about the world. A COVID lab leak is the same way; (if true) it takes a bunch of people who *haven't* formed models about how common lab leaks are (or whose models are just "scientists are smart people and wouldn't be that dumb") and instigates a truly massive update.

This isn't a point of disagreement with your post, probably just a question of emphasis instead. There are a lot of people ignorant about any given risk/phenomenon, nobody can know everything. It makes sense that there are lots of dramatic updates going on after dramatic events, and this needn't be (especially) irrational; and the headline of this article makes sense only within domains where you already have considered opinions.


The argument seems to assume my priors are more or less accurate. But that seems unlikely for events that weren't salient enough for me to think about.

Re the Covid lab leak specifically, I'm not updating (much) on the likelihood of a lab leak, since I think I had a good enough handle on that before. But I am updating substantially on the probability that the gov't and official health authorities will lie to me. The same problem arises at that level. Was I previously grossly underestimating the probability that the authorities would lie to me? Or have I overcorrected from the Covid lies? It is possible that I underestimated the probability before and am overestimating it now. On the other hand, the dramatic events made something salient that was not salient before. Maybe that has caused me to think carefully about it now, and my current estimate is more accurate than previously.

So I expect that sometimes it is wrong to update substantially in light of dramatic events, but I doubt it's always wrong. And given that, ex post, I've tried to think carefully about it and make my estimates as accurate as possible, I don't think it makes sense for me to discount my current estimate to account for the possibility that I'm overcorrecting.


"I don’t entirely accept this argument - I think whether or not it was a lab leak matters in order to convince stupid people, who don’t know how to use probabilities and don’t believe anything can go wrong until it’s gone wrong before."

Objection! Beware the other kind of stupid people who don't know how to use probabilities and believe every dramatic thing that happened once will happen again and kill them unless it has exactly a 0% chance of happening. For they will turn your argument from drama against you and demand infinite safety requirements for all nice things and then we can't have nice things like nuclear power.


Phenomenal post! FWIW I agreed with SBF tweeting through it.


Also, before everyone condemns some community for handling it badly, please give an example of a community that handled it well.


The most crucial aspect of the lab leak hypothesis is the aggressive censorship it faced, with people who entertained it being labeled conspiracy theorists. It was evident right from the start that something suspicious was happening.

Secondly, all bioweapons labs should be closed. This should not be a difficult task. Take Germany as an example, which successfully shut down all its nuclear plants, albeit for more or less wrong reasons.


One thing that's missing from this is that dramatic events usually have a great deal more evidential supply than less dramatic ones. For nearly everyone, you don't have close to first hand reports of something like historical lab leaks, you have to rely at best on one or two scientific studies of average quality, for which you probably didn't read the methodology section. And for most categories of event, you haven't even read those reports, you're going by your impression using the availability heuristic. For dramatic events, you have evidence from all directions which you can cross check. So it is a suitable time to evaluate whether the evidential underpinnings of your previous impression are fit for purpose. Of course, in a lot of cases a lot of the evidence is still third hand, but there is still so much more of it that you can get a clearer picture of this particular event.


>>the model airplane building community,<<

LOL!


I agree, I put it similarly here:

"When we’re inferring general patterns from individual events, we put too much weight on events of personal, moral, or political significance.

We focus too much on whether we were harmed by some risky behaviour, relative to what’s happened to other people. Obviously on average there’s nothing special about yourself.

And we put too much evidentiary weight on what happens in large countries, such as the US, relative to small ones. We can learn a lot from events in small countries, even though their practical importance is smaller.

Likewise, we over-generalise from events of great historical significance, like World War II, whereas we neglect less significant events."

https://stefanschubert.substack.com/p/evidence-and-meaning

Robin Hanson also has a post on this issue but I can't find it now.


Your intuition for how much to update your estimates based on samples of power-law distributions might be poor.


> Does it matter if COVID was a lab leak?

My take: no, it's entirely irrelevant, because whether or not it leaked from a lab, it did indisputably, uncontroversially, get deliberately exported from a nation.

We know when China locked down Wuhan to internal travel, and we know when China shut down international air travel out of Wuhan, which was significantly later. That makes the pandemic a bioweapon attack. Whether or not it was developed in a bioweapon lab doesn't matter; a thing is a weapon *if it is used as one,* and China knowingly sending infected people to other countries absolutely counts as using Covid as a weapon.


Depending how you're modeling your distributions, it is often completely correct to drastically overhaul your model in the face of dramatic events. Sure, changing an estimate from 1,000 to 1,001 generally does not require that you update your mean or standard deviation in any way whatsoever. But an update from 0 to 1 can and in many cases should result in a massive update of your model, because if a seven-sigma event happens, your model needs to be thrown into the trashcan.

One instance of something coming to light should often not be updated on as one instance of something occurring. If I believe that the amount of daily shoplifting in a big box store is $200, because there's an inventory check at the end of the day that yields an average $200 discrepancy, and I notice one day that one shoplifter has managed to evade this system and steal $500 of product in one go undetected, I need a massive update to my estimates: there could be anywhere from $200 to tens of thousands of dollars of product being stolen every day.

Any instance of sexual harassment coming to light can be an update of any number of sigmas. Consider the model airplane community, consisting of 100 chapters of 100 members each. If I learn that one, or two, or three people in the model airplane community were persistently making unwanted advances on co-community members, that's not something that needs updating on, because we know this kind of thing happens everywhere all the time. But if I learn that one, or two, or three chapter leaders engaged in (presumably) rarer forms of abuse, like drugging members, assaulting them, then blackmailing them into recruiting more victims, and every time someone tried to blow the whistle it was somehow covered up, that's the kind of update that does in fact require taking a second look at the model airplane community, because it implicates a lot more than "three instances of sexual harassment occurred."
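A minimal numeric sketch of the sigma-check idea above (toy numbers of my own, not the commenter's): under the old model, a single observation this extreme is essentially impossible, so the model itself, not just the estimate, has to go.

```python
from scipy.stats import norm

# Toy model: daily shrinkage believed to be N(mean=200, sd=15), per the inventory checks.
mean, sd = 200.0, 15.0

# A single undetected $500 theft is a (500 - 200) / 15 = 20-sigma observation.
z = (500.0 - mean) / sd
print(z)           # 20.0
print(norm.sf(z))  # ~2.8e-89: effectively impossible under the old model,
                   # so the model itself (not just the mean) needs replacing.
```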


I think a big thing about "learning from dramatic events" is that everybody else is learning too, which can either dramatically increase, or dramatically decrease, the odds of a similar event happening again, depending on the specifics.


I think that this article could benefit from more discussion of correlated events. Yes, if one extreme event occurs every t years, independently of other events, then we shouldn't worry too much. But extreme, newsworthy events can actually change the covariance of subsequent data and potentially increase their likelihood. Perhaps the possibility of correlated events, marginalized to only include your most disliked subgroups, is still very low and your point still stands.


Scott, what was your prior estimate on, "expected number of billion-dollar frauds motivated by effective altruism per decade"? If you think a better measure would be, "expected fraction of EA-funds acquired fraudulently," you can express it that way.

I feel like there is a bit of sleight of hand here, where you use math to show how updating a lot on dramatic events is usually bad, but then you don't actually use this math to show why you shouldn't update that much on the specific event you are feeling defensive about.


The WHO report on Covid origins said that just before the first reported case, WIV put all their samples in a van and moved them down the road to another building (as part of a planned move). As they say, a leak during such an event when usual containment is disrupted is more likely. The timing is a remarkable coincidence if you don't think it's causal.

I don't understand why banning "gain of function" research would be "going mad" or overreacting. Even on paper and before the first leak it sounds like an idea so deeply terrible and silly that my working assumption is that it's a way for people to do and fund biological weapons research in plain sight without being ostracized for doing so. What is the expected advantage and what's the reasonable rationale for this expectation?


> A few months ago, there was a mass shooting by a far-left transgender person who apparently had a grudge against a Christian school. The Right made a big deal about how this proves the Left is violent. I don’t begrudge them this; the Left does the same every time a right-winger does something like this. But I didn’t update at all. It was always obvious that far-left transgender violence was possible (just as far-right anti-transgender violence is possible). My distribution included a term for something like this probably happening once every few years. When it happened, I just thought “Yeah, that more or less matches my distribution” and ignored it.

I know it's not "the point" of the post, but since you're talking about distributions, the one you imply having here (pretty much no difference between right- and left-wing terror) is wrong, for a few reasons.

1. There are 50-100x as many right-wing people as transgender people (~50% of the country is Republican, i.e. right-wing, and <1% are transgender). Naively, we should expect 50-100x as many mass shooting events where the perp is right-wing vs. trans. Even if you limit "right-wing attacks" to specifically "motivated by right-wing ideology", you're still looking at a large delta between the incidence of far-right extremists and trans people.

2. According to the US government (one source from the DOJ, https://www.ojp.gov/ncjrs/virtual-library/abstracts/comparative-analysis-violent-left-and-right-wing-extremist-groups, you can find others from e.g. the FBI, etc.), right-wing terrorism is a much more significant threat to the US than left-wing terror.

3. In point of fact, there is almost no left-wing terror/mass shootings in the US, whereas right-wing terror attacks are commonplace and rising. Heck, there was a ~~bloody~~ god-damned coup attempt just a few years ago where thousands of right-wing extremists stormed the capitol to overthrow the democratic rule of law... We are in a moment where the violent right is ascendant, and the right in general is getting more violent and less patient with democratic problem-solving.

Edit to strike out "bloody", as it was confusing. I meant "bloody coup" as in "a god-damned coup", just using bloody as an intensifier, but can see how that was confusing given that "bloody coup" means "a coup where many died".


> Before 9-11, we might have investigated the frequency of terrorist attacks. We would have noticed small attacks once every few years, large attacks every decade or so, etc. Then we would have fit it to a power law (it’s always a power law) and predicted a distribution

I disagree with this example; I think that the model we had of terrorism prior to 9/11 was significantly different. Typical pre-9/11 terrorism involved making plausible political demands and using limited quantities of violence to terrorise the civilian population into accepting them. This was the terrorism of the PLO, or the IRA, or the ANC, and there was a rational (if amoral) political calculus behind it.

9/11 was sampled from a different distribution. We called it "terrorism" but it didn't really match that modus operandi, there was no rational political calculus going on (at least not one that we could understand), there were no specific political demands, nobody to negotiate with on those demands, and no sign of restraint. This new form of terrorism seemed to be simply aimed at killing as many people as possible.


"It was that, after it happened, we were so surprised that we flung ourselves to the opposite extreme and saw terrorists behind every tree and around every corner. Then we made the opposite kind of failure (believed Saddam was hatching terrorist plots, and invading Iraq)."

the logic here seems to be that launching the war on terrorism was an overreaction because during and after there wasn't much terrorism?

The issue is that it's possible the war on terror decreased terrorism, both through greater security and government powers, and removing important bases in Iraq and Afghanistan as well as putting sanctions on terrorist organisations around the world.

I don't know the relevant timelines, so I don't know how likely this is, but it really seems like since the US retreated from the war on terror, reduced sanctions on Iran, Houthis, and presumably others, and retreated from Iraq and Afghanistan, there DOES seem to have been a big uptick in global terrorism. So maybe the response to 9-11 did something?

I recommend the book Days of Rage about terrorism in the US in the 70s. If I recall correctly in 1976 there was an average of three attacks in the US each day.

In the three months following October 7th, I think we've really seen there is no shortage of terror supporters in the US or around the world.


My objection to treating terrorist attacks as randomly distributed along a power law curve is that the big attacks are not someone randomly deciding to take action. They're sponsored by organizations, which are funded by rich people and protected by state governments. Those actors will look at a once-in-fifty years bodycount and think "That worked really well, we should do some more of that!"

The counter is for the anti-terrorist side to take action. Enough action that the next time someone has a bright idea for killing thousands of Americans or more, the response will be "Shut up! Do you want the Americans hanging around for twenty years making our children attend immoral schools?"


Let's assume that this isn't preaching to the choir... are you sure that even normal people are typically updating based on the occurrence of a dramatic event, and not simply updating on the reaction to it?

A dramatic event is often latched onto by people who stand to gain by making it seem high probability or impact, and the fact that they create drama around it (regardless of the specific logic they put forth) is signal too. If those people are (or are signal-boosted by) people close to your usual sources of world truth, it makes sense to update your model of world truth.


It's often not a power law! https://arxiv.org/abs/0706.1062


Does this essay's commentary on terrorist attacks begin from the premise that they are... independent events? That seems absurd: it's perfectly reasonable to assume a massive terrorist attack being successful might change the frequency at which terrorists attempt massive attacks! If the goal of terrorists is to create chaos and terror in enemy society, they will try to replicate the tactics which successfully do so.


I strongly agree with the overall point of this article (expressed around the SBF vs. Altman episodes), but not sure about this:

"But terrorist attacks after 9-11 mostly followed the same pattern as before 9-11: every few years, someone set off a bomb and killed some people, at about the same rate as always.... In retrospect, updating any of our beliefs - about Islam, about the extent of the terrorist threat, about geopolitical reality, based on 9-11, was probably a mistake"

I think the argument is that part of the post-9/11 paradigm was the government instituting a ton of security protocols, and collaborating with other friendly nations to do the same. So the rate of terrorist attacks after 9/11 probably reflects a heightened security environment. I disagree with the idea that they're randomly distributed and we just happened to hit a big one in the early aughts. There are a lot of foiled attacks.

Also, there are a lot of *successful* terrorist attacks in Europe by Islamists. It seems to me the rate increased from before the 2000s? The Spanish train bombings were not that long after.


I. Lab leaks. Another option is recognizing the pandemic for the wakeup call that it was and asking: do we want to continue with the (20%? 2%? 0.2%?) annual risk of repeating a pandemic due to a lab leak? Arguing for continued gain-of-function study is the ultimate luxury belief.

II and III. 9-11, nuking of a single city, and mass shootings (don't worry folks, we predicted something like this would happen, that's just life in the big city). Some people (enough to affect national discourse and some elections) demand political responses to certain events. "We're not changing our response, not updating our priors, and continuing to be guided by Bayesian math," may not be a winning political response, which could lead to being forced to update priors after the next election. Maybe take out Bin Laden and his lieutenants but not invade Iraq.


There are several varieties of this, and in some cases I agree with your point more than in others.

1. Exceptional events, perhaps first of their class. In these cases, an occurrence typically provides a non-negligible update on the frequency of the event (even if the correct update may be smaller than many people think).

  1.1 Exceptional events with major consequences. E.g. a lab-leak-caused pandemic that kills millions of people, or a nuclear terrorist attack. Policy changes may be warranted.

  1.2 Exceptional events where the consequences are minor on a world/national scale. E.g. 9/11.

The US response to 9/11 wasn't an overreaction because it wasn't an exceptional event that should cause one to update non-negligibly, but because a few thousand people dying in a country of hundreds of millions is, as sad as it is, too little to warrant major policy changes, even if similar attacks were to happen slightly more frequently than we'd thought.

(In the particular case of 9/11 there is the further consideration that air crews and passengers having learned never to cooperate with hijackers is enough to prevent attacks of this form from being repeated. OTOH that was an important lesson to have learned, with the side benefit of disincentivizing more traditional, hostage-taking hijackings.)

2. Non-exceptional events (say, at least a few dozen have happened already). In these cases, you should update negligibly based on a new occurrence.

  2.1 Events infrequent enough that you're likely to hear about all of them: e.g. mass shootings, air crashes.

  2.2 Events (e.g. assaults, instances of sexual harassment, even homicide) more frequent than audiences have the appetite to read about. In these cases, how many you hear about in the media is almost entirely uncorrelated with the actual number of occurrences, and depends entirely on how many stories can still keep up the readers' interest; and which ones you hear about depends on which ones the media decides to talk about (either deliberately, or through a random chain of events).


One problem with the cynical strategy of using crises as organizational opportunities is the strong tendency they have toward becoming left/right coded. Once you've polarized on an issue - or leveraged polarization on that issue - you risk enacting reforms that are too targeted.

An example: Sex scandals involving allegations of the Catholic church covering up abuse have not been in the popular news recently. (At least not the news I consume.) When they were, I remember thinking, "Wow, that's bad. I'm shocked at the reports I'm reading!" I updated some of my understanding of the inner workings of the Catholic church, and thought there must be something uniquely wrong happening there.

Recently, I started looking at other statistics about sexual abuse of minors. It turns out this is very common in public schools as well, including reports that some school districts cover up allegations by transferring teachers, hiding the allegations, and allowing them to continue interacting with students.

As I was reading about this, I reflected on my previous updates about the Catholic church. I thought, "Maybe this phenomenon has nothing to do with religion or with one specific institution gone astray. Maybe when something sufficiently negative threatens to make an institution look REALLY bad, the people in charge respond by trying to hide the abuse." I don't claim to understand what these people are thinking at this point, but it doesn't seem like you need a particularly corrupt/captured institution for cycles of abuse to be hidden by the bureaucracy. Or at least, this isn't the kind of institution that is as highly abnormal/unexpected as I'd thought on reading the initial reports.

I also think that the impetus for abuse of minors by people in positions of authority is much higher than I used to think it was. Given these two updates, I no longer think a solution that specifically targets the Catholic church would do much to curb this phenomenon. Indeed, it might get in the way of good general reform to make a hyper-specific reform targeted at Catholics that does nothing to address the general problem rooted in human nature.

If, instead, we looked at general institutional incentives and tried to shift them to the point where any institution that failed to report abuse suffers, while those that proactively report abuse are rewarded as being forward-thinking and honest brokers (because they're on the lookout for a behavior we expect to find under normal circumstances), we might be able to make meaningful changes. Indeed, we might find abuse that has previously been successfully covered up, allowing us to stop it.

The problem with large updates on single phenomena is that the hyperfocus of the moment can cause us to make changes that are far too specific to address the root problem we care about solving.


I bet if Jeff Epstein was two decades younger he'd have given a lot of money to EA and there'd have been an entirely unnecessary massive controversy when his extracurricular activities got revealed.


I assume there is a theory for the evolutionary origins of salience bias? If not, maybe it goes something like this:

It's always a power law. All power laws are scaled versions of the others. Hence if one considers the worst event a of type A one has ever observed vs. the worst event b of type B, and b is worse than a, then, everything equal, one has reason to believe that the power law for B is a scaled-up version of the power law for A. For example, I am guessing the worst individual predator attack you have ever heard of killed no more than three people. The worst earthquake you have ever heard of, on the other hand...
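A precise version of the "scaled versions" intuition, for what it's worth (standard scale invariance; strictly it holds between power laws sharing an exponent alpha):

```latex
% Rescaling the variable changes only the prefactor, never the shape.
p(x) = C\,x^{-\alpha}
\quad\Longrightarrow\quad
p(cx) = C\,(cx)^{-\alpha} = c^{-\alpha}\,p(x)
```

So a power law looks the same at every scale, which is what lets the worst observed event of one type calibrate expectations about another.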

"Everything equal" does a lot of work here, specifically the amount of observation done should be equal. But this used to be the case throughout all of prehistory. History, by definition, started when a written record enabled us to preserve certain events for much greater periods of time. And these are realiably the most salient events. Hence we have a written record of the destruction of Pompeji by the eruption of Mount Vesuvius but not one of some random lion attack (unless it killed a very important person). In prehistory, people had their own observations and what other people told them about which went back maybe three generations before fading into the domain of myth. So in terms of orders of magnitude, all worst events one had observed or heard about could be considered about equally likely.

Now, usually precautions against the worst ever observed event of a given type will also work against lesser events of that type. E.g. sleeping around a fire at night will deter lions but also leopards, hyenas, wild dogs, ...; keeping a safety distance of 50km around a big volcano will also keep one safe from smaller volcanos; and so forth.

So if one scales one's precautions according to the inferred scaling factor deltas of the known worst events, one should assign approximately the right amount of resources to each.

In other words, if during my lifetime I have seen a lion kill a family member and I have also seen a volcano wipe out an entire tribe, it appears rational that I view volcanos as the greater threat and take much greater precautions against them. Here I may misallocate, but any such misallocation is short term. My great-great-grandchildren will already have forgotten about the volcano but they will still be aware of the more frequently occurring lion attacks and have reallocated their resources correspondingly.


Viruses evolve and gain function in nature. What happens in a BSL4 could just as easily have happened in a bat cave. That's why I don't think it matters whether COVID was a lab leak. Restricting virology research does not materially reduce the probability of dangerous viruses existing or entering the human population. It would limit our ability to understand and respond to the situation when a naturally evolved virus does circulate.


> over learning

... only follows if people changed their minds; people thought intentionally getting animals sick to breed better viruses was stupid before Fauci killed more people than several wars.

There was an active debate for a decade before 2020 about whether "gain of function" research should be banned. There was no comparable debate about whether shoe bombs were risks before 9/11 and the TSA.

> math, assume this and such numbers, and you get such and such result

Making viruses stronger is bioterrorism research; he was actively avoiding a law and, what's more, helping China learn a new type of weapon of mass destruction. If you accept the facts that Fauci moved money to the Wuhan institute to breed stronger viruses, and that people were very, very concerned and had stopped this research for fears of this exact outcome, I don't understand a way to read the situation that isn't profoundly stupid, treasonous, or psychopathic.

It's unlikely we'll ever know which, but why not take a common denominator of "bad"?


>But if you would freak out and ban gain-of-function research at a 27.5%-per-decade chance of it causing a pandemic per decade, you should probably still freak out at a 19-20%-per-decade chance. So it doesn’t matter very much whether COVID was a lab leak or not.

Notably, banning gain of function research wouldn't prevent SARS-CoV-2 leaking from a lab. According to the most plausible version of the lab-leak theory, SARS-CoV-2 was a natural virus from a cave, that researchers brought back to Wuhan, from which it subsequently escaped.


You can update on individual events, as you discuss, but you can also update on models, along the lines of Popper.

One might have a model that "Gain-of-function research has arisen for reasons of institutional benefit. It has very little benefit for the actual science of virology, and the risk of lab leaks is quite high on a recurring basis." A confirmation that COVID was a lab leak might still produce only a small increase in belief in that model, but the practical import of the increase might be very high.


It could be that the update coming from SBF and OpenAI is not the carefully formulated list of fine-grained beliefs that supposedly point in the opposite directions, but rather an increase in the single belief that "EAs aren't very good at reasoning about or acting on institutions."


Hard cases make bad law.


Every time rationalists or EAs complain about PR stuff like this, I think of https://xkcd.com/1112/. If you understand the game well enough to critique it, what's stopping you from winning?


Policy people say, "Never let a good crisis go to waste."


For a robust Bayesian analysis of the lab leak hypothesis, see physicist Michael Weissman's post: https://michaelweissman.substack.com/p/an-inconvenient-probability-v40


Scott, Scott, Scott…

This is annoying in exactly the way you are most frequently annoying…so right in theory, so clueless in practice.

Despite your awareness of exactly this problem, you and so many commenters are STILL being a bunch of stupid quokkas who allow your basic factual background beliefs to be manipulated by psychopaths using the specific tool most effective against YOUR community of quokkas, namely taking advantage of a deep aversion to anything "right-coded" (an aversion which was itself cultivated in you by similar psychopaths) to get you to not look at places you ought to be looking to get the necessary perspective.

It WAS a lab leak and NO ONE with

1) common sense who is

2) not a motivated reasoner and

3) understands molecular biology at the level of someone with a bachelors degree in the subject

thinks it was of natural origin, because of the three (3) smoking guns “human-infection-optimized furin cleavage site”, “Daszak and Fauci and Baric confirmed lies about funding gain of function research in Wuhan specifically including adding furin cleavage sites”, “Coincidence of Wuhan location”.

The reason it’s IMPORTANT is not anything to do with “updating on the likelihood of pandemics and therefore changing policy on gain of function research”, the reason it is IMPORTANT is the REVELATION that our public health establishment and our government in general are run by psychopaths who would *purposely HINDER response to a pandemic by hiding everything they knew about the germ that caused it*.

Quokkas need to update VERY VERY HARD on that.


I'm a bit worried that "It doesn't matter if COVID was a lab leak" is going to be read as "It's not useful to talk about whether COVID was a lab leak".

I agree that the object-level question doesn't matter, but it's useful to talk about WIV doing gain-of-function research in a BSL-2 lab. My priors for lab leaks in general were lower than Scott's because I assumed virology labs doing anything remotely risky would be similar to the BSL-4 lab near where I live. Even if COVID hadn't happened at all I'd still update on the knowledge that people study animal respiratory diseases in a lab that doesn't require respirators.


>It’s hard to define “mass shooting” in an intuitive way, but by any standard there have been over a hundred by now. You can just look at the list and count how many were by Your Side vs. The Other Side.

The source you reference is, in my opinion, itself biased in order to push a certain narrative. Per their website, they exclude shootings motivated by armed robbery and gang violence, and as a result end up with a list that retains a left-wing "hard on far right extremism" outlook but ignores support for a right-wing "hard on crime" perspective.


That's not how Bayesian updating works. You're not supposed to have a single number for how many lab leaks you expect to happen in a decade (the parameter lambda for the Poisson distribution). If you only had a point estimate, you wouldn't be able to update it at all. You're supposed to have an entire probability distribution that is your credence for the value of lambda. Then you update your probability distribution for lambda. Doing the actual maths may require statistical software, but the point is that a single event can have a very large effect on your posterior if you start with a wide enough prior (which you probably should). There is a big difference between observing 0 lab-leak pandemics in 20 years and observing 1 lab leak in 20 years in terms of what your posterior will end up looking like.
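In the conjugate case the update is actually closed-form; a minimal sketch, assuming a deliberately wide Gamma prior of my own choosing (purely illustrative numbers):

```python
# Gamma-Poisson update: prior on lambda = lab-leak pandemics per decade.
# Gamma(shape=0.5, rate=1.0) is a wide prior with mass both near 0 and above 1.
a, b = 0.5, 1.0

def posterior_mean(k, t):
    """Posterior mean of lambda after observing k events in t decades: Gamma(a+k, b+t)."""
    return (a + k) / (b + t)

print(posterior_mean(0, 2))  # 0 leaks in 2 decades -> ~0.17 per decade
print(posterior_mean(1, 2))  # 1 leak in 2 decades  -> 0.50 per decade, a 3x difference
```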


(Reposted from Reddit): The Bayesian model which Scott adopts is generally the correct one. However, there is an implicit assumption that the occurrence of an event does not change the distribution of future events. If that happens, then a larger update is required.

For example, the US has always had mass shootings even prior to the Columbine shooting. Yet the US has seen a gradually escalating rate of mass shootings since 1999 ([see the chart entitled "How has the number of mass shootings in the U.S. changed over time?"](https://www.pewresearch.org/short-reads/2023/04/26/what-the-data-says-about-gun-deaths-in-the-u-s/)). This chart is not adjusted for population size but clearly the growth in mass shootings exceeds the growth in population.

The reason is the copycat effect. There are always psychotic potential murderers out there, but Columbine set out a roadmap for those murderers to go from daydreaming about killing their classmates to actually doing so. So a rationalist ought to update their model because Columbine itself changed the probability distribution.
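A toy simulation of that nonstationarity (made-up rates of my own, just to illustrate the copycat mechanism):

```python
import numpy as np

def decade_counts(base_rate, boost, decades, seed=0):
    """Events per decade when each event permanently raises the rate (crude copycat model)."""
    rng = np.random.default_rng(seed)
    rate, counts = base_rate, []
    for _ in range(decades):
        k = int(rng.poisson(rate))
        counts.append(k)
        rate += boost * k  # each event makes future events more likely
    return counts

print(decade_counts(2.0, 0.0, 5))  # stationary baseline: hovers around 2 per decade
print(decade_counts(2.0, 0.5, 5))  # copycat feedback: counts tend to ratchet upward
```

A stationary Poisson fit to the early decades of the second series would badly underpredict its later counts, which is exactly why an event that changes the process itself warrants a bigger update.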

Another example where an event changes the likelihood of future events is where the event has such emotional salience that strong countermeasures are enacted. To take the gun example, after Australia's Port Arthur massacre, the government introduced strong and effective gun control. Thereafter, the rate of mass shootings plummeted.

The same applies to 9/11. Prior to 9/11, there was always the chance of terrorist attacks. After all, Al Qaeda had previously bombed the WTC. But Osama inspired other jihadis to attack westerners in the west. It made future attacks more likely. But the world also enacted strong airline security measures, so the likelihood of 9/11-style airline attacks decreased. But it's harder to stop e.g. train bombings, so it shifted the likelihood of terrorist attacks away from planes to trains. Hence the London tube bombings. So a rationalist ought to have updated their priors to think that planes are safer and that other public areas are less safe.

PS: I really don't want to get into a debate about gun control. The Australian solution won't work in the US, but clearly it affected the Australian probability distribution. Please constrain your arguments to whether Columbine changed the statistical likelihood of a mass shooting.


Might add that information availability and existence of opinions are potential missing factors. It's not that a single, hypothetical, highly-informed Bayesian is marginally updating existing beliefs; it's that people who hadn't thought about an issue at all (priors NaN) are suddenly forming beliefs at the same time. I suppose one could argue that everyone has an implicit, very weak prior about everything, but I don't really buy that. But if we suppose that's true, weak priors update a lot more dramatically than strong ones.

Re: lab leak: I don't think many people had any idea that there was an active area of research built around deliberately making dangerous viruses more dangerous to humans. Let alone that the field has minimal practical value (I haven't seen a good defense of it yet, particularly relative to the risks). Or that people involved were operating out of a lab in China with shoddy safety procedures, with some US government funding.

Awareness that such a risk is out there rationally should cause a dramatic updating of beliefs; far more than the incremental updating in your example. Colloquially, from "scientists probably study diseases, government funds it, that's reasonable enough I guess" to "wait, they do what now?".

To some extent that falls under the coordination problems and stupid people buckets, but I think "stupid" is unfair. There are a lot of things in the world, and most people (including smart people) don't have opinions, let alone subjective probabilities, about most of them.


Gain-of-function research was legally restricted in the US (and Richard Ebright played a role in getting those regulations in place). That's part of why funding was being funneled into Wuhan, which wasn't so much a legal loophole as a way to avoid notice of skirting the law.


I may be missing the point of the article (which I largely agree with!), but... if it was a lab leak, knowing what caused the leak could be very important. Lab leaks are relatively rare and I assume the folks who run these labs try very hard to avoid leaks. Knowing how a leak occurred would be useful information that could help make future leaks less likely. Likewise 9/11 probably shouldn’t change your estimation of how likely a significant terrorist attack is, but in a very short time frame passengers (apparently) learned that being passive in the face of a hijacking is not the ideal response and it led to locked cockpit doors. Both responses probably should reduce your estimation of the probability of an airliner being flown into a building or populated area again. (The less said about TSA and security theater the better). Overall I agree that dramatic events shouldn’t necessarily cause you to dramatically update your priors, but that shouldn’t mean the truth doesn’t matter and that we can’t learn from dramatic events.


How is 20% a reasonable prior for "lab-leak pandemic"? Has there ever been one, other than possibly covid? Shouldn't this be way under 1%?


FWIW, a physicist did the Bayesian analysis of the lab leak hypothesis, put it out there, got a bunch of feedback, and updated it a few times: https://michaelweissman.substack.com/p/an-inconvenient-probability-v30

Just FYI, if you actually want to see the deep dive on this.


I've definitely always thought of 9-11 as "the day the terrorists won." Not only did we waste hundreds of billions of dollars, and thousands of soldiers' lives over it, we instituted flat-out stupid "security" measures that are purely theater.

The increased waits at airports over the ensuing 22 years have almost certainly cost more American lives (in US life-hours wasted) than 9-11 itself! My back-of-envelope estimate has 240k US lives lost to security theater, vs 3k in 9-11 itself. The terrorists *really* won on 9-11, authoritatively. The biggest self-own we've ever had.


Huh, from the wind-up (especially the point on hyperstitional cascades) I really thought this would end with either a re-affirmation of how you would be sticking with the Substack platform or a gesture toward looking at an alternative.

FWIW I hope you at least consider moving


One interesting dramatic revelation that DIDN'T result in any updates, debates, or changes at all: for years, computer geeks had been assuming / jokingly saying that the NSA was spying on them and everyone on the internet. They were called crazy, paranoid, and unstable for years. If the government were spying on all of us, we wouldn't take it as a society! The tree of liberty must be watered, etc...

It was, of course, true, and it came out in public in a big way ten years ago. And the result?? Absolutely nothing. Not a peep. Nobody cared that every single thought, text, email, search query, the slightest dribble of text or ascii, was all monitored, all stored forever in your Permanent Record (or whatever the NSA calls it).

And anyone who said they'd water the tree of liberty? Not a peep, nary a protest, nary a change.org campaign, not even any sternly worded emails or entirely cosmetic changes in Cabinet officials or administrators. We're all apparently happy about it, zero updating any direction. Why is that?


Regarding the Effective Altruism movement: I don't agree that it's that hard to draw more specific lessons from the two disasters you mentioned. However, even granting that it is, by analogy to the regular school shootings, terrorist attacks, or cases of sexual harassment, it seems that one should update toward expecting regular disasters of a similar magnitude.

Perhaps you already had a distribution that assigned somewhere in the range of a 10-90% chance of this level of disaster occurring at this frequency; but I did not.


Dramatic events are about as un-Bayesian as it's possible to be, though. Usually if an event is a big one-off we didn't see it coming, or had very little idea about its likelihood and no real way to estimate a prior ahead of time.

Lab leaks (not pandemics) would be an example of this. No lab leak had led to a global pandemic before, and any attempt to predict their likelihood is full of unknown unknowns that mix notoriously poorly with Bayesian thinking. Once you've observed the first instance of an event, I'd say you should update massively, if nothing else, on the fact that the event can plausibly happen.

With most dramatic events, I'd say nobody is truly thinking about the event at all beforehand. They're implicitly in the process of deciding whether the event is worth thinking about, which would be more like deciding between

"Lab leaks common" (3%), "Lab leaks rare"(2%), and "Lab leaks don't matter" (95%)

So tautologically, the only time you shouldn't update from events is when you think relevant events were already predictable, and you were able to assign well-founded priors to them ahead of time.


> You can think of this as a common knowledge problem. Everyone knew that there were sexual abusers in Hollywood. Maybe everyone even knew that everyone knew this. But everyone didn’t know that everyone knew that everyone knew […] that everyone knew, until the Weinstein allegations made it common knowledge.

But this is obviously false. The infinite descent of public knowledge was clearly established; popular culture was full of acknowledgments of the phenomenon. The musical version of The Producers includes the lyric "I want to be a producer / with a great big casting couch"!


>(it’s always a power law)

Power laws can be expected sometimes, but can also be surprising in some other contexts.

Here is a conceptual toy thought example: small-scale terrorist attacks that require only a handful of people to execute are much "easier" than large-scale operations, which require larger organization and more resources. However, after an organization has been founded and organized, it has a steady income, permanent resources, and recruitment. Why wouldn't it be able to carry out large-scale attacks at a constant rate, instead of their severity following a power law? The power law is less surprising if one notes that terrorist organizations are opposed. Every moment a prospective organization spends preparing an attack and building a larger network, the more opportunities the authorities have to stop them. Additionally, after a successful attack, their organization is often disrupted and the security apparatuses step up their threat model -> further action of similar scale requires either gathering a similar amount of resources again or coming up with a novel attack the enforcement is not prepared for (another form of gathering resources).

Coincidentally, this is more or less the toy model proposed by Clauset, A., Young, M., & Gleditsch, K. S. (2007), pages 76-78 (though they conceptualize it as "planning time" rather than "gathering resources"). Their final model is x^(-alpha), where the exact shape of the power law depends on the constant alpha = 1 - kappa/lambda, where kappa/lambda is supposed to reflect the relation between the filtering effect (by state action) during planning vs. the increased severity of the attack due to planning.


I'm unfamiliar with the e/acc movement, what's the problem with them?


I predict that Scott will be the first liberal rationalist mass shooter


The thing about mass shootings (using the Mother Jones definition, and not some of the broader "four or more shot, regardless of deaths" definitions) is that they're actually pretty close to a representative racial sample of the country, even though they're seemingly always portrayed in the media (and even in this post) as "crazy white guy" or maybe "Muslim terrorist."

Out of 149 shootings on the list, there are 10 Asian, 26 black, 12 Latino, 3 Native American, 80 white, and 18 "other," "unclear," or with no race listed. At least 6 of that last group (and two of the white people) have some sort of name connected to an Arabic/Muslim-majority country.

(What really baffles me is that the media, as much as they love to talk about white male mass shooters, never fixated anywhere near as much as I thought they would've on the 2022 Buffalo shooting, perhaps the most obviously anti-black racist mass murder on the list. Why is George Floyd a household name, but no one can name any of the ten victims in Buffalo?)


This misses an important challenge with extreme events: they may, or may not, be governed by the mechanisms that control the more mundane events. Observational evidence of extreme events updates *on the physical limits* of the system.

It is notoriously sketchy to extrapolate power laws to predict the tail of the distribution. Power laws work until they don't (if things were actually power laws all the way out, weird things would happen).

Extreme events are likely probing the edge of the power law; in this sense they are a rare bit of information about what actually governs the tail.

I'll give an example (one that I am quite familiar with): what is the biggest possible earthquake?

I could fit a power law to the small events; this turns out to fail quite dramatically in many cases, for many reasons that I won't get into.

I could figure out what limits the size of the earthquake. Ok, but this is not very empirical. And note that this does not depend on the things that control small earthquakes. This is sometimes used for building codes.

I could update A LOT on the largest earthquakes that we have in longer term records. This is a decent approach that is used for building codes.

The key here is that we know of ~10^6 earthquakes in California (for the most part tiny). A new large earthquake is NOT a 10^6 + 1 update.
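For context, the small-event fit being extrapolated here is the Gutenberg-Richter relation (standard seismology, not spelled out in the comment):

```latex
% N(M): count of earthquakes at or above magnitude M in a given region and period.
\log_{10} N(M) = a - b\,M, \qquad b \approx 1
```

Since magnitude is itself logarithmic in seismic moment, this is a power law in moment, and it is exactly the form that breaks down near whatever event size the fault system can physically support.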

To bring this back to some of the examples in the text, in a world without nukes or bioweapons, I would update A LOT if I learned about a terrorist attack that killed an entire city. This is because my model of the world placed physical limits on the scale of terrorism. New extreme evidence, significantly changes my model on these limits.


> A few months ago, there was a mass shooting by a far-left transgender person who apparently had a grudge against a Christian school.

I would have put extreme odds that this was someone assigned male at birth. I know that lesbians commit domestic violence at relatively high rates, but I heard in my head JK Rowling saying "these are not our crimes". This post has many extremely valid points - among many, "when you accept risk, don't overreact to risk paying off" - but I'm still viscerally shocked by losing so many internal Bayes points.


Does it matter how most fires start?

We know that some fires start by arson and some start by other means.

Let's assume as a prior that there's a 20% chance per decade of a deadly fire being set by firefighters who are secretly arsonists.

The cause of one very famous fire is disputed, but even if it were caused by arsonist firefighters, that would only update our priors to 27% odds.
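(For what it's worth, here is one hedged sketch of where a 20%-to-27% step could come from. This is my own assumed Beta-Bernoulli model, not something the commenter or the original post specifies.)

```python
# Hypothetical Beta-Bernoulli model: "an arsonist-set deadly fire happens this decade"
# as a coin flip with unknown rate p. Beta(2, 8) has prior mean 0.2 (20% per decade).
a, b = 2, 8
prior_mean = a / (a + b)                # 0.20
posterior_mean = (a + 1) / (a + b + 1)  # one confirmed arson decade -> 3/11 ~ 0.27
print(prior_mean, posterior_mean)
```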

Either way, we should have heavy restrictions on firefighters, because 20% and 27% are both alarmingly large numbers. Because we don't know which firefighters are arsonists, and they refuse to all confess to the alarming criminality among their profession, we should call many of them before Congress to testify.

And, since non-arson fires are too boring to talk about, let's pretend those don't exist and make absolutely zero changes to reduce the rate. After all, hiring more firefighters would probably just cause more arson, right?

That argument should sound absurd, but it's roughly where the public conversation is, with regards to viruses and virology.

So, what's wrong with your argument?

Well, your prior odds might be reasonable. There is perhaps one research-related pandemic in history, the 1977 flu, which some people think could be the result of some kind of vaccine trial. It's not proven, and no one even knows which lab would be to blame, but let's just assume that's real, and there's 1 in 50 years. That's 2% odds per year, or 20% per decade.

Okay, but there was nothing exceptionally bad about the 1977 flu. The number of people that died was about the same as every other flu year. So in 50 years, 49 of the flus were natural and one was possibly research related. The natural ones are 98% of the problem.

And during the same time, nature also brought us HIV, Ebola, several novel coronaviruses, and lots more diseases. So the natural diseases are well over 99% of the problem. Putting some 50% reduction in the risk of natural viruses would have much higher impact than improving lab safety by 50%.

Even if the virologists were like the firefighters above (and some are arsonists), you'd still have a net positive effect from hiring more virologists, just as you'd have a net positive effect from hiring more firefighters.

For some reason, people keep making this mistake, again and again.

With vaccines, we focus on the small rate of side effects, not the large rate at which they save lives.

With police officers, we focus on the small amount of police brutality, and not the large extent to which policing save lives.

If virologists have a 2% chance per year of making a flu no worse than the average flu, then focusing on the labs and not nature is a waste of time.

I suppose covid could be different, if we're talking about a chance of labs making something worse than typically comes from nature. And in that case, perhaps you're going to have to come up with different priors -- you can't use 1977 anymore. There has never been a gain of function pandemic in history, so it's hard to know what the prior odds are.

In practice, I'm not sure this matters. Covid is simply not a lab leak. The congressional hearings came up with no evidence. Rootclaim came up with no evidence. Covid came from nature, with at least 99% certainty. The next pandemic will almost certainly come from nature, as well.

And our society's obsession with a false lab leak theory will only make it more likely that we are unprepared for that next pandemic, because we've focused on a hypothetical 2% per year risk of a future lab accident, but have done very little to reduce the much higher annual risk of a natural pandemic. We've started cancelling viral surveillance programs because of the popularity of the lab leak theory and we've lost good communication with scientists in China.

It's not even clear that we've done anything to reduce lab risks -- if you're worried about lab safety in China, we now have even less transparency than ever as to what's happening in Chinese labs.

And if you consider the (low but real) annual risk of biowarfare, or warfare in general, having the world's two largest powers blame each other for creating covid certainly doesn't lower those risks (American people think it started in a Chinese lab, but Chinese people think covid started in an American lab).


If I follow your formulation about "COVID as lab leak," the conditional probability is something like P(lab leaks happen once per decade | this pandemic was caused by a lab leak). I'm updating my expectation that I should expect a lab leak in the future based on what I learn about this particular pandemic (proven hypothetically to be 100 percent a lab leak vs. not).

But we don't really know, or cannot know, whether COVID or any other pandemic was absolutely a lab leak. So we flip the conditional to get a prior on whether this or that pandemic was a lab leak, such that P(this pandemic was caused by a lab leak | lab-leak-caused pandemics happen once per decade). If my prior is higher, then I'm more inclined to think that COVID was a lab leak; lower if my prior is lower. But there is a more nuanced formulation that isn't effectively modeled: P(COVID was a lab leak | the sum total of evidence), where "sum total of evidence" includes my prior on whether a lab-leak pandemic happens once per decade as well as the empirical and circumstantial evidence.
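
To make that last formulation concrete, here is a minimal Python sketch of the Bayes computation. All three input numbers are invented for illustration; nothing here estimates the actual likelihoods:

```python
# P(lab leak | evidence) via Bayes' rule, with illustrative (assumed) numbers.
p_lab = 0.20          # prior: P(this pandemic was a lab leak)
p_ev_given_lab = 0.5  # assumed: P(observed evidence | lab leak)
p_ev_given_nat = 0.3  # assumed: P(observed evidence | natural origin)

posterior = (p_ev_given_lab * p_lab) / (
    p_ev_given_lab * p_lab + p_ev_given_nat * (1 - p_lab))
print(f"P(lab leak | evidence) = {posterior:.0%}")  # ~29% with these inputs
```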

How should we update those priors? It turns out doing so well is epistemically difficult because our "sum total of evidence" is shaped by the news media we read, our skepticism or acceptance of government science, and so forth. I don't know how we can be a naive Bayesian about this and minimize our ideological priors.

Maybe that doesn't change the fundamental point Scott's trying to make here. It might. I'm still thinking it through.

Expand full comment

I learned this a long time ago under the guise of "anecdotes are not data". My primary objection to even opening the news is that I want to learn about the statistics, not about isolated events. And btw, by this metric the new media are of course the worst, Twitter especially.

Expand full comment

Any organization of sufficient age and clout has a huge scandal. Yet people are always surprised when org X doesn't live up to its own values. But there is a corollary too: there's also some chance of flubbing the response, and some chance of taking response actions that make "things like this", or some other horrible thing, more likely in the future. This corollary goes a couple of levels deep, and when you truly roll all 1s, it's going to be a bad time.

Separately, I just read "The Voice of Saruman" and so calling people "the stupid people" rubbed me the wrong way. For one, it's needlessly condescending - focusing on *their* stupidity, rather than the call to be wise, or one's own stupidity. Two, it doesn't help us deal with the dynamics of living alongside non-distribution-model-having humans. Three, people are stupid, for letting base emotional hits guide them instead of Lady Philosophy and explicit models, but why insist on it?

I hope to cultivate love or at least pity for the madding crowd rather than contempt. Hard as it is to do. Oh, and I want you to help me in this. Your job is to benefit my character too. I have large expectations of you. Sorry.

Expand full comment

At the end of Antifragile he suggests that you should avoid the news because among other things it should almost always fail to update your model of the world. Year-in-reviews or Decade overviews should be enough.

Expand full comment

These things do matter and the most sophisticated supercomputer probabilities are meaningless for new events. Extrapolation from the past has always been doubtful: even more so now that change has accelerated.

Expand full comment

I kinda see what you're doing with the FTX vs OpenAI contrast, but the example falls flat for me. The problem with SBF was not that he was a CEO doing shady things or whether or not his company had a board. The problem was that the entire EA movement went all-in on this one charismatic person, tying its own public reputation to that person, shifting priorities in the direction of things SBF wanted, being unprepared for the FTX future fund being backed by nothing, etc etc.

Ironically, just like SBF himself, EAs turned out to also have rather naive linear utility models, a clear lack of deontology, common sense, due diligence, whatever you want to call it. The thing that makes you not bet your entire movement on one person.

Expand full comment
Jan 17·edited Jan 17

"A few months ago, there was a mass shooting by a -->far-left<-- transgender person who apparently had a grudge against a Christian school. " This has in no way been proven. There was a manifold market about it and it resolved NO, because this has not been proven.

https://manifold.markets/johnleoks/will-it-be-revealed-that-the-nashvi-57950dca88ed

Expand full comment

> I think same mechanism

Typo, missing "the".

Expand full comment

I mostly agree, but want to add an important caveat here:

You are only excused from updating significantly on dramatic events if you indeed have a probability distribution that these events fit into.

Otherwise, you get a trapped prior, an awesome excuse not to update on any evidence. The "pretending to be wise" mindset where you fail to notice your own confusion and just pretend that you are one of the cool kids who totally saw the event coming.

It's okay not to expect some things and be surprised. It doesn't make you a stupid person. Personally, I didn't expect the FTX collapse. It was a "loss of innocence" for me. I knew that crypto was scammy, but these considerations were outweighed by the halo effect of EA - obviously good and reasonable people there know what they are doing. So my probability distribution didn't really include this kind of dramatic event. Not that I was literally thinking it was impossible; I just didn't really think about it, and if I had, I would have put a tiny probability on such a thing happening in any given year. And so I updated away from my halo effect. Not to the point where I disavow EA, but to the point where I see it as just another community with its own failures and biases. A community that I mostly ideologically agree with, one that I wish I could truly be a part of. But not more than that.

Expand full comment

Re lab leaks: you can reasonably not have a very high prior on them. You read about the occasional minor case or close call, but without having been there it's hard to know whether it was a real risk or just sensationalized news making something that never really had a chance to go bad sound scary. Maybe it's like that "there's a mass shooting every N days" thing where it turns out they define "mass shooting" very loosely and most of the examples are non-central. And you could try to do your own research, but without having been there or being a biosecurity expert it's genuinely hard to be sure (and you can't just trust the biosecurity experts either, since they have their own agenda).

So having a specific example of a really bad lab-leak pandemic really does provide important evidence that lab leaks are worth worrying about (unlike, say, the SBF thing, where "crypto is full of scams" was already obvious from many central examples).

Re 9/11, your conclusion of "then after 9/11 we didn't have any mass terrorism" is kind of weak because it only observes the world where we *did* have a massive response. It's easy to imagine a world where all the terrorists weren't suddenly busy fighting US Marines in Afghanistan and had time to go "wait, you can just crash planes into buildings? Let's go!", and you get a massive terrorism spike (the sudden rise in global terrorism since withdrawing from Afghanistan gives at least some positive evidence for this). Updating only from the observed frequency works for true stochastic processes, but terrorism is a game against a human adversary, not a random process. It's the difference between gambling on a horse race and investing in the stock market.

Expand full comment

This is straightforwardly a clash of the different descriptors in use, I think...

I reckon the news originally was a blend of two kinds of events: the unexpected, and the impactful. The unexpected (e.g. violent crime, because most of us live non-violent lives) should make you update because it's unexpected. Impactful (e.g. who won the election) should make you update because it affects your base assumptions.

I would argue that 9-11 was worthy of an update for most people, because most people did not realise that terrorists were capable of organising something that big; or that small numbers of people could weaponise infrastructure against a major city like that. Perhaps we should have known, but I don't think it had ever crossed my mind before 2001.

The word dramatic gives us a clue about why some news is not worthy of an update: it's just rubbernecking. Spectacle. There's plenty of that, particularly as news has become more national, and then more global. For example: a murder in your community is worth thinking about. A murder in your local paper is important. But a murder *somewhere in the USA* is not worth thinking about; however they are now presented to us in the same way. For that kind of news, I think Scott's right - not unexpected, not worthy of our attention or an update of priors.

There's one more problem, which good news organisations had ways of dealing with: slow events. Like, WHO fails to deploy malaria vaccine for another day, 1,000 children die. (For those who didn't know, or who forgot about it, this is very important information and should prompt a change of priors.) Good news outlets had dedicated reporters for that sort of thing, but news is in the middle of a big transformation, so sometimes it's getting lost.

Expand full comment
(Banned)Jan 17

Lab leak matters enormously.

It shows that scientists, politicians and the media (the ones accusing Trump supporters of "misinformation") are perfectly willing to engage in outright lies and propaganda about even scientific issues in the name of ideological agendas.

It means that whenever anyone (rational) hears the term "misinformation" or reasonable explanation for things being called "conspiracy theories" (especially "racist" conspiracy theories), this should be a giant, MASSIVE red flag that the person is engaging in propaganda.

Additionally, the people who engaged in this massive lie suffered ZERO consequences for it, because approximately nobody on the left has any principles other than in group good out group bad. Maybe the right are no different, but they're not the ones acting high and mighty over such things.

Of course, even if you believe that lab leak shouldn't make us more worried about gain of function, we should still be very very worried about and opposed to gain of function. But doing anything about it would be seen as vindicating Trumpian "conspiracy theorists", and besides, all the scientists whose careers are based on this stuff said there isn't a problem, and you're not an anti-science republican are you?

Expand full comment
(Banned)Jan 17

Can we all agree then that the Holocaust is of absolute political irrelevance?

Expand full comment

In civil engineering, probabilistic design is the governing economic rationale, and it is always related to the potential death toll. Engineering logic applied to terrorism would prioritise the prevention of large incidents far above the prevention of smaller ones. Which seems to be the opposite of what we are seeing, also in geo-politics.

Uncertainty is a prerequisite for the evolution of life. Hence our continuous navigation to cheat it:) (to stay dumb:)?. So we should use its principle to help guide us in making smarter future-enabling decisions. Hence my quote: “Human progress can be defined as our ability to create and maintain high levels of future uncertainty“. Which at first may seem a paradox...but it's not.

Expand full comment

I like the idea of harnessing the reactions of people to drive policy. Discerning actors should be able to craft opinion pieces so that they are ready to go when the right event happens. I know it's common to have two different pieces ready to go when it comes to presidential elections, but I'm curious whether anyone has prepared something for the day terrorists nuke a city. Might be a good time to use public sentiment to drive disarmament.

Expand full comment

This post on LessWrong seems to partly be arguing the opposite side of this: https://www.lesswrong.com/posts/TgKGpTvXQmPem9kcF/notice-when-people-are-directionally-correct

Expand full comment

Would this logic also apply to the once in a while killing of black American males by the police?

Expand full comment

This article makes a reasonable case that drastic events don't tell you much about the base rates of said events.

But is that the only thing you can learn from a drastic event? Sure, in the abstract, we knew that flying a plane into a building was possible, but the data set of large terrorist attacks was still n=0. The 9/11 attacks were an unprecedented event, in terms of scale of planning and execution. Of course you can learn stuff from it!

You might know, in the abstract, that there are probably sexual harassers or abusers in a large community. You can, and should, put preemptive measures in place to protect against them. But you are still shooting in the dark, knowing nothing. Whereas if an article comes out listing a ton of examples of harassment, suddenly you have a sense of who some of the abusers are, how and when they operated, and how they got around whatever protections you had in place to catch them. If you care about preventing sexual harassment, this is very useful information! It can inform the proactive measures that can be put into place to prevent similar events from occurring in the future.

Expand full comment

On the origins question, the problem David Relman described is that the early case data is "hopelessly impoverished". Still, the location, the sampling history in Yunnan and Laos, the lack of secondary outbreaks, the features of the virus (binding affinity to human ACE2, low genetic diversity, FCS, codon usage) and the research proposals all fit with lab origin.

1. Chinese researchers Botao and Lei Xiao first observed that a lab origin was likely, as the nearest bat progenitors are ~1,000 km from Wuhan. The Wuhan Institute of Virology sampled SARS-related bat coronaviruses in these locations - Yunnan, Laos and Vietnam.

2. Patrick Berche, DG at Institut Pasteur in Lille, notes you would expect secondary outbreaks if it arose via the live animal trade (screenshots below) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10234839/

3. The features of the virus are consistent with lab origin. SARS-CoV-2 is the only sarbecovirus with a furin cleavage site. It arose well adapted to human ACE2 cells, with low genetic diversity indicating a lack of prior circulation - again inconsistent with the claim that it arose via the animal trade. The CGG-CGG arginine codon usage is particularly unusual. The SARS-CoV-2 BsaI/BsmBI restriction map falls neatly within the ideal range for a reverse genetics system, an approach used previously at WIV and UNC. Ngram analysis of codon usage per Professor Louis Nemzer: https://twitter.com/BiophysicsFL/status/1667224564986683422?t=Vh8I9fl3lwj6k6VJ8Kik8Q&s=19

https://www.biorxiv.org/content/10.1101/2022.10.18.512756v1

4. The Wuhan Institute of Virology was part of a proposal to add furin cleavage sites into novel SARS-related bat coronaviruses. https://www.pnas.org/doi/10.1073/pnas.2202769119

Jesse Bloom, Jack Nunberg, Robert Townley and Alexandre Hassanin have observed that this workflow could have led to SARS-CoV-2. Nick Patterson notes that work often commences before funding is approved and goes ahead anyway. The Wuhan Institute of Virology had separate funding for SARS-related spillover studies from the NIH and CAS.

5. The market cases were all lineage B but as Jesse Bloom observes lineage A likely arose first. So *the market cases were not the primary cases*. WHO has also not accepted market origin as excess death data points to earlier cases. Peter Ben Embarek said there were likely already thousands of cases in Wuhan in December 2019.

https://academic.oup.com/mbe/article/38/12/5211/6353034

See also Kumar et al (2022) https://academic.oup.com/bioinformatics/article/38/10/2719/6553661

6. The evidence for both lineage A and B in the market itself is tenuous. The evidence for lineage A in the market is based on a single sample found on a glove, tested on 1 January 2020, out of 1,380 samples. Liu et al. (2023) note this is a low-quality sample.

7. Bloom found the market samples are *negatively correlated* with SARS-CoV-2 genetic material. Another Bloom analysis, published 4 January 2024, shows an abundance of other animal CoVs but not SARS-CoV-2. https://t.co/i0HzwvIPeo

https://academic.oup.com/ve/article/9/2/vead050/7249794

8. Lineage A and B are only two mutations apart. François Balloux notes this is unlikely to reflect two separate animal spillovers as opposed to incomplete case ascertainment of human to human transmission.

9. There is a documented sampling bias around the market. Something even George Gao, Chinese CDC head at the time, acknowledged to the BBC stating they may have focused too much on and around the market and missed cases on the other side of the city. David Bahry outlines the sampling bias.

https://journals.asm.org/doi/10.1128/mbio.00313-23

10. Wuhan was actually used as a control for a 2015 serological study on SARS-related bat coronaviruses due to its location.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6178078/

11. Superspreader events were also seen at wet markets in Beijing and Singapore (Xinfadi and Jurong).

12. The Wuhan Institute of Virology has refused to share their records with the NIH who terminated their subaward last year. This year the Biden Admin issued a wider suspension over historic biosafety concerns and refusal to share records. https://www.cnn.com/2023/07/18/politics/biden-admin-suspends-wuhan-lab-funding/index.html

Expand full comment

Excellent post! I think this instinct to use probabilities/deduction/Bayes/rationality to understand problems like these is the biggest difference between STEM types and humanities types.

After a lifetime of maths, science, engineering and software, I'm doing a degree in the humanities. I have an assignment due next week on Aristophanes and, as far as I can tell, the question has absolutely nothing to do with the text. This happens to me with every single assignment.

Instead of figuring out the answer, I use what feels like a kind of blindsight and I start making stuff up. After a couple of hours, I have a story that kind of makes sense but it doesn't (to rational me) seem to answer the question. It's worked for me so far and I am getting good grades.

Fingers crossed for next week!

Expand full comment

I worry that this essay begs the question a bit. If you already knew enough to know that, say, 10% of people are sex offenders then observing a sex offender should not cause an update. If you thought that 0.1% of people were sex offenders, as is certainly the case for some, then observing one should cause an update.

I think the counterargument is already in the essay when Scott says "I think whether or not it was a lab leak matters in order to convince stupid people", if we interpret 'stupid people' to mean people with miscalibrated priors. Personally, I was not aware of gain-of-function research before the lab leak discourse bloomed, and if asked prior to it I would have estimated something like a 1% chance per decade of such a thing happening, or even less. Now that people are making credible arguments about this, I have updated, and this is rational. I would have been more on board if the essay had been titled Against Learning From Dramatic Events When Your Priors Are Well Calibrated.
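
To illustrate how the size of the rational update depends on the prior, here is a Python sketch with invented numbers: a "common" world where lab-leak pandemics hit 30% of decades and a "rare" world where they hit 3%:

```python
# Two-world toy model: how one observed lab-leak pandemic should move your
# per-decade risk estimate, as a function of your prior on the "common" world.
RATE_COMMON, RATE_RARE = 0.30, 0.03  # invented per-decade leak rates

def posterior_rate(p_common):
    # P(common world | one observed leak), by Bayes' rule
    post = (RATE_COMMON * p_common) / (
        RATE_COMMON * p_common + RATE_RARE * (1 - p_common))
    return post * RATE_COMMON + (1 - post) * RATE_RARE

for p_common in (0.02, 0.50):  # a sceptic's prior vs an agnostic's
    prior_rate = p_common * RATE_COMMON + (1 - p_common) * RATE_RARE
    print(f"prior {prior_rate:.1%}/decade -> posterior {posterior_rate(p_common):.1%}/decade")
```

With these invented numbers the sceptic's risk estimate more than doubles (3.5% to 7.6%) while the agnostic's rises by about two-thirds (16.5% to 27.5%): the same single observation rationally moves a miscalibrated low prior much more in relative terms.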

Expand full comment

I'm confused (and possibly repeating existing comment, sorry) about the implication that one's prior on lab leaks ought to be blind, due to No Evidence. Haven't there already been several lab leaks over the years, including some big newsworthy whoopses, thus updating towards [Not Actually That Unlikely]%? The footnote does amend this to "lab leak *causing a pandemic*", which makes a bit more sense but also doesn't seem like the actual crux at issue wrt covid origin debates. Either way, I find Zvi's conclusion more persuasive, that even if one pegged the chance of covid lab leak at just 10% or whatever, there's definitely positive trades to be made insuring against that small-but-very-painful possibility in the future. I guess in that sense it "doesn't matter" whether lab or zoonosis - many prophylactic precautions would be the same either way - but the Schelling still needs to actually happen to get some redeeming value from Holly Hindsight. I very much hope it's not permanently lost to the partisan melee, like so many potential cause areas.

Similarly scratching head at your prior of one mass shooting by a far-left transgender person every few years. I wasn't aware that had ever actually happened until reading this post*, and just on base population rates for T that seems...really high. We're comorbid for lotsa mental illnesses, sure, but doesn't that cash out far more often in self-directed violence (say, suicide) than in other-directed violence*? And doesn't it similarly tend to be strongly anti-correlated with gun ownership? Maybe it's the same kind of precise definitional thing as the previous paragraph, where you really mean "far-left mass shooting" or somesuch...sadly there's no potentially-clarifying footnote here.

*with some notable exceptions like, iirc, schizophrenia

Expand full comment

In your lab-leak example, you assumed that in the "rare risk" case, there's a 10% chance of a lab-leak-caused pandemic per decade. That's very high! My guess: If most people thought this was the actual risk, we would no longer have labs.

I'd bet that prior to covid, most people would have assigned a <1% chance per decade to a covid-sized pandemic caused by a lab leak. With this prior, you'd get a much bigger update. That's what the public response, I think, reflects.

Now, you could argue that this prior is stupid, or whatever. But arguing about priors is a difficult thing.

Expand full comment

This also applies to elections: if someone you predicted would get 48% of the vote ended up getting 78% (or 18%), that would mean your model of the world is badly broken and you need to fix it. But if they end up getting 51% (or 45%), that's a completely ordinary occurrence that means no such thing (only that you'll have to cope with a different president/mayor/whoever than you expected), and other people shouldn't get to feel smug at you about it.
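
A crude way to quantify the difference between "badly broken" and "completely ordinary", sketched in Python under an assumed model error of about 3 percentage points:

```python
# How surprising is each outcome, if the model predicted 48% with a
# typical error of ~3 points? (the error size is an assumption)
predicted, sigma = 48.0, 3.0

for observed in (51.0, 78.0):
    z = (observed - predicted) / sigma
    print(f"observed {observed:.0f}%: {z:.0f} sigma from prediction")  # 1 vs 10 sigma
```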

Expand full comment

"So if there’s a community of 10,000 people, probably 1,000 of them have sexually harassed someone" -- okay, but in certain communities it does seem to be a lot more common than that -- to the point that people will point out how Keanu Reeves or "Weird Al" Yankovic are not known to have sexually harassed anyone as though that were something particularly noteworthy.

Expand full comment

Seems wrong; you should update on everything.

Expand full comment
Jan 17·edited Jan 17

Re: OpenAI, I think it's been sufficiently demonstrated by now that the board was *not* strong - Altman was re-instated and the board was replaced by a new, more 'light touch regulation' friendly board.

I think the lesson from FTX and OpenAI in conjunction is "A weak board is better than no board, but a weak board will be beaten by a strong board/whoever is holding the purse strings, if those are not one and the same". FTX *badly* needed governance and an organisation chart; the book by Michael Lewis, even where he is sympathetic in parts, had several bits where I wanted to slap the face off SBF, e.g. his resistance to appointing a Chief Financial Officer because (according to Lewis) he felt insulted that people thought he didn't know where the money was going. Well, turns out he *didn't* know.

EDIT: If you believe Lewis. If you don't, he didn't want a CFO appointed because that would have revealed all the money he and his family were siphoning off for their own little projects and personal use.

Expand full comment

Once is chance. Twice is coincidence. Three times is enemy action.

The fourth and subsequent times, those ones are your own darn fault if you weren't expecting them.

Also: this is basically the Chinese Robber Fallacy applied to small sample size, yes?

Expand full comment

A few thoughts. First, I agree completely about all of your positions on regular events that have a known probability and distribution. I think we update far too much on new (or newly reported) individual events that are part of a pattern that we could have known about for years.

For extremely rare events, I don't agree. There are two kinds of knowledge in a society. Things that people can or should know, and things that people actually do know. I'm seeing this more and more as my children get older and I think about how I know something and realize that whatever mechanism that was, my kids haven't had the same thing happen. They could know the same things, but they're busy learning the things that they actually do know and aren't updating on things that may have been important in the past (and may be again in the future) but instead on things that seem important to them now. My kids don't need to learn how dial-up internet works, because they're not going to use it. Maybe they should learn about terrorism, but maybe not. The answer depends on information we don't have (future major terrorism) and not prior events.

Prior to 9/11, most people put the chance of a major terrorist attack on US soil at approximately 0.0%. If you asked them directly, they may not have said absolutely no chance, but in practice we lived our lives that way. I'm sure there were people in the US who had lived in countries with major terrorism who were not surprised, but the vast majority of people in the US clearly were. And this is not surprising: there had been no major terror attacks in the US before then, either ever or at least in living memory (depending on how you define "major" and "terrorist").

Similarly, the chance of a nuclear detonation in a major city is not 0%, but most people live their lives as if it is. And this is rational. There is some amount of resources devoted to ensuring that no nuclear weapons are detonated by terrorist groups, and this amount of resources is apparently sufficient to achieve this purpose. There's no distribution of nuclear terrorist attacks because none have happened. This doesn't feel like a fluke because we try very hard to make sure this doesn't happen. As long as our collective efforts are sufficient, there will be no nuclear terrorist attacks. If a nuclear terrorist attack happens, we *should* heavily update. It would mean that our efforts are not or were not sufficient.

Bayesian updates are naturally very difficult in situations with insignificant numbers of data points (particularly zero data points). Most people live their lives categorizing events into two pools - things that they should worry about and things that they should not. They don't have enough data to be more specific, even if they were good Bayesians. As bad Bayesians, they haven't even tried. So when a lab leak happens, they should *heavily* update, because they aren't updating from 20% to 27.5%, but from an approximation of 0% (don't think about it, not worried about it) to a non-negligible percent (think about it, worried about it).

It's how we hold politicians to account. We don't want or need to know every negotiation and consideration. We just want to know that they're taking things seriously enough. That Fauci hired a US company to perform gain-of-function research in Wuhan, China, and then had that same US company's CEO weigh in on the chances of a lab leak is all relevant information. Not in a "this is a thing that provably happened" way, but in a "this may be a thing where politicians were insufficiently protecting my interests and I need assurance this will not happen again" way. People, with limited bandwidth to follow all things happening, want to know if they should continue being worried about something or to know that it's been fixed. The biggest thing George Bush did after 9/11 was make a big show about how he and the rest of the government were taking this seriously and working to ensure that it never happens again. A lot of the effort and money spent was inefficient or even pure waste, in terms of preventing future terrorism. But it showed that they took it seriously, and that people could go back to their lives without worrying about continued attacks. (It helps that we're aware of increased investigation of terrorism and of things like locked cockpit doors on planes that will probably fully prevent any repeat attacks even if the TSA turns out to be worthless.) But again, most people aren't even trying to do Bayesian reasoning on terrorist attacks; they're asking "should I worry about this?" The government has clearly answered "no, you should not worry about this", and it worked. So long as that proxy for safety exists, people aren't sitting at X% chance of major terrorist attack, but 0%. If another attack happens, it tells people that the government was wrong, that they should worry about it, and this is and should be a major update.

Expand full comment

In general, I liked this post. But is it just me, or is mentioning "the stupid people" so many times sort of off-putting? I won't deny that such people exist, but I think it gives the blog a sort of kicking-down/us-against-them attitude that I have previously not found here; its absence is one of the reasons I appreciate the blog.

Makes it harder to share with people who are not in the rationalist-sphere too.

Expand full comment

I really like this post - it harkens back to the glory days of SSC. Sadly I cannot adjust my prediction priors, but a couple more great posts and I could.

Expand full comment

I think it’s important to note that this doesn’t mean that the probabilities are static - they do change over time, *and they can be altered by deliberate action*.

For example, nuclear terrorism is very rare (literally never happened yet), but it would be wrong to conclude “we should stop spending so much effort on nuclear security, because look how rare it is”. Because of course, part of the rarity is precisely because we spend a lot of effort making it hard for terrorists to obtain a functional nuke.

On the other hand, if you’re someone who wants nuclear terrorism to be less rare (ie a terrorist), you wouldn’t say “gee successful nuclear attacks are vanishingly rare, may as well give up”, you’d say “maybe I can make it more likely by getting lots of double agent terrorists into key roles at nuclear facilities”.

I guess this is in some sense your “stupid people who think that it’s sci fi just because it never happened before”, but I think it’s more subtle than that - I’m talking about actually altering the likelihood, not just our estimate of it.

Expand full comment

But isn't the issue with SBF that this wasn't some sort of major screw up? His actions were the logical outcome of his fundamentalist EA position. It's how he justified his behavior. In that case, we're outside the realm of probability analysis, and in a realm of discussing how These ideas led to This outcome. Arguably, 9/11 is the same thing: it was the logical outcome of US foreign policy. So you'd want to alter that foreign policy in ways that create a different outcome, not "change nothing" because a probability distribution convinces you it would have happened anyway.

Expand full comment

typo ... and if the answer was yes, *spent* the money on counter-terrorism beforehand.

*spend

Expand full comment

- the reaction to the lab leak is something that causes me to update. Lab leaks are a lot more common than we think they are, because whether you actually hear about them is extremely political.

- the criteria for Mother Jones to include an incident as a "mass shooting" are extremely biased. For instance, anything considered "gang-related" is not included, even though a bunch of people still get shot. What this means is that your priors are wrong, and your updates will be wrong (since you're updating based on MJ and the news), so Bayes won't help you there.

- the post-9/11 security updates have mostly been useless, I agree. Locking the cockpit doors is likely more useful than adding another layer of bureaucracy to the security gates, and was a lot cheaper. I do notice that we don't have airplane hijackings any more, but that was true before 9/11. I think that has more to do with violent movements not getting as much funding for a while.

Expand full comment

I think there's a big issue here that you haven't addressed, Scott. Based on my own knowledge and experience, I think that the chance of dangerous AI takeoff is next to zero, much too small to worry about. But, I notice that many people who claim to have studied this closely disagree with me. Should I update my beliefs or not? There are plenty of groups that worry about silly things, e.g. the dangers of vaccines causing autism. So, if we're just going by background rates, the update should be tiny. However, the people worried about AI takeoff also claim to have expertise in evaluating risks, so maybe their claims are worth taking more seriously. The problem with that is that a big risk in the form of SBF was right in their midst and they didn't seem capable of correctly evaluating it at all. Is there some other track record of correct risk evaluation that outweighs this, or are we just back to the background rates of treating them like any other group with weird hobbyhorse beliefs?

Expand full comment

I’m not sure if I interpret Scott’s argument correctly, but at least the way I read it, I disagree with most of it.

He seems to suggest that people update a lot from dramatic events, and that it’s a bad thing. While I agree it would be a bad thing, I think that hardly anyone does that. They might *learn* from single dramatic events, but that’s a very different thing, both epistemologically and practically.

I feel that the post fails to differentiate (at least, explicitly) between three types of people who might, say, call for a ban on certain types of biological research if COVID turns out to have a lab origin:

- People who thought a lab-caused pandemic was likely/inevitable, and now see themselves vindicated

- People who thought a lab-caused pandemic was highly unlikely/impossible, but now think it's probable and dangerous

- People who never thought about lab-caused pandemics and now think those are dangerous

The first group is not the target of the post’s criticism; I assume we can ignore them. Let’s look at the second one, which seems to me to be the main target.

I haven't followed the debates, but, based on other topics, I'd venture a guess that only a tiny minority of people who previously thought doing pathogen research was OK will significantly change their opinion; most people who call for a ban will be from the first and third groups.

Scott writes that he didn’t update after a recent mass shooting by a far-left transgender person. But I bet hardly anyone did! I assume (although I don’t have data) that most of those who thought transgender people were evil treated this in exactly the way Scott described: “we already had 100 cases of transgender people being bad, here’s the 101st one.” And I assume the opposite side also reacted exactly like Scott said one should: “we know people do terrible things sometimes, one instance of a person doing a terrible thing doesn’t change anything.” Perhaps I just misunderstand Scott’s claim that “people fail to consider events that have happened hundreds of times, treating each new instance as if it demands a massive update”, which precisely uses mass shootings as the example; I am under the impression that people hardly update at all, and tend to demand an update from their opponents on the basis of the cumulative evidence, not just the latest instance.

I even think that people often make this argument relatively explicitly. To take examples from Europe, which I’m more familiar with, most right-wingers who demanded action after the 2005 French riots (https://en.wikipedia.org/wiki/2005_French_riots) or the 2015–16 New Year's Eve sexual assaults in Germany (https://en.wikipedia.org/wiki/2015%E2%80%9316_New_Year%27s_Eve_sexual_assaults_in_Germany) said something like “We’ve been warning you about this; the Muslims were breaking laws in small ways all the time, and it was just a matter of time until something big happened.”

So, the point of all these examples is that people generally don’t *update* a lot on dramatic events. If they had prior expectations, they tend to update them only a little or not at all (or to update them in the opposite direction, because the other side is so insulting and loud and arrogant). Coming back to the lab leak issue, I assume that, if we ask the people who argue about this if any result would lead them to a major update, I assume they’d say no (it would be interesting to see data on this, of course).

What remains is the third group – people who didn’t really have strong priors because they’ve never given much thought to whether gain-of-function research might be dangerous. I’d say it’s most of the population, and I’d say that their ignorance is not stupid; it’s rational. It’s one of hundreds of difficult topics with complex arguments, complex implications, and experts on both sides of the divide. Why should the average person spend weeks of their time researching the matter, given that they don’t have any way of influencing the developments anyway? Scott does it; but then, I assume Scott enjoys the intellectual challenge, and Scott surely has many thousand times the influence of an average American.

So I deny the “A good Bayesian should start out believing there’s some medium chance of a lab leak pandemic per decade” part – I think you can be a good Bayesian and not start out with any beliefs at all.

I think this group, or at least significant parts of it, *does* learn from dramatic events, for the simple reason (that has been mentioned in the comments like https://www.astralcodexten.com/p/against-learning-from-dramatic-events/comment/47462922 already) that they didn’t really think about the problem before, but it has been thrust into their lives now by the media. And I assume that they do tend to assign a high probability to such events happening again – presumably one that is too high.

If so, this might be systematically wrong, but not as intellectually indefensible as one might think. Of course, the epistemically optimal way to deal with this would be to dive into the biosecurity literature and debates and form an educated opinion based on long-term data and experts' opinions – but, as mentioned above, I think that would be irrational for most people. The rational thing would be just to take what they know and make the best guess based on that. I’m not well-versed in epistemology, but as far as I understand, there are no clear rules on how to assign probabilities based on a single observation. If you think about it, it might make sense that the examples you get to read about are particularly egregious, and probably not representative – but that’s already more effort than most people do. If that’s what Scott criticized, I agree with that point, but it seems to me it wasn’t the main thrust of his argument.

So I don’t think most people *update* too much on dramatic events – but people who didn’t have an opinion probably often overreact, and that’s what might change the public opinion or a society’s attitudes and rules.

Expand full comment

"I’m part of the effective altruist movement. The biggest disaster we ever faced was the Sam Bankman-Fried thing."

The biggest disaster you ever faced YET

Expand full comment

I think "a lab leak should only increase your beliefs from 20% to 27%" is downstream of your assumption that even in the common-leak world, leaks only happen 33% of decades!

An alternative toy example: the Common/Rare Lab Leaks worlds have lab leaks occurring in 90%/10% of decades, and my prior is 10%/90% on them, so before covid I expect an 18% chance of a lab leak per decade. One lab leak will update my beliefs to 50/50 between the two world states, which also puts my expected chance of a lab leak per decade at 50%. So an 18% -> 50% is also possible!

An even more extreme example: I only model the world as "Lab Leaks are Impossible/Happen Every Decade", at 80%/20%, so my prior is they should happen 20% of decades, but once I see one I update to 100% chance every decade!

A final example where the probabilities are not discrete: if your prior is "the chance we live in a world where lab leaks happen p% of decades is proportional to (1-p)^3", an update takes you from 20% to 33%. This isn't as big a jump as the previous examples, but it's still almost doubling your expected harm from a lab leak!
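
All three updates are easy to verify numerically; a minimal Python sketch of the two discrete-world examples:

```python
# Each world is (per-decade leak rate, prior weight). After observing one
# leak, reweight each world by the probability it assigned to that leak.
def updated_rate(worlds):
    weights = [(rate, rate * prior) for rate, prior in worlds]
    total = sum(w for _, w in weights)
    return sum(rate * w / total for rate, w in weights)

examples = [
    [(0.90, 0.10), (0.10, 0.90)],  # 18% prior -> 50% posterior
    [(1.00, 0.20), (0.00, 0.80)],  # 20% prior -> 100% posterior
]
for worlds in examples:
    prior = sum(rate * p for rate, p in worlds)
    print(f"{prior:.0%} -> {updated_rate(worlds):.0%}")
```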

Expand full comment

I think you're wrong about the SBF/OpenAI thing. The board there did the right thing initially and only the backlash was incorrect. If anything, what we should learn about the Altman fiasco is that no EA can be trusted who is named Sam.

Expand full comment
Jan 17·edited Jan 17

This was nice, Thank you. I guess I must be one of the stupid people, because I mostly don't think about any of this stuff happening (or not happening) beforehand. And so when it does, there's a knee jerk response, and then maybe some time for reflection. And to be honest I just don't want to spend a lot of time thinking about terrorists getting nukes. (Or any of the other shitty things that could happen.) I figure there is maybe someone in government, or a think tank, or on substack doing my worrying for me, and I thank them for it.

As an aside, I think the most important thing to do now is more nuclear fission (so less regulation, or whatever it takes). And yet this leads right into a term in 'chances of terrorists getting nukes'. And that is a risk I think we must take... more nukes leads to more nuclear 'accidents'.

Expand full comment

There's also an important factor, having an updating strategy that's difficult for others to manipulate. As long as no terrorist has detonated a nuclear weapon, there's no good way to check whether a security agency that wants money to prevent a detonation is presenting you with a correct cost/benefit analysis. Indeed, I think that factor is important relative to a lot of arguments that circulate among highly-educated tech communities: one of the most intense evolutionary needs is to be difficult to deceive by the many, many actors you are interacting with that have an interest in deceiving you. Do not optimize your analytical tools for being *correct* relative to "the environment", optimize them for being *unexploitable* by your cohorts.

Expand full comment

Can I assume that "there will be a nuclear terrorism attack once every 100 years" actually means "if I run 100 simulations of the year 2024, one of them will contain a nuclear terrorism attack"? Because obviously the actual passage of time and evolution of tech and society will change so many things that may render such an attack impossible or irrelevant.

In 1970 it would've seemed roughly correct that there'd be a 60 HR season in baseball about every 25 years, but if you simulated the 1970 season 100 times I doubt you'd see 4 (or even 1) 60 HR seasons; the actual leader, Johnny Bench, hit only 45, despite the rare outlier feat of playing nearly every day as a catcher. But a quarter century later, during the Steroid Era, you'd have to simulate 100 seasons to find one that did NOT have a 60 HR hitter, because the game changed.

Doing things to change the frequency of the event therefore seems a LOT more impactful. The danger we want to avoid is not really the 1/100 chance of such an attack in 2024, or the cumulative chance over a decade, the danger is that you could wind up with the equivalent of a Steroid Era of Nuclear Terrorism that your priors treated as nearly impossible but in fact was nearly certain to occur because the tech reached the threshold that made it so.
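
The difference between a fixed-frequency event and a changing game can be made concrete with a tiny Monte Carlo sketch (all hazard rates invented): a fixed 1%-per-year risk versus one that jumps to 10% once a technology threshold is crossed at a random year. The two processes look identical in year one but very different over a century:

```python
import random
random.seed(0)

def century_has_attack(threshold_year=None):
    # 1% annual hazard, rising to 10% once the (optional) threshold year passes.
    for year in range(100):
        p = 0.10 if threshold_year is not None and year >= threshold_year else 0.01
        if random.random() < p:
            return True
    return False

TRIALS = 20_000
stationary = sum(century_has_attack() for _ in range(TRIALS)) / TRIALS
shifted = sum(century_has_attack(random.randrange(100)) for _ in range(TRIALS)) / TRIALS
print(f"stationary hazard: {stationary:.0%} of centuries see an attack")  # ~63%
print(f"regime-shift hazard: {shifted:.0%} of centuries see an attack")   # ~96%
```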

Expand full comment

> Does it matter if COVID was a lab leak?

As others have said, it's not so much the fact that COVID might have been a lab leak, but rather the fact that all the respectable government and media outlets went all-in to smear the lab-leak idea as an ignorant conspiracy theory when we now know that it wasn't. *That* seems like a thing you might reasonably learn from, unless your priors were already weighted quite heavily towards the "respectable government and media outlets are fundamentally untrustworthy" end of the spectrum.

> A few months ago, there was a mass shooting by a far-left transgender person who apparently had a grudge against a Christian school. The Right made a big deal about how this proves the Left is violent. I don’t begrudge them this; the Left does the same every time a right-winger does something like this. But I didn’t update at all. It was always obvious that far-left transgender violence was possible (just as far-right anti-transgender violence is possible). My distribution included a term for something like this probably happening once every few years. When it happened, I just thought “Yeah, that more or less matches my distribution” and ignored it.

Similarly, it's not the shooting itself, so much as the reaction to it. The media refused to publish the shooter's manifesto and instead rushed to lecture us all on the dangers of transphobia; Joe Biden made a speech calling transgender children "the soul of our nation". So whilst the shooting probably shouldn't change your priors very much on the likelihood of left-wing transgender violence, it probably should change your priors on the likelihood of the left in general condoning violence when it comes from someone they view as sympathetic -- again, unless your priors are already weighted in that direction.

Expand full comment

I agree with the basic sentiment in the article, and I guess I'm jaded enough about the world that I don't have the feeling of updating much when bad stuff gets reported. But as people have hinted in different ways in their responses, there is a big difference between society-wide expectations about the world, and individual ones.

A functioning society makes sure to have people engaged in estimating all kinds of dangers, and figuring out how much it makes sense to prepare for them and/or prevent them. So there is a conversation going on, and institutions get created and specialists do their research, and the occasional interested member of the general public may join in.

Whereas on an individual level, for things like pandemics or major terrorist attacks, where you individually have no role to play either way, it's perfectly rational to just not have or seek a precise prior. You basically know the thing exists, but you don't need to worry about how likely it is because there isn't anything you would do with more accurate information anyway. So you just think, if it happens in my lifetime, I'll notice it right there and then.

Expand full comment

Motorcycles are fun, because the safe thing to do is often the opposite of what instinct tells you. If you are too fast in a sharp turn, your brain screams at you to slow down and round off the turn, but the safe thing is to lean in to the turn and roll on the throttle, because it forces the tires into the road and increases traction. Your survival instinct actually increases your chance of dying.

The Wuhan lab thought they were lowering the risk of a pandemic by tracking down potential pandemic pathogens. They were trying to be proactive (hugely overrated), but in the process caused the thing they were trying to prevent.

The operators of Chernobyl were trying to lower the risk of a meltdown, but in the process of running a safety test caused the thing that they were trying to prevent.

Complex systems increase the possibility that higher order effects of precautionary measures will cause more damage than doing nothing. This basically neuters the precautionary principle. People seem to understand this intuitively when it comes to the environment, which is why there is such a strong hesitancy around geoengineering, or efforts to wipe out malarial mosquitos. When it comes to society or the economy we're much more interventionist.

That's what Hayek meant when he said "the curious task of economics is to teach men how little they understand about what they think they can control".

Expand full comment

>A few months ago, there was a mass shooting by a far-left transgender person who apparently had a grudge against a Christian school. The Right made a big deal about how this proves the Left is violent. I don’t begrudge them this; the Left does the same every time a right-winger does something like this. But I didn’t update at all.

Bear in mind that the media and the Twitterati aren't neutral parties. If a right-winger commits a mass shooting, it gets trumpeted far and wide as an example of evil right-wingers. If a far-left transgender person does so, it gets buried, or reported in terms that omit the perpetrator's affiliation.

Given how the media buries misdeeds done by allies, having one such misdeed that's so bad that you managed to hear about it anyway should update your beliefs towards evil leftists a lot more than the corresponding report on the other political side.

Expand full comment
Jan 17·edited Jan 17

That was a very interesting and thought-provoking read. However, I’m skeptical of the OpenAI/SBF comparison. Maybe it’s hindsight (or the outside view, since this blog is basically my only window into EA), but the two affairs seem deeply different – so there’s no reason why the same lessons should apply.

In the OpenAI thing, the board (the “EA side”, I guess?) made the first public move against Altman. He was in an extremely strong position: the CEO of a company whose ground-breaking products had become household names in less than a year. The board firing him, even though it was their right, should have been backed up by comparable evidence, which the board didn’t provide. Not that they had to address the public, but not even explaining themselves to other stakeholders (I’d say investors, Microsoft, with which they had a partnership, and enough key employees?) destroyed their credibility. Of course, hindsight is 20/20, but this reads like Rational Debate 101: “if someone’s extraordinary claim is not backed by extraordinary evidence, they’re the unreasonable ones”.

For SBF, it’s very different, in that the EA movement was “reactive”, and only tarred by association. There weren’t any actions as resounding as the one above to justify (or were there?), so I think it would have been safest to weather the Week of Hatred (of EA people) and stay mostly silent until they could do some good again. I also doubt that SBF’s position within the movement was as strong (both de jure and in perception) as Altman’s at OpenAI.

Of course, it can be that in both cases, the EA movement (or “side”) didn’t have a rhetorical position they could really defend, and got outplayed by people who were better at power or status games. In which case, the (slightly snarky) lesson would be “welcome to the real world, congrats for making it, now get good”.

Regarding the “deontology vs 5D utilitarian chess” lesson, isn’t this taking it to the extreme? No one (afaik) can play 5D utilitarian chess with any confidence. But many non-EA people can ably play regular 2D chess with a mixture of consequentialism and deontology – so maybe the conclusion is that you should stay at a level where you have strong evidence that you’re not out of your depth?

By the way, I’m sure you must have thought about it (perhaps even blogged about it), but you wrote a while ago that a pandemic risk as high as one percent should have been enough to drive all states to seriously prepare for this scenario. This was, of course, completely correct.

But what would have been a reasonable prior on SBF doing something shady enough to tar EAs by association? Given that he was operating some crypto “stuff” in Bahamas without any corporate control or serious oversight, and that he became a billionaire extremely fast, shouldn’t this baseline probability have been higher than one percent? And given how bad it would be if SBF was a fraud, even if your assessment was low, how come this “tail risk” wasn’t taken into account and prepared against, since it’s so central to the movement’s stated philosophy?

Expand full comment

It's fine to update priors and all, but the main reason we want to know whether there was a lab leak is so we can make sure the people responsible are never allowed near a lab or a funding spigot again, and also so the people who lied about it are never in any trusted position ever again.

Expand full comment

I'm going to try to put your position and my position into a quality control analogy.

You have a part that is supplied to you by several vendors (A, B, C) and you do quality control.

You set your acceptable error rate of failed parts at X, based on a dimension Y, and one day you see that the parts from supplier B came with more failed parts than your acceptable X.

Given this factual observation, there are three possible alternatives:

- Assume that all suppliers are behaving well, treat B's failure as an expected outcome, and simply inform them that you are going to raise your quality requirement Y so that you observe fewer X failures.

- Assume that supplier B is misbehaving and take the necessary measures specifically affecting that supplier.

- Or a bit of both

You choose the first option. In my opinion, you do this for ideological reasons, since you are ideologically motivated not to blame China because that would mean agreeing with horrible people like Trump.

This is a classic example of what Taleb refers to as mediocristan reasoning - wrongly applying normal-distribution statistics. I ask you to consider that this is not a normal situation, where supplier B sent you a part out of spec, you throw it away, ask for another batch, and wait for the normal distribution to operate, expecting to see everything according to distribution again.

In this case, your supplier B sent you A HIGHLY CONTAGIOUS VIRUS. It's important to know if supplier B is a complete idiot, a son of a bitch, or both. It's important, and action must be taken accordingly.

Expand full comment

You make sense. I have long thought along similar lines when it came to conservatives/libertarians who decided they were less pro free market after the Great Recession. (I am thinking specifically of Richard Posner and Razib Khan.) Don't you guys read economic history? Why didn't the Great Depression make you less pro free market? It was much worse than the Great Recession, and the American economy was way more laissez-faire in the '20s than the '00s.

Expand full comment

Please consider replacing the term "stupid people." I'd bet non-statistically literate people make up >95% of the population. Many of these people are brilliant in ways we aren't. I'm not the language police, but outright dismissal of folks who think differently from us isn't likely helping us build the community or future we need. It reads as low-confidence in-group signaling and risks the bottom 5% of your audience (the stupid people?) latching onto this kind of language and thinking. This only stood out to me because of how different it seemed from your typical style.

Love your work. Wishing you all the best in fatherhood :)

Expand full comment

Is there evidence that the kinds of stupid people Scott is arguing against actually exist? (I know I know, no limits to stupidity, etc.) Maybe everyone is just engaging in the same sort of coordination game around visible events, however consciously?

E.g. in the Harvey Weinstein case, did most people really update how likely they thought it was to get sexually abused as a young actress in Hollywood? Or did they just think, "This is a visible and upsetting event, and everyone I know is upset about this! We can finally do something about it!"

Expand full comment

This relies on everyone having a really good model for everything that could happen, despite there being a million issues that *someone* is saying should be my #1 priority right now.

For example, it seems like everyone's been worried for decades about bee populations. I probably first read an article about this 20 years ago, but I can't quite recall what the upshot of this "crisis" is. For now, I think it's reasonable to not make "bee policy" a major part of my voting or charitable decisions.

If tomorrow, some sort of catastrophe happens due to a lack of bees, that signals to me that I should reorient in a big way on the bee thing. I don't have to be "stupid" to wait for the catastrophe, and then update on it, I just have to have non-infinite capacity for research.

But yes, once I point my attention there, I shouldn't then assume bee stuff is our biggest problem just because it was our biggest problem yesterday.


Interesting post

But

Yes, it matters if it was a lab leak because obviously JFC

That argument is a great example of so-smart-you-are-stupid


Great response by Curtis Yarvin: https://graymirror.substack.com/p/you-will-probably-die-of-a-cold

You know it's bad when Yarvin feels the need to write a direct response article to someone.


My major takeaway for the media from this post is to be assiduous about putting events into context, especially in terms of frequency and magnitude. Of course, in the heat of the moment, putting things into context is considered to be somehow betraying the victims.


> So this hypothetical Bayesian, if they learned that COVID was a lab leak, should have 27.5% probability of another lab leak pandemic next decade. But if they learn COVID wasn’t a lab leak, they should have 20% probability.

You talk about lab leak probabilities as if they're handed down from on high, and our job is to figure out the correct number. That is not how this works. The possibility of a Wuhan lab leak is important exactly because of the official response to it. And that response is important exactly because it *causes* future lab leaks to be more likely.

Safety procedures developed via the "some guy sits in an office and brainstorms what procedures a lab should follow and writes up rules" method have an extremely poor track record. If we used that method, then the whole system would fall over as soon as a lab encountered real-world considerations like "Hey, what if we're moving to another building and need to pack all our viruses into a van - how do we handle that?" Therefore, that isn't the method we use. Instead, safety is maintained via constantly, constantly updating.

A healthy lab is constantly seeking out times when things didn't go as planned. At a basic level, reports are solicited from individual team members, often monthly or quarterly, and filtered upward. The more serious happenings get a full report written up; different labs call these different things, usually some bland variant on "Incident of Concern". The lessons learned are then socialized throughout the team, with everyone hearing the short version and the people who think they might face similar situations able to read up on the details. Everyone intuitively grasps that the top-down regulations are often only loosely connected to reality, and that it's important to always have information flowing up and out about how the regulations meet the real world.

I cannot stress enough how central this process is to safety in the real world. The top-down regulations are nothing; the feedback process is everything.

An unhealthy lab treats the top-down regulations as perfect and does not seek out information. Any deviation from how the "guy in the office" expected the lab to work results in a punishment for whoever was foolish enough to mention it. Consequently, the gap between what the head-office rules say and how the lab actually works continually grows.

In a *pathological* lab, the entire leadership chain is complicit in fraud from top to bottom.

People who talk a lot about ethics in governance talk about "tone at the top". People at the bottom obviously do not know exactly what the people at the top are doing, but each concentric circle interacts with the circle one step closer. And each gets a sense of what they are expected to do to "play ball" (or, if they fail to learn this, their careers go nowhere).

Binance's Chief Compliance Officer wrote internally in December 2018, "we are operating as a fking unlicensed securities exchange in the USA bro." Assuming that your brain is normal, you do not directly care at all about whether your financial institution is following all the Byzantine U.S. financial regulations, such as the one where they have to file a suspicious-activity report on you if you mention that you know they have a regulation that requires them to file suspicious-activity reports on people who know about the regulation that requires them to file suspicious-activity reports. Nevertheless, I claim that you ought to be concerned about the gap between what Binance's leadership was claiming in public and what they knew in private. Because of tone at the top. As long as Binance's leadership is actively engaged in cover-ups, anyone trying to have a career in Binance will have to learn to cover up any inconvenient facts that their bosses might not like. They will have to learn to do this automatically and implicitly. When everyone in an organization defaults to cover-up mode, this has consequences far beyond the question of complying with the voluminous U.S. regulatory regime.

I am in favor of gain-of-function research. (Just as you do, I use gain-of-function research as a synecdoche for any risky research, such as collecting viruses from the wild and bringing them into labs filled with human researchers. Anything that amounts to proactively seeking out the kinds of things that cause pandemics instead of waiting for them to come to us.) We got lucky this time, in that the pandemic was basically just a standard flu, and standard countermeasures worked fine, so gain-of-function research didn't end up being relevant. (The fancy sci-fi mRNA technology was only needed to overcome the challenge of mass-manufacturing a huge number of vaccines quickly.) I have no confidence we will continue to be so lucky forever.

As far as I am concerned, the number that matters for the next pandemic is this one: https://ourworldindata.org/urbanization

The next pandemic will come. It is inevitable. The only question is how prepared we'll be when it does.

Gain-of-function research makes us more prepared. But it also (probabilistically) makes the next pandemic happen sooner. That latter makes us *less* prepared. (Not only because we'll have completed less gain-of-function research itself, but also because we'll have less development in other areas, like mRNA manufacturing.) Therefore the safety practices of labs are incredibly important.

That's what I think. I am not going to try to make an argument here in favor of gain-of-function research, tgof137-style. Maybe you don't agree with me. Maybe you want to shut it all down. Maybe you want to murder me in my sleep to stop me polluting the system with my vote. That's irrelevant. The fact is that my side won. (Even if a few programs, like DEEP VZN, become casualties of political spats, much like Donald Trump killing individual Ford Mexico plants Batman-style.) (Yes, granted no war is ever over, but HR5894 has poor prospects in the Senate, and in any case gain-of-function per se is just a synecdoche, not the whole.) We won *seven years ago*, gain-of-function research is being done even as we speak, and therefore YOU SHOULD CARE A WHOLE HECKUVA LOT ABOUT THE ETHICAL PRACTICES OF THE ORGANIZATIONS DOING IT.

What are those ethical practices? Well.


I did the lab leak calculation but am getting different numbers for the observed-leak update. Would someone mind sanity checking? I'm just getting the hang of things here, anyway.

Let A denote the 0.33 decade-rate hypothesis and B the 0.10 decade-rate one. With problem context C, observed data D of a single pandemic, and rounding to 2 sig figs, we have:

p(A|C) = p(B|C) = 0.5

p(D|AC) = 0.33^1 e^-0.33 / 1! = 0.24 (Poisson distribution)

p(D|BC) = 0.10^1 e^-0.10 / 1! = 0.09 (Poisson distribution)

p(D|C) = 0.5×0.24 + 0.5×0.09 = 0.165 (law of total probability; one extra digit kept to avoid compounding rounding)

p(A|DC) = 0.24 × 0.5 ÷ 0.165 = 0.73 (Bayesian update)

p(B|DC) = 0.09 × 0.5 ÷ 0.165 = 0.27 (Bayesian update)

So the A:B odds boil down to straight division of the likelihoods, i.e. 24:9 = 8:3, or roughly 73:27. This is within rounding error of 74:26, so I'm unsure whether Scott just Spoonerized the 6 and 4 to get 76:24 or I am doing something wrong. That said, the final expected base rate matches up: 0.73×0.33 + 0.27×0.10 = 0.27.
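
In case it helps with the sanity check, here is a minimal Python sketch of the same two-hypothesis Poisson update. The 0.33 and 0.10 decade rates and the 50/50 prior come from above; the helper name poisson_pmf and everything else are just my illustration, not anything from the post:

```python
import math

def poisson_pmf(k, lam):
    """P(K = k) for a Poisson-distributed count with rate lam."""
    return lam**k * math.exp(-lam) / math.factorial(k)

rates = {"A": 0.33, "B": 0.10}  # hypothesized lab-leak pandemics per decade
prior = {"A": 0.5, "B": 0.5}    # 50/50 prior over the two hypotheses

# Observed data D: exactly one such pandemic in the decade.
likelihood = {h: poisson_pmf(1, lam) for h, lam in rates.items()}

# p(D) by the law of total probability, then Bayes' rule.
p_d = sum(prior[h] * likelihood[h] for h in rates)
posterior = {h: prior[h] * likelihood[h] / p_d for h in rates}

print(posterior)  # ~{'A': 0.724, 'B': 0.276}

# Posterior expected decade rate of the next lab-leak pandemic.
print(sum(posterior[h] * rates[h] for h in rates))  # ~0.27
```

Unrounded, this gives roughly 72:28, so the 2-sig-fig hand arithmetic above looks about right, and neither version lands exactly on 76:24.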


I seem to remember a thread on Less Wrong about (IIRC) something which Fermi had said there was a 10% chance of, and which turned out to be true. Whenever people in the comments pointed out reasons why that was not an unreasonable figure given what Fermi could possibly have known back then, EY would smugly say something like "pretty argument, how come it didn't work out in reality" - which would have been a reasonable reply if Fermi had said something like a 0.1% chance, but which I found pretty ridiculous for 10%.


By the way, already back in mid-2020 I found lab-leak hypotheses much more plausible than the mainstream did, because of the epistemic good luck of simultaneously remembering that ESR had blogged about such a hypothesis back in early February (which I had found convincing) and not remembering any of the details of his particular hypothesis (which had been pretty much ruled out by March at the latest).


"This strategy is mathematically correct..."

I'm pretty sure this entire post is, rather, incoherent rambling. You made up these numbers (sometimes you admit to this, e.g. "fake bayesian math"). There was no objective starting point. This isn't how thinking works, let alone thinking rationally, though I know you disagree because you believe Bayes has ascended to divinity or whatever. But astrology is literally more objective than everything in this entire post. At least one can point to the position of planets in the sky as a starting point before making "mathematically correct" calculations pertaining to them. As an example, I'll briefly discuss your example.

You made an argument that surveys suggest 5% of people admit to having sexually assaulted someone, and since this is something people would want to lie about, 10% actually did it? Whatever happened to "Lizardman's Constant Is 4%"? Didn't Bush do North Dakota? Why would you assume anybody would care to answer this question honestly at all? Why wouldn't they lie in the other direction to mess with you? Depending on circumstance, I probably would.

Actually, nobody admitted to anything! You can't point to them; you only have an abstraction, a number so low it's not worth anything at all in this case. And your entire post is basically just restatements of this made-up math.

Here are some real reasons why one should care whether Lab Leak is true. If it's true, it means some people with political and social power knew about it but didn't tell the public. Not only did they not tell the public, they actively hid the fact. Not only was the fact hidden, an international regime of censorship pervaded the Internet, creating a situation where to this day people worry about what they can and cannot say online regarding this stupid virus. And then this was used by moral busybodies to attack people and dismiss concerns, and to feel righteous about it, when in fact they didn't know any better than anybody else without political power. This isn't even getting into the fact that whatever research was conducted, it seems to have done nothing to mitigate the health, societal, and economic damage from the virus itself.

Now you may note that your real point is some nonsense about future lab pandemics. This is what people call a "straw man". It's true that some people embody the straw man. But the straw man is a straw man because it's a carefully constructed argument that isn't the argument being made. While the possibility of future lab leaked pandemics is a concern, it's not the only, or even the primary concern to most people regarding Covid being a lab leak. Insofar as it is a concern, a normal person, as opposed to a good Bayesian, would weigh the usefulness of funding lab viruses/gain-of-function research in China in the first place, against the possibility of even one leak.


This is such an idiotic take. One would only expect such a brainfart, dressed up with a few Bayesian updates to make it sound sophisticated, to work on idiots, and that's what the rationalist community looks like to me. I didn't have strong feelings either way, but the inability of many in the community to see the obvious just helped solidify my hunch that many in the community, like the author of this article, are gullible idiots.

The important point that somehow doesn't even register with these self-anointed rational thinkers is the implication of it being a lab leak. It implies that their Lord and Saviour, Saint Anthony Fauci, disregarded the concerns of well-meaning scientists in overturning the ban on gain-of-function research of concern, and that when he realised how big a cock-up that was, he and some influential members of the scientific community gaslit the public about the origins of Covid-19.

Now spinning this as "it doesn't matter" seems like a way to cope with the fact that the community has been wrong and slow to accept the high likelihood of a lab origin. Anyone can call themselves a rationalist, but whether someone truly is one can only be judged by their conduct, and this particular topic shows how terrible the average rationalist actually is at being rational, and how prone to political biases, like any other community. I'm not denying that there is a small section of the community who see through the charade, and I respect them, but on average the community is just a substitute for religion, something to make many of its members feel good about themselves. The community could use more practicing and less preaching.


>I think it would have been possible to have gotten this right. Before 9-11, we might have investigated the frequency of terrorist attacks. We would have noticed small attacks once every few years, large attacks every decade or so, etc. Then we would have fit it to a power law (it’s always a power law) and predicted a distribution

This kind of thing is usually done by extreme value analysis - engineers use this to get return periods (a level exceeded on average once in 50 years, for example) for e.g. floods or waves. I'm sure this can also be done for terrorist attacks; see the sketch below. I would also assume the right kind of government agency would routinely do this sort of analysis, and would therefore be aware that they should not update too much on such events. So we already know this, and the over-compensating is probably due more to incentives in media and politics than anything else. (A caveat is that this kind of analysis typically requires the assumption that the data is homogeneous - i.e. the mechanism causing the random events is not changing over time, or at least that the trend is known.)
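
For the curious, a minimal sketch of what such a return-period calculation can look like, assuming annual-maximum data and a Gumbel fit. The synthetic flood data, parameter values, 60-year sample size, and the choice of scipy's gumbel_r are illustrative assumptions on my part, not a claim about how any agency actually does this:

```python
# Extreme value analysis sketch: fit annual maxima with a Gumbel
# distribution and read off return levels. Synthetic data only.
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(seed=0)
annual_max = rng.gumbel(loc=100.0, scale=15.0, size=60)  # e.g. 60 years of peak flood levels

# Fit location and scale by maximum likelihood.
loc, scale = gumbel_r.fit(annual_max)

# The T-year return level is the quantile exceeded with probability 1/T in any given year.
for T in (10, 50, 100):
    level = gumbel_r.ppf(1 - 1 / T, loc=loc, scale=scale)
    print(f"{T}-year return level: {level:.1f}")
```

(If the shape of the tail is itself uncertain, one would fit the full GEV family, e.g. scipy.stats.genextreme, rather than fixing the Gumbel form.)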


>In retrospect, updating any of our beliefs - about Islam, about the extent of the terrorist threat, about geopolitical reality, based on 9-11, was probably a mistake.

Well... no. 9/11 didn't change everything; it was a loud sign that Americans and Westerners should pay attention to how much things had changed.

Since 9/11, we have had 44,000+ jihad attacks, at least two genocides, and various European nations coming to various accommodations with Islamic imperialism (decriminalising rape, abolishing free speech, accepting murderous Jew hatred, etc.).

Kind of a big deal? Maybe if we all actually had updated our beliefs we wouldn't have e.g. Yazidi refugees running into their slavers in Germany. Maybe we might even have prevented the slavery and genocide in the first place?

I know. Whacky ideas.


Though my personal instincts are very much "against learning from dramatic events," I fear that this is a bad heuristic. Consider the stock market: it's not at all uncommon for individual stock prices to go up or down *a lot* in response to a single headline or earnings release. Very often this irritates the most intellectually inclined "investment professionals," who complain that other market participants are being hysterical and overreacting. Yet academic research suggests the opposite: there is a phenomenon of "post-earnings-announcement drift" (https://en.wikipedia.org/wiki/Post%E2%80%93earnings-announcement_drift) in which stock prices that go up immediately because of good news tend to keep going up for a little while afterwards, even though this violates strict market efficiency. Investors are systematically slow to fully update. The larger "momentum" (https://en.wikipedia.org/wiki/Momentum_investing) effect, in which various kinds of financial assets that have done well in the recent past tend to keep doing well in the near future, probably has a similar cause: investors are (slightly) irrationally reluctant to jump all the way to a new belief state, so they get there over a series of smaller steps.

If this is how fairly smart people tend to behave even with strong financial incentives to be correct and a lot of resources to spend on research, we should be suspicious of any general restraint on updating from events. It sounds reasonable and sober to say "sometimes a terrorist attack happens every so often," or "sometimes a company misses earnings for no important reason but gets right back on track the following quarter," but in practice these sorts of human events involve many concrete details and are never just pure instances of a simple reference class. There are many ways that aspects of an event can be surprising and rationally induce large updates of some sort, even if "an event in this broad reference class happened" is not itself, rationally speaking, a surprise, and I think the financial-market experience suggests that the more common failure mode is under-updating, not over-updating.


I did proper base rates for pandemics and basically agree with your bottom line, although I think your numbers are too low. I think COVID being a lab leak should move you from 56% to 76% on there being at least one accidental pandemic over the next decade.

More here: https://open.substack.com/pub/deconfusiondevice/p/forecasting-accidentally-caused-pandemics?utm_source=share&utm_medium=android&r=11048

Remember that the median pandemic is a lot lower impact than COVID. E.g., the other example of an accidental pandemic (1977 Russian Flu) was 700,000 deaths, compared to ~20 million from COVID.


You should consider joining the Radical Centrist movement. We are like regular centrists but instead of being passive when extremists in either major political party overreact, we believe in slapping them down hard.

https://questioner.substack.com/p/we-live-in-a-society


The lesson here isn't "don't update from dramatic events". The lesson here is "broaden your horizons". The issue with dramatic events like 9/11 is the infrequency, not the magnitude. When n = ~1, your updates should be large. When n = decades of datapoints, your updates should be small.

The average joe is not an expert in terrorism. For the average joe, n is small, maybe even zero. Sure, it'd be nice if everyone were more familiar with niche subject matter like terrorism. But from the limited perspective of the average joe, large updates are completely reasonable.


Kind of tangential, but I'm amazed that Yoshihiro Kawaoka appears to just be plugging away doing his thing, even after all of this.


This article's focus on how much to update broad, outside-view statistics seems rather limited and anti-curious. It's true that some oracle telling us "yes, it was a lab leak" or "no it wasn't" doesn't tell us very much. But an investigation into *how* a lab leak happened will hopefully discover a lot more than that. The goal of an accident investigation should be to find specific ways that biosafety procedures can be improved.

Similarly, maybe you shouldn't update too much after a plane crash about air safety overall, but that's different from thinking that investigating crashes isn't worthwhile.

It doesn't mean everyone has to be curious about everything. There are many subjects where my own curiosity is limited. But we should support the people who actually look. The details are more important than the headline.

I found Matt Levine's columns about FTX and the OpenAI board controversy to have plenty of interest, even if they didn't change my overall views on cryptocurrency or non-profit boards very much.


Usually when our emotional reactions to events hinder us, it's by preventing us from updating.

E.g., if I'm already inclined to think that there's something wrong with the EA movement, and I'm just hoping for some public blunder to surface, then I haven't really updated much if I later observe one occur. I think a lot of the public performances which accompany those events (affected gasping and so on) are not in fact evidence of updating per se. If the opposite were the case - that our untempered emotional reactions cause larger-than-justified updates - we'd expect to observe more chaotic behavior in groups of people: betrayals and side-switching and so on. But tribal behavior often involves a lot of people digging their heels in.


This has a GM/Player problem (after baseball statistics; https://community.fangraphs.com/stop-thinking-like-a-gm-start-thinking-like-a-player/).

The chances of a major terrorist attack or lab leak depend on the existence of people who use as much knowledge and detail as they can get to try and stop every last terror attack or biosafety failure. If COVID were a lab leak, it would be ridiculous for the NIH Biosafety Committee to say "well, that's in line with our once a century major lab leak model; we wouldn't want to overreact to a rare event".

Does it matter if COVID was a lab leak? Maybe not to you or me. But it should matter to the people who write biosafety protocols, and if it doesn't, the historical rate of laboratory safety failures won't be a good guide for the future.


If we look in aggregate and say there is just a 20% chance of a "global-pandemic-inducing" lab leak every decade, and the first lab with Biosafety Level 4 status opened in 1969, that is roughly 1 leak in 5 decades, so, you know, fine.

If we look specifically at one country, though, whose first BSL-4 lab opened in 2015 and which has now had one global-pandemic-inducing lab leak, should the global science community permit that country to continue opening labs (they currently have 2)? Should they be held to a higher standard than other countries? Or do we just trust them to get it right the next time?

https://en.m.wikipedia.org/wiki/Biosafety_level


So this is correct, and the reasoning mistake it describes makes me crazy.

BUT once again I think the rationalist take ignores meta-concerns. Here, specifically, I'm thinking about agenda setting. Prior to 9-11 the public wasn't "stupid" because they thought a major terror attack wouldn't happen. Instead, a major terror attack wasn't even on the radar. Then it was, so people updated to "oh large scale terrorist attacks can happen I guess." They updated badly, but updating from "my base rate for things I don't think about" to "oh I guess I should have been thinking about that" is completely rational when a dramatic event occurs.

Even if you updated your priors on every event that happened in the whole world, agenda-setting would still come into play. For instance, with both the OpenAI thing and the SBF thing, my instinct was to update the same way: "These wackadoos are so lost in abstraction that they can't make common-sense choices based on the heuristics normal people use." With some thought I see that's too general, but even so I could easily update in a way that confirmed my every bias, using the same information you're using to say the events were polar opposites.

Anticipating events that *might* happen is important. Understanding that fluke-y stuff just happens sometimes is important. But there's a finer line than I think the community realizes between these two ideas:

1) "You have to use reason and weighted hypotheticals to think about the world, what actually happens is just one part of that statistical space and basing your entire worldview on that is a recipe for ignoring obvious threats just because they haven't happened yet."

2) "Data literally doesn't matter at all and will be assimilated into my pre-existing worldview in a way that makes me more confident despite an absolute absence of evidence."


While I do agree with this framework with regard to disasters, I fear it slips into fatalism about the opportunities for improvement. Maternal deaths used to be much more common. So did plane crashes. Foodborne illness. Etc. The difficult bit for me is developing the discretion to recognise a tragedy/"state of affairs that led to tragedy" as fixable vs. not fixable.


> My worst enemies - e/acc people […] weird anti-charity socialists

Wait what? E/acc is explicitly capitalist, its members are capitalists, its founders are capitalists, wikipedia describes it as capitalist, the e/acc manifesto even says so: https://beff.substack.com/p/notes-on-eacc-principles-and-tenets


I think the reason EAs were disturbed by sexual harassment happening in the EA community is not that we aren't aware of the base rate, but that we hold ourselves to a much higher standard than the base rate (because such behavior explicitly goes against the altruism).


Apparently we're assuming by "matter" we must mean "matter in order to update my estimates on this pointless made-up probability I decided for some bizarre reason to focus on".

You go on to suggest one would care about this non-data point because it would somehow guide the crafting of gain-of-function research regulation policy... well, first off: there are many other "matter"-ful topics one can care about that have nothing to do with regulating gain-of-function research. And second, the estimated "per-decade chance of a lab leak pandemic" is about the worst datum I could imagine using to guide the crafting of such a policy.


This fits nicely into my understanding of how wisdom is attained.

We can easily observe that there are elderly people who are not particularly wise. Therefore, although the acquisition of wisdom requires time and experience, age does not equal wisdom per se.

So, when we encounter a sentinel event, we must ask two questions.

Is this event something I can learn from?

and

What is the lesson?


I really enjoyed this! You crystallized a similar conversation I'd had about 9/11 and airport security (the conversation being what frequency of 9/11-style attacks one would accept if it meant we could remove airport security, and why 9/11 was possibly an over-update).

On a meta note, is Scott's blog particularly slow to load? I checked some other Substacks and they seem to load fine, but I find my browser (Safari) really struggles with ACX.


There's a level of meta-reasoning lacking in the Covid arguments.

It matters very little whether it was in fact a lab leak. But it matters immensely whether the authorities and medical/scientific communities responded appropriately given the facts at hand.

We saw abject failure of our sense-making apparatus to act appropriately. For many, this triggered a much more important update than the base rate of pandemics. And ensuring those institutions respond appropriately (e.g. reasoning in terms of base rates at all) is much more important than mildly updating what the base rate of pandemics is.


Doesn't this perspective only work if you assume that the factors that caused the past distribution have not changed? If 9/11 seemed to show a qualitatively new kind of terrorist movement, wouldn't that change the likelihood of future attacks in a way that was not bounded or predictable based on past models? Like, if there was a nuclear terrorist attack, it would matter a lot whether terrorists had scrounged an existing nuke or made their own, and whether they'd made their own with the help of a rare sympathetic expert or using unskilled labor and newly affordable tools. In the latter case you have a reason to expect many more attacks than in previous experience.

In a similar way, maybe the reason the rate of such attacks is stable over time is the overreaction of "stupid people" each time an attack occurs, which refocuses society's ongoing anti-terrorism work and draws public attention to its effectiveness. Maybe a successful attack was just bad luck for anti-terror efforts, but maybe it indicates an atrophy in those efforts or an incorrect focus. Without freaking out, how would the public ever know?


Wouldn’t the over-reactors just claim that their update was a “how the world works” style update? I.e., a new and more virulent strain of Islamic extremism has emerged and shown its colors, just like an apparently unfaithful spouse has emerged and shown his. In each case we must update hard and react, because we just learned something critical about how the world works.


Great article.


This is all very rational, except that it comes to naught because the majority of people are and will always be stupid.

Also, to play devil’s advocate, it was not entirely stupid to think that after 9/11 terrorist attacks could become more severe and frequent: there is the massive inspiration it may have provided to other potential terrorists, as well as the copycat effect, to name just a couple of factors. The sociology of terrorist attacks is a different thing to model than the frequency of a severe earthquake, which does not depend on anything human beings feel or do. Financial risk managers know these difficulties well.
