Hm, if it is about climate, anyone NOT arguing, nay shouting, for a "precautionary approach" before something (big) happens will be shouted down - "how dare you" / "I want you to panic". - me no climate-skeptic, just wary of the catastrophists/ "degrowth-environmentalists" https://unchartedterritories.tomaspueyo.com/p/there-are-2-types-of-environmentalist
Thank you. I know and (mostly) agree with Taleb's statement. De-growth / switching off nuclear power / banning geo-engineering are "actions or policies" I suspect of a huge risk (nay certainty) of causing severe harm to general health AND the environment. Being poorer is worse for health and the environment. You can disprove that? Greta-nomics make sense?
As I am from the past, a boomer even, I assure you: We spent HUGE efforts to improve health and environment. With huge success. Maybe too much "pp", when it came to nuclear power. Getting a brake on CO2 was for a long time not as high a priority, as we were suffering from much worse fumes and issues. You surely went to read the Pueyo post I linked to? I noted too late: it is a paid one. Well, he has several free for all. https://unchartedterritories.tomaspueyo.com/
Firstly, I don't see how your point here contradicts Mark's point or supports your earlier point in the comment Mark responded to. You say the precautionary principle prescribes not making large-scale changes to the environment. Previously you said that anyone arguing for a precautionary approach before something happens will be shouted down. But at this point, people arguing for the precautionary approach in the case of climate change aren't getting shouted down in mainstream circles; if anything, it's those arguing against it getting shouted down, or just both are shouting at each other.
Secondly, preventing climate change by eliminating fossil fuel use at a forced pace *also* has a definite risk of causing severe harm to humanity's well-being via impoverishing us, although there is perhaps less uncertainty about the extent of the harm. And it's not a small number of big companies causing CO₂ emissions and benefiting from them at public cost, we're all (perhaps indirectly) causing them and benefiting from them (while also being harmed by them in some ways). One can argue that the harms exceed the benefits, but it's not the case that a few people get only the benefits, while the rest gets only the harm.
I generally don't think the concept of "burden of proof" is very meaningful when we aren't talking about a court case. In a policy question, we have to estimate the probability of something (or the expected value and the probability distribution), and policy should depend on that estimate. Who provides the evidence our estimate is based on doesn't matter, except for evaluating the trustworthiness of the evidence. If we estimate a significant risk of severe harm from climate change, we should prevent it even at a significant cost. If we estimate the risk as very small, we shouldn't, whether it's those causing the climate change who have proved that the risk is very small, or others.
> In November of 2022, three associates of Ziz (Somnulence “Somni” Logencia, Emma Borhanian, and someone going by the alias “Suri Dao”) got into a violent conflict with their landlord in Vallejo, California, according to court records and news reports. Somni stabbed the landlord in the back with a sword, and the landlord shot Somni and Emma. Emma died, and Somni and Suri were arrested. Ziz and Gwen were seen by police at the scene, alive.
Not terribly relevant to the point of your post, but what *I* found most interesting about the Lab Leak hypothesis was how vehement a lot of people were (very early on) about eliminating it as a possibility. Clearly, the chances of this being the case were greater than 0%. And the location of the outbreak being near the lab wasn't exactly evidence AGAINST a lab leak. Still ...
What I find interesting about the Harvey Weinstein affair is that the response from Hollywood seems to be, "Yes, *everyone* knew about it. And it was bad." And my question is, "So who is doing this NOW such that in 20 - 30 years we will be hearing (again), yes, everyone knew it ...?" We'll have to wait 20 - 30 years, though.
About SBF ... high trust groups (e.g. Mormons) are known to be at higher risk of this sort of thing than low trust groups. Because obviously ... It happens.
The relevant prior to update in the lab leak hypothesis isn’t the chance that another lab leak happens. It’s: “should you trust the US government and major corporate news networks when they swear up and down something is true and that no one should disagree.”
'The louder he talked of his honor, the faster we counted our spoons.'
-- Ralph Waldo Emerson
My current trust when 'it is made clear that even ASKING the question makes you a bad person' is approximately 0%. The folks might actually be correct, but I put approximately no weight at all on their claim. Data will have to come from some other source. I'm not inclined to increase this value based on any recent-ish events :-)
'The louder he talked of his honor, the faster we counted our spoons.'
My sentiment exactly. All public moralists are somewhat suspect to me - some exceptions aside where publicity is necessary. I assume that they are, more often than not, the very thing they are inveighing against.
I'm not assuming that the folks who were vehement against even CONSIDERING a lab leak were leaking Covid themselves from a lab.
I do suspect (in the general case) that when folks insist that even asking the question makes you a bad person then maybe things aren't as clear as one would like.
I think it was an anti-racist ideology, or adjacent. Trump had his China flu rhetoric after all. Then Biden comes to power and is also anti-China and now it's ok to be anti-China.
China is an adversary. It is silly not to be wary of an adversary. But it may be possible to work together to accomplish certain goals, even though it isn't possible to work together to accomplish others. Neither government is trustworthy, so they properly don't trust each other.
> I do suspect (in the general case) that when folks insist that even asking the question makes you a bad person then maybe things aren't as clear as one would like
The main problem with applying this heuristic is that pretty soon you wind up going down the Holocaust revisionism rabbit hole.
Sometimes, being popular is more important than being right.
As I understand it, the big push against even considering a lab leak came with the 19 Feb 2020 open letter in "Lancet", which gave traditional media sources an excuse to effectively declare the matter settled on the basis of scientific consensus.
That letter was co-written and privately circulated by Peter Daszak, who is in fact the person most likely to have been leaking Covid from a lab. Well, directing others to create human-contagious bat coronaviruses in a known-leaky lab while he stayed safely on a different continent.
"Person most likely to" is not the same as "proven to", of course. It's still quite possible that this was a natural zoonotic spillover. But the plausible scenarios involving lab leaks are dominated by ones involving Daszak. So, pretty serious conflict of interest there.
At least some of them had been involved in funding research at the WIV, which gave them a strong reason to persuade people that it was not a lab leak. That is most obviously true of Peter Daszak, president of EcoHealth Alliance, the organization through which money passed from the federal government to fund WIV research, and of Fauci, ultimately responsible for the government end of the same funding. Daszak signed, and probably organized, the Lancet piece "Statement in support of the scientists, public health professionals, and medical professionals of China combatting COVID-19" which rejected "conspiracy theories suggesting that COVID-19 does not have a natural origin."
That's too simple a model, but it has a large number of correct predictions. It's just that it also gives you a large number of false positives. It's definitely true that many folks who feel guilty about their desires or actions project those same desires or actions onto others, and also project the guilt.
My estimate is that many moralists feel guilty about their desires, but do not commit the actions that they feel guilty about desiring.
I've found an idiomatic meaning, something along the lines of "making sure our stuff hasn't been stolen", but I'm not sure that's what is meant. Can you clarify this at all?
Edit: I actually found the passage with more artful digging. Short version: yes.
"See what allowance vice finds in the respectable and well-conditioned class. If a pickpocket intrude into the society of gentlemen, they exert what moral force they have, and he finds himself uncomfortable and glad to get away. But if an adventurer go through all the forms, procure himself to be elected to a post of trust, as of senator or president, though by the same arts as we detest in the house-thief,—the same gentlemen who agree to discountenance the private rogue will be forward to show civilities and marks of respect to the public one; and no amount of evidence of his crimes will prevent them giving him ovations, complimentary dinners, opening their own houses to him and priding themselves on his acquaintance. We were not deceived by the professions of the private adventurer,—the louder he talked of his honor, the faster we counted our spoons; but we appeal to the sanctified preamble of the
messages and proclamations of the public sinner, as the proof of sincerity. It must be that they who pay this homage have said to themselves, On the whole, we don't know about this that you call honesty; a bird in the hand is better."
"They were a primary target of opportunity for Victorian era burglars. A court record in 1840 lists the thefts of Edward Abbey, sentenced to four years of hard labor: some gold and silver and three teaspoons. Dickens has several instances that show how the spoons were luxury items: when Scrooge dies, in Christmas Carol, his housekeeper strips the sheets from his bed and his night clothes, along with his teaspoons and sugar tongs, to sell to a fence. Fagin in Oliver Twist is sent to prison for two years, for theft of the equivalent of $120 or so – and a silver teaspoon."
And, yeah. Emerson (1803 - 1882) wasn't living in Victorian Era England, but the idea seems to have been current on both sides of the Atlantic.
To follow up, it's not just the archetypal night-time sneak-thief burglary. If you invited someone to your house for tea, and put out the good china and the good silver spoons, you might later find out that some of the spoons had "wandered off", which is to say, that your guest turned out to be unscrupulous and had stolen a few of them by slipping them into a pocket. (Presumably the china would be too bulky.) Sometimes this would happen with an elderly-and-forgetful person, or an eccentric-but-harmless person, and everyone would know about them and know how to go about retrieving the items later. But sometimes it would be someone of bad character, stereotypically someone "of bad breeding" posing as a lady or gentleman in order to enrich themselves, either through some complicated social manipulation, or just basic thievery.
And so the situation is that you invited a stranger over for tea, and they started showing "red flags" about not behaving like a proper lady or gentleman, and now you're on guard in case they turn out to be a thief.
So, going forward ... how do I take into account political leanings in statements about what should be 'truth' (the virus leaked from a lab or it did not)? I was aware of this BEFORE Covid, but I'm wondering if I should increase my weight that various statements from the health establishment are driven by politics rather than data/facts.
If Trump had been pushing masking very early on would the health establishment have decided that most masks did little for average citizens? I don't know.
I don't know how much you weight the possibility that statements from scientific authorities are informed in whole or in part by politics, but from what I can tell of the reproducibility crisis, I would probably increase that weight.
> I'm wondering if I should increase my weight that various statements from the health establishment are driven by politics
Wow dude. Yes. You should set that weight to 1.0.
I went really deep on this during COVID and concluded that statements from the health establishment are entirely driven by politics, in the sense that everything they say is first filtered or transformed by the question "is this compatible with far left collectivism?"
So, if there's a true fact that's compatible, they will report it honestly. If the facts are not compatible, they either won't report it, or will just make shit up, and will then tell you that every expert agrees with them. And they will because everyone who isn't a collectivist has been purged from the health establishment years ago with the exception of private doctors, but they're licensed and so under the thumb.
Examples: how vaccine herd immunity went from expert consensus to "I think it was hope", and how social distancing went to "it just sort of turned up", those are direct quotes from health establishment figures in the USA.
This is a very painful thing because lots of people spent the COVID years telling themselves that as upstanding citizens on the side of science, they would listen to and respect the health establishment, unlike their crazy uncle who gets conspiracy theories off ZeroHedge. But those people were all wrong, and the crazy uncle was correct. Nothing you heard from the health establishment about COVID can be taken at face value - every single claim, no matter how apparently basic or trivial, was tested to ensure its compatibility with advancing collectivist ideology (or harming those who stand against it).
I still have a problem understanding the "masks bad" side.
I mean, covid is a virus, right? It is spread by tiny droplets that come out of infected people's mouths and noses, right? So... how does putting a physical barrier to these droplets *not* work?
(If the objection is that it doesn't help 100% reliably, I agree with that. It's just that in absence of 100%, I would take even 50% over 0%.)
(If the objection is that other things are more effective, such as staying away from strangers, or meeting people on the street rather than inside a building, I agree with that, too. It's just, sometimes I can't avoid the other people.)
Where I live, the typical objection against masks is that they were designed by evil Americans to suffocate our children. I find this somewhat falsified by the fact that my kids survived. Whom do Americans blame for trying to suffocate their children? Is there any other specific problem with masks? Is it just a generic "you can't make me do X" tantrum?
I think the objections were at root mostly about rules requiring people to wear cloth masks (which I think have protection closer to 5% than 50%). The mask itself becomes a symbol of the mandate/loss of freedom/cowardly submission to authority etc. This motivates exaggerating the minor discomfort into suffocation.
Covid is a virus that is spread by very small droplets, which follow the airflow very closely. And the air mostly follows the path of least resistance. So if you're wearing a typical improvised cloth mask that you bought on Etsy in April 2020, your breath mostly goes out through gaps at the side of the mask, or where the nose pulls the mask away from your face, so your breath fogs up your glasses while spouting a fountain of virus to waft down on everyone in reach.
If you're inhaling, more of the air will be forced through the mask, and the weave might be tight enough to filter out even few-micron droplets, but there's still a lot of leakage. And anything that *does* get filtered, just accumulates on the mask until you take it off at the end of the day, when some of it smears onto your fingers as you take off the mask, right before you scratch the itchy nose that has been bothering you all day.
A properly-fitted N95 respirator, used once and carefully disposed of afterwards, is designed to deal with these problems and does so quite well. But almost nobody in North America or Western Europe had access to those in early 2020; the East Asians had bought up pretty much the entire world's supply. Even American health-care workers and first responders didn't have *enough* N95s to dispose of them as recommended. And, as Scott has noted from his own medical training, properly fitting an N95 is not an intuitively obvious thing or a comfortable thing and lots of doctors don't manage it. But an improperly-fitted and oft-reused N95 still offers reasonably good protection.
The CDC, many years before COVID, published a white paper on how to make an improvised cloth mask out of cut-up T-shirts and the like, that would actually do what an N95 is supposed to. Unfortunately I'm at the wrong computer to dig up a reference, but the thing looked like the mutant offspring of an "Alien" facehugger and a laundry hamper. I didn't see anybody wearing one of those in 2020, nor was the CDC telling people to do so even though this was almost exactly the situation they'd designed that mask for.
The sort of improvised cloth masks people were actually wearing, offer protection that is statistically indistinguishable from "entirely useless" and could easily cross over into "worse than useless" with even a little risk compensation.
This might be why it was _repeated_ but the original dismissals were by the people who were responsible for the research happening at WIV--i.e. the ones with the most to lose if lab leak was widely believed to be true.
I tend to agree. Here in Europe we did not pay that much attention to whatever The Donald had to say. But in Germany the "mainstream" was very much anti-lab-leak for a long time. As we listened to our Fauci: Prof. Drosten - a true expert, who developed one of the first good tests. And he said: No leak. https://en.wikipedia.org/wiki/Christian_Drosten - Well, he had published before with Shi Zhengli. Checking for her name I just found on wikipedia: https://en.wikipedia.org/wiki/Wuhan_Institute_of_Virology#Virus_origin_allegations
Yeah, that's my guess. I mean, the lab leak hypothesis was denigrated in other countries than the US, too, even without those countries actually having Trump. However, all Western countries had the same issues of wanting to cooperate with China and avoiding a topic that could foul up this cooperation.
We freely condemn China for things the Chinese government doesn't care about. They don't want to look incompetent, so we tend to ignore that. (And when we're involved in the incompetence, the tendency becomes a LOT stronger.) The denials always had a strong flavor of CYA...but this didn't even address whether the thing happened or not. (FWIW, I tend to disbelieve the lab-leak hypothesis, but don't consider it important in either case...except that lab protocols definitely need to be strengthened...in either case.)
In a technical sense it did enter the public consciousness via someone playing to Trump's prejudices, because Senator Tom Cotton thought Trump might be more willing to dismiss the reassurances of his friend Xi if Cotton pointed out it might be something that accidentally got out of a Chinese lab.
The lab leak (if true (probably is IMO)) can be put down to stupidity/ bad practice/ sloppiness. What is of far more concern (again IMO) is the smoking gun of cover up by Fauci et al, especially as it is from "the elite ®" who now are demanding to have full control over the levers of information dissemination in the name of stopping disinformation.
At first I thought "of course it matters whether Covid was a lab leak! The media has been confidently insisting that it isn't, and this factors into whether you should trust them".
Then I realised the same argument applies to that: the media has been confidently wrong about many things, so also being confidently wrong about this isn't much of a surprise.
What surprised me significantly about the media narrative is that a group of scientists, who in emails specifically stated that they were quite unsure if it was a lab leak or not, chose to publish a Schelling point paper that the narrative coalesced around -
I continue to be surprised at how easily the media swallows anti-lab-leak papers with major flaws (like the one claiming to prove that Covid *did so too* cross over at the market, based on samples concentrated around the market - real "drunk looking for his keys under the lamp-post" stuff), and at how approvingly it continues to quote Kristian Andersen, despite his demonstrated involvement in the Lancet cover-up.
I think this brings up a good point though. Scott mentions that it only matters to convince stupid people. This is wrong. It *also* matters to convince people who aren't paying attention (which is most people most of the time, many of whom are very much not stupid!).
The big event is too big to not notice, and it might cause you to realize that "huh, not *only* was the media confidently wrong about this really big, important thing, they have consistently been confidently wrong about lots of little things for a long time!" That kind of realization warrants a big update.
This article is very much true for things where someone is paying enough attention to be quasi informed. Big things often bring the attention of lots of people who were, previously, not either of those things, and probably had a default belief that "everything is fine because most things are fine", and must now make a big update to that belief.
-edit- to try and put it more succinctly, the large event causes you to make a large update because it's what causes you to notice the decades of previous information that you were previously unaware of. So you are actually updating on a lot more than just the big event, but the big event is the trigger for you to notice.
>(which is most people most of the time, many of whom are very much not stupid!).
Someone pointed out that Scott recently used "lie" multiple times in an essay in a much looser manner than in his "Very Rarely Lies" post. Similarly, he's likely using a broad (and frankly, obnoxious and offensive) definition of "stupid."
Especially since he's in EA DEFENSE MODE, this isn't Scott at his most charitable and well-crafted.
Edit: And now I find my post even funnier given that just downthread people are calling it terrific and an "instant classic." Which of us has not updated correctly?
That's the wrong update. The right update is "Scientists have been confidently insisting that it isn't". Most people pre-COVID did not have scientists being collectively systematic bullshitters as a very high prior. So that's why it gets a lot of attention: a lot of people are updating hard.
It's just not true that the media has been confidently rejecting lab leak. There is lots of sympathetic coverage of lab leak throughout the media and has been for years. You can just search "lab leak" on google news and find plenty of coverage that treats lab leak as plausible hypothesis, including by the most mainstream sources like AP, NYT, BBC. My update after lab leak controversy is that no amount of positive coverage seems to persuade people that the establishment doesn't actually hate them.
Only when they had no choice (for example when the Energy Dept Z-Division scientists at LLNL shifted to favor lab origin joining the FBI). Otherwise they have pushed any papers that suggested market origin despite the glaring lack of any intermediate host and missing early cases. The NY Times didn't report on the DEFUSE proposal to introduce furin cleavage sites into novel SARS-related bat coronaviruses or the NIH terminating WIV's subaward for continued refusal to share records of their SARS-related bat coronavirus research.
I agree that: 1) media coverage tended to be more pro lab leak after expert assessment supporting lab leak by Energy Dept and FBI 2) media coverage tended to be more pro market origin after prominent papers supporting market origin were published in top scientific journals.
How else could it be? Journalists aren't virologists; they lack the expertise to evaluate origin claims themselves. Of course coverage is going to be driven by expert assessment in the scientific and intelligence communities.
I redid the calculation with 8 decades, 1 previous lab-leak-pandemic, and original priors of 80% probability on 5% per decade and 20% on 15% per decade. After 7 previous decades and 1 lab-leak-pandemic, my updated prior would be 72% on 5% per decade. Another decade without such a pandemic increases that to 74%, while a second such pandemic decreases it to 46% (with 54% on 15% per decade). The result is an expected value of ~7.5% per decade if COVID wasn't a lab leak, and ~10% if it was: a smaller absolute difference than in Scott's calculation, and a similar relative difference.
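The two-hypothesis update described above can be reproduced with a short script. The 5%/15% per-decade rates and the 80/20 prior weights come from the comment; the function names and structure are just one illustrative way to set it up:

```python
def update(priors, rates, leak):
    """One decade of evidence: multiply each hypothesis's weight by the
    likelihood of that decade's outcome, then renormalize."""
    likelihoods = [r if leak else (1 - r) for r in rates]
    posterior = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

rates = [0.05, 0.15]    # leak probability per decade, per hypothesis
weights = [0.80, 0.20]  # prior weights on the two hypotheses

# 7 observed decades: one with a lab-leak pandemic, six without
weights = update(weights, rates, leak=True)
for _ in range(6):
    weights = update(weights, rates, leak=False)
print(weights)  # ~[0.72, 0.28], matching the "72% on 5% per decade" figure

# Branch 1: an 8th decade with no such pandemic
no_leak = update(weights, rates, leak=False)
# Branch 2: an 8th decade with a second such pandemic
second_leak = update(weights, rates, leak=True)

# Expected per-decade leak probability under each branch
ev = lambda w: sum(p * r for p, r in zip(w, rates))
print(no_leak, ev(no_leak))          # ~[0.74, 0.26], expected rate ~7.6%
print(second_leak, ev(second_leak))  # ~[0.46, 0.54], expected rate ~10.4%
```

Running it confirms the comment's arithmetic: the posterior lands near 74% vs 46% on the low-rate hypothesis depending on whether a second pandemic occurs, with expected rates of roughly 7.5% and 10% per decade.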
Might be easier to just use frequentist statistics here, when we don't have a clear prior probability. 1 leak per 8 decades = 12.5% leaks per decade, agreeing with your math of "looking at the evidence moves the probability weight closer to 15% and farther from 5%".
The problem with such calculations is that they are looking only at lab leaks, not on how many labs existed that could have leaked, how well the labs were run, or any of the other relevant metrics.
With the numbers of actual (known or suspected) leaks being so low, a single new case can change the numbers a lot, including if there was a successful coverup in the past that we are not using to calculate.
y'all I'm not going to update my prior that scott is a great writer just because when he has newborn twins he publishes some meandering thing full of reheated Taleb. ;)
Scott, prioritise mate. You don't need to feed the substack maw right now.
Obviously Taleb says outliers are important, but I don't remember him saying we shouldn't update on dramatic events because of them - but I've only read one or two books. Can you direct me to what he has to say about this?
I liked it; it was a well-expressed laying out of something that's been bothering me about public discourse for a while now, which I found pretty cathartic. Also, my maw ever hungers.
I thought this post was inspired, and marked it mentally an "instant classic" while I was reading it. That, about evaluating the content.
The fact that Scott managed to write it at all while having twins impressed me (I am a father, and I've passed through that twice - one child at a time). That it turned out to be so good, is nothing short of astounding.
> This is part of why I think we should be much more worried about a nuclear attack by terrorists.
Agreed, but we should worry even more if nukes were known to be accessible.
We did worry about nukes post 9/11 - it was both implied and stated openly. Remember the WMD and Iraq. Iraq had nothing to do with 9/11, of course, but the general population was, I think, genuinely concerned that if something like 9/11 could happen, then an even bigger attack could happen.
A bigger attack could have been one with weapons of mass destruction of course, and even though Saddam had nothing to do with any of this, and even though the neo-conservatives were using the fear post 9/11 to start a war that they had planned anyway, it is somewhat understandable that people’s priors about terrorists using WMD had changed.
"You can think of this as a common knowledge problem. Everyone knew that there were sexual abusers in Hollywood. Maybe everyone even knew that everyone knew this. But everyone didn’t know that everyone knew that everyone knew […] that everyone knew, until the Weinstein allegations made it common knowledge."
My SWAG is that everyone knew that everyone knew that everyone knew that there was a generalized problem, but not everyone knew specifics.
The Casting Couch is not exactly a secret, but it's a jump to go from "directors have been known to use unorthodox methods to audition starlets" to "Director X told Starlet Y on this date in that place the specifics of how she could best secure her big break."
And there's also the issue of consent - sure it's skeezy to use an offer of stardom to get a young girl to have sex with you, but she might have said yes and knowingly did it. I would want that guy out of hiring decisions immediately, but he had real power in the industry (at least his own company making movies). Ultimately, Harvey wasn't convicted of offering jobs in exchange for sex, but for when he crossed the line into rape.
Right, but if people were aware that he gave out jobs for sex, but thought it was (mostly) consensual, there would be little reason to report that. If he needed a fig leaf, he could have claimed he was dating some pretty unknown actor and also giving her jobs.
That's the question, and it depends on how it's phrased. Nepotism isn't good, but it's a common occurrence that would be hard to stop. In privately owned companies it's a fact of life. If someone with power offers a sexual relationship and *also* offers a job, that's a lot like nepotism. If someone offers a sexual relationship *for* a job, that's at least borderline and worse than nepotism, but really really hard to differentiate. In many cases these women would enthusiastically agree and freely tell everyone how much they love this situation, even if they personally hated it and only did it because of the power differential. They know that getting the job requires pretending that the relationship is consensual and exists outside of the job, even if everyone knows that it's not true. It's a lot like Anna Nicole Smith marrying some ridiculously old guy to get his money. Nobody can tell either of them (legally) that they can't do that, but we can all frown at them a lot for it.
as you are doubtless aware, the law treats "marriage for money" somewhat differently than it treats "sex for money" or "sex for a job"
And, of course, getting back to the original subject, the existence of the Casting Couch is no secret. Just that this time, we have more details beyond rumor.
This is apparently the new definition of "consensual" that starts with "was the sex in question skeezy", and if the answer is "yes" we find an excuse to insist that the person knowingly saying "yes" because they wanted the deal being offered wasn't really "consent".
That's at least defensible when the person in question is a child. For adults, oh hell no. Consent is consent, and "yes I will do X in return for promised Y" is consent. If you want to ban it or object to it, find another excuse, own up to believing that sometimes sex between consenting adults is wrong, because consent is too useful a concept to throw away over this.
I'd known that it was a historical thing, but I had assumed that it had died out some time around the 70s (except presumably in the case of porn). I was surprised, but not shocked, that it had continued to the present day. I wasn't surprised that so many actresses went along with it in the moment and remained silent during their immediate job. What shocked me was that so many actors and actresses knew about it, remained silent, lied about it in public, and even facilitated it by passing on "fresh meat" to Weinstein. At times I wish Rose McGowan could channel her anger into making people's heads literally explode.
"Harvey Weinstein abusing people in Hollywood didn’t cause / shouldn’t have caused much of an epistemic update. All the insiders knew..."
But to non-insiders this was a huge moment - for them, it was an example of #5 in your list of exceptions, teaching them something horrifying and hitherto unknown about the world. A COVID lab leak is the same way; (if true) it takes a bunch of people who *haven't* formed models about how common lab leaks are (or whose models are just "scientists are smart people and wouldn't be that dumb") and instigates a truly massive update.
This isn't a point of disagreement with your post, probably just a question of emphasis instead. There are a lot of people ignorant about any given risk/phenomenon, nobody can know everything. It makes sense that there are lots of dramatic updates going on after dramatic events, and this needn't be (especially) irrational; and the headline of this article makes sense only within domains where you already have considered opinions.
I was aware that the casting couch was a thing, but to learn that Weinstein had a running joke with a well-known actress (not A-list, but In Lots of Good Movies) because she still hadn't slept with him was a real shock.
I don't know how much this is Hollywood specific. Is it that fame is so intensely supply limited?
What I question is my assumption that most harassment has the "really, no" option somewhere there in most cases. It seems it's not just a question of whether you literally need the job to eat but a much broader vulnerability to 'needing to psychologically submit if you feel your belonging to the tribe is under threat'
> I don't know how much this is Hollywood specific. Is it that fame is so intensely supply limited?
It's not all that Hollywood-specific. Right now I guarantee that somewhere out there is a McDonald's employee sleeping with her boss to get better shifts.
Or consider the Vice-President of the United States, who got her political career started by banging Willie Brown, who was twice her age and married.
I think the “Hollywood-specific” part is how ubiquitous it is (or was): it was very common, and was to some degree *the* way that influence and fame was handed out.
I would think this would be most likely in places exactly like Hollywood: getting famous in the movie industry is a winner take all tournament, with vastly more supply of aspirants than demand for A-list actors. And these aspirants are fairly hard to objectively distinguish, so a lot of success comes down to who you know... and what you’re willing to do for them.
Power brokers like Weinstein, who are basically the judges of this tournament, get to set their terms of competition, and if you’re a sleazy dude with thousands of largely indistinguishable desperate actresses coming to your door wanting to be the next hit thing...
Hollywood is not the only “tournament job” - academia is the same way, but we generally assume there is less transactional sex there (although probably lots of transactional other stuff). I think this largely comes down to culture, given that Hollywood fame already involves a degree of commoditized sex (or at least sexiness). When you’re competing to be a public sex symbol, you’re probably already partway down the road of accepting having to do sexy things you wouldn’t do otherwise, making you more vulnerable to Weinstein types acting predatory.
I wouldn't rule out academia running the same way pretty often, maybe as often as Hollywood. I think academia does a better job of hiding it, and the stakes are much lower for individuals involved. I doubt there are many, if any, Harveys in academia, but probably a whole lot of professors having sex with grad students and helping them get positions in colleges.
It isn't impossible, but off hand I cannot think of any examples that I have observed in about fifty years in academia. Currently there are pretty strong norms against sexual relationships between professors and their students.
There are now. But in the 2014-2016 period, academic philosophy had its “me too” moment a bit early. John Searle and Colin McGinn and Peter Ludlow all lost their positions over things that were sort of open secrets for a while, and there are well known examples from earlier generations, like Alfred Tarski.
That’s a fair point. On the other hand, politics has more centers of power - there are LOTS of campaigns to be involved in and pols to work for, with a diversity of styles and ideologies, all the way from the local to the federal level. You’ve got Willie Brown, but you’ve also got Mike Pence, and everything in between.
“Big shot Hollywood producer” is more rarefied air than even the Senate, concentrated in one place, with basically a monoculture.
I think it was particularly likely in Hollywood because being sexually attractive is a major asset in the profession, hence women trying to break into it are likely to be women that men giving out roles would like to sleep with.
Sexually attractive *and* at least somewhat willing to parlay their attractiveness into commercial success.
I’m not saying “they’re asking for it” or anything like that, just that I don’t think you find “be a Hollywood A-list star” enticing unless you’re more okay-than-average with being a public symbol of sexiness. It’s, for better or worse, part of the job description.
I think that "really, no"-assumption is what changed in a lot of people around MeToo's peak popularity.
If a lot of people's understanding of sexual misconduct in Hollywood went from "actresses sleeping their way to the top" to "actresses pressured into sex at risk of losing their livelihood," that explains why they saw confronting sexual harassment in Hollywood as suddenly urgent.
Thinking about it some more, it might just be that everyone knew about it, but almost no one was considering the implications of it. Yes, obviously actresses were sleeping with directors for favors, but if pretty much everyone was doing it... Not sleeping with directors isn't a viable option. It's a pretty basic coordination problem: everyone (except the directors) would be better off if no one slept with directors (or if the director got punished by the law for doing it).
I don't think it's as simple as a coordination problem. Now that it is no longer acceptable, beautiful women are at a relative disadvantage to those with other connections in the industry (e.g., Nepo-babies).
The argument seems to assume my priors are more or less accurate. But that seems unlikely for events that weren't salient enough for me to think about. Re the Covid lab leak specifically, I'm not updating (much) on the likelihood of a lab leak, since I think I had a good enough handle on that before, but I am updating substantially on the probability that the gov't and official health authorities will lie to me. The same problem arises at that level. Was I previously grossly underestimating the probability that the authorities would lie to me? Or have I overcorrected from the Covid lies? It is possible that I underestimated the probability before and I am overestimating it now. On the other hand, the dramatic events made something salient that was not salient before. Maybe that has caused me to think carefully about it now, and my current estimate is more accurate than previously. So I expect that sometimes it is wrong to update substantially in light of dramatic events, but I doubt it's always wrong. And given that, ex post, I've tried to think carefully about it and make my estimates as accurate as possible, I don't think it makes sense for me to discount my current estimate to account for the possibility that I'm overcorrecting.
There have been a number of comments along these lines, and I'm putting this reply here because it has to go somewhere.
I'm confused as to how there can be a significant update here. I consider myself a person who is more than usually trusting of authorities, but I don't think that all official statements are 100% accurate. It would be very surprising to me if anyone had thought pre-Covid that all official statements were 100% accurate. There have been numerous prior examples of government and official health authorities making false statements, so the same point applies as in the post.
The update is on the strength of the government positions that turned out to not only be false, but likely or provably lies.
A minor position held by a single self-interested politician? People have suspected lies for as long as government has existed. A major position trumpeted from multiple federal agencies in official releases to the public? That's a new one for most people.
Yes, exactly. It is of course possible that Robert himself had a much more realistic appraisal of gov't lying prior to Covid than I did. But my point is that people - well, me anyway - are going to have inaccurate priors about low salience events. If the priors are inaccurate, then it is not necessarily wrong to have a substantial update after a dramatic event.
At least in part, I would think, there should be an included factor for updating based on the degree to which the dramatic event is likely to affect you - and, for that matter, how others will react, which is really the big wrench preventing this post from meaning much of anything (as Scott points out with society being imperfect).
If you live out in BFE, even a once-a-century terrorist attack is unlikely to affect you directly (for that matter, for the US, if you're not in NYC, DC, maybe LA and Boston, even other major cities are unlikely targets). How society reacts almost certainly will, though; is there a meaningful difference between you updating on [drastic event] or [reaction to DE]?
A pandemic (regardless of origin) is likely to affect you even in BFE, even if you have (slightly) more warning than NYC. To what degree does the origin matter if we're "due" for a once-a-century plague? What does updating here entail?
Assuming you live in the US, you're used to an unusually stable and relatively low-corruption government; it was easy to underestimate the degree to which they were likely to lie in a delicate scenario. Is it dangerous to overestimate that? Skepticism to the degree that their lies will affect you seems wise, and IMO you have to go pretty far into the tinfoil hat spectrum before the overestimation becomes worse than the risks of underestimation. That said, that skepticism could lead to a certain epistemic anarchy or helplessness, which does have risks.
Ultimately, all the made-up math is a rationalization for following one's preferences anyways (https://blog.ayjay.org/silence-violence-and-the-human-condition/ ctrl+F Parfit, but the whole post is good if you ask me). Applying numbers gives an unwarranted illusion of solidity and reasonableness (does the difference between a 19% and 20% risk of lab leak *mean anything*? What the hell are those percentages? How would you even measure?). Consider, instead, something akin to Michael Pollan's advice on eating: "eat food, not too much, mostly plants." Should you update on drastic events? "Yes, mostly cautious, don't wreck your life."
For most of Scott's readers, there are a couple updates that I would consider nearly required: the government does not have your interests in mind (ideally, they are interested in "the public," which is not the same as interested in *you*), and they are not primarily communicating to people like you. I would also say you should update on Public Health having a major obsession with harm reduction, which is barely even related to things like "the public good" and has basically no concept of tradeoffs, how their messaging on harm reduction affects how people receive everything else they say, etc etc.
"I don’t entirely accept this argument - I think whether or not it was a lab leak matters in order to convince stupid people, who don’t know how to use probabilities and don’t believe anything can go wrong until it’s gone wrong before."
Objection! Beware the other kind of stupid people who don't know how to use probabilities and believe every dramatic thing that happened once will happen again and kill them unless it has exactly a 0% chance of happening. For they will turn your argument from drama against you and demand infinite safety requirements for all nice things and then we can't have nice things like nuclear power.
No, that's the same kind of stupid people. Most people in modern industrialized nations eschew quantitative risk assessment and mitigation in favor of an intuitive binary classification where all things are either "perfectly safe" or "unacceptably dangerous". If pressed they will of course acknowledge that e.g. airliners sometimes crash, but the rate is low enough to round to "perfectly safe" and so nothing needs to be done beyond what we're already doing.
Until an airliner or two is observed to crash, and then airliners are "intolerably dangerous", or at least the model that just crashed is intolerably dangerous. At which point, they'll want to wholly eliminate the intolerably dangerous thing from the world they live in, and they won't be satisfied by "we have mitigated this risk to a rationally appropriate degree". They'll insist on something that flips "intolerably dangerous" all the way back to "perfectly safe", and you correctly note the nuclear power industry as an example of what that can look like.
If you want to win this sort of battle, you probably do want to have the "stupid people" on your side. But that means the end state you are fighting for has to be seen as either "perfectly safe" or "intolerably dangerous and thus very very severely regulated".
So, we need to know whether we'd prefer a regulatory environment where e.g. gain-of-function research is allowed with no significant restrictions, or one in which it is nigh unto banned. And we've got sufficiently few data points on that one that it would be genuinely helpful to us to know which side of the line COVID fell on.
I only disagree in calling these people stupid. Given the limitations on how someone can plausibly follow a wide range of very different topics, intuition and heuristics are necessary. Not just expected, but plain necessary.
When Boeing's 737 Max planes started crashing, the intelligent thing to do would be to categorize them as "airplanes" and maybe slightly update on overall airplane safety. The "stupid" heuristic would be to become fearful and avoid flying - either generally or on whatever identifiable aspects seem most dangerous. Obviously overkill to avoid flying in general, but correct in direction: something unusual was wrong and needed correcting. In this case, an automated flight-control system that could directly crash a plane under certain circumstances.
I agree that intuition and heuristics are necessary, but that's a separate question. It is entirely possible to use intuition and heuristics to define and populate a category, "things that are a bit dangerous but *worth the risk*". Having that category in between "perfectly safe" and "intolerably dangerous" greatly increases the utility of your intuitive heuristic-based risk assessments, and it also gives you an easy way to slide in quantitative risk assessments when appropriate.
Most of the human race for most of history has, I think, been willing and able to use "a bit dangerous but worth the risk" at need, and I think it is still common in working-class American culture. The bit where the middle and upper classes have dispensed with it is perhaps not "stupid", but it definitely seems foolish.
It's like statistics takes mortality and abstracts it, but the dramatic event escapes the cage of abstraction and forces consideration. Risk management is actually a coping strategy, and the dramatic event collapses it, causing overreaction.
I think the real issue might be acceptance of mortality, to avoid both overabstraction and overreaction. We'd kind of maybe accept nuclear power but ban motorcycles, or go with socialized health care.
The airplane argument is ridiculous. If 99.9% of planes are safe 99.9% of the time and then one plane has repeated problems, you absolutely should be up in arms. We know how to make planes "perfectly safe." This isn't some woke philosophical thing. An airplane that crashes absolutely is "intolerably dangerous." If you (as an airline) put it in the air, it will do irreparable harm to your brand. If you as a government agency continue to let it fly, you will do irreparable harm to your brand.
Speaking as an experienced pilot and an aerospace engineer, no, we do not know how to make airplanes "perfectly safe". We know how to make airplanes that crash only rarely, and if you want you can round that to "perfectly safe". But if a single crash is enough to push it over into the "intolerably dangerous" category for you, then we don't know how to make airplanes safe enough for you and you should never fly in an airplane.
An aerospace engineer who can't fathom that quotes around "perfectly safe" means "not literally perfectly safe." And if you are an aerospace engineer, then you should try explaining to your boss that the plane you designed crashes only a bit more frequently than your competitors, so it should be fine. This probably won't adversely affect your career trajectory.
If you're saying that an airplane that crashes even once is "intolerably dangerous", then it certainly seems to me that your definition of "perfectly safe" allows for literally zero crashes.
Otherwise, it's foolish to update significantly on the basis of a single crash.
If something that you believed was infinitesimally probable occurs, you absolutely should update. However, your original statement to which I objected was, "Until an airliner or two is observed to crash." If you thought the odds were near zero and it suddenly happens twice, not only should you update, you should probably reevaluate your entire method for assessing risk.
Once you have done the fraud, admitting to the fraud is good actually! It doesn't come close to undoing the damage of course, but admitting to it is better than not admitting to it.
Every time someone asked him, "did you commit fraud?" he said no. But he did admit to all of the elements of fraud. This was a de-facto public service to the victims and prosecutors.
Good for what purpose? For your reputation, it's either indifferent (it will be clear to everyone you've committed fraud anyway) or bad (if it wouldn't be clear to everyone without your admission). If you've committed a wrongdoing that doesn't involve dishonesty, admitting it may be good for your reputation for honesty, but that doesn't work for fraud, after which you won't have a reputation for honesty either way.
I kind of agreed with that part. But I disagreed with almost every part of the contrary position w/re Altman vs the OpenAI board. To me, the same principles that applied in the SBF case ought to apply to the Altman case. To a substantially lesser *degree*, because we have compelling proof that SBF was basically a nerdy Bernie Madoff and we don't have proof that Sam Altman was basically Gaius Baltar. But the *sign* should be the same, e.g. we should want strong corporate boards to safeguard against that sort of thing, and we should want them to act before it's too late even if that sometimes does mean having to say "oops, we're sorry!"
But apparently I'm an outlier in that respect, at least in the broad rat-sphere. And I'm not sure why.
Agreed as well. The problem with OpenAI was that the individual board members were either too close to Sam (i.e. employees working under Sam or who helped him start the company) or too inexperienced with running a major company. A more common board would have people who are more experienced with these kinds of things and more independent.
The failure of this board was in its creation, far more than how it did or didn't handle this situation.
Yeah, that part just smacks of siding with the victor because he's the victor. Actually, Sam Altman is an enemy of humanity, and the systems that were supposed to keep him in check failed! When I'm feeling pessimistic, I think that failing to keep him fired was the metaphorical event horizon, i.e., it doesn't feel like the apocalypse but now all possible paths lead towards doom. (I'm often feeling pessimistic.)
"Enemy of humanity" is unproven, to say the very least. But we can go with "insufficiently transparent w/re the alignment of his incentives with humanity's", and given his position that should have been enough for the board to fire him.
But as Mr. Doolittle notes, the board seems to have been set up to fail if it ever came to that, and it did. Possibly that was just carelessness when OpenAI was being founded.
I'm afraid, however, most answers would start with: "Well <MY_GROUP> did so and so..." after which, you can be pretty sure that however <THEIR_GROUP> actually managed it, the person putting them as an example thinks they did stellarly, so...
That Altman is well-connected and better at massaging publicity doesn't seem like a failing of the board exactly, even if it's an overall failure of the Altman-control system. Certainly, calling them "unprofessional" is ridiculous; it's a Sam Altman propaganda line, not a fact about corporate governance.
Maybe? The problem is roughly that they assumed professionally discharging their responsibilities would be sufficient. But the point of the entire corporate structure was also that that should be sufficient.
The most crucial aspect of the lab leak hypothesis is the aggressive censorship it faced, with individuals considering it being labeled as conspiracy theorists. It was evident right from the start that something suspicious was happening.
Secondly, all bioweapons labs should be closed. This should not be a difficult task. Take Germany as an example, which successfully shut down all its nuclear plants, albeit for more or less wrong reasons.
One thing that's missing from this is that dramatic events usually have a great deal more evidential supply than less dramatic ones. For nearly everyone, you don't have close to first hand reports of something like historical lab leaks, you have to rely at best on one or two scientific studies of average quality, for which you probably didn't read the methodology section. And for most categories of event, you haven't even read those reports, you're going by your impression using the availability heuristic. For dramatic events, you have evidence from all directions which you can cross check. So it is a suitable time to evaluate whether the evidential underpinnings of your previous impression are fit for purpose. Of course, in a lot of cases a lot of the evidence is still third hand, but there is still so much more of it that you can get a clearer picture of this particular event.
"When we’re inferring general patterns from individual events, we put too much weight on events of personal, moral, or political significance.
We focus too much on whether we were harmed by some risky behaviour, relative to what’s happened to other people. Obviously on average there’s nothing special about yourself.
And we put too much evidentiary weight on what happens in large countries, such as the US, relative to small ones. We can learn a lot from events in small countries, even though their practical importance is smaller.
Likewise, we over-generalise from events of great historical significance, like World War II, whereas we neglect less significant events."
I really wish that announcements of dramatic events were routinely accompanied with:
Here's the previous data. Here's the fit to the previous data, and the corresponding exponent.
The exponent really makes a huge difference. If it is less than 1.0, the total damage done by the whole probability distribution is dominated by the single largest event. And, as a probability distribution, if the exponent is less than 1.0, then it can't be normalized, so it must break down somewhere, and _where_ it breaks down matters a lot.
There are also some other critical values for the exponent where other qualitative changes happen:
> A power-law x^(−k) has a well-defined mean over x ∈ [1, ∞) only if k > 2, and it has a finite variance only if k > 3; most identified power laws in nature have exponents such that the mean is well-defined but the variance is not, implying they are capable of black swan behavior.
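Those cutoffs are easy to poke at numerically. Here's a quick sketch (assuming the quote's parameterization, pdf ∝ x^(−k) on [1, ∞); note that "exponent less than 1.0" in the earlier comment corresponds to the tail exponent k − 1, i.e. pdf exponent k < 2, which is exactly the infinite-mean regime). It draws power-law samples by inverse-transform sampling and measures how much of the total is contributed by the single largest draw:

```python
import random

def powerlaw_sample(k, n, rng):
    # Inverse-transform sampling for pdf f(x) proportional to x^(-k) on [1, inf):
    # the CDF is F(x) = 1 - x^(1-k), so x = u^(-1/(k-1)) for u ~ Uniform(0, 1).
    return [rng.random() ** (-1.0 / (k - 1)) for _ in range(n)]

def max_share(samples):
    # Fraction of the total "damage" contributed by the single largest event.
    return max(samples) / sum(samples)

rng = random.Random(0)
for k in (1.5, 2.5, 5.0):
    shares = [max_share(powerlaw_sample(k, 10_000, rng)) for _ in range(20)]
    print(f"k={k}: largest event is ~{sum(shares) / len(shares):.0%} of the total")
```

With k = 1.5 (infinite mean) a single event typically accounts for a large fraction of the total, while with k = 5 the largest event is a rounding error - which is the qualitative difference the exponent is supposed to signal.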
The benefits from gain of function research are near zero and the risk is monumental. It took ONE SINGLE lab leak to kill millions and cost trillions. Nothing gain of function has produced or likely will ever produce can make up for this singular event, let alone the ones that will almost certainly happen in the future if such research continues to be permitted.
a) This applies more directly to events like mass shootings, where there are plenty of events for statistics.
b) Irrespective of where it came from, Covid is also in the class of epidemics, and, while the statistics on epidemics are sparser than for mass shootings, there are enough of them to make a stab at estimating the statistics and estimating what a rational policy is for _epidemics_, regardless of policy choices towards gain of function research.
I agree that such announcements should come with context. X's community notes seems like it's working for this purpose. Some websites seem to have it. Most places don't, because doing so is at odds with their purpose. The Media wants to sell sensation. Politicians sell a narrative and/or viewpoint.
I'm not on X, but it's apparently something that can be added to posts by certain community members as a type of fact check, adding context or additional information where needed.
My take: no, it's entirely irrelevant, because whether or not it leaked from a lab, it did indisputably, uncontroversially, get deliberately exported from a nation.
We know when China locked down Wuhan to internal travel, and we know when China shut down international air travel out of Wuhan, which was significantly later. That makes the pandemic a bioweapon attack. Whether or not it was developed in a bioweapon lab doesn't matter; a thing is a weapon *if it is used as one,* and China knowingly sending infected people to other countries absolutely counts as using Covid as a weapon.
You're probably right, but that doesn't mean that deliberately encouraging the spread is a blameless act because of it. "It is impossible but that offences will come: but woe unto him, through whom they come!"
I don't know that they were deliberately encouraging the spread. Usually I attribute things to incompetence before malice, and China seems like one of those places where they're fine locking the proles down but getting on the wrong side of wealthy travelers and especially international air travel chains might have been harder/scarier.
> Usually I attribute things to incompetence before malice
Generally, in the absence of better information, yes. But malice of this form seems quite consistent with the character of the CCP.
> China seems like one of those places where they're fine locking the proles down but getting on the wrong side of wealthy travelers ... might have been harder/scarier.
> They've never intentionally released a bioweapon before, so it's out of character.
Perhaps, but they are known to have provided chemical precursors to Mexican drug cartels in sufficient quantity to make enough fentanyl to kill everyone in the USA and then some. That sure sounds a lot like a widespread *chemical* weapon attack to me; is it really that out of character for a regime that would use sneaky means to release one mass-casualty weapon upon geopolitical rivals to use sneaky means to release a different one?
Such actions come straight out of an old Chinese book of war practices known as "The Thirty-Six Stratagems." Often regarded as a companion to Sun Tzu’s The Art of War, the book outlines various stratagems (deceptive tricks or schemes) to win at warfare by fighting dirty. The third stratagem, “kill with a borrowed knife,” involves employing a third party to strike at an enemy when you can’t easily do so yourself. (Drug cartels, for example.) And stratagem #25, “replace the beams with rotten timbers,” refers to disrupting, sabotaging, and interfering with an enemy’s normal ways of doing things so their organization collapses. (Can you even imagine a more perfect example than introducing a germ to an enemy’s populace, introducing the idea of massively societally-disruptive lockdowns to combat it, and then using the pandemic and the lockdowns as an excuse to tighten exports and throw their supply chains into chaos?)
This sort of stuff isn’t what we tend to think of as “war” in the West, but it very much is “war as China understands it.”
If any one person leaves the country it spreads. As I recall the Chinese were condemned, not praised, for lockdowns. As I also recall a fair number of Americans thought back then that it was all fake. Including Trump. And of course countries can close down airports without the Chinese having to.
Is this... actually true? I don't think it is for covid. We had literally almost nothing to do for extreme symptom sufferers except put them on a ventilator, which didn't actually help them in any way, and watch them die. That didn't stop us from scrambling to put together as many ventilators as possible of course, but...
Well, flattening the curve is very important if your health care industry is actually capable of doing something. But I just don't think it was in this case, so the fact that we overloaded it literally didn't affect anything.
If the spread slows, you reduce the number of people who get sick before the mRNA vaccines are rolled out, so yes, it does matter how fast/soon it spreads.
With corona, I think each strain got more infectious and less deadly. We only have a sample size of one bioterrorism virus, but it's possible that there's a fairly hard law that those evolutionary trade-offs will always happen in the wild.
The longer you wait, the more likely the version you get is a less deadly variant. Maybe, if this happens again, I will consider hiding in a bunker for a year every time.
Didn’t the ventilators have a survival rate that greatly exceeded the survival rate of people who *would* have been put on a ventilator but couldn’t get one? (Something like 20% vs 10%)
My understanding is that this is not true; the problem never had anything to do with the lungs' ability to contract and expand. Rather, the problem was in the transfer layer where the blood becomes oxygenated. Throwing a ventilator at someone whose blood cannot become oxygenated doesn't help them. I think that's what happened anyway; I haven't researched this since 2021, but I *did* research it,
and that, as this slowly became more known, the push for random geeks to manufacture ventilators in their garages correctly slowed.
Depending how you're modeling your distributions, it is often completely correct to drastically overhaul your model in the face of dramatic events. Sure, changing an estimate from 1,000 to 1,001 generally does not require that you update your mean or standard deviation in any way whatsoever. But an update from 0 to 1 can and in many cases should result in a massive update of your model, because if a seven-sigma event happens, your model needs to be thrown into the trashcan.
One instance of something coming to light should often not be updated as one instance of something occurring. If I believe that the amount of daily shoplifting in a big box store is $200, because there's an inventory check at the end of the day that yields of average $200 discrepancy, and I notice one day that one shoplifter has managed to evade this system and steal $500 of product in one go undetected by the system, I need a massive update to my estimates: there could be anywhere from $200 to tens of thousands of dollars of product being stolen every day.
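To make the shoplifting arithmetic concrete (all figures here are the comment's hypotheticals, and the detection rates are made up for illustration): if the end-of-day check only catches a fraction p of the theft, the implied true loss is detected/p, and a single observed miss removes any confidence that p ≈ 1.

```python
# Hypothetical figures from the comment above: the inventory check
# surfaces a $200/day discrepancy, and we just learned it can miss thefts.
detected_per_day = 200

# If the check catches only a fraction p of all theft, the true daily
# loss is detected / p. One observed miss proves p < 1 but puts almost
# no floor under p, so the implied estimate ranges enormously.
for p in (1.0, 0.5, 0.1, 0.01):
    implied_loss = detected_per_day / p
    print(f"detection rate {p:4.2f}: implied true loss ${implied_loss:,.0f}/day")
```

The implied loss runs from $200/day (perfect detection) up to $20,000/day at a 1% detection rate, which is the "anywhere from $200 to tens of thousands" range in the comment: the single observation moved the uncertainty about the *mechanism*, not just the tally.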
Any instance of sexual harassment coming to light can be an update of any number of sigmas. Consider the model airplane community, consisting of 100 chapters of 100 members each. If I learn that one, or two, or three people in the model airplane community were persistently making unwanted advances on co-community members, that's not something that needs updating on, because we know this kind of thing happens everywhere all the time. But if I learn that one, or two, or three chapter leaders engaged in (presumably) rarer forms of sexual harassment like drugging them, assaulting them, then blackmailing them into recruiting more victims, and every time someone tried to blow the whistle it was somehow covered up, that's the kind of update that does in fact require taking a second look at the model airplane community because it implicates a lot more than "three instances of sexual harassment occurred."
"But if I learn that one, or two, or three chapter leaders engaged in (presumably) rarer forms of sexual harassment like drugging them, assaulting them, then blackmailing them into recruiting more victims, and every time someone tried to blow the whistle it was somehow covered up, that's the kind of update that does in fact require taking a second look at the model airplane community because it implicates a lot more than "three instances of sexual harassment occurred.""
I would indeed update, since I REALLY doubt anything like that has ever happened. In Hollywood, sure...
I assume this is one of the things he would address if he detailed the mathematics further. I also immediately wonder how the severity of each event should affect the update, and how little data we have on once-a-century events to update our priors meaningfully.
I think a big thing about "learning from dramatic events" is that everybody else is learning too, which can either dramatically increase, or dramatically decrease, the odds of a similar event happening again, depending on the specifics.
RB applied this to 9/11 specifically: it increases the odds of similar events happening again quite a lot, but all the other passengers also learned that, which *caused* similar events to be much less likely.
With mass shootings, it goes both ways at the same time: media reports of mass shootings make J. Random Nut more likely to pick that particular method, but also make people more likely to take countermeasures. (Wasn't there a shooting that didn't clear the threshold of four victims because the killer was shot so quickly?)
I think that this article could benefit from more discussion of correlated events. Yes, if one extreme event occurs every t years, independently of other events, then we shouldn't worry too much. But extreme, newsworthy events can actually change the covariance of subsequent data and potentially increase their likelihood. Perhaps the possibility of correlated events, marginalized to only include your most disliked subgroups, is still very low and your point still stands.
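A crude simulation sketch (my own toy numbers, not anything from the article) of the correlation point: if each occurrence raises the probability of the next one, in the spirit of a self-exciting process, the tail of "how many events in 50 years" fattens considerably compared with independent draws.

```python
import random

def tail_prob(years=50, base_p=0.02, boost=0.0, trials=20000, seed=0):
    """Fraction of simulated histories with 3+ events.  `boost` is how
    much each event raises the per-year probability of the next one
    (boost=0.0 recovers the independent case)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        p, n = base_p, 0
        for _ in range(years):
            if rng.random() < p:
                n += 1
                p = min(1.0, p + boost)  # contagion: events beget events
        hits += (n >= 3)
    return hits / trials

independent = tail_prob(boost=0.0)   # roughly the Poisson(1) tail, ~0.08
contagious  = tail_prob(boost=0.05)  # noticeably larger
```

With the same expected "background" rate, the contagious version assigns much more probability to clusters of extreme events, which is the covariance change the comment is pointing at.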
Scott, what was your prior estimate on, "expected number of billion-dollar frauds motivated by effective altruism per decade"? If you think a better measure would be, "expected fraction of EA-funds acquired fraudulently," you can express it that way.
I feel like there is a bit of sleight of hand here, where you use math to show how updating a lot on dramatic events is usually bad, but then you don't actually use this math to show why you shouldn't update that much on the specific event you are feeling defensive about.
MIRI had some pretty big financial scandals (before it got renamed from the Singularity Institute). Not at the same scale in absolute numbers obviously, but enough that your model of the Rationalsphere should already have a term for the possibility that some of its highest-profile people and organizations will get involved in major financial scandals.
I remember that there was an employee (who wasn't a rationalist) who stole money from the organization (and then was fired?). I don't remember other "pretty big financial scandals" (and can't even google them because I get "Singularity University" or "financial singularity" instead).
I don't think SBF should really be considered as "motivated by ea". And "number of massive crypto scams per decade" is obviously way high anyway for any decade that has crypto.
>I don't think SBF should really be considered as "motivated by ea"
He was raised by a relatively prominent utilitarian, Will MacAskill himself nudged SBF into earning to give, and most of his staff came from EA.
It's easy to overestimate the degree he was motivated by EA, sure, but there's a substantial degree of defensiveness that wants to ignore EA's contribution as well.
>"number of massive crypto scams per decade" is obviously way high anyway for any decade that has crypto.
While accurate, EA being the "effective", smarter-than-everyone, we-know-where-our-money-goes people probably should've been more skeptical of crypto than they were.
Part of the defensiveness is that the coverage is basically claiming that EAs are *uniquely* vulnerable to billionaire crypto fraud, that you would expect it to be *more* common than fraud in non-EA crypto billionaires.
So maybe the argument comes down to “what’s the denominator?” Personally I would expect the base rate of fraud, or at least some sketchy behavior, to be pretty high among crypto billionaires, whether they are EAs or not.
Now, if you were someone who thought EA billionaires were uniquely *unlikely* to do fraudulent stuff, then SBF probably should make you reassess that prior.
If EA crypto billionaires do fraud at the same rate as non-EA crypto billionaires, but crypto billionaires are more likely to do fraud than non-crypto billionaires, and EA billionaires are more likely to be crypto billionaires than non-EA billionaires are, then you risk controlling for what you are trying to measure. You have to ask *why* so many EA billionaires are crypto billionaires. Which way does the causality flow? Vitalik Buterin got into crypto first, then EA, but SBF seems to have gotten into EA first, then founded Alameda Research as a result of his intention to earn-to-give.
You’ve also got the problem of low sample size. It’s not like we have a million EA crypto billionaires to run statistical experiments on.
Is fraud on the scale of SBF 1 in 10, 100, 1000? With just a couple examples it’s hard to say if it’s common, or we’ve just gotten unlucky.
(On the gripping hand, I don’t get the impression that SBF would have NOT tried to get really rich in crypto if he’d never heard of EA and “earn to give”)
The WHO report on Covid origins said that just before the first reported case, WIV put all their samples in a van and moved them down the road to another building (as part of a planned move). As they say, a leak during such an event when usual containment is disrupted is more likely. The timing is a remarkable coincidence if you don't think it's causal.
I don't understand why banning "gain of function" research would be "going mad" or overreacting. Even on paper and before the first leak it sounds like an idea so deeply terrible and silly that my working assumption is that it's a way for people to do and fund biological weapons research in plain sight without being ostracized for doing so. What is the expected advantage and what's the reasonable rationale for this expectation?
How do you actually know this? I only know one person who works in this field, and he went from an ordinary, bog-standard individual dating girls and playing progressive politics to moving out into the middle of nowhere and building an airtight, hermetically sealed doomsday bunker because of covid and the treatment of the lab leak hypothesis.
I work in the field myself, and followed the debate through various channels (Twitter, reddit, podcasts like the TWIV network). They all match quite well with the chatter I heard from conferences and from colleagues.
Many of the arguments for a lab leak advanced in the popular press also fall away if you have a minimum of molecular biology background. So there were a lot of conversations making fun of the journalists advocating the lab leak hypothesis for misunderstanding basic concepts.
I write in the past tense because nobody seems to care anymore.
One of my friends is a big Noah Smith fan, apparently he's a kind of big name? My friend compared him to Yglesias, a bog-standard neoliberal with perfectly ordinary neoliberal takes, but idk if that's actually accurate
either way, apparently Noah offhandedly made some kind of comment about how something, maybe to do with israel, was sorta like how the establishment deliberately suppressed the lab leak hypothesis for so long and how this really hurt institutional credibility in a way that maybe isn't the skeptic's fault
this was apparently recent, like last month?
and, according to my friend, immediately a bunch of institutional medical types came out of the woodwork and started calling him a crazy right-wing conspiracy theorist, and how all of his crazy fans were just being racist against the chinese, and how the lab leak hypothesis was even crazier than J6 (whatever exactly that means, i'm not sure what the metric looks like)
this surprised me because i thought we'd settled this, that the accepted narrative at this point was "yeah the lab leak hypothesis got suppressed by people who believed it was plausible but wanted to hide that fact, the main tool of the suppression was social status games and ridicule, but later on some establishment people admitted that actually it wasn't totally impossible like the establishment said, even mainstream liberals took note and said stuff like "wow that's really bad, don't do that again", the trust we all had for our institutions continued to crumble but we were all at least impressed that they were willing to come clean."
As far as the actual object issue of whether covid was a lab leak, i thought the consensus had settled on "there's not a whole lot of strong evidence in either direction, and the evidence that does exist is very weak. Mostly if you're participating in the debate, the arguments are going to be about the game theory of how we should handle china, not about virology."
I looked up this Noah Smith guy to see if my friend was exaggerating, and it sure looked like my friend was pretty much right. Noah offhandedly mentioned lab leak during a conversation about misinformation in general, some establishment medical folk *leap* on him and equate him with Alex Jones.
This makes me wonder. Is my impression of what happened, the above summary regarding the progression of the lab leak hypothesis over the past 3 years, just conservative propaganda? Have I bought into the bullshit?
Or maybe the medical establishment looked at the current israel/hamas situation, realized "wait, literally not a single damn person cares about the truth and we can lie all we want, maybe let's go back and try to memoryhole the lab leak thing and see if we can push it back into alex jones territory so we don't look so tyrannical anymore"
I just don't know what to believe
(this is all a pretty minor datum, too, compared to the PhD Lab Bio guy I know who left his 7 figure job at big pharma to build a hermetically-sealed doomsday bunker in the middle of nowhere, while screaming all the while that security precautions at virology labs were nowhere near good enough to reach the insanely high difficulty bar of not killing millions of people on accident, and who now won't shut up about the illegal chinese virology lab discovered in Reedley where the CDC simply refused to investigate. I pretty much either have to believe that he had a mental breakdown and is now totally insane, or that he's got a point)
Yeeeaah, but it still seems pretty obvious that the downsides outweigh literally the entire benefits of the field of virology collectively, and are probably not far off from the entirety of medical research combined. There's a sense of scale missing.
No opinion on the first half of that, but as for the second half... well, I haven't died of smallpox, polio, typhus, tuberculosis, dysentery, or the plague, which is pretty nice actually.
If I understand correctly, GoF research has only really been around since 2011, and there was a moratorium on it from 2014 to 2017. I'd say that the accomplishments of all medical research during that time period are not *that* much greater than the losses from a risk of a Covid-level pandemic (~25 million dead, lengthy global shutdown) every, let's take a guess, 10, 15 years?
“We think it’s really important to understand how to detect and contain a nuclear disaster, and how exactly it would proceed, so we’re going to deliberately create the conditions for a full scale nuclear disaster and see what happens. In a totally controlled and contained lab environment of course!”
> A few months ago, there was a mass shooting by a far-left transgender person who apparently had a grudge against a Christian school. The Right made a big deal about how this proves the Left is violent. I don’t begrudge them this; the Left does the same every time a right-winger does something like this. But I didn’t update at all. It was always obvious that far-left transgender violence was possible (just as far-right anti-transgender violence is possible). My distribution included a term for something like this probably happening once every few years. When it happened, I just thought “Yeah, that more or less matches my distribution” and ignored it.
I know it's not "the point" of the post, but since you're talking about distributions, the one you imply having here (pretty much no difference between right- and left-wing terror) is wrong, for a few reasons.
1. There are 50-100x as many right-wing people as transgender people (~50% of the country is Republican, i.e. right-wing, and <1% are transgender). Naively, we should expect 50-100x as many mass shooting events where the perp is right-wing vs. trans. Even if you limit "right-wing attacks" to specifically "motivated by right-wing ideology", you're still looking at a large delta between the incidence of far-right extremists and trans people.
3. In point of fact, there is almost no left-wing terror/mass shootings in the US, whereas right-wing terror attacks are commonplace and rising. Heck, there was a ~~bloody~~ god-damned coup attempt just a few years ago where thousands of right-wing extremists stormed the capitol to overthrow the democratic rule of law... We are in a moment where the violent right is ascendant, and the right in general is getting more violent and less patient with democratic problem-solving.
Edit to strike out "bloody", as it was confusing. I meant "bloody coup" as in "a god-damned coup", just using bloody as an intensifier, but can see how that was confusing given that "bloody coup" means "a coup where many died".
Then why is it the left that's so against Islamophobia, and the right that is persistently antagonistic?
The definitions of "right wing" that the establishment likes to use are completely worthless. These are people who label a party literally called the National Socialist German Workers' Party as right wing. This party's members literally called each other comrades, but according to our glorious anti-extremism establishment they were the polar opposite of socialist.
I've read up on it extensively. That word is absolutely there because they were socialists. It is absurd to argue otherwise given the historical evidence, in fact. The party leadership was always very clear on that point, stressing it repeatedly. That's why retellings of the 1930s never seem to trust the audience with the actual translated texts of Hitler's speeches, showing instead only a few seconds of a shouty man shouting in a foreign language. They don't dwell on what was said back then because senior Nazis can't go more than two minutes without praising socialism, calling people comrades, talking about how socialist they are, how they're going to eliminate class differences etc.
And it was no mere rhetoric! The policy gap between Germany and the USSR was very small. They did many of the same kinds of acts and for the same reasons.
This is really tired right-wing propaganda, and it's honestly kind of sad to see people trotting it out here. Either you're a troll/right-wing propagandist and know full well that you're peddling bullshit, or you're sincere but have completely and utterly misunderstood the topic.
The polar opposite of “socialist” would be something like “individualist”. Both are compatible with right wing or left wing variations. Historically, socialism has most often been motivated by left wing concerns about the poor and minorities, but in several notable cases it has been motivated by right wing concerns about national or racial greatness.
The contemporary left is far more supportive of radical Islam than the contemporary right. Just look at who were the first to call for a ceasefire in Gaza, or who currently opposes military action against the Houthis.
Radical Islam is, without question, a far-right movement. That stands on its own and is unrelated to the point you make here.
The contemporary "left" has inherited from the traditional left a slant towards pacifism, respect for human rights, etc., and so of course they are calling for cease-fires and for ends to bombing and so on. They did the same during the neoconservative adventures in Afghanistan and Iraq.
The "left" isn't calling for ceasefires because they support radical islam (the modern "left" famously supports women's rights, lgbtq+ rights, democracy, etc. etc. that radical islam is literally violently opposed to), it's because they're against violence and military adventurism in general.
And _even if they did_ support radical islam (which they don't, but just for the sake of argument), that wouldn't make a movement which is by definition far-right any less far-right. It would just make the "leftists" who support them confused, or compromising on some axis for some reason.
Have you already forgotten "what did you think decolonization meant"? The contemporary left has minimal if any slant towards pacifism.
The degree to which Radical Islam is considered far-right just demonstrates the simplistic fecklessness of a basic left-right spectrum; such labeling is grossly propagandistic in an American context.
> Heck, there was a bloody coup attempt just a few years ago where thousands of right-wing extremists stormed the capitol to overthrow the democratic rule of law.
...Wait, what's even your objection to that? They didn't even blame it on Trump. If a large group of people attempt to physically stop a transition of power in a democracy to prevent the current president from being unseated, that's effectively an attempted coup, yes?
No it isn’t. You would need an army or a large group of armed paramilitaries for a “bloody” coup. Protests that have broken into parliament buildings are common enough.
Anyway as a European centrist I have broken my own rule about not talking about American politics anywhere because you are, both sides, utterly bat shit crazy.
In British English, “bloody” is an expletive attributive that is commonly used as an intensive. It is often used to express anger, frustration, or emphasis in a slightly rude way.
The important thing is that you have found a way to say nothing of substance but also proclaim yourself superior to both sides. Bloody good job done, that.
Ohhh, wow, I completely overlooked that and just assumed that memories were getting continually more exaggerated as time went on. Good to know there aren't people suddenly "remembering" corpses lining the streets or anything.
The point isn't just breaking into the parliament, the point is breaking into the parliament with the exact goal of changing an election result to install a non-winning candidate to power. I consider it noncontroversial to claim this was the goal of Jan 6 protesters, at least the more organized section, even if it turned out to be a farce in execution.
But they had free run of the place, literally nobody was stopping them from doing what they wanted to do, and they didn't do anything! They milled about awkwardly, took some selfies, and left!
That's pretty much exactly what would happen, I think, if the cops let in the crazy progressives who like to storm government buildings during protests, and let them 'accomplish their goal'
This is false. They did not get into any of the areas where congressmen or senators were. They did not leave of their own volition, they only left when cops cleared them out. It took physical force to do so.
They also broke into some offices and stole stuff. Their main goal was stopping the certification of the election, which they accomplished for a few hours.
It's hard not to comment on such hilarity. Don't be too hard on yourself.
[Lest I appear to virtue-signal that I'm 'not like other Americans', I should add that wherever you are, your politics probably aren't much better -- just less funny.]
It was a poor choice of words, as @Forrest points out, "bloody" is an intensifier and that's how I meant it, not considering that "bloody coup" is also a term meaning a "coup where many died". I just meant something like "a god-damned coup".
When was the last “disruptive protest” that “attempted to physically stop a transition of power in a democracy”? In the United States, I am unaware of any such protests since Jan 6, 2021, but maybe there are some examples I should be thinking more about.
> VP Pence, presiding over the joint session (or Senate Pro Tempore Grassley, if Pence recuses himself), begins to open and count the ballots, starting with Alabama (without conceding that the procedure, specified by the Electoral Count Act, of going through the States alphabetically is required).
> When he gets to Arizona, he announces that he has multiple slates of electors, and so is going to defer decision on that until finishing the other States. This would be the first break with the procedure set out in the Act.
> At the end, he announces that because of the ongoing disputes in the 7 States, there are no electors that can be deemed validly appointed in those States. That means the total number of “electors appointed” – the language of the 12th Amendment – is 454. This reading of the 12th Amendment has also been advanced by Harvard Law Professor Laurence Tribe. A “majority of the electors appointed” would therefore be 228. There are at this point 232 votes for Trump, 222 votes for Biden. Pence then gavels President Trump as re-elected.
> Howls, of course, from the Democrats, who now claim, contrary to Tribe’s prior position, that 270 is required. So Pence says, fine. Pursuant to the 12th Amendment, no candidate has achieved the necessary majority. That sends the matter to the House, where “the votes shall be taken by states, the representation from each state having one vote . . .” Republicans currently control 26 of the state delegations, the bare majority needed to win that vote. President Trump is re-elected there as well.
> One last piece. Assuming the Electoral Count Act process is followed and, upon getting the objections to the Arizona slates, the two houses break into their separate chambers, we should not allow the Electoral Count Act constraint on debate to control. That would mean that a prior legislature was determining the rules of the present one – a constitutional no-no (as Tribe has forcefully argued). So someone – Ted Cruz, Rand Paul, etc. – should demand normal rules (which includes the filibuster). That creates a stalemate that would give the state legislatures more time to weigh in to formally support the alternate slate of electors, if they had not already done so.
> The main thing here is that Pence should do this without asking for permission – either from a vote of the joint session or from the Court. Let the other side challenge his actions in court, where Tribe (who in 2001 conceded the President of the Senate might be in charge of counting the votes) and others who would press a lawsuit would have their past position – that these are non-justiciable political questions – thrown back at them, to get the lawsuit dismissed. The fact is that the Constitution assigns this power to the Vice President as the ultimate arbiter. We should take all of our actions with that in mind.
This was what Trump was trying to do on Jan 6, 2021. This is a direct quote from his own lawyer. When he sent the protestors towards the Capitol building, it was to pressure Pence; when he remained notably silent as they breached the Capitol building for hours despite people close to him begging him to act, that was also to pressure Mike Pence. I feel like that is, in fact, thousands of right wingers storming the capitol to overthrow the democratic rule of law, unless you think that this plan is in accordance with democratic rule of law (particularly when combined with fake electors and storming the capitol).
I don't see how encouraging or not discouraging the rioters really put pressure on Pence to try some crazy legal manoeuvre and trash his reputation.
Trump's plan to try use procedural shenanigans to be declared winner doesn't require storming the Capitol. The rioters weren't really trying to seize the Capitol so that Trump's plan could succeed. Sure the rioters wanted Trump to be president, but they surely didn't know all the details of this legal strategy; they seemed to be Trump fans and conspiracy nuts who got overexcited. They were mainly unarmed, so even if they had fully taken over the Capitol temporarily, armed police or the army would have cleared them out soon enough.
"storming the capitol to overthrow the democratic rule of law" would be a fair description if they had been a paramilitary force who could actually take lawmakers hostage and hold off the army.
Surely the really bad thing was Trump refusing to accept the election result and considering crazy legal strategies to try to be declared winner; the riot seems kind of a sideshow to that.
OK, so they knew at least enough that they were mad at Pence. Did they have any credible capability and intent to actually hang him? Or was this perhaps hyperbole? A lot of people say 'politican X should be jailed/hanged' but that doesn't rise to the level of a dangerous insurgency/revolution unless they can make it happen.
Also, if the plan was to pressure Pence to do procedural shenanigans, what is the use of sending in a bunch of rioters to disrupt proceedings? In the worst case, where they overwhelmed the Capitol security forces and actually hanged Mike Pence, that kind of ruins the plan that depends on Mike Pence's cooperation! I think everyone involved is too stupid and disorganised to call this an actual coup or insurgency attempt.
But I think the case for coup isn’t strong. He asked people to protest in front of the Capitol, which is legal. When they broke in, he told them to stand back (after 3 hours, which you can interpret in an infinite number of ways).
<i>Heck, there was a bloody coup attempt just a few years ago where thousands of right-wing extremists stormed the capitol to overthrow the democratic rule of law...</i>
>According to the US government, right-wing terrorism is a much more significant threat to the US than left-wing terror.
The wingedness of terrorism is a notoriously contentious question, and rather like the abuse/confusion of what counts as a "mass shooting," so goes what counts as terrorism of one wing versus the other.
Likewise, the amount of Islamic terrorism looks quite different if you start counting from 9/12/2001, 9/10/2001, or 02/26/1993 (https://www.state.gov/1993-world-trade-center-bombing/). Choose any other date as convenient.
If you include the Weathermen and the other Days of Rage, the Long Hot Summer, etc, the degree of right vs left terrorism being more significant will change drastically.
> Before 9-11, we might have investigated the frequency of terrorist attacks. We would have noticed small attacks once every few years, large attacks every decade or so, etc. Then we would have fit it to a power law (it’s always a power law) and predicted a distribution
I disagree with this example; I think that the model we had of terrorism prior to 9/11 was significantly different. Typical pre-9/11 terrorism involved making plausible political demands and using limited quantities of violence to terrorise the civilian population into accepting them. This was the terrorism of the PLO, or the IRA, or the ANC, and there was a rational (if amoral) political calculus behind it.
9/11 was sampled from a different distribution. We called it "terrorism" but it didn't really match that modus operandi, there was no rational political calculus going on (at least not one that we could understand), there were no specific political demands, nobody to negotiate with on those demands, and no sign of restraint. This new form of terrorism seemed to be simply aimed at killing as many people as possible.
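As an aside, the power-law fit the quoted passage describes is easy to sketch. This uses the standard continuous MLE for the exponent, alpha = 1 + n / Σ ln(x_i / x_min), with entirely hypothetical attack sizes (not real data):

```python
import math

def fit_power_law_exponent(sizes, x_min):
    """MLE for the exponent of p(x) ~ x^(-alpha) over x >= x_min."""
    data = [x for x in sizes if x >= x_min]
    return 1 + len(data) / sum(math.log(x / x_min) for x in data)

# Hypothetical casualty counts for a handful of past attacks:
sizes = [1, 2, 2, 3, 5, 8, 15, 40, 160]
alpha = fit_power_law_exponent(sizes, x_min=1)  # ~1.5: a very heavy tail
```

A small alpha like this puts nontrivial probability on events orders of magnitude larger than anything yet observed, which is why extrapolating the pre-9/11 record this way could, in principle, "predict" a 9/11-sized tail event even if the commenters above are right that the underlying process had actually changed.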
Yes, this argument Scott makes here has another problem. Sampling from an expected probability distribution only makes sense if the distribution is actually semi-static, which in an adversarial scenario it isn't. In that case it's possible that rare events are rare exactly because each time one happens people freak out and do lots of stuff to ensure it can't happen again, forcing the adversaries to find new tactics.
I'm pretty sure that if the response to 9/11 had been a shrug and people saying "eh we knew it could happen, no big deal" then it would have happened again very quickly. OBL would have realised he'd found a soft underbelly on a peculiarly and irrationally unresponsive enemy, and would have just kept striking in the same way over and over until people realised that this type of rationalist Bayesian-priors argument was just plain wrong.
Fact is, in many scenarios past probability is no guide to future probability. Something can go from never-before-seen to very common, extremely fast.
In the case of COVID this lesson has NOT been learned, and so we would expect viral lab leaks to become a lot more common in future. Safety measures are hard work and virologists frequently ignore them as a consequence. They've now been taught in the strongest way possible that they can do whatever they like, deny it all, and nothing will happen to them. The establishment is so addicted to false narratives about progress that they can just do whatever the hell they like and be completely protected.
The result is that, unsurprisingly, the Chinese are now making new coronaviruses that are 100% lethal in mice due to some sort of terminal brain infection:
> I'm pretty sure that if the response to 9/11 had been a shrug and people saying "eh we knew it could happen, no big deal" then it would have happened again very quickly. OBL would have realised he'd found a soft underbelly on a peculiarly and irrationally unresponsive enemy, and would have just kept striking in the same way over and over
I'm not so sure. I think - and I think it's generally accepted - that the point was to provoke an overreaction. If it had been unsuccessful, maybe they would have repeated similar attacks until they got the desired result (entirely plausible), or maybe they would have looked for some other way of achieving their goal.
On a slight tangent, I find it fascinating how in both this thread and the other Godwin thread where we're discussing, you don't really assign any weight to what people say their motivations are. I'd be curious to drill into that further if you don't mind. Are you carefully considering both these cases, or is it more like repeating things you heard? When you say "generally accepted" who is this, exactly? Academics? TV talking heads? Books?
OBL was always happy to discuss his claimed motivations and as presented they were fairly obvious: he didn't like the US being in the middle east, supporting Israel etc. To me, Occam's Razor says to take him seriously when he explains his motivations unless there's some decent evidence that he's trying to mislead about them (if there is such evidence I'm unaware of it). Likewise, to me, a party that calls itself National Socialist should be treated as socialist unless there's some compelling evidence that this was a deliberate lie.
You on the other hand see 4D chess behind both adversaries. OBL wasn't really mad about US troops in the middle east, in reality that was an Al Qaeda conspiracy. He instead wanted to provoke an overreaction, in order to ??? Make more people follow him, presumably? To what end? Likewise the Nazis claimed to be socialists publicly, but this was 4D chess. In reality they were .... what? Extremist libertarian capitalists, pretending to be socialists in order to .... ??? I'm not sure. Get voter support?
I feel like any explanation of bad-guy behaviour that starts by discarding their stated goals is on thin ice and needs to be treated carefully, especially when those goals are common and have recurred throughout time.
They weren’t libertarians, they were fascists. Read up on the history of the Nazi party and you’ll see that there used to be a socialist wing of the party, but Hitler killed them all in the Night of the Long Knives because they opposed him. This was a necessary step for him to come to power. After he came to power he spent some time killing other socialist factions too. That’s why the ‘first they came for the socialists’ poem starts with the socialists.
What's the opposite of socialists? It's got to be libertarian capitalists, right?
The claim that Hitler wasn't a socialist because he killed other socialists is extremely silly. All socialists kill other socialists. Just ask the ghost of Trotsky. It comes with the territory. If you don't believe in the marketplace of ideas, or any marketplace, then all that's left is the raw exercise of power.
What does it matter what the opposite of socialist is? Fascists can kill socialists. Among the reasons to believe that Hitler was a fascist is that he killed the socialists, but that’s obviously not the only reason. This false dilemma between either believing in the marketplace of ideas or being violent obviously doesn’t hold. Obviously not all socialists kill other socialists: just ask the next socialist you meet whether they’ve killed someone. And obviously not all socialists disbelieve in a marketplace: just ask the next market socialist you meet.
If 9/11 was an attempt to provoke an overreaction, then e.g. the attack on the USS Cole was an attempt to provoke an overreaction, by the same people and not too far apart in time. So we *know* what happens when a terrorist "attempt to provoke an overreaction" fails - they look for some other way of achieving their goal, and the obvious approach is to kill a whole lot more people, closer to home.
It wasn't completely new (see for example the Oklahoma City bombing), but what was new was how big / well-funded / coordinated the group behind it was. Lone wolf (maybe + 1 close friend or something) kinds of attacks that you described were a possibility, but if you had asked me (or I assume many other folks) the probability that you could get 19 (19!) suicide attackers coordinated together in secret without it leaking, get them into the US and living there for a while, get some trained as pilots, and get them through airport security with weapons and onto 4 planes the same morning, I'd have said the odds were really, really low. It's something out of a not particularly believable action movie, where the supply of mooks who are perfectly loyal and fanatically (even suicidally) devoted to the cause, yet somehow also quite capable and able to pass as normies, is inexhaustible.
As such I should have expected to see dozens of foiled plots of similar scale, approach, and sophistication for this one success. So many places where something should have gone wrong. Instead this seems like more or less the first time someone tried something like this. That's a radical update.
At minimum it's an update that these sorts of things are much less likely to be detected and foiled than I had assumed, and if potential terrorists made a similar update then perhaps we would see a lot more of these kinds of attacks in the future.
> there was no rational political calculus going on (at least not one that we could understand)
I believe the rational political calculus was that a large and unprecedented terrorist attack would provoke an extreme overreaction from the USA (and potentially its allies), which would weaken America's standing in the world, provoke animosity towards the US/the West amongst the global Muslim community, and cost America enormous amounts of blood and treasure in futile adventurism.
And it was _extremely_ successful, achieving all of those goals.
"It was that, after it happened, we were so surprised that we flung ourselves to the opposite extreme and saw terrorists behind every tree and around every corner. Then we made the opposite kind of failure (believed Saddam was hatching terrorist plots, and invading Iraq)."
the logic here seems to be that launching the war on terrorism was an overreaction because during and after there wasn't much terrorism?
The issue is that it's possible the war on terror decreased terrorism, both through greater security and government powers, and removing important bases in Iraq and Afghanistan as well as putting sanctions on terrorist organisations around the world.
I don't know the relevant timelines, so I don't know how likely this is, but it really seems like since the US retreated from the war on terror, reduced sanctions on Iran, Houthis, and presumably others, and retreated from Iraq and Afghanistan, there DOES seem to have been a big uptick in global terrorism. So maybe the response to 9-11 did something?
I recommend the book Days of Rage about terrorism in the US in the 70s. If I recall correctly in 1976 there was an average of three attacks in the US each day.
In the three months following October 7th, I think we've really seen there is no shortage of terror supporters in the US or around the world.
My objection to treating terrorist attacks as randomly distributed along a power law curve is that the big attacks are not someone randomly deciding to take action. They're sponsored by organizations, which are funded by rich people and protected by state governments. Those actors will look at a once-in-fifty years bodycount and think "That worked really well, we should do some more of that!"
The counter is for the anti-terrorist side to take action. Enough action that the next time someone has a bright idea for killing thousands of Americans or more, the response will be "Shut up! Do you want the Americans hanging around for twenty years making our children attend immoral schools?"
Good point. We are dealing with people here, on both sides of the "events". Not only those who react to the "events", but also those who cause them. The latter may change their behaviors depending on how others react to what they do - thereby changing the frequency of future "events". Therefore I am a bit uneasy about this blog post by Scott. It seems to assume that the statistical distribution of "events" is fixed. That may be the case e.g. with volcanic eruptions, but not with man-made "events".
Let's assume that this isn't preaching to the choir... are you sure that even normal people typically are updating based on the occurrence of a dramatic event and not simply updating on the reaction to it?
A dramatic event is often latched onto by people who stand to gain by making it seem high-probability or high-impact, and the fact that they create drama around it (regardless of the specific logic they put forth) is signal too. If those people are close to (or are signal-boosted by people close to) your usual sources of world truth, it makes sense to update your model of world truth.
Does this essay's commentary on terrorist attacks begin from the premise that they are... independent events? That seems absurd- It's perfectly reasonable to assume a massive terrorist attack being successful might change the frequency at which terrorists attempt to do massive attacks! If the goal of terrorists is to create chaos and terror in enemy society, they will try to replicate the tactics which successfully do so.
Maybe the assumption is that the potential terrorists are perfect Bayesians too, and have already incorporated all available data in their strategies and success predictions.
Very true. Humans are social creatures, including the terrorists among us. We learn from experience, from the example of others, and from how others react to whatever we or others do. So do would-be terrorists, would-be mass murderers, those who are tempted to use their local power to get laid more often, and everyone else.
That is why statistics about the frequency of man-made "events" can have a short shelf-life. If something new happens, it may induce behavioral change, making the frequency tables constructed based on past events not valid any more. Bayesians should adjust their priors accordingly.
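The short-shelf-life point can be made concrete with a toy estimator. A minimal sketch (my own illustration, with made-up numbers, not anything from the thread): an exponentially weighted rate estimate discounts old observations, so a recent behavioral shift quickly pulls the estimate away from a long stable history, whereas a plain average over all years would barely move.

```python
# Toy illustration of why frequency tables can have a short shelf-life:
# an exponentially weighted rate estimate down-weights older years, so a
# sudden behavioral change dominates the estimate quickly.

def weighted_rate(events_per_year, half_life=10.0):
    """events_per_year: list of yearly event counts, oldest first.
    Returns a rate estimate where a year's weight halves every
    `half_life` years into the past."""
    decay = 0.5 ** (1.0 / half_life)
    weight = 1.0
    num = den = 0.0
    for count in reversed(events_per_year):  # newest year gets weight 1
        num += weight * count
        den += weight
        weight *= decay
    return num / den

stable = [2] * 30            # long stable history: ~2 events/year
shifted = [2] * 29 + [10]    # same history, then one anomalous year

print(weighted_rate(stable))   # stays at 2
print(weighted_rate(shifted))  # pulled noticeably above 2 by one recent year
```

A plain 30-year average of `shifted` would be about 2.27; the weighted estimate reacts much more strongly, which is the trade-off the comment is gesturing at: faster adaptation to behavioral change at the cost of more noise.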
Yeah, there is definitely a copycat effect to consider.
As an example, the fact that school shootings are the big fear and not school bombings or arsons can largely be traced to the historical quirk that two teenagers in Colorado were shitty at building pipe bombs.
You’re underselling the degree to which most school shooters have been obsessed with, and deliberately copycatting, Harris and Klebold.
Now, maybe “The Columbine Bombing” wouldn’t have had the same memetic staying power. Maybe it would have evolved into shooting anyway, being easier. Or maybe a different event would have overtaken it and become the prototype.
But I think there is definitely a plausible world where the bombs work and this spawns a couple decades of copycats.
> FS/S correlates at a fantastically high 0.62. For some reason, suicidal Southerners are much more likely to kill themselves with guns than suicidal people from the rest of the States, even when you control for whether they have a gun or not.
That is, Southerners who own guns are more likely than non-Southerners *who also own guns* to kill themselves with a gun as opposed to some other method. Convenience cannot possibly explain that. But cultural transmission can.
I strongly agree with the overall point of this article (expressed around the SBF vs. Altman episodes), but not sure about this:
"But terrorist attacks after 9-11 mostly followed the same pattern as before 9-11: every few years, someone set off a bomb and killed some people, at about the same rate as always.... In retrospect, updating any of our beliefs - about Islam, about the extent of the terrorist threat, about geopolitical reality, based on 9-11, was probably a mistake"
I think the argument is that part of the post-9/11 paradigm was the government instituting a ton of security protocols, and collaborating with other friendly nations to do the same. So the rate of terrorist attacks after 9/11 probably reflected a heightened security environment. I disagree with the idea that they're randomly distributed and we just happened to hit a big one in the early aughts. There are a lot of foiled attacks.
Also, there are a lot of *successful* terrorist attacks in Europe by Islamists. It seems to me the rate increased compared to before the 2000s? The Spanish train bombings were not that long after.
Yeah I was about to write something similar. I got curious and read the wikipedia for "Suicide attack" and "terrorism", and my impression is 9/11 really did cause, or at least coincided with, an upsurge in this type of organised suicidal terrorist attack. Additionally, other late 20th century examples seem to be very tied to specific nationalist/secessionist movements and therefore lower risk to a place like the USA.
Yes, al-Qaeda had struck before, starting in the 90s, and this made something like 9-11 somewhat more predictable. But this kind of structured international terrorism was fairly novel (perhaps comparable to the anarchist movement many decades prior), and was followed by a long list of similar successful and unsuccessful terrorist plots by them or related groups in the West. In hindsight, it does seem reasonable for people to have updated their priors significantly after 9/11 (or at least rapidly between 1998 and 2003 - much faster than the multi-decade power law expectations Scott is suggesting).
Again this is just my impression from wikipedia, would be interested if anyone has studied this more deeply.
>I think the argument is that part of the post-9/11 paradigm was the government instituting a ton of security protocols, and collaborating with other friendly nations to do the same. So the rate of terrorist attacks after 9/11 probably reflected a heightened security environment. I disagree with the idea that they're randomly distributed and we just happened to hit a big one in the early aughts. There are a lot of foiled attacks.
Yes, that's what I was also thinking about. In particular, all the onerous air security measures after 9/11 were specifically instituted to prevent a similar attack to 9/11 from ever happening again. And it worked! Indeed, back when I was a child (in the 90s), hijackings in general were a strong enough presence in cultural memory that I expected I'd go through at least one of them during my lifetime; if anything, the security measures now mean that that *cultural* memory no longer applies (even though, apparently, there still are some hijackings! https://en.wikipedia.org/wiki/List_of_aircraft_hijackings)
>Also, there are a lot of *successful* terrorist attacks in Europe by Islamists. It seems to me the rate increased from before the 2000s? The Spanish train bombings were not that long after
It increased and then decreased again (see https://en.wikipedia.org/wiki/Islamic_terrorism_in_Europe). The terrorist wave was the strongest in the mid-2010s, concurrently with the existence of IS as an actual force in the Middle East, but started trending down after IS was mostly defeated in those areas.
I. Lab leaks. Another option is recognizing the pandemic for the wakeup call that it was and asking whether we want to continue with the (20%? 2%? 0.2%?) annual risk of repeating a pandemic due to a lab leak. Arguing for continued gain-of-function study is the ultimate luxury belief.
II and III. 9-11, nuking of a single city, and mass shootings (don't worry folks, we predicted something like this would happen, that's just life in the big city). Some people (enough to affect national discourse and some elections) demand political responses to certain events. "We're not changing our response, not updating our priors, and continuing to be guided by Bayesian math" may not be a winning political response, which could lead to being forced to update priors after the next election. Maybe take out bin Laden and his lieutenants but not invade Iraq.
There are several varieties of this, and in some cases I agree with your point more than in others.
1. Exceptional events, perhaps first of their class. In these cases, an occurrence typically provides a non-negligible update on the frequency of the event (even if the correct update may be smaller than many people think).
1.1 Exceptional events with major consequences. E.g. a lab-leak-caused pandemic that kills millions of people, or a nuclear terrorist attack. Policy changes may be warranted.
1.2 Exceptional events where the consequences are minor on a world/national scale. E.g. 9/11.
The US response to 9/11 wasn't an overreaction because it wasn't an exceptional event that should cause one to update non-negligibly, but because a few thousand people dying in a country of hundreds of millions is, sad as it is, too little to warrant major policy changes, even if similar attacks were to happen slightly more frequently than we'd thought.
(In the particular case of 9/11 there is the further consideration that air crews and passengers having learned never to cooperate with hijackers is enough to prevent attacks of this form from being repeated. OTOH that was an important lesson to have learned, with the side benefit of disincentivizing more traditional, hostage-taking hijackings.)
2. Non-exceptional events (say, at least a few dozen have happened already). In these cases, you should update negligibly based on a new occurrence.
2.1 Events infrequent enough that you're likely to hear about all of them: e.g. mass-shootings, air crashes.
2.2. Events (e.g. assaults, instances of sexual harassment, even homicide) more frequent than audiences have the appetite to read about. In these cases, how many you hear about in the media is almost entirely uncorrelated with the actual amount of occurrences, and depends entirely on how many stories can still keep up the readers' interests; and which ones you hear about depends on which ones the media decides to talk about (either deliberately, or through a random chain of events).
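The gap between classes 1 and 2 above can be put in toy Bayesian terms. A minimal sketch with entirely hypothetical numbers (the prior parameters are my own invention): under a Gamma-Poisson model, a single occurrence moves the rate estimate by an order of magnitude for a first-of-its-class event, and almost not at all for a class with dozens of prior occurrences already baked into the prior.

```python
# Gamma-Poisson sketch: how much should one occurrence move a yearly
# rate estimate? Depends on how much prior data the event class has.

def posterior_rate(prior_shape, prior_rate, events, years):
    """Gamma(shape, rate) prior on events/year, Poisson likelihood:
    the posterior mean rate after observing `events` in `years`."""
    return (prior_shape + events) / (prior_rate + years)

# Case 1 (exceptional, first of its class): weak prior, 50 quiet years,
# then one occurrence. The estimate jumps ~11x.
before_1 = posterior_rate(0.1, 1.0, 0, 50)
after_1 = posterior_rate(0.1, 1.0, 1, 50)

# Case 2 (non-exceptional): a prior equivalent to 40 events in 50 years.
# One more occurrence barely moves the estimate.
before_2 = posterior_rate(40, 50.0, 0, 0)
after_2 = posterior_rate(40, 50.0, 1, 1)

print(before_1, after_1)  # large relative jump
print(before_2, after_2)  # nearly unchanged
```

This is just the conjugate-prior arithmetic behind "update non-negligibly on exceptional events, negligibly on non-exceptional ones"; the actual sizes of the updates depend entirely on the priors chosen.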
One problem with the cynical strategy of using crises as organizational opportunities is the strong tendency they have toward becoming left/right coded. Once you've polarized on an issue - or leveraged polarization on that issue - you risk enacting reforms that are too targeted.
An example: Sex scandals involving allegations of the Catholic church covering up abuse have not been in the popular news recently. (At least not the news I consume.) When they were, I remember thinking, "Wow, that's bad. I'm shocked at the reports I'm reading!" I updated some of my understanding of the inner workings of the Catholic church, and thought there must be something uniquely wrong happening there.
Recently, I started looking at other statistics about sexual abuse of minors. It turns out this is very common in public schools as well, including reports that some school districts cover up allegations by transferring teachers, hiding the allegations, and allowing them to continue interacting with students.
As I was reading about this, I reflected on my previous updates about the Catholic church. I thought, "Maybe this phenomenon has nothing to do with religion or with one specific institution gone astray. Maybe when something sufficiently negative threatens to make an institution look REALLY bad, the people in charge respond by trying to hide the abuse." I don't claim to understand what these people are thinking at this point, but it doesn't seem like you need a particularly corrupt/captured institution for cycles of abuse to be hidden by the bureaucracy. Or at least, this isn't the kind of institution that is as highly abnormal/unexpected as I'd thought on reading the initial reports.
I also think that the impetus for abuse of minors by people in positions of authority is much higher than I used to think it was. Given these two updates, I no longer think a solution that specifically targets the Catholic church would do much to curb this phenomenon. Indeed, it might get in the way of good general reform to make a hyper-specific reform targeted at Catholics that does nothing to address the general problem rooted in human nature.
If, instead, we looked at general institutional incentives and tried to shift them to the point where any institution that failed to report abuse suffers, while those that proactively report abuse are rewarded as being forward-thinking and honest brokers (because they're on the lookout for a behavior we expect to find under normal circumstances), we might be able to make meaningful changes. Indeed, we might find abuse that has previously been successfully covered up, allowing us to stop it.
The problem with large updates on single phenomena is that the hyperfocus of the moment can cause us to make changes that are far too specific to address the root problem we care about solving.
When I first heard about the scandal, I had the same thought. Yet you still have the exact same type of abuse in denominations without a celibate clergy. It's a great hypothesis, but based on the evidence I no longer believe it to be a significant factor.
Also married gym teachers. Or teachers of all types. Or those of any stripe who have access to the impressionable.
Since this article is also about 9/11, I should repeat the (factually accurate) joke that if the normal distribution of pedophiles in the population held for the twin towers, the attack killed 75 of them.
Exactly. The impression you get from the Catholic scandal is that you've some insight into the type of people who might abuse. "It's those celibate types," "it's those religious types," or whatever. It's the details of the one-off situation that create distortions you don't even realize are there. Then countervailing evidence demonstrates that maybe you've spent years with a defective pedophile detector because you updated too hard on a single event.
In this, as in all the other examples, the problem is that we've updated away from sound heuristics in the first place. Thinking that Catholic priests as a class are pedophiles is possible only when we've lost touch with more concrete and immediate cues. We're a highly evolved social species, and body language and facial expressions convey a ton of information (to most of us, of course -- it's a range). If you hear about some abuser being defrocked, imagine a normal Catholic priest that you've known or seen, and then imagine a pedophile. Only then look at the picture of the abuser, and see if he looks more like the former or more like the latter.
Similarly, foreign terrorism and Chinese lab leaks are in a category of risk that we're naturally suspicious of, unless we've learned to label this 'xenophobia' and update away from it. None of our ancestors before the industrial revolution would have been able to understand why unrestricted travel in our communities is a human right that we must extend to our enemies.
If it really is the case that celibacy makes people into paedophiles, maybe women really do have an obligation to have sex with disgusting incels. Wouldn't want them to turn to children instead.
Melvin, if a priest wants to break his vows and have sex, he can have affairs with women parishioners (or male ones). "Gosh, I want to have sex, but it's too risky with grown-ups, I'm safer to be fucking kids" isn't really the motivation here.
That was the big excuse used by those who want a more liberal church: if only we could have married clergy/women clergy/gay clergy! Then this would never happen!
And the response when the school sex scandals came out, from this side, was "oh no, if only teachers could marry! if only women could be teachers! if only there were gay teachers! oh, what do you mean that already is the case?" because it's not got very much to do with "can you marry", it's "are you capable of/want a relationship with an adult?"
Certainly, I think celibacy meant that men who could not see themselves/had no interest in heterosexual marriage had an outlet in the priesthood, "John isn't gay, he's got a vocation" was a way of avoiding coming out to family and that was an entire, separate scandal of its own (see Ted McCarrick https://en.wikipedia.org/wiki/Theodore_McCarrick#Abuse_of_seminarians and the rather, um, extravagant allegations made by Archbishop Vigano) and the same for men who may not have understood their own paedophilia.
But the underlying problem is: positions of authority that offer access to children will attract these types of people, whether or not they permit married gym coaches, doctors, teachers, and so on.
The biggest difference between the Catholic Church sex abuse scandal and sex abuse in other organizations is that the hierarchical nature of the Catholic Church meant the cover-ups were far more involved than they would be anywhere else. You could have a Protestant pastor in some storefront church answering to no one. If he commits any sort of sexual impropriety (criminal or otherwise), maybe he gets caught, maybe he gets chased out of town, maybe he gets away with it. In the Catholic Church, every priest answers to a bishop (or some sort of superior in a religious order), and those higher-ups had incentives for covering up crimes. And the Church as a whole is a lot bigger than a single school district, so it was easier to move people around in an attempt to conceal wrongdoing. But I think you're completely right that there's sexual abuse (and cover-ups of that abuse) in a lot more places than just the Catholic Church.
You're probably right insofar as the Catholic Church is one of the largest institutions on the planet. And for a time I thought that elements like their hierarchical nature and their ability to move people around to hide offenses made them uniquely able to perpetrate this kind of abuse.
But school districts have been caught moving teachers around within the city, hiding offenses and keeping teachers from criminal prosecution, etc., similar to what the Catholics did. When I think about it a little more closely, they probably can't move a priest from Boston to Bogotá. Indeed, any move more significant than within the city increases the logistical difficulty. (I don't know much about this aspect of the Catholic Church, so maybe it is as easy as an international transfer?) If most transfers are within the city/region, the nominal size of the church may not be as unique a factor as it at first appears.
Which is part of what I'm trying to say here. When I'm looking at an isolated incident, I've noticed a strong tendency to use unique features of the case to explain the phenomenon, absent any actual data establishing causal relationships. This can lead me to think that "those guys are uniquely bad", which can lead me toward partisan thinking that harms my ability to understand the problem at a more fundamental level. When I stopped thinking of child abuse as a unique or far-off phenomenon, prevention of that same issue within institutions I'm involved in became a more immediate concern. There but for the grace of god go my favorite institutions, (religious or secular).
I can't say for sure how often priests were transferred outside of dioceses. Transferring a priest within a diocese (roughly a metro area, perhaps a larger area in a less-dense state) is a very simple procedure. Transferring a priest to another diocese would basically require both bishops and the priest to approve. But after some brief googling and Wikipedia research, it looks like there were some priests (from Orange County, CA, according to Wikipedia) who were transferred to different dioceses and even countries.
And also, even one Catholic archdiocese can be enormous. The Archdiocese of LA has 4-5 million people. Only five other denominations have more than 5 million members in the entire US.
Thanks for this! Sounds like the Catholics definitely leveraged their size to help hide abuse. Indeed, moving someone to another country significantly complicates the process of criminal proceedings against the offender, which is probably why an international reassignment was agreed to.
Not to excuse the crimes, but at the time there was also "we consulted a psychiatrist who told us that therapy would cure Father Dan and now he's fine and safe to be moved to another parish" going on. Some of the higher-ups naturally didn't want scandal to be public, but they did try the Medical Consensus Of The Time, which often was "nah, some therapy and counselling and it'll all wash out".
I'm gonna need to see some kind of source that public schools have both a) the same large number of child sexual abuse cases and b) the same kind of large cover-up involving hundreds (if not more) of people over decades as part of an organized institutional effort.
As far as I can tell, neither are true. The weaker claim "there is also sexual abuse in public schools and sometimes it is covered up" is of course true, but not very interesting.
Edit: basically at a minimum, I would need to see evidence that multiple Secretaries of Education (or at least, very high-ranking members of the Department of Education) were routinely involved in national-scale sex abuse cover-ups, in the same way that the Pope and high-ranking Catholics were involved for decades in these kinds of cover-ups.
And *I'm* gonna want to see some evidence that "the Pope" (which one? the sex abuse scandals go back decades) was routinely involved in such cover-ups.
I can make demands for rigour to fit in with my biases, too!
Anyway, somebody is claiming the Department of Education was complicit in such cover-ups for schools:
"Today, Education and the Workforce Committee Chairwoman Virginia Foxx (R-NC) made the following statement in response to the Defense of Freedom Institute’s (DFI) new report that uncovered evidence of teachers unions, school districts, and the Department of Education concealing cases of sexual abuse in K-12 public schools:
“This report highlights a pattern of gross misconduct by school officials and the Department of Education—who are beholden to teachers unions—to conceal directly or indirectly sexual abuse in K-12 public schools. Cases of sexual assault have tripled in the last decade, but instead of investigations and terminations, perpetrators are often transferred to another school or school district, or given an administrative job.
“The Department’s Office for Civil Rights (OCR) is complicit too. OCR has worked to reverse the progress made by the prior administration to compel school district officials to take decisive action in cases of sexual abuse.
“In no uncertain terms, this report shows that the Department of Education and teachers unions are putting children in harm’s way. I hope that this report shines a spotlight on the deteriorating state of public schools in America and the consequences of the Left’s radical education agenda. Rest assured, the Committee will continue to hold the Biden administration accountable for not putting students first.”
But I guess because Massachusetts and California school boards are not international, or even national, authorities then it's not the same thing at all, at all!
Thanks for providing this. I'd like to return the focus on the original point, which was a specific example of how updating based on dramatic events can cause you to become hyper-specific in your expectations. "This kind of thing happens because of X, Y, and Z features of the Catholic Church", or perhaps, "this report shines a spotlight on the deteriorating state of public schools in America and the consequences of the Left’s radical education agenda".
It seems clear to me that "positions of authority attract this kind of person", and perhaps, "extremely embarrassing and polarizing bad press can cause institutions to defend themselves by hiding misconduct privately, instead of rooting it out publicly" are more reasonable conclusions. The kind of solutions that make sense hinge on what you think the problem is.
I bet if Jeff Epstein was two decades younger he'd have given a lot of money to EA and there'd have been an entirely unnecessary massive controversy when his extracurricular activities got revealed.
I disagree, Epstein was never a big charity guy. He had ample opportunities to donate to charity and never did, and I don't think that inventing a slightly different version of charity would have changed his mind.
EA is to some degree a non-standard "charity," even if they fund some traditional (but above-average ROI) charities. There's also a fair number of Big Thinkers associated which might have appealed to him, that he'd want to brush shoulders with (like Marvin Minsky apparently did).
Marvin Minsky's "The Society of Mind" thanks Epstein for the funding on one of the first few pages, seems plausible Epstein would fund stuff in the AI x-risk space.
No chance - EA is for the uncool nerdy kids. Epstein was in with the big guys, who categorically do not support EA. EA is, cynically, a way for people to act like they're smarter than everyone else. Other, more trendy causes are for people who want influence and attention.
How do you figure, "EA is, cynically, a way for people to act like they're smarter than everyone else."? I would say EA is a way for people to admit that others are more knowledgeable about how best to distribute donations.
I assume there is a theory for the evolutionary origins of salience bias? If not, maybe it goes something like this:
It's always a power law. All power laws are scaled versions of the others. Hence if one considers the worst event a of type A one has ever observed vs. the worst event b of type B, and b is worse than a, then, everything equal, one has reason to believe that the power law for B is a scaled-up version of the power law for A. For example, I am guessing the worst individual predator attack you have ever heard of killed no more than three people. The worst earthquake you have ever heard of, on the other hand...
"Everything equal" does a lot of work here, specifically the amount of observation done should be equal. But this used to be the case throughout all of prehistory. History, by definition, started when a written record enabled us to preserve certain events for much greater periods of time. And these are realiably the most salient events. Hence we have a written record of the destruction of Pompeji by the eruption of Mount Vesuvius but not one of some random lion attack (unless it killed a very important person). In prehistory, people had their own observations and what other people told them about which went back maybe three generations before fading into the domain of myth. So in terms of orders of magnitude, all worst events one had observed or heard about could be considered about equally likely.
Now, usually precautions against the worst ever observed event of a given type will also work against lesser events of that type. E.g. sleeping around a fire at night will deter lions but also leopards, hyenas, wild dogs, ...; keeping a safety distance of 50km around a big volcano will also keep one safe from smaller volcanos; and so forth.
So if one scales one's precautions according to the scaling factors inferred from the known worst events, one should assign approximately the right amount of resources to each.
In other words, if during my lifetime I have seen a lion kill a family member and I have also seen a volcano wipe out an entire tribe, it appears rational that I view volcanos as the greater threat and take much greater precautions against them. Here I may misallocate, but any such misallocation is short term. My great-great-grandchildren will already have forgotten about the volcano but they will still be aware of the more frequently occurring lion attacks and have reallocated their resources correspondingly.
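The scaling argument above can be sketched numerically. This is purely illustrative: the Pareto tail exponent, the scale factors, and the 200 lifetime observations per hazard type are all invented parameters, not claims about real lions or earthquakes.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def worst_observed(scale, alpha=2.0, n=200):
    """Worst event seen over n lifetime observations of a hazard whose
    severity follows a Pareto(alpha) tail, scaled by `scale` deaths per
    event. All parameters are invented for illustration."""
    return max(scale * random.paretovariate(alpha) for _ in range(n))

# Two hazard types assumed to differ only by a scale factor:
worst_lion = worst_observed(scale=1)      # predator attacks: single deaths
worst_quake = worst_observed(scale=1000)  # earthquakes: thousands of deaths

# With equal observation effort, the ratio of worst-observed events roughly
# recovers the true scale ratio, so allocating precaution in proportion to
# the worst event ever seen is an approximately correct heuristic.
ratio = worst_quake / worst_lion
```

The point is only that, given equal observation effort, the worst-observed event of each type is a usable proxy for the relative scale of the underlying distributions.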
Viruses evolve and gain function in nature. What happens in a BSL4 could just as easily have happened in a bat cave. That's why I don't think it matters whether COVID was a lab leak. Restricting virology research does not materially reduce the probability of dangerous viruses existing or entering the human population. It would limit our ability to understand and respond to the situation when a naturally evolved virus does circulate.
"Wild animals attack and kill people all the time, so you should have no concern about the rampaging gorillas that just escaped from the zoo next door..."
"Zoos sometimes cause wild-animal attacks" does not imply "we should get rid of zoos to protect ourselves from wild-animal attacks" and neither do the analogous statements for virology.
A car that is not being driven doesn’t hurt anyone. A car that’s being driven carefully hurts fewer people than one being driven drunk or recklessly. But a pandemic pathogen that exists or can exist in nature is going to cause the same pandemic sooner or later - whether you give it a pathway or it finds its own. The question is, will you be ready? To optimize for survival, we definitely need to put our thumbs on the scale in favor of “later” but we’re also going to need to use that time to do virology research and get ready.
I think your argument needs actual numbers in order to be more convincing.
Given a population of bats doing their usual dirty bat things, how often should we expect a nasty human-adapted virus to emerge? And what are the chances of a human getting infected with it?
Given a research program devoted to making random animal viruses better adapted to humans, how often should we expect a nasty human-adapted virus to emerge? (Very often, that's the point.) And what are the chances of a human getting infected with it? (Depends how foolproof your containment procedures are.)
I could easily be convinced that the existence of GoF research in its current form increases the frequency of nasty pandemics by anywhere between 10% and 1000%.
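The structure of the arithmetic being asked for can be sketched in a few lines. Every rate below is made up for illustration (loosely echoing per-decade figures discussed elsewhere in the thread); the point is only how a lab-leak pathway compounds with the natural one.

```python
# Back-of-envelope structure of the comparison. All numbers are assumed,
# not estimated; only the arithmetic is the point.

natural_rate = 0.20  # assumed chance per decade of a natural spillover pandemic
lab_rate = 0.075     # assumed additional chance per decade from GoF lab leaks

# Chance of at least one pandemic per decade, treating sources as independent:
p_any = 1 - (1 - natural_rate) * (1 - lab_rate)  # -> 0.26

# Relative increase in pandemic frequency attributable to the lab pathway:
relative_increase = (p_any - natural_rate) / natural_rate  # -> 0.30, i.e. +30%
```

Plugging in different assumed rates is what moves the answer anywhere across the "10% to 1000%" range mentioned above.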
>Viruses evolve and gain function in nature. What happens in a BSL4 could just as easily have happened in a bat cave. That's why I don't think it matters whether COVID was a lab leak.
No, absolutely not "just as easily". Those are chance events. GoF is literally trying to make this happen as quickly and effectively as possible.
>Restricting virology research does not materially reduce the probability of dangerous viruses existing or entering the human population.
Yes! It does! You're just flat out wrong.
You're increasing the number of enhanced viruses that exist, you're increasing the extent to which those viruses are enhanced, and you're deliberately putting them around people to whom the virus can spread. ALL of these things increase the risk.
>It would limit our ability to understand and respond to the situation when a naturally evolved virus does circulate.
You mean like how the research that caused this pandemic in the first place helped us? Oh, NOPE, it only got people killed.
No GoF research that has been done to date has been worth millions of deaths and trillions of dollars. You're making a huge speculative bet if you claim that such research will ever, at the absolute bare minimum, outweigh the cost of the one major pandemic it caused, let alone outweigh the known risk of causing future pandemics too.
THE biggest risk of future pandemics is lab leaks. And nothing about GoF research suggests the risk will ever be worth it.
> What happens in a BSL4 could just as easily have happened in a bat cave
1. You pulled this probability out of your rear end.
2. It's wrong because in labs they do serial passaging in genetically engineered animals and other tricks to massively speed up adaptation to humans, which in nature may have never happened given that humans tend to avoid bat-filled caves.
3. Finally, it's irrelevant anyway because the WIV team wasn't operating in BSL4 labs. They had them, but didn't use them, because they're a pain in the ass. The SARS-CoV-2 work they almost certainly did was done at lower safety levels.
"given that humans tend to avoid bat-filled caves"
How could I resist this?
"average person visits 3 bat-filled caves a year" factoid actually just statistical error. average person visits 0 caves per year. Bat Man, who lives in bat-filled cave & encounters over 10,000 bats each day, is an outlier and should not have been counted
The issue is gain-of-function research on potential pandemic pathogens like SARS. There were specific warnings from Carl Bergstrom and Marc Lipsitch against lifting the moratorium on this research in 2017.
In terms of COVID-19 the nearest relatives to SARS-CoV-2 are found ~1500km away from Wuhan in areas where WIV sampled SARS-related bat coronaviruses, like Yunnan and Laos. It arose in Wuhan well adapted to human cells, with low genetic diversity indicating a lack of prior circulation, and with a furin cleavage site never seen in a sarbecovirus. WIV was also part of a proposal to add furin cleavage sites into novel SARS-related bat coronaviruses.
> In terms of COVID-19 the nearest relatives to SARS-CoV-2 are found ~1500km away from Wuhan in areas WIV sampled SARS-related bat coronaviruses like Yunnan and Laos.
The closest *known* relatives, but they're not all that close. This paper (which I am completely unqualified to assess) finds that the most recent common ancestor of SARS-CoV-2 and RaTG13 was around 50 years ago: https://academic.oup.com/ve/article/7/1/veaa098/6047024?login=false
“Just as easily” is doing a *lot* of work here. Seems to me that I would say we could “just as easily” do virus research without gain of function, since the same things are going on in bat caves. If this is going to generate any advantage in terms of data or evidence, it must be changing the probabilities of *something*, and it’s on the people doing this to argue that it’s the probabilities of safe learning about viruses, and not the probabilities of people getting infected with novel viruses.
... only follows if people changed their minds; people thought intentionally getting animals sick to breed better viruses was stupid before Fauci killed more people than several wars.
It was an active debate for a decade before 2020 whether "gain of function" research should be banned. It wasn't a debate whether shoe bombs were risks before 9/11 and the TSA.
> math, assume this and such numbers, and you get such and such result
Making viruses stronger is bioterrorism research; he was actively avoiding a law, and helping China learn a new type of weapon of mass destruction. I don't understand a way to read the situation - accepting the facts that Fauci moved money to the Wuhan Institute to breed stronger viruses, research people were very, very concerned about and had stopped for fears of this exact outcome - that isn't profoundly stupid, treasonous or psychopathic.
It's unlikely we'll ever know which, but why not take a common denominator of "bad"?
>But if you would freak out and ban gain-of-function research at a 27.5%-per-decade chance of it causing a pandemic per decade, you should probably still freak out at a 19-20%-per-decade chance. So it doesn’t matter very much whether COVID was a lab leak or not.
Notably, banning gain of function research wouldn't prevent SARS-CoV-2 leaking from a lab. According to the most plausible version of the lab-leak theory, SARS-CoV-2 was a natural virus from a cave, that researchers brought back to Wuhan, from which it subsequently escaped.
If it was a natural virus that escaped, the response should surely be to ban research that collects unknown wild viruses and cultures them in populated areas with inadequate containment.
But isn't one of the arguments for a lab leak that the virus has a furin cleavage site that is unlikely to have evolved naturally and is evidence that SARS-CoV-2 was actually engineered? Even if the virus wasn't genetically modified, passaging it through non-bat lab animals (deliberately or accidentally) might have given it an opportunity to evolve to cross over into humans in a way that wouldn't have happened otherwise.
The furin cleavage site is evidence, but it's not super-strong evidence by itself. See e.g. https://www.sciencedirect.com/science/article/pii/S1873506120304165 . Coincidences do happen. The evidence for "the outbreak was related to gain-of-function research at the lab" is a lot weaker than the evidence for "the outbreak was related to research at the lab".
Of course, if we had functioning institutions, then answering the question of whether the research engineered that specific furin cleavage site would be trivial, since we could Just Ask. But that's a separate problem.
To me one of the arguments *against* the virus being deliberately modified is that the Wuhan Institute of Virology wasn't some super secret black site - they had openly collaborated with Western scientists. So if they had been doing research to make Sars-Cov-2, it's surprising that they hadn't mentioned this to anyone (although maybe they wanted to keep it secret until publication), or at least that there wasn't some clear evidence that would come to light afterwards. You'd think that the CIA or NSA could hack the systems of an academic institute pretty easily and find a smoking gun email or spreadsheet. This makes the scenario where a natural virus was collected and then leaked (maybe after passing through lab animals), or just infected someone in the bat caves who brought it back to Wuhan, seem more plausible to me.
> if they had been doing research to make Sars-Cov-2, it's surprising that they hadn't mentioned this to anyone
They...they did, though? It wasn't a secret they were interested in spikes. As for specifically inserting novel furin cleavage sites into spikes, they applied for a grant in 2018. That was DARPA, not NIAID, but regardless. They specifically proposed inserting novel furin cleavage sites into bat coronaviruses. Like, the fact that WIV was looking into furin cleavage sites in bat coronaviruses wasn't a secret.
But, I mean...even if they didn't conveniently have an English-language grant application to DARPA. Let's take a step back here.
SARS-CoV-1 *terrified* people, and it was a much bigger deal in China than in the United States. Initially, nobody knew where it came from, or how it worked, or what. Everyone wanted to know. This kind of thing is why people *become* virologists.
The reason we knew so quickly SARS-CoV-2 came from bats was because of previous research on SARS-CoV-1, back when SARS-CoV-1 didn't require "-1" as a modifier. Research done at...the Wuhan Institute of Virology. Funded by EcoHealth Alliance.
SARS-CoV-1 did not have anything special going on with furin cleavage in its spike, as far as I know. But it had long been known that this *could* happen, and people believed (correctly, as SARS-CoV-2 would later prove) that it could make a virus much more dangerous. Artificially inserting a furin cleavage site and watching how the resulting virus infected cells was a fairly obvious thing to try. It's far from the only obvious thing to try, but that's where we go back to "WIV specifically asked for money to do literally this."
If you had suggested in November 2019 that maybe the Wuhan Institute of Virology was inserting furin cleavage sites into coronaviruses, nobody would have even blinked. That kind of thing was what WIV was *for*.
> maybe they wanted to keep it secret until publication
Most research is not known outside the lab until publication. That is what publication is for. Indeed, in the United States it is a surprisingly common pattern for most of the research to basically be done before the grant application is approved; this is something of an open secret, since the whole point is to make the grant proposal more attractive by making it a slam-dunk from the grantmaker's perspective (since there's no actual uncertainty). Labs often push forward with research that has not (yet) been externally funded, in the belief that funding agencies will become interested once enough is certain that results are basically guaranteed. Often, that belief is vindicated. Many labs have a budget item explicitly for incubating projects this way.
You do bring up an important point that argues against the furin cleavage site being artificial:
> there wasn't some clear evidence that would come to light afterwards
If the furin cleavage site was a spontaneous mutation, then even if the virus came from WIV, nobody in WIV would necessarily *know* that the pandemic came from a leak. They wouldn't have to lie about anything; they'd just have to not notice things, and given the stakes I expect them to be very good at not noticing.
Whereas if somebody inserted that particular furin cleavage site on purpose, then as soon as SARS-CoV-2 was sequenced, light bulbs and alarm klaxons came on in *somebody's* head.
Which somebody? Remember there are three basic scenarios.
In the first scenario, a small number of people (fewer than ten, possibly as few as *one*) inside WIV worked alone without telling anyone about the furin cleavage site they were inserting. This small group may, or may not, have included Ben Hu and/or Yu Ping and/or Yan Zhu.
This would be somewhat weird, but not very weird. I don't know how WIV operates, but in a lot of labs, PIs have wide latitude to try things as long as they're making progress on the moneymakers. If lab leadership later learned about the experiments, the reaction would be "Oh, did you learn anything?" If yes, then it could lead to a publication; if no, then it gets buried in somebody's filing cabinet and forgotten.
In this scenario, after putting together what must have happened, the person or persons deleted everything and never told a soul. (Because they would get in trouble for violating some rule or other, because in unhealthy bureaucratic labs everyone is always violating some rule or other.) WIV leadership, in this scenario, doesn't need to lie about anything; they just need to not notice things, and given the stakes I expect them to be very good at not noticing.
In this scenario, it would be somewhat weird for discoverable electronic records of the experiments to never have existed, but not very weird. The researcher(s) might have kept notes in a shorthand, totally indecipherable except to someone with both deep fluency in the Hankou dialect *and* deep knowledge of the SARS work at WIV, intending to write up a more coherent description later. In any case, the researcher(s) tried very hard to delete any electronic records that existed. Alternatively, some PIs are old school and keep research notes in paper notebooks; these could be *very* well-organized and readable, but undiscoverable.
In the second scenario, WIV leadership (meaning at least one person in a position of authority, such as Deputy Director Shi Zhengli, or Peter Daszak, the guy bankrolling the SARS work) gave the green light to use lab resources to try some exploratory work on the furin-cleavage insertion without the external funding.
In the second scenario, there would be an electronic paper trail showing the planned work. The principals would have tried very hard to scrub it, but put it this way: if you were one of the low-level researchers involved in such a conspiracy, would you really delete everything, or would you save a copy in a safe place in case the leadership later decided to throw you under the bus?
In the third scenario, WIV actually *did* get an external grant to do the work, such as from the Chinese Center for Disease Control and Prevention. Only in this third scenario would there definitely be *piles and piles* of electronic paper-trail, far too much for anyone to hide.
Even in the third scenario, note that the grant application would almost certainly not mention the *specific* furin cleavage site they intended to insert. It *would* talk about producing a chimeric virus by replacing the spike of one virus with a spike from another virus. Of course, we already know they were doing things in that vein, because some of the results they *did* publish.
> You'd think that the CIA or NSA could hack the systems of an academic institute pretty easily and find a smoking gun email or spreadsheet.
Leaving aside the conception of "computer hacker" as "evil wizard", any time U.S. spooks reveal documents obtained by hacking, that compromises the methods they used to Chinese spooks. They'd need a strong incentive.
And honestly, why would they want to? Like, at all? To embarrass the PRC? I mean, I'm totally willing to believe the U.S. government is willing to do dumb things to embarrass the PRC, but given the research in question was paid for by the U.S. government, I don't see how they could walk away with a "win" from revealing the whole thing, nor do I think they're stupid enough to *expect* to.
It would also be contrary to the approach the executive branch has taken so far. The White House's stance on WIV is...complicated, and evolving, and they may yet throw WIV under the bus, but as things stand now, I think it's fair to say that any spooks who tried to prove a lab leak would be starting a feud with the President. And while they were absolutely up for feuding with the last President, they do not feud with most Presidents and have shown no inclination to feud with the current one.
Compare to the situation in the United States. The Director of NIAID testified that they weren't doing gain-of-function research at WIV, while his internal emails said the opposite, because the cover-up was sloppy, because everyone involved correctly expected that there would be no consequences if they were caught. As you suggest, these emails were obtained by the FBI investigating the allegations and hacking into his...oh wait, says here that never happened. The emails were obtained via Freedom of Information Act lawsuits. The People's Directive on Transparency, the Chinese equivalent of the Freedom of Information Act, would technically not apply to communications with WIV, because no such thing exists and I just made it up. The PRC is just...really not big on the whole "freedom of information" thing.
Even if the spooks wanted to hack in (which they wouldn't), and even if the relevant notes weren't kept in old-school notebooks (which they may have been), and even if the relevant documents hadn't been deleted (which they may have been), the spooks wouldn't necessarily have been able to understand anything they found if it was in shorthand, since people with deep idiomatic Mandarin fluency are notoriously thin on the ground over there, to say nothing of the problem of understanding the biology. And they wouldn't want to compromise their sensitive op to outside experts without even knowing whether they'd found anything.
It remains true that in every scenario, it is at least *possible* that documents exist and would leak, and so the absence of such documents is Bayesian evidence against the furin cleavage site being deliberately inserted.
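That last inference can be made explicit with a toy Bayes update. Every probability below is invented purely to show the direction and rough weakness of the evidence; nothing here is an estimate of the actual case.

```python
# Toy Bayes update treating "no documents have leaked" as evidence.
# All probabilities are assumed for illustration only.

p_docs_leak_if_engineered = 0.4  # assumed: deliberate insertion leaves a trail that leaks
p_docs_leak_if_natural = 0.02    # assumed: small chance of a misleading "trail" otherwise

prior_engineered = 0.5           # assumed prior that the site was inserted on purpose

# We observed NO leaked documents, so use the complement probabilities:
lik_engineered = 1 - p_docs_leak_if_engineered  # 0.6
lik_natural = 1 - p_docs_leak_if_natural        # 0.98

posterior = (prior_engineered * lik_engineered) / (
    prior_engineered * lik_engineered + (1 - prior_engineered) * lik_natural
)
# posterior < prior: absence of documents is (modest) evidence against insertion
```

How strong the update is depends entirely on how likely a leak would have been in each scenario, which is exactly what the three scenarios above are arguing about.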
Thanks for your detailed response. By 'doing research to make Sars-Cov-2' I meant 'doing the particular project that made that exact virus', not 'doing research in the general area of gain-of-function of coronaviruses', which we do know they were doing. I was thinking that there would surely be some record of the project.
But I agree with your point that they could easily have done it outside a formal process (I have read that making a reverse genetic system to make custom coronaviruses is pretty simple and not a groundbreaking project that would need lots of funding), or with mainly paper notes, or just managed to delete everything.
> "Most research is not known outside the lab until publication. That is what publication is for."
Since WIV collaborated with outside labs, I was thinking that before official publication they might have emailed or casually communicated with an outside group and mentioned that they were working on a project to add a furin cleavage site. But it does seem plausible that they just did the work first to check they could do it without telling anyone.
I don't think it would be that difficult for a national security agency to remotely penetrate the network of an academic institute (since outside observers were worried about the standards of biosecurity at the lab, it seems at least possible that they were also slack about keeping everything patched, not to mention the possibility that their software had vulnerabilities that aren't widely known outside the NSA etc). But you're right that if the CIA has a 'smoking gun', they wouldn't just immediately leak it revealing their access, and Western countries wouldn't want to start a fight with China. In fact they allegedly tried to pressure their experts to find against a lab leak: https://oversight.house.gov/release/testimony-from-cia-whistleblower-alleges-new-information-on-covid-19-origins/
So on reflection I think the lack of public documents is pretty weak evidence against the deliberate insertion scenario. I think it's definitely possible that intelligence agencies have relevant evidence about what WIV was doing that they're keeping secret, but as you point out, whatever internal notes they could recover might not be a clear 'smoking gun' anyway.
> By 'doing research to make Sars-Cov-2' I meant 'doing the particular project that made that exact virus', not 'doing research in the general area of gain-of-function of coronaviruses', which we do know they were doing. I was thinking that there would surely be some record of the project.
Yes, and I agree if anyone inserted that specific furin cleavage site, there would likely be documents mentioning inserting that specific furin cleavage site. Though of course in each of the scenarios there exist ways that those documents might not be comprehensible, or might have been successfully deleted, or might remain undiscovered.
And I think it *is* relevant that they were known to be doing research in this area. If we were talking about some random biolab, then it would be *extremely weird* if, for example, some PI randomly decided to play a hunch and messed with a virus without telling anyone. That's just...not within the range of things that happens. If we were talking about a lab where this kind of work was not routine, then I would absolutely expect to see piles and piles of documentation, not least because they'd be using physical resources that they would have never used in the job they were supposed to be doing. But in a lab where this was routine...then not necessarily. Maybe, yes. But not *necessarily*.
> I have read that making a reverse genetic system to make custom coronaviruses is pretty simple and not a groundbreaking project that would need lots of funding
This is also my impression. This kind of work *used* to be incredibly expensive, but the new methods invented by UNC (which WIV was known to use and be competent with) are...not *cheap*, but potentially within the range of what a PI could use without talking to anybody. (If WIV was the sort of lab where that kind of thing happened. I don't know that they were, but the fact they were so *successful* (everyone seems to agree they discovered an unusual quantity of true facts about SARS relative to their budget, and it wasn't because they had a particularly convenient location, a thousand miles away from the bat caves) suggests to me that WIV was the kind of place where moving fast was valued and researchers might have routinely taken initiative.) It also uses basically the same resources regardless of what specific virus you're working on, so nobody would notice that a different experiment had taken place unless WIV had very unusually strict accounting, which they probably didn't.
> Since WIV collaborated with outside labs, I was thinking that before official publication they might have emailed or casually communicated with an outside group and mentioned that they were working on a project to add a furin cleavage site.
Uh, I mean, obviously that is a thing that they could have done that they demonstrably didn't do, which technically makes it further evidence against the furin cleavage site being artificial, but...I can't really picture that conversation. Like, just...emailing someone out of the blue "Hi, I'm about to do an experiment which I will now describe in an unusual amount of technical detail. I don't have any interesting results to share with you, because I haven't actually done it yet. Just...FYI." I suppose if someone had already been planning it when they went to a conference, then they might have happened to mention it when chatting over coffee. But they also might not have mentioned it. And of course I don't think anyone's assuming that the experiment, if there was an experiment, was planned for a long time in advance (except in the third scenario where they actually *did* get an outside grant from somebody we don't know about like the Chinese Center for Disease Control and Prevention), so there was probably no conference it would have coincided with.
This is just based on the way Baric described it, but it didn't sound like they collaborated all that closely. They obviously went to the same conferences, and they gave talks, and they chatted afterward. But it's not like people were regularly flying back and forth between WIV and UNC, or anything. Collaborating, but only in the sense that all virologists are collaborating. The only outside institution I know of with deep visibility into WIV was EcoHealth Alliance. I do agree that if there was a cover-up involving WIV leadership (the second scenario, where the process was internal-but-formal using funds explicitly set aside to incubate small projects without external funding), then *someone* in EcoHealth Alliance was probably in on the cover-up.
Put another way, it seems unlikely to me that anyone would have communicated with an outsider about the planned experiment *in enough detail to specifically identify it* unless that person, "outside" or not, was fully collaborating on that particular experiment (and therefore had their own reasons to fear for their own career, and so would also shut up about it).
> an academic institute (since outside observers were worried about the standards of biosecurity at the lab, it seems at least possible that they were also slack about keeping everything patched, not to mention the possibility that their software had vulnerabilities that aren't widely known outside the NSA etc)
I definitely don't think the NSA would ever willingly use vulnerabilities that aren't widely known outside the NSA this way. You, uh, you may have heard that the PRC has been making regular military feints at an American ally, any of which would turn into a real invasion very quickly, because the PRC would really really *really* like to just grab Taiwan and present it as a fait accompli. And consequently the U.S. would really really *really* like to have visibility into the Chinese military's true intentions, just in case Xi does decide "fuck it, YOLO". They have *much* more important uses for zero-days, is what I'm saying.
But you do bring up a good point that an academic institute in general, and WIV in particular, probably is, or rather was, unusually vulnerable to hacking. Like, "gateway server still running Windows 7" vulnerable. In which case hackers could easily get in using only vulnerabilities that *are* already widely known outside the NSA, which they wouldn't hesitate to use. (Of course, *after* the pandemic happened and everyone started looking at WIV, every WIV server is probably a honeytrap with every spook in China watching it, not because they're complicit in a coverup (if there even was a coverup) but simply because a chance to watch U.S. spooks in action doesn't come along every day.)
I don't really think anything about the CIA thing, because it's so vague. Apparently at some point the CIA convened a team of seven people. Uh, okay. I would venture to guess the CIA had a lot more than seven people looking into a global pandemic, but maybe there was something special about these seven? Then they offered "significant monetary incentives". What does that even mean? Does the CIA not *usually* pay its analysts?
Ultimately the CIA had to make a public statement. And I mean, obviously the content of that statement was going to be decided by CIA leadership, for political reasons. The actual evidence wasn't totally irrelevant, sure. Even if, for example, the CIA had really liked Trump (in fact they hated him) and Trump really wanted to say it was a lab leak and the U.S. hadn't funded the work so everyone expected the U.S. to be able to get a "win" that way...if they had reason to believe it *wasn't* a lab leak, they might have resisted out of worry for long-term career consequences for eventually being publicly proven wrong. The statement the CIA ultimately made was a maximally-vague statement that didn't commit them to any faction. Which was a logical choice for the CIA given almost *any* facts.
The particular nuance of how CIA leadership gets their peons in line just doesn't strike me as especially important. Or rather, it's not important *to me*; I'm totally willing to believe that someone involved violated some federal rule or other. Management is totally allowed to hand out bonuses up to 25% of an employee's salary without needing permission from anyone, and I don't think anyone would be surprised analysts working on an important-somehow COVID team got a bonus that year. There's a currently-active proposal to allow up to 50% (https://federalnewsnetwork.com/pay/2023/11/agencies-would-have-an-easier-time-approving-pay-bonuses-under-opm-proposal). And I doubt a CIA analyst would be too stupid to understand that their bonus, to say nothing of their long-term career prospects, depended on making their bosses happy. But it's possible that someone violated whatever the rules are and/or were at the time.
>Thanks for your detailed response. By 'doing research to make Sars-Cov-2' I meant 'doing the particular project that made that exact virus', not 'doing research in the general area of gain-of-function of coronaviruses'
"If we knew what it was we were doing, it would not be called research, would it?"
- A. Einstein
There's no such thing as "doing research" to make an exact specific virus. There might someday be engineering projects along those lines, but I don't think we're there yet. The most you can do is e.g. try to see if you can make a bat coronavirus into something deadly and highly contagious among humans by deliberately inserting cleavage sites and doing serial passage through increasingly human-like tissue cultures.
If someone does that, which we know Ecohealth and WIV were trying, and a deadly bat-coronavirus highly contagious among humans shows up near their lab, the proper response is NOT, "Nah, that must be a coincidence, we don't have documents showing they were trying to produce that exact specific virus".
Also, those documents were sequestered and/or shredded by the Chinese government. Which doesn't tell us much, because the Chinese government covers things up almost reflexively whether they've done anything wrong or not. But they're quite good at it, and they have the ability to throw anyone in China into an oubliette on a whim if they try to embarrass the Chinese government. So "...but no plucky WIV research assistant has blown the whistle with the documents showing that exact virus!", doesn't tell you very much either.
There’s good arguments for restricting gain of function research *and* scouting out remote caves for unknown viruses *and* hunting wild animals for sale at wet markets.
Yes, I guess this reinforces Scott's point. Whatever the actual origin, we know there is a risk from gain-of-function research, we know that a field researcher could get infected accidentally and we know there are risks of wet markets. Whatever the particular scenario was with Covid, all of these things should be restricted.
Apparently wildlife has been officially banned from wet markets in China since 2003 ( https://en.wikipedia.org/wiki/Wet_markets_in_China ), but now the restrictions are tighter and are actually being enforced. Which seems like a good idea, even if Covid was really a lab leak.
And somehow they have decided to partly ban dog meat but keep going with the bats - it's fine though because they're doing lab tests every 3 months... and of course everyone knows the problem was *Chinese* wet markets so why worry about what's going on in Indonesia?
You can update on individual events, as you discuss, but you can also update on models, along the lines of Popper.
One might have a model that "Gain-of-function research has arisen for reasons of institutional benefit. It has very little benefit for the actual science of virology, and the risk of lab leaks is quite high on a recurring basis." A confirmation that COVID was a lab leak might still produce only a small increase in belief in that model, but the practical import of the increase might be very high.
It could be that the update coming from SBF and OpenAI is not the carefully formulated list of fine-grained beliefs that supposedly point in the opposite directions, but rather an increase in the single belief that "EAs aren't very good at reasoning about or acting on institutions."
Every time rationalists or EAs complain about PR stuff like this, I think of https://xkcd.com/1112/. If you understand the game well enough to critique it, what's stopping you from winning?
As I have argued elsewhere, in a context where you can't trust people there is much to be said for calculations simple enough that the reader can check them for himself, even at the cost of a less sophisticated analysis.
This is annoying in exactly the way you are most frequently annoying…so right in theory, so clueless in practice.
Despite your awareness of exactly this problem, you and so many commenters are STILL being a bunch of stupid quokkas who allow your basic factual background beliefs to be manipulated by psychopaths using the specific tool most effective against YOUR community of quokkas, namely taking advantage of a deep aversion to anything “right-coded” (an aversion which was itself cultivated in you by similar psychopaths ) to get you to not look at places you ought to be looking to get the necessary perspective.
It WAS a lab leak and NO ONE with
1) common sense who is
2) not a motivated reasoner and
3) understands molecular biology at the level of someone with a bachelors degree in the subject
thinks it was of natural origin, because of the three (3) smoking guns “human-infection-optimized furin cleavage site”, “Daszak and Fauci and Baric confirmed lies about funding gain of function research in Wuhan specifically including adding furin cleavage sites”, “Coincidence of Wuhan location”.
The reason it’s IMPORTANT is not anything to do with “updating on the likelihood of pandemics and therefore changing policy on gain of function research”, the reason it is IMPORTANT is the REVELATION that our public health establishment and our government in general are run by psychopaths who would *purposely HINDER response to a pandemic by hiding everything they knew about the germ that caused it*.
If it was a lab leak, why did all the first cases happen around a wet market that was a 30 minute drive away from the lab in question? One of the scientists just really liked going to that market and coughing on people?
If it wasn't a lab leak, why did China do literally everything in their power to prevent a proper and full investigation into the virus, one that would have vindicated the natural-spread claims?
They didn’t. It’s very obvious that the psychopaths in the public health establishment and government (in this case, in China) allowed only counting cases around the wet market, and purposefully hid to their best ability information about cases that happened earlier and elsewhere, including by jailing scientists that tried to report such information for “panic-inducing disinformation” and other transparent crap like that.
You should trust that what the Chinese government and its scientists (which are still under its power) say about Covid is the full truth about as much as you should trust it about what happened during the 1989 Tiananmen Square protests.
The doctor guy who was jailed (and later died) was jailed by the Wuhan local authorities and he was jailed for talking about the outbreak in the market. So that doesn’t work.
Also during the pandemic, they probably didn't want to accuse China of being responsible because they needed China's cooperation and didn't want to endanger the supply of PPE, vaccine vials etc.
The first cases appeared along the railway line linking the WIV with Wuhan Airport. The market was a red herring caused by ascertainment bias: in the early days having been at the market was one of the diagnostic criteria for being classed as having the new disease.
All the market cases were lineage B but as Jesse Bloom observes lineage A arose first. The market cases aren't the primary cases.
What's harder to explain is why it arose in Wuhan when the nearest relatives are ~1500 km away in Yunnan and Laos, both locations where the Wuhan Institute of Virology sampled SARS-related bat coronaviruses. Patrick Berche notes it arose in Wuhan already well adapted to human cells, with low genetic diversity indicating a lack of prior circulation, and with a furin cleavage site never seen before in a sarbecovirus. WIV was part of a proposal to add furin cleavage sites to novel SARS-related bat coronaviruses.
That's my understanding as well, though "raccoon-dog cages" is grasping at straws; AFAIK neither lineage A nor lineage B nor any sort of proto-Covid was ever found in raccoon-dogs in the wild.
There is good evidence for the first superspreader event occurring at the seafood market, and no good way to confirm or deny reports of earlier solitary cases. But that market would have been a prime location for a superspreader event even if it had been selling e.g. jewelry. The question is, if Covid came out of the seafood market, how did it get *in* to the market? Through the back door, in an infected animal, or through the front door, in an infected customer?
If it was a natural zoonotic spillover, why did the first cases occur across town from the big virology lab but a thousand miles away from the habitat of the original host? Both scenarios involve a freaky coincidence, and "lab tech wanted to buy some fresh seafood that day, instead of a hundred other things he could have done after work" is not hugely more coincidental than "wild animal trader sent the infected bats to Wuhan and not one of a hundred other larger or closer cities".
If that were all we knew, I think the evidence would lean toward yes, the wild animal trader did just happen to put his infected bats in the shipment to the distant city with the virology lab. But not as a slam-dunk many-nines certainty; I think 90/10 was about right a priori. Then we started learning other things, that caused rational and well-informed people to make substantial updates.
I'm a bit worried that "It doesn't matter if COVID was a lab leak" is going to be read as "It's not useful to talk about whether COVID was a lab leak".
I agree that the object-level question doesn't matter, but it's useful to talk about WIV doing gain-of-function research in a BSL-2 lab. My priors for lab leaks in general were lower than Scott's because I assumed virology labs doing anything remotely risky would be similar to the BSL-4 lab near where I live. Even if COVID hadn't happened at all I'd still update on the knowledge that people study animal respiratory diseases in a lab that doesn't require respirators.
No, absolutely not. We're already being told we need to be worried about the next pandemic, and the best way to avoid another pandemic is to minimize the chance of another lab leak to as close to zero as practical.
>It’s hard to define “mass shooting” in an intuitive way, but by any standard there have been over a hundred by now. You can just look at the list and count how many were by Your Side vs. The Other Side.
The source you reference is, in my opinion, itself biased in order to push a certain narrative. Per their website, they exclude shootings motivated by armed robbery and gang violence, and as a result end up with a list that retains a left-wing "hard on far right extremism" outlook but ignores support for a right-wing "hard on crime" perspective.
That's technically true, but there is an important qualitative difference between crime-related mass shootings and ideologically-driven mass shootings: who the victims are. If you're in a gang and get shot, even the most compassionate of us struggle to find any tears. But terrorism could happen at your place of worship, your kid's preschool, the mall food court where your teen daughter works.
This is a sub-type of what Scott discusses re: abortion and getting stabbed in an alley both being called 'murder'. The central category for a murder is a stranger kills you, an adult. You don't have to worry about being aborted. You also don't have to worry about gang violence in Chicago.
Which is exactly the problem with typical reporting about mass shootings (possibly not Scott's particular source): the statistics usually include gang violence in Chicago, but are discussed as if Columbine is the central example.
It’s true that including the organized-crime related cases changes some features of the overall statistics. But it doesn’t change the ratio of right-wing ideological terrorism to left-wing ideological terrorism to religious ideological terrorism to eco ideological terrorism or whatever.
That's not how Bayesian updating works. You're not supposed to have a single number for how many lab leaks you expect to happen in a decade (the parameter lambda of the Poisson distribution). If you only had a point estimate, you wouldn't be able to update it at all. You're supposed to have an entire probability distribution that is your credence for the value of lambda, and then update that distribution. Doing the actual maths may require some statistical software, but the point is that a single event can have a very large effect on your posterior if you start with a wide enough prior (which you probably should). There is a big difference between observing 0 lab leak pandemics in 20 years and observing 1 lab leak in 20 years in terms of what your posterior will end up looking like.
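In the conjugate case the maths is actually doable by hand: with a Gamma(α, β) prior on λ, observing k events over t decades gives a Gamma(α + k, β + t) posterior. A minimal sketch with hypothetical prior numbers (nothing here is calibrated to real lab-leak data):

```python
# Conjugate Gamma prior over the Poisson rate lambda (lab-leak pandemics per decade).
# Prior parameters below are illustrative, not estimates.

def posterior_mean(alpha, beta, k, t):
    """Posterior mean of lambda after observing k events over t decades,
    given a Gamma(alpha, beta) prior (rate parameterization)."""
    return (alpha + k) / (beta + t)

wide = (0.5, 1.0)   # a vague prior: mean 0.5 events/decade, lots of spread
t = 2.0             # 20 years = 2 decades of observation

m0 = posterior_mean(*wide, k=0, t=t)   # posterior mean after 0 observed events
m1 = posterior_mean(*wide, k=1, t=t)   # posterior mean after 1 observed event
```

With this wide prior the posterior mean triples when the observed count goes from 0 to 1, which is the qualitative point: a single event moves a vague prior a lot, while the same event barely moves a prior already concentrated around a nonzero rate.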
> There is a big difference between observing 0 lab leak pandemics in 20 years and observing 1 lab leak in 20 years in terms of what your posterior will end up looking like.
This is true, but I think Scott’s point was that we already observed 20 lab leaks in 20 years, and observing the 21st should not change your posterior about “how frequent lab leaks are” by much.
(Reposted from Reddit): The Bayesian model which Scott adopts is generally the correct one. However, there is an implicit assumption that the occurrence of an event does not change the distribution of future events. If that happens, then a larger update is required.
For example, the US has always had mass shootings, even prior to the Columbine shooting. Yet the US has seen a gradually escalating rate of mass shootings since 1999 ([see the chart entitled "How has the number of mass shootings in the U.S. changed over time?"](https://www.pewresearch.org/short-reads/2023/04/26/what-the-data-says-about-gun-deaths-in-the-u-s/)). This chart is not adjusted for population size, but clearly the growth in mass shootings exceeds the growth in population.
The reason is the copycat effect. There are always psychotic potential murderers out there, but Columbine set out a roadmap for those murderers to go from daydreaming about killing their classmates to actually doing so. So a rationalist ought to update their model because Columbine itself changed the probability distribution.
Another example where an event changes the likelihood of future events is where the event has such emotional salience that strong countermeasures are enacted. To take the gun example, after Australia's Port Arthur massacre, the government introduced strong and effective gun control. Thereafter, the rate of mass shootings plummeted.
The same applies to 9/11. Prior to 9/11, there was always the chance of terrorist attacks. After all, Al Qaeda had previously bombed the WTC. But Osama inspired other jihadis to attack westerners in the west. It made future attacks more likely. But the world also enacted strong airline security measures, so the likelihood of 9/11-style airline attacks decreased. But it's harder to stop e.g. train bombings, so it shifted the likelihood of terrorist attacks away from planes to trains. Hence the London tube bombings. So a rationalist ought to have updated their priors to think that planes are safer and that other public areas are less safe.
PS: I really don't want to get into a debate about gun control. The Australian solution won't work in the US, but clearly it affected the Australian probability distribution. Please constrain your arguments to whether Columbine changed the statistical likelihood of a mass shooting.
Right. There is a difference between events appearing in nature, e.g. volcanic eruptions, and man-made events. The frequency of the latter may change, depending on the copycat effect, including how would-be copycats interpret how other people react to the events.
It’s not totally clear whether this is a copycat event following from Columbine, or whether Columbine itself was a symptom of some underlying change that made people more likely to act out in this way. I’ve seen suggestions that “serial killers” were much more common in the 1970s than they are today, and that some of the rise in mass shootings might just be a result of something leading would-be serial killers to become mass shooters instead (or to put it equivalently, something in the 1970s that led would-be mass shooters to become serial killers instead).
Pre-Columbine, there was definitely a recognized trend of mass shootings, but we called it "going postal" because the modal archetype was a frustrated postal worker shooting up his workplace (which IIRC did actually happen multiple times).
The bit where mass shootings started frequently happening at schools, definitely seems to have been a copycat effect following Columbine. Possibly also the mass shootings occurring at places that are neither schools nor tedious dehumanizing workplaces, but that's less clear.
Might add that information availability and existence of opinions are potential missing factors. It's not that a single, hypothetical, highly-informed Bayesian is marginally updating existing beliefs; it's that people who hadn't thought about an issue at all (priors NaN) are suddenly forming beliefs at the same time. I suppose one could argue that everyone has an implicit, very weak prior about everything, but I don't really buy that. But if we suppose that's true, weak priors update a lot more dramatically than strong ones.
Re: lab leak: I don't think many people had any idea that there was an active area of research built around deliberately making dangerous viruses more dangerous to humans. Let alone that the field has minimal practical value (I haven't seen a good defense of it yet, particularly relative to the risks). Or that people involved were operating out of a lab in China with shoddy safety procedures, with some US government funding.
Awareness that such a risk is out there rationally should cause a dramatic updating of beliefs; far more than the incremental updating in your example. Colloquially, from "scientists probably study diseases, government funds it, that's reasonable enough I guess" to "wait, they do what now?".
To some extent that falls under the coordination-problems and stupid-people buckets, but I think "stupid" is unfair. There are a lot of things in the world, and most people (including smart people) don't have opinions, let alone subjective probabilities, about most of them.
Gain-of-function research was legally restricted in the US (and Richard Ebright played a role in getting those regulations in place). That's part of why funding was being funneled into Wuhan, which wasn't so much a legal loophole as a way to avoid notice of skirting the law.
I may be missing the point of the article (which I largely agree with!), but... if it was a lab leak, knowing what caused the leak could be very important. Lab leaks are relatively rare and I assume the folks who run these labs try very hard to avoid leaks. Knowing how a leak occurred would be useful information that could help make future leaks less likely. Likewise 9/11 probably shouldn’t change your estimation of how likely a significant terrorist attack is, but in a very short time frame passengers (apparently) learned that being passive in the face of a hijacking is not the ideal response and it led to locked cockpit doors. Both responses probably should reduce your estimation of the probability of an airliner being flown into a building or populated area again. (The less said about TSA and security theater the better). Overall I agree that dramatic events shouldn’t necessarily cause you to dramatically update your priors, but that shouldn’t mean the truth doesn’t matter and that we can’t learn from dramatic events.
Would you agree with the summary that in both cases "The point isn't *that* it happened, the point is *how* it happened"?
In particular, the passengers of United Flight 93 did not update their probabilities on whether they would be hijacked based on the news that two other planes had been hijacked. (After all, they already knew that their plane had been hijacked: the probability was already 100%.) They had previously chosen to do nothing about this, based on their prior knowledge about hijackings. After they learned about what happened to the two previous planes, they adjusted their probability distribution on exactly how hijackings go. They adjusted their expectations quite suddenly and dramatically, in fact, and they took radical action in response to their revised expectations. Their response strikes me as obviously correct. If the news media had virtuously relegated the attacks to tenth-page news, then the passengers on United Flight 93 would not have changed their behavior in response.
We know that this kind of information matters, because it did in fact matter, literally the same day.
> Knowing how a leak occurred would be useful information that could help make future leaks less likely.
"Lab leaks are relatively rare and I assume the folks who run these labs try very hard to avoid leaks. "
This appears to be disturbingly false. Presumably *some* labs are well and conscientiously run, but not all of them and probably not even most of them, so we can expect to be getting many lab leaks of *something* every year. Which makes it rather important to understand what the labs are working with in the first place.
But it looks like you're right on that particular data point, since it looks a lot more like a vaccine trial gone wrong.
Regardless, most of my probability mass is from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9274012/ (71 events in 40-ish years), as well as anecdotal reports about how lax biosecurity must have been for those lab leaks to happen.
I hear you, but individual exposure is a far cry from global pandemic. Definitely something to worry about but I still have to point to "zero known instances of this happening in 70+ years of pathogen research" and reiterate my original point.
I've definitely always thought of 9-11 as "the day the terrorists won." Not only did we waste hundreds of billions of dollars, and thousands of soldiers' lives over it, we instituted flat-out stupid "security" measures that are purely theater.
The increased waits at airports over the ensuing 22 years have almost certainly cost more American lives (in US life-hours wasted) than 9-11 itself! My back-of-envelope estimate has 240k US lives lost to security theater, vs 3k in 9-11 itself. The terrorists *really* won on 9-11, authoritatively. The biggest self-own we've ever had.
Amen! I really wish somebody would "sunset" the TSA, because there's multiple proven TSA failures any time they're tested with intentional agency-led red teaming, and massive negative impact to US hours / lives wasted.
But no, bureaucratic inertia is apparently infinite, and never to be challenged by any candidate ever.
What’s remarkable is the lack of technological change to speed things up after 20 years. This could be something that image recognising AI might help with in future though.
Even if it's correct to say the US lost, the terrorists didn't win: they didn't come closer to achieving their goal, whether it was to get the US to end involvement in the Middle East (even with all the war-weariness, it's still more involved than before 9/11), or to provoke the US into wars that inspire the residents of Muslim countries to rise up against US-aligned governments, ending up with most of the Muslim world under Islamist rule (it did drag the US into wars, which made some Muslims angrier at it than before, but the rest didn't follow).
As for your calculation, with ~750 million air passengers a year spending 20 minutes more at airports, I get ~8000 lifetimes, equivalent to twice as many terrorism deaths if the average terrorism victim dies half-way through what would otherwise be his life, and assuming that wasted time is equivalent to time spent dead.
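For what it's worth, this comment's arithmetic reproduces in a few lines (every input below is the commenter's stated assumption, not measured data):

```python
# Replicating the parent comment's back-of-envelope estimate.
# Assumptions (the commenter's, not facts): 750M passengers/year,
# 20 extra minutes per passenger, 22 years since 9/11, ~78-year lifespan.
passengers_per_year = 750e6
extra_hours_each = 20 / 60
years = 22

wasted_hours = passengers_per_year * extra_hours_each * years  # total hours lost
lifetime_hours = 78 * 365.25 * 24                              # hours in one lifetime
lifetimes = wasted_hours / lifetime_hours                      # ~8,000 lifetimes
```

The ~8000-lifetime figure here, like the parent comment's, treats an hour waited in line as fully equivalent to an hour of life lost, which is the most contestable assumption in the whole calculation.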
Thank you for the separate calculation, I had an error in my math. I'm still getting 40-60k US lives wasted by security theater. It didn't paste very well below, but if I copy and paste the text below into a google sheet document, it pastes flawlessly and you can infer the simple spreadsheet calculation I used myself.
The biggest difference I immediately see is I assume a 1 hour delay, which seems *conservative* if anything to me - before 9-11 you could literally arrive at the terminal a half hour before your flight, and now every official source recommends you get to the airport either 2 or 3 hours before your flight to accommodate security theater's time wasting.
Annual passengers | Average flights per passenger | Delay in transit due to security theater (hours) | Years since 9-11 | Hours wasted annually | Hours wasted in TSA waits | Avg awake hours in a year | Average awake years wasted equivalent | Avg US lifespan | Lifetimes equivalent
And don't forget the TSA themselves, literally wasting their lives in a purely net-negative endeavor that both they AND their victims hate! Although that has *only* wasted an additional ~250 US lifetimes.
I don't know why people sometimes talk as if airport security didn't exist prior to 9/11. It did exist and wasn't fundamentally different -- you'd get your bag x-rayed and you'd go through a metal detector -- only the details have changed.
Airport security was introduced in the early 1970s, for a damn good reason -- in the period 1968 to 1972 there was more than one aircraft hijacking per week. In 1969 alone there were eighty-six aircraft hijacked, which is more than we've had in the last 25 years put together.
I don't know, I definitely see a difference between a half hour and 2-3 hours of average waits. It's a difference of 4-6x, and beyond the personal inconvenience, that's quite significant multiplied over 750M emplanements a year.
While I've definitely seen some stupid security lines in the US, I've never seen 2-3 hours.
But anyway the problem isn't so much the procedures as the implementation. Countries that aren't the US can do the exact same procedure efficiently, meaning I rarely have to wait more than a few minutes to pass through security in any other country.
It's just some specific features of the way the procedure is implemented in the US that makes it especially painful; possibly a deliberate go-slow by the TSA, who apply the usual US public sector logic of sucking as badly as possible so that they can get more funding.
Yeah, I'm totally with you - I actually enjoy flying most international airlines (ANA, JAL, Korean Air, Singapore, Emirates, or Virgin especially), because not only is the actual flying experience much better, they have their shit together in the terminals too, and don't make you do ridiculous things like taking off belts and shoes, or explicitly remove your laptop so it can be all by itself.
I even have Global Entry and TSA Pre and Clear and all that stuff, and I STILL have to take my shoes and belt off half the time domestically, we're literal savages here.
Why god, why can't we ever improve, reduce, or eliminate pointless bureaucracies?? Every road to improvement just seems impossible in the US.
Does the average air traveler really arrive at the airport more than 90 minutes before the flight? I think there are a lot of people who fly once every few years, and arrive this early. But the people who fly every week are usually calculating whether they can arrive 55 minutes before the flight or 45, and this group makes up the majority of emplanements.
That was actually one of the daily trivia questions for my staff, who are frequent (though not weekly) business travellers. Almost all of us were in the 60-90 minutes category, because an extra half hour at the airport sucks *way way* less than missing your flight by ten minutes.
But my staff is a team of professional experts in risk mitigation, so maybe not representative of typical travelers.
Most of the remaining difference comes from these:
- I assumed the 750 million "emplanements" to already be the number of flights, and the number of TSA checks (actually the latter may be slightly less due to airside transfers)
- I assumed 20 minutes wasted by a security check, you assumed 1 hour
- You calculated with awake lifetime, I didn't take that into account and calculated with total lifetime
Makes sense, thanks for the triangulation. I was doing the 1.7 because of this definition:
An "enplanement" in the context of air travel is defined as a passenger boarding an aircraft. It is a term commonly used in the aviation industry to quantify the number of passengers flying from an airport. Each time a passenger boards an aircraft, regardless of the purpose of the flight, it counts as one enplanement. This term is distinct from a flight, which refers to the aircraft's journey from one point to another, and it's also separate from passenger count, which could include both arrivals and departures.
On awake lifetime, I thought it was fair - they're definitely "awake" hours being wasted, and awake hours are the important ones, the literal stuff of living and experiencing.
And on 20min vs 1 hour, I still think even 1 hour might be conservative, but taking an average of 1.5 total (1 delta with TSA) with official guidance at 2-3 hours made sense to me - I couldn't find any quantified before and after numbers, unfortunately.
It's taking the average per passenger v per emplanement - ie passengers more often board twice on a given journey vs one way flights, and a given journey will involve that same person going through security 1.7 times on average, ie the time in security theater is not just wasted once at the *passenger* level, it's wasted ~1.7 times.
Now that you've brought that up, it might be double counting - the total emplanements figure should capture both ends of US-US flights, but not both ends of international flights, which will involve customs coming back (which does have heightened TSA security theater / times post 9-11).
Searching for international v domestic splits, I'm seeing at the Bureau of Transportation Statistics that we actually have 1.1B passengers across domestic and international, with an 800 / 200 split roughly. That's passengers rather than emplanements, though.
I'm not sure how this would cleanly translate, but playing with various combos of the numbers (1.25 vs 1.7, 1.1B passengers at 1 v 1.7, 1.1B passengers with average 1.7 security stops) I'm getting between 45k and 90k total lifetimes wasted. Still all 10x 9-11 (on the high end, 30x), and a huge net-negative waste either way.
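The combos described above can be swept parametrically; a hedged sketch, where every input (1.1B passengers, 1.7 security stops, 1 hour of delay, awake hours only) is an assumption from this thread rather than an official statistic:

```python
# Parameter sweep over the thread's assumptions about TSA time costs.
def lifetimes_wasted(passengers, checks_per_passenger, delay_hours,
                     years=22, awake_hours_per_year=16 * 365.25, lifespan=78):
    """Total awake-lifetimes lost to security delays under the given assumptions."""
    wasted = passengers * checks_per_passenger * delay_hours * years
    return wasted / (awake_hours_per_year * lifespan)

low = lifetimes_wasted(1.1e9, 1.0, 1.0)    # one check per passenger: ~53k
high = lifetimes_wasted(1.1e9, 1.7, 1.0)   # 1.7 checks per passenger: ~90k
```

Whichever combination is used, the result lands in the tens of thousands of lifetimes, consistent with the 45k-90k range quoted above.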
Huh from the wind-up (especially the point on hyperstitional cascades) I really thought this would end with either a re-affirmation on how you would be sticking to the Substack platform or a gesture towards looking at an alternate.
I agree, but less because of the nazis, more because this site is horrible to navigate. I have no idea why it takes so long to load, there must be a huge amount of javascript cruft.
One interesting dramatic revelation that DIDN'T result in any updates, debates, or changes at all: for years, computer geeks had been assuming / jokingly saying that the NSA was spying on them and everyone on the internet. They were called crazy, paranoid, and unstable for years. If the government were spying on all of us, we wouldn't take it as a society! The tree of liberty must be watered, etc...
It was of course, true, and it came out in public in a big way ten years ago. And the result?? Absolutely nothing. Not a peep. Nobody cared that every single thought, text, email, search query, the slightest dribble of text or ascii, all monitored, all stored forever in your Permanent Record (or whatever the NSA calls it).
And anyone who said they'd water the tree of liberty? Not a peep, nary a protest, nary a change.org campaign, not even any sternly worded emails or entirely cosmetic changes in Cabinet officials or administrators. We're all apparently happy about it, zero updating any direction. Why is that?
I had the opposite reaction, I was amazed by the number of people who updated their world view significantly on the revelation that the NSA spies on people.
Like, of course the NSA spies on people! What the heck did you think the NSA does?
Really? What were some of the specific actions they took based on the world view update? Because almost nobody I knew did *anything* (one person began taking the battery out of their smartphone regularly, but that's pretty much it, and who knows how long they kept that up).
And presumably, they thought the NSA spies *on the rest of the world, in the service of US interests and US citizens* rather than on *literally every single person in the US, for their own nefarious purposes.* I mean, there IS a difference between those two.
...What the hell are you supposed to do? Taking measures to not get spied actually makes you more suspicious, as this xkcd brilliantly illustrates: https://xkcd.com/1105/ The best thing you can do is simply not do anything to bring attention to yourself. If you are going to do something reckless, having a fake, benign internet presence to mask your actual anonymous presence would be the best idea, but frankly you should be doing that anyways because employers do background checks via social media.
Whatever they did in all these other cases of overreaction - presumably writing their Congress critters, protesting, doing change.org petitions, trying to overrun the state Capitol, whatever. SOMETHING. Of which, they did literally nothing, which I always found funny / horrifying.
America, the bastion of the supposedly "free," the ones so proud of their supposed rights to free speech and their rights to own 700 guns per person (to prevent *tyranny,* you see) - but no, the instant we learn the government is literally panopticonning every US citizen, a tyrant stomping on all our faces and internal thoughts forever, we all just quietly rolled over instead of doing literally ANYTHING, as we DID do in every one of these other cases Scott mentions. If ever a case *warranted* overreaction, this was it, in my mind. But instead, not a peep. It really made me update my priors on how much of the media might be literal and explicit propaganda channels.
Your advice is the archetypical case! Instead of saying "well, we shouldn't have to take this, let's vote the bums out, get our voices heard, it's clearly unacceptable for the government to literally be spying on everything we do every second of the day," the advice is instead "oh, just try not to bring attention to yourself (THEY HEAR EVERYTHING YOU KNOW)."
I was never more happy to be an expat when all the PRISM stuff came out (and yes, I know I'm being spied on by the NSA no matter what country I'm in).
On this note, I'm actually genuinely surprised Trump (or some similarly ethically-challenged "outside the inner circle" politician with high-level permissions) hasn't used whatever kompromat the NSA has on all his rivals / enemies to try to cut them down / nullify any challenges he might have.
He / they might have and it's just under the radar, I guess. But even if he hasn't, the ability is now THERE, for literally any sufficiently unethical politician with access. Are we happy about that? Apparently. Team America, hell yeah! :-D
Regarding the Effective Altruism movement; I don't agree that it's that hard to draw more specific lessons from the two disasters you mentioned. However, even granting that it is, by analogy to the regular school shootings, terrorist attacks, or cases of sexual harassment, it seems that one should update toward expecting regular disasters of a similar magnitude.
Perhaps you already had a distribution that assigned somewhere in the range of a 10-90% chance of this level of disaster occurring at this frequency; but I did not.
Dramatic events are about as un-Bayesian as it's possible to be, though. Usually if an event is a big one-off, we didn't see it coming, or had very little idea about its likelihood and no real way to estimate a prior ahead of time.
Lab leaks (as distinct from pandemics generally) would be an example of this. No lab leak had led to a global pandemic before, and any attempt to predict their likelihood is full of unknown unknowns that mix notoriously poorly with Bayesian thinking. Once you've observed the first instance of an event, I'd say you should update massively, if nothing else, on the fact that the event can plausibly happen.
With most dramatic events, I'd say nobody is truly thinking about the event at all beforehand. They're implicitly in the process of deciding whether the event is worth thinking about, which is more like deciding between competing models of the world than updating a prior within one.
So, tautologically, the only time you shouldn't update from events is when you think the relevant events were already predictable, and you were able to assign well-founded priors to them ahead of time.
> You can think of this as a common knowledge problem. Everyone knew that there were sexual abusers in Hollywood. Maybe everyone even knew that everyone knew this. But everyone didn’t know that everyone knew that everyone knew […] that everyone knew, until the Weinstein allegations made it common knowledge.
But this is obviously false. The infinite descent of public knowledge was clearly established; popular culture was full of acknowledgments of the phenomenon. The musical The Producers includes the lyric "I want to be a producer / with a great big casting couch"!
Are you suggesting that, in the absence of a convincing explanation for some phenomenon, even an explanation which is known to be false should be accepted as correct?
No matter what explanation I gave, it would be no worse than yours. It isn't possible to do worse than yours.
Maybe it's a co-ordination problem. "Trading movie parts for sexual favours is a common practice" is a bit vague, and even if people agree that vague bad things are bad, it's hard for enough people to work up enough passion to stop them. Conversely, "This producer made this actress have sex with him in return for getting her a part in this movie" is specific; people can easily picture it, and even imagine it happening to them or their loved ones. It's much easier to get people worked up about specific things like this; it's the same reason why charity ads almost always include a specific example of someone suffering from whatever it is they want to fix, instead of just saying "This is a problem, help us fix it."
The explanation of #MeToo? Feminism existed before it and after it, don't think many people changed their minds.
Whereas gain-of-function research was known to very few before COVID, it really did lead people to update in the direction of "this s*** is messed up."
MeToo had 3 specific ideas (I'm not endorsing these points, just stating what I took away as the initial goals of the movement):
1. These specific people did bad things at varying levels of criminality (Harvey Weinstein, Louis CK, Bill Cosby). Because people have trouble separating art from artists, they have an irrationally high bar for changing their opinions of famous people.
2. Bad things were done to lots of people, so you shouldn't feel ashamed; instead, speak out to hold your abusers to account.
3. Practices that you thought were ok aren't and you should update to consider more things harassment.
Power laws can be expected sometimes, but can also be surprising in some other contexts.
Here is a conceptual toy example: small-scale terrorist attacks that require only a handful of people to execute are much "easier" than large-scale operations, which require a larger organization and more resources. However, once an organization has been founded and organized, it has a steady income, permanent resources, and recruitment. Why wouldn't it be able to carry out large-scale attacks at a constant rate, instead of attack severity following a power law? The power law is less surprising once one notes that terrorist organizations are opposed. Every moment a prospective organization spends preparing an attack and building a larger network gives the authorities more opportunities to stop it. Additionally, after a successful attack, the organization is often disrupted and the security apparatus steps up its threat model -> further action of similar scale requires either gathering a similar amount of resources again or coming up with a novel attack the enforcement is not prepared for (another form of gathering resources).
Coincidentally, this is more or less the toy model proposed by Clauset, A., Young, M., & Gleditsch, K. S. (2007), pages 76-78 (though they conceptualize it as "planning time" rather than "gathering resources"). Their final model is x^(-alpha), where the exact shape of the power law depends on the constant alpha = 1 - kappa/lambda, where kappa/lambda reflects the relation between the filtering effect (of state action) during planning and the increased severity of the attack due to planning.
Point being, in a competitive environment each novel "attack" must be matched by increases in "defense", or the kappa/lambda ratio changes (and so the distribution). Seems plausible that the dramatic reactions to dramatic events would be driver for this.
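The mechanism described above (severity grows with planning time, while filtering kills plots at a constant hazard) can be sketched in a few lines. This is a generic exponential-growth-versus-exponential-stopping toy with made-up parameter names `lam` and `kappa`, not the paper's exact model:

```python
import math
import random

random.seed(0)
lam = 2.0    # hazard of a plot being stopped per unit planning time
kappa = 1.0  # exponential growth rate of severity with planning time

# Each plot survives state filtering for an Exponential(lam) planning time,
# and its severity grows exponentially with that time.
severities = [math.exp(kappa * random.expovariate(lam))
              for _ in range(200_000)]

# Exponential growth raced against exponential stopping yields a power-law
# tail: P(severity > x) = x**(-lam/kappa).
x = 10.0
empirical = sum(s > x for s in severities) / len(severities)
print(empirical, x ** (-lam / kappa))  # both ≈ 0.01
```

Raising the filtering hazard `lam` relative to the growth rate `kappa` steepens the tail, which matches the point that stronger defensive reactions change the shape of the distribution.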
Tried really hard to work "the Schelling has stopped" into other comment, but it'd be an appropriate news headline for such an act. Or maybe New Study Confirms Lead-Crime Correlation. Probably it'd also happen in the Bay State. But the article would be kept appropriately sparse, since Everybody Knows about infohazards here. Just a bulleted list, maybe.
[Paranoid disclaimer about morbid humour as mere coping mechanism for tragic coordination failures, as many ill-advised attempts at levity are]
The thing about mass shootings (using the Mother Jones definition, and not some of the broader "four or more shot, regardless of deaths" definitions) is that they're actually pretty close to a representative racial sample of the country, even though they're seemingly always portrayed in the media (and even in this post) as "crazy white guy" or maybe "Muslim terrorist."
Out of 149 shootings on the list, there's 10 Asian, 26 black, 12 Latino, 3 Native American, 80 white, and 18 "other," "unclear," or with no race listed. At least 6 of that last group (and two of the white people) have some sort of name connected to an Arabic/Muslim-majority country.
(What really baffles me is that the media, as much as they love to talk about white male mass shooters, never fixated anywhere near as much as I thought they would've on the 2022 Buffalo shooting, perhaps the most obviously anti-black racist mass murder on the list. Why is George Floyd a household name, but no one can name any of the ten victims in Buffalo?)
> Why is George Floyd a household name, but no one can name any of the ten victims in Buffalo?
Sounds like the kind of thing that Scott wrote about in The Toxoplasma of Rage -- since nobody has anything to say in defence of the Buffalo shooter (Payton Gendron, I had to look up the name and I don't think I've ever even heard it before) there's no agitation in the news cycle, everyone just agrees that this is a terrible thing done by a terrible person and then they forget about it.
A more cynical explanation is that there were no riots in May 2022 because the powers that be didn't want there to be riots in May 2022; George Floyd was part of a very deliberate campaign to stoke up racial tensions in advance of the 2020 election.
You may recall there was a string of heavily reported dumb racial-conflict incidents in the weeks leading up to the George Floyd thing. Two weeks earlier it was the "jogger" who got shot by neighbourhood watch. One week earlier it was an argument about an off-leash dog in Central Park which got breathlessly reported across the world.
>Two weeks earlier it was the "jogger" who got shot by neighbourhood watch.
I thought it was pretty clearly established that Arbery was, indeed, just jogging, so there's little need to put that in quotes except if one is consciously trying to present the shooting as justified.
In addition to the other notable conflicts before the Floyd case, it should also be obvious that it became a huge thing because there happened to be a highly evocative and memorable photo of Chauvin's knee on Floyd's neck taken and presented in the news. Sometimes it just goes like that.
The other big thing about it was that it happened at the end of the serious lockdown period (start of good weather) so people were still free to go to protests and looking for opportunities to get out. By 2022 people were back to normal so it's a much higher opportunity cost to go to a protest.
>Why is George Floyd a household name, but no one can name any of the ten victims in Buffalo?
Related to the Toxoplasma of Rage explanation, it's easier to canonize one person rather than ten. Easier to focus on, lots of controversy involved, easier to remember one name. I suspect most people naturally have Great Man tendencies regarding history, and it's much easier for them to sympathize/demonize individuals rather than systems.
Modern progressives try to focus on systems, but ultimately they often end up with Floyd-like avatars acting as synecdoches for those systemic problems. Easy to pin Floyd on the widespread problems of police; despite the popularity of demonizing young white males as neofascists, it's still not as easy to smear the whole population of them like it is with police.
Mostly unrelated to Toxoplasma, Floyd became a cause celebre due to the lockdowns and vast amounts of people looking for any excuse to disrupt the pandemic news cycle and the pandemic boredom. Absent that tension, would it have exploded like it did? I highly doubt it; maybe isolated protests like after Trayvon Martin, but not the widespread ones.
Likely also a "hangover" or backlash- everyone was so burnt-out by the time of the Buffalo shooting there was no energy left.
This misses an important challenge with extreme events: they may, or may not, be governed by the mechanisms that control the more mundane events. Observational evidence of extreme events updates *on the physical limits* of the system.
It is notoriously sketchy to extrapolate power laws to predict the tail of the distribution. Power laws work until they don't (if things really followed a power law out to infinity, you'd get absurdities like infinite expected sizes).
Extreme event are likely probing the edge of the power law, in this sense they are a rare bit of information to inform what actually governs the tail.
I’ll give an example (one that I am quite familiar with): what is the biggest possible earthquake?
I could fit a power law to the small events- this turns out to fail quite dramatically in many cases, for many reasons that I won’t get into.
I could figure out what limits the size of the earthquake. Ok, but this is not very empirical. And note that this does not depend on the things that control small earthquakes. This is sometimes used for building codes.
I could update A LOT on the largest earthquakes that we have in longer term records. This is a decent approach that is used for building codes.
The key here: even though we know of ~10^6 earthquakes in California (for the most part tiny), a new large earthquake is NOT a 10^6 + 1 update.
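A toy illustration of that asymmetry, using hypothetical Gutenberg-Richter numbers rather than real California data: with 10^5 simulated small quakes, the slope of the magnitude distribution is pinned down very tightly, while any estimate of the physical cutoff leans on the one or two largest observations.

```python
import math
import random

random.seed(1)
b_true, m_max = 1.0, 8.0  # made-up Gutenberg-Richter slope and physical cutoff

def sample_magnitudes(n, m_min=2.0):
    # Gutenberg-Richter: magnitudes above m_min are exponential with
    # rate b*ln(10), truncated at the (unknown-to-the-fitter) cutoff m_max.
    rate = b_true * math.log(10)
    mags = []
    while len(mags) < n:
        m = m_min + random.expovariate(rate)
        if m <= m_max:
            mags.append(m)
    return mags

mags = sample_magnitudes(100_000)

# The slope is pinned down by the mass of small events (rate MLE)...
b_hat = 1.0 / (math.log(10) * (sum(mags) / len(mags) - 2.0))

# ...but the cutoff estimate is essentially just the largest observation,
# so a single new record-size quake moves it by a full magnitude unit.
print(round(b_hat, 3), round(max(mags), 2))
```

So the 10^5 small events make the fitted slope accurate to a couple of decimal places, while the maximum observed magnitude bounces around by half a unit or more between runs: the boundary really is a single-digit-observation quantity.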
To bring this back to some of the examples in the text: in a world without nukes or bioweapons, I would update A LOT if I learned about a terrorist attack that killed an entire city. This is because my model of the world placed physical limits on the scale of terrorism. New extreme evidence significantly changes my model of those limits.
I think this is a special case of "if you previously assigned near zero probability to something, you should make a very large update when it happens".
I don't think we necessarily disagree. 'What is the shape of the distribution in the main part?' and 'where does the distribution end?' are sometimes (non-obviously) separate questions. It's worth being careful when deciding which of those two questions prior observations inform. My 10^6 observations make me very confident in my general shape of the distribution, but I need to not forget that I only have single-digit observations on its boundaries and should update those more easily.
When people say “THIS CHANGES EVERYTHING!”, the charitable reading is: “Are there mechanisms that could limit such a dramatic event from happening (e.g. laws, social norms, economic pressure, coordination challenges, etc.), and does this new extreme event significantly update our understanding of those mechanisms?”
> A few months ago, there was a mass shooting by a far-left transgender person who apparently had a grudge against a Christian school.
I would have put extreme odds that this was someone assigned male at birth. I know that lesbians commit domestic violence at relatively high rates, but I heard in my head JK Rowling saying "these are not our crimes". This post has many extremely valid points - among many, "when you accept risk, don't overreact to risk paying off" - but I'm still viscerally shocked by losing so many internal Bayes points.
Well, you know, JK Rowling has admitted that her critiques come from a place of trauma, so she's not exactly coming to it from a logical and rational basis.
We know that some fires start by arson and some start by other means.
Let's assume prior odds there's a 20% chance per decade of a deadly fire being set by firefighters who are secretly arsonists.
The cause of one very famous fire is disputed, but even if it were caused by arsonist firefighters, that would only update our priors to 27% odds.
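For what it's worth, one hypothetical prior that reproduces numbers like these is Beta(2, 8) over the per-decade chance: its mean is 20%, and conditioning on one confirmed firefighter-arson fire in one observed decade moves the mean to 3/11 ≈ 27%. A minimal sketch (the choice of Beta(2, 8) is my assumption, picked to match the stated numbers):

```python
from fractions import Fraction

# A Beta(2, 8) prior over the per-decade chance has mean 2/10 = 20%,
# encoding "20% per decade" with modest confidence.
a, b = Fraction(2), Fraction(8)
prior_mean = a / (a + b)

# Condition on the famous fire being firefighter arson: one observed
# decade with one "success", so the posterior is Beta(a + 1, b).
posterior_mean = (a + 1) / (a + 1 + b)
print(prior_mean, float(posterior_mean))  # 1/5 and ~0.27
```

A prior with more pseudo-observations (say Beta(20, 80)) would barely move on the same evidence, so the 20%-to-27% jump implicitly assumes a fairly uncertain prior.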
Either way, we should have heavy restrictions on firefighters, because 20% and 27% are both alarmingly large numbers. Because we don't know which firefighters are arsonists, and they refuse to all confess to the alarming criminality among their profession, we should call many of them before congress to testify.
And, since non-arson fires are too boring to talk about, let's pretend those don't exist and make absolutely zero changes to reduce the rate. After all, hiring more firefighters would probably just cause more arson, right?
That argument should sound absurd, but it's roughly where the public conversation is, with regards to viruses and virology.
So, what's wrong with your argument?
Well, your prior odds might be reasonable. There is perhaps one research-related pandemic in history, the 1977 flu, which some people think could be the result of some kind of vaccine trial. It's not proven, and no one even knows which lab would be to blame, but let's just assume it's real, and that's 1 in 50 years. That's 2% odds per year, or roughly 20% per decade.
Okay, but there was nothing exceptionally bad about the 1977 flu. The number of people that died was about the same as every other flu year. So in 50 years, 49 of the flus were natural and one was possibly research related. The natural ones are 98% of the problem.
And during the same time, nature also brought us HIV, Ebola, several novel coronaviruses, and lots more diseases. So the natural diseases are well over 99% of the problem. Putting some 50% reduction in the risk of natural viruses would have much higher impact than improving lab safety by 50%.
Even if the virologists were like the firefighters above (and some are arsonists), you'd still have a net positive effect from hiring more virologists, just as you'd have a net positive effect from hiring more firefighters.
For some reason, people keep making this mistake, again and again.
With vaccines, we focus on the small rate of side effects, not the large rate at which they save lives.
With police officers, we focus on the small amount of police brutality, and not the large extent to which policing save lives.
If virologists have a 2% chance per year of making a flu no worse than the average flu, then focusing on the labs and not nature is a waste of time.
I suppose covid could be different, if we're talking about a chance of labs making something worse than typically comes from nature. And in that case, perhaps you're going to have to come up with different priors -- you can't use 1977 anymore. There has never been a gain of function pandemic in history, so it's hard to know what the prior odds are.
In practice, I'm not sure this matters. Covid is simply not a lab leak. The congressional hearings came up with no evidence. Rootclaim came up with no evidence. Covid came from nature, with at least 99% certainty. The next pandemic will almost certainly come from nature, as well.
And our society's obsession with a false lab leak theory will only make it more likely that we are unprepared for that next pandemic, because we've focused on a hypothetical 2% per year risk of a future lab accident, but have done very little to reduce the much higher annual risk of a natural pandemic. We've started cancelling viral surveillance programs because of the popularity of the lab leak theory and we've lost good communication with scientists in China.
It's not even clear that we've done anything to reduce lab risks -- if you're worried about lab safety in China, we now have even less transparency than ever as to what's happening in Chinese labs.
And if you consider the (low but real) annual risk of biowarfare, or warfare in general, having the world's two largest powers blame each other for creating covid certainly doesn't lower those risks (American people think it started in a Chinese lab, but Chinese people think covid started in an American lab).
AFAICT your argument is mostly that it matters how high you think the risk of GoF lab leaks is, and not that figuring out whether or not COVID was a lab leak is super important, which seems like it's not really disagreeing with Scott.
Regarding the point you're making: I think it would be a big mistake to stop hiring virologists, but the thing I see is a call for defunding / banning gain of function research, which I take to be a small subset of virology that isn't obviously helpful: in hindsight, if DEFUSE was funded and had happened in like 2015 or something, would that have helped COVID much? My impression is no.
Since there's never been a gain of function lab leak, the error bars are very wide, and figuring out if Covid is one could change the odds dramatically. That makes the two things inseparable.
The result of the lab leak theory has not just been to defund GoF research. We've also cancelled other things like viral surveillance in nature. Here's an article on one project that got cancelled because of the changing politics:
Sampling of bat viruses in caves is extremely safe, relative to gain of function research. It's hard to even culture the samples, let alone infect yourself with them. The WIV collected 20,000 samples and only successfully cultured 3 sarbecoviruses.
We've also harmed collaboration with China, raised political tension with China, and harassed the western virologists investigating covid and other diseases.
Do I think that exact work proposed in DEFUSE would have prevented this pandemic?
No.
But I can think of many ways in which listening to virologists and other intelligent people would have helped. Let me list a few.
First, I'm reminded of Bill Gates' TED Talk from 2015, arguing that we weren't ready for the next pandemic.
No one really listened to him. Instead of looking back and thinking he was prescient, many people now just make conspiracy theories where he was involved in planning the pandemic.
Second, I think there was some important work being done on therapeutics, at the WIV and a few labs in other Chinese cities. Shi Zhengli and others were working on a drug (I want to say it was a fusion peptide inhibitor) that would treat coronavirus infection. Had the US or China invested more money into studying which drugs can treat sarbecovirus infections, and found something that was even 50% effective, that could have saved 5-10 million lives around the world when the pandemic hit. And many of the steps involved in doing those experiments could be classified as "gain of function", depending on how you regulate it.
As is, Shi Zhengli recommended chloroquine, based on a few preliminary SARS cell experiments, but that didn't work well.
Third, we could have listened to virologists about how pandemics actually start. Remember Eddie Holmes' trip to Wuhan in 2014? His Chinese colleagues took him to the Huanan market, to show him an example of a place where a future zoonosis could occur. While walking through the building, he stopped to take a picture of a raccoon dog in one shop. He took that picture because he knew that was a dangerous animal to sell, those were susceptible to the original SARS virus.
In 2021, Eddie looked back at his photo and discovered something shocking -- it was from the same shop that we now think that the pandemic started in.
And China noticed that shop was dangerous, as well -- it was fined for selling illegal wildlife earlier in 2019, one of only 3 in Wuhan that got fined that year. They were given a $30 fine, but continued operating.
In another world where scientists are better respected, perhaps decision makers would listen to them when they say, "we should not sell these animals at wet markets". Politicians might ban those practices that increase the risk of a pandemic. The wildlife trade might be regulated better than with $30 fines.
In another world, perhaps we'd think about how to get more feedback from people like Eddie Holmes. As is, we have conspiracy theories where he is covering up the origin of covid. The US congress subpoenaed him to ask him the "hard questions about proximal origins".
In another world, we might look at this pandemic and ask how we could have prevented this, or how we could have reacted better after it started. Aside from the key step of regulating the wildlife trade, we could ask how to do disease surveillance, how to test and approve drugs faster, how to get vaccines out faster.
Instead, about half the world fell for a conspiracy theory that scientists created the pandemic. And we've decided to listen to scientists less, not more.
My own takeaway from this controversy has not been "virologists bad" but rather "wow, GOF on PPPs (1) is a thing, that (2) may apparently-still-plausibly have just killed 30MM people; maybe we SHOULD be reevaluating the risk/benefit of this specific research activity."
Except no raccoon dogs have been found with SARS-CoV-2, and there is little evidence they can transmit the virus. Eddie Holmes also privately noted the furin cleavage site was unlikely to have arisen given the small number of animals in the market cages. The nearest relatives to SARS-CoV-2 are found in bats in Yunnan and Laos (areas the WIV sampled), ~1500 km away from Wuhan, and those bats are unlikely to have contact with raccoon dogs, which come from north of Wuhan.
It arose in Wuhan with low genetic diversity indicating a lack of prior circulation. The FCS codon usage is also unusual. At this stage the raccoon dog origin claims seem speculative at best.
> Since there's never been a gain of function lab leak, the error bars are very wide, and figuring out if Covid is one could change the odds dramatically. That makes the two things inseparable.
This is potentially a good point, altho I'd think of the relevant uncertainty as "how much lab leaks do we get of viruses of X level of pandemic potential" rather than thinking of GoF viruses as a special category.
Regarding your other points, I feel like the vast majority of the projects you mention are a good idea even if you're certain that COVID was a lab leak - so it seems weird to link those to the question. Instead I'd have the model that a bunch of people hate pompous scientists and doctors telling them what to do, and that's why they believe in lab leak and also want to defund harmless virology research.
The one thing on your list that popped out to me was therapeutic development that arguably involves GoF research - no chance you have a link I could read about that?
> altho I'd think of the relevant uncertainty as "how much lab leaks do we get of viruses of X level of pandemic potential" rather than thinking of GoF viruses as a special category
Hmmm, unless GoF provides more "surface area" for leaking because of multiple passaging / having to handle the virus for longer to get it to do what you want? I guess this is a place where it would be helpful to understand what's actually involved in GoF research, and probably different subtypes will be different levels of risky.
"how many lab leaks do we get of viruses of X level of pandemic potential" is also mostly inseparable from GoF research.
Like, we had several SARS lab leaks, but none caused a pandemic. That's because SARS is not a pandemic worthy virus, and none of the natural or unnatural introductions could make it one. If SARS had been pandemic worthy, it would have been all over the world before it ever leaked from a lab.
Same thing with a lab studying, say, Ebola. That lab could leak the virus 100% of the time and Ebola would not become a pandemic.
A lab needs GoF research to make a natural virus into something pandemic worthy (or perhaps there are weird examples of reviving a frozen virus that's no longer circulating, like smallpox, but I'd be happy to lump that in with GoF since most labs would have to recreate smallpox somehow).
With regards to GoF bans, I think they sound great in principle. There are experiments that I've read about that seem clearly too dangerous to me -- in one US experiment, scientists recreated Spanish flu and infected monkeys with it. I struggle to see how you can possibly justify the risk of that experiment. It makes perfect sense to me, to ban things like that, and the Obama admin did call for a GoF pause back in 2013.
I believe that the problem actually comes with the drafting of the laws, where it's really hard to spell out the language as to what's dangerous and what's not -- there's no consensus on what is and isn't gain of function and some scientists think the current bills would restrict most of what they do. I'm a bit short on time today, but I'll try to dig up some of the letters that scientists have written criticizing the new GoF bans.
My biggest concern is not that we'll over-regulate virology, though. It's just that we'll ignore the actual natural risks. To actually reduce pandemic risk, we could do things like regulate the wildlife trade in Asia or maybe ban mink farming in the west.
If there are sensible policies like that which could, say, cut future pandemic risk by 50%, then that would ultimately save tens of millions of lives over the next few decades. But when a majority of the general public thinks that scientists created the pandemic, it's hard to get support for those kinds of policies, and it's easy to instead regulate the scientists and ignore their advice.
This reminds me a bit of why I first got interested in Covid misinformation. My friends and family were buying ivermectin from internet pharmacies instead of getting vaccinated. It's not that I was particularly worried that the ivermectin would harm them -- my friends are likely worm free now. It's that I thought the drug had little to no effect, while the vaccines they did not get worked pretty well to prevent serious illness and death.
Likewise, if you rely on a pandemic prevention method that has a 0-5% chance of preventing the next pandemic, while ignoring the methods with a 50% chance, the net result is much worse.
I may be misunderstanding the figures, but I was under the impression that the 1977 Russian flu was one of the worst pandemics of the century, with 700,000 deaths vs. about 50,000 in an average year.
(I also don't know how to think about it if it "only" caused the normal 50K deaths. Does that mean it took the place of another flu that would have evolved naturally that year, and so caused 0 deaths on net, or that it is indeed responsible for 50,000 deaths?)
I think if something causes either 50,000 or 700,000 deaths, that's a pretty big mistake that deserves important efforts to stop. Although I agree that virology is great in general, my understanding is that experts don't think the two highest risk activities - gathering new viruses from weird caves, and gain-of-function research - do much good (maybe we should add "experimenting with preserved historic flu viruses" to that list). Certainly the fact that Wuhan Institute successfully discovered some COVID precursors years before didn't seem to help much with COVID, and I can't think of any examples of where something like that did help (I could be missing some).
Slides 57, 59, and 60 show annual flu deaths in the US. 1977 does not appear different than any other year. I could not find comparable world-wide data, but it seems unlikely that the flu would be significantly more deadly, world-wide, but also be normally deadly in the US.
I'm arguing that the moral panic about gain of function research will definitely reduce our natural pandemic preparedness in a variety of ways.
If you start with a prior that 99% of pandemics are natural, making a small reduction to our preparedness for natural pandemics is certain to be worse than whatever gains you make from criticizing virology.
To be clear, I don't think we actually know what percentage of future pandemics will be natural. I thought a little bit more, last night, about how to calculate that number, but there's a very wide range of uncertainty since a gain of function pandemic is something that's never happened before:
Because there's such a wide range of uncertainty on how dangerous gain of function research is, the annual odds of a GoF pandemic could be anywhere from, say, 5% annually down to nearly 0%. Going from zero known GoF pandemics to one would have a large update on that number, not a small update.
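A toy sketch of why the width of those error bars matters. Both priors below are made up, chosen to have the same mean of roughly 1% per year; the diffuse one is equivalent to only ~2 years of pseudo-data, the confident one to ~1000 years. One observed event in 50 years roughly doubles the diffuse estimate but barely moves the confident one:

```python
def beta_mean_after(a, b, events, years):
    # Beta-Bernoulli update: each observed year is one trial,
    # posterior is Beta(a + events, b + years - events).
    return (a + events) / (a + b + years)

diffuse = (0.02, 2.0)      # mean ~1%, almost no pseudo-data behind it
confident = (10.0, 990.0)  # mean 1%, backed by ~1000 pseudo-years

for name, (a, b) in [("diffuse", diffuse), ("confident", confident)]:
    before = a / (a + b)
    after = beta_mean_after(a, b, events=1, years=50)
    print(name, round(before, 4), round(after, 4))
```

This is the "zero known GoF pandemics to one" point in miniature: when the prior rests on almost no data, the first observed instance dominates the posterior.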
If expected future GoF pandemic risk is comparable to or greater than future natural pandemic risk, then the emphasis on prevention would of course have to be on labs. Ergo, it does matter how Covid started.
The question of when the next pandemic will happen is a probability distribution over years. Does virology research shift that probability distribution earlier? Of course it does. It would be ludicrous to claim otherwise. Is that a bad thing? Of course it is.
...but why is it bad?
If some virus in a weird cave is going to cause a pandemic, then it's going to cause a pandemic eventually. (That isn't really how any of this works, because it's really a question of mutations, but "go out and look for the virus instead of waiting for it to come to you" is a simple little picture story that encapsulates the issues involved in proactive research.) If we do nothing, maybe the next pandemic comes in 20 years. (There's *some* Everett branch in which that's the right number.) If we do something, maybe the next pandemic comes in just 18 years. But we'll know more when it does. How much is that worth? Hard to say. Covid turned out to be pretty bog-standard as flus go (the hard part was just manufacturing enough vaccines quick enough), so research into potential pandemics turned out to be irrelevant. On the other hand, sometimes new diseases emerge that are radically unlike what we've seen before (see e.g. AIDS), and there's no rule that says the next pandemic can't be something very different.
We could have a regime where literally 100% of all pandemics emerge from labs, while it is simultaneously true that the labs are a good idea.
Suppose that 100% of suicides happened in psychiatric hospitals immediately after the Pre-Suicide Unit identified that person as a suicide risk and ordered their hospitalization. Therefore, shut down the Pre-Suicide Unit?
Suppose that almost 100% of murderers have previous contact with police. Therefore, the police cause murder? I mean, there's an extremely narrow technical sense in which, yes, the police probably did cause the particular murder, in the sense that if the police had never done anything, that specific person would not have been murdered on that specific date in that specific way.
Proactively seeking out the cause of the next pandemic *obviously* makes it happen sooner. But that only matters to the extent that we're *less prepared* as a result of the pandemic happening sooner. Lots of other things can also make us less prepared, such as pointedly refusing to proactively seek out the cause of the next pandemic.
You're so close to understanding this. You mentioned urbanization. You mentioned AIDS. It spilled over into humans about 100 years ago. Why only then? Because that's when cities started forming in Africa. Before that, it could spill over, but the odds of it infecting more than a few people in one village were very low.
Now think about SARS. SARS was found in farmed animals in a few places (including Hubei province), but we never recorded a human infection in a farmer. Probably there were some, but if you live on a rural farm and get infected, you're not going to pass it to many people. Maybe your family gets it.
We did find SARS cases a thousand miles away from those farms, in the middle of big cities. We found them in markets and restaurants where people ate those animals. In that case, the human density was high enough to sustain transmission, and an epidemic started.
With Covid, we also found the first cases at a market in the middle of a big city.
The fact that some bat disease exists in some distant cave does not make a pandemic inevitable -- these bat diseases have existed in those caves for millions of years. What causes a pandemic is farming susceptible intermediate host animals and then bringing them into dense urban population centers.
Those practices are preventable. Making changes to the wildlife trade between SARS and Covid could have dramatically cut down the odds of the Covid pandemic ever happening -- it's possible the two diseases even share the same intermediate hosts. But no one is talking much about fixing these things, because they're too focused on some imaginary lab leak.
Ehhhh...I mean, I certainly don't want to discourage anyone from trying to clean up meat markets, but...
There is one intervention that would definitely work to solve the problem: make everyone rich. (This would also solve a lot of other problems.) The problem is that we haven't managed to do that.
As human population increases, my default expectation is more human encroachment on (insert any given location on Earth). Are there improvements to be made on the margins? Sure, maybe, but you're talking 1% improvements, 2% improvements.
At the end of the day what you're saying is that you want to stop humans messing around with animals, which...I don't think you can fix *that* without fixing *everything*. Which is totally possible: rich countries are rich. But it's not like we're not already trying to make everyone rich. What you're talking about is a *targeted* intervention, where we can skip the "make everyone rich" step, and that...this isn't like malaria, you know? With malaria, yeah, we just have to interrupt the cycle, intercept the carriers. Tricky, but possible. But what you're talking about...this isn't one disease we're talking about here, it's every disease that can potentially mutate and jump to humans. I'm skeptical that we can meaningfully reduce contact between poor humans and animals without either making everyone rich (which we already want to do anyway) or massive human-rights abuses.
It's not like the CCP hasn't been trying to stamp out wet markets. They've been trying for decades. The educated classes are 100% on board. But they haven't been willing to take the drastic steps necessary to actually succeed, nor, frankly, *should* they be willing to. When they engage in horrible abuses to maintain their own power, outside observers are rightly horrified. Stomping on the poor to the degree necessary to actually enforce something as hard-to-enforce as "don't mess with animals" would be bad, however noble the intentions.
Your problem is the 67-year-old nobody in Guangdong who thinks eating a bat is protective against cancer. (To be fair, this isn't any more ridiculous than the listicles' ongoing quest to classify all substances in the universe as causing or preventing cancer.) (One could also make the case that he isn't exactly *wrong*, per se...)
Closing wet markets is like whack-a-mole: they just pop up somewhere else, in poorer, less surveilled areas. And of course, the harder you stomp, the less the authorities know about what's really going on on the ground (because no one will talk to the authorities, Stop Snitchin', etc.), which brings its own problems. You can get past that with extreme enough measures, but...how to put this.
Another big part of the problem is China's population density, of course. Other places also have poor people, but *fewer* of them. If we reduced China's poor population, it would produce such diseases at the same rate as those other places. But I think the ethicists would have a few quibbles with that plan. This is pretty much how I think about prospective bans, too. I can definitely think of enforcement tactics that would make a ban stick, but I would not approve of those tactics, and neither should you.
I mean, heck, what do you even want? Shut down all the wet markets? On paper, *we did that*. They were banned in February of 2020. They were "reopened" two months later. In reality, of course, they never went away at all, and the authorities correctly made the call that better visibility was more important than reducing volume.
They tried in 2003, too, when they were scared of SARS-CoV-1. It failed. They tried again in 2013, when they were scared of H7N9. It failed. "Hey, what if we just stopped poor people from messing with animals" is not some kind of new idea! Every so often there's some huge disaster and the authorities say enough is enough and try to shut it all down...but it never sticks, and they eventually (wisely) give up.
Heck, go back further. Authorities have been trying to get rid of nasty, dirty, smelly places frequented by poor people essentially as long as there has been human civilization. The upper classes' visceral reaction of "Ew, get rid of it" predates germ theory by thousands of years. Success rates are holding steady at 0%.
Even if you think it would be a good idea to try again (I don't), you have the problem of getting the CCP to do it. The governments of Australia and the United States at least, probably others, *have* leaned on the CCP to do what you want. The CCP has ignored them, because...lemme see here...ah yes, it doesn't like them in the first place and has no particular inclination to do what they say.
If you have some clever idea to make the wet markets slightly cleaner, or to discourage them, I'll give you a hearty slap on the back on a "Go get 'em, tiger." That stuff is good to do, sure. But the maximum plausible benefit is ultimately very small relative to the size of the problem. It's like having "I know, let's have everyone wash their hands!" as your pandemic-prevention strategy. You can't make a meaningful dent in the number of dangerous interactions between humans and animals. You can make a big dent in the number of *officially reported* dangerous interactions between humans and animals, but that would be a bad thing, not a good thing. It makes a big difference how quickly a new pandemic comes to the attention of the authorities. (After it's noticed, we still have to actually *do* something while we still have the initiative, but that's a separate problem.)
> it's possible the two diseases even share the same intermediate hosts.
Random aside, but some research suggests SARS-CoV-1 may not have had, or needed, any intermediate host at all. The SARS-like coronavirus WIV1 was able to use ACE2 directly, suggesting that SARS-CoV-1 might have made the jump straight from bats to humans. See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5389864
Nature. 2013; 503(7477): 535-538.
> Most importantly, we report the first recorded isolation of a live SL-CoV (bat SL-CoV-WIV1) from bat faecal samples in Vero E6 cells, which has typical coronavirus morphology, 99.9% sequence identity to Rs3367 and uses ACE2 from humans, civets and Chinese horseshoe bats for cell entry. Preliminary in vitro testing indicates that WIV1 also has a broad species tropism. Our results provide the strongest evidence to date that Chinese horseshoe bats are natural reservoirs of SARS-CoV, and that intermediate hosts may not be necessary for direct human infection by some bat SL-CoVs. They also highlight the importance of pathogen-discovery programs targeting high-risk wildlife groups in emerging disease hotspots as a strategy for pandemic preparedness.
> Our findings have important implications for public health. First, they provide the clearest evidence yet that SARS-CoV originated in bats. Our previous work provided phylogenetic evidence of this5, but the lack of an isolate or evidence that bat SL-CoVs can naturally infect human cells, until now, had cast doubt on this hypothesis. Second, the lack of capacity of SL-CoVs to use of ACE2 receptors has previously been considered as the key barrier for their direct spillover into humans, supporting the suggestion that civets were intermediate hosts for SARS-CoV adaptation to human transmission during the SARS outbreak24. However, the ability of SL-CoV-WIV1 to use human ACE2 argues against the necessity of this step for SL-CoV-WIV1 and suggests that direct bat-to-human infection is a plausible scenario for some bat SL-CoVs.
If that turns out to be the case, having multiple kinds of animals in proximity might be as irrelevant as bad odors. But of course, we can't know for sure without more gain-of-function research into how viruses jump species.
"Slides 57, 59, and 60 show annual flu deaths in the US. 1977 does not appear different than any other year. I could not find comparable world-wide data, but it seems unlikely that the flu would be significantly more deadly, world-wide, but also be normally deadly in the US."
The flu in question is called the *Russian* flu, and it's called that for a reason. It really did cause 700,000 or so deaths, but in China and Russia, which in 1977 were on the far side of the Iron Curtain, so that strain did not spread to the United States. About the ordinary number of Americans died of non-Russian flu in 1977; that's not what we are talking about.
If correct, that's less than 2 times the death toll for an average flu year: normally the flu kills ~400,000 people worldwide, with a lot of annual variation:
And I'm pretty skeptical that the iron curtain stopped viruses from travelling worldwide. Can you find statistics from those 2 countries?
Wikipedia also says that the 1977 flu was relatively mild for older people (because of its "frozen" nature, resembling a 1950s strain), so it seems odd that a strain which was milder for old people would cause an abnormally high death toll.
> the fact that Wuhan Institute successfully discovered some COVID precursors years before didn't seem to help much with COVID, and I can't think of any examples of where something like that did help (I could be missing some).
They did propose a treatment, but it didn't pan out.
Inhibition of SARS-CoV-2 (previously 2019-nCoV) infection by a highly potent pan-coronavirus fusion inhibitor targeting its spike protein that harbors a high capacity to mediate membrane fusion https://www.nature.com/articles/s41422-020-0305-x
However, for what it's worth, Ralph Baric (another gain-of-function researcher, not at WIV) has argued that he deserves some credit for Operation Warp Speed correctly going "all in" on mRNA vaccines very quickly. It wasn't a priori obvious that the mRNA method would work, but a series of experiments Baric ran in 2018 and 2019 (funded by the Vaccine Research Center at NIH), and in early 2020 before Operation Warp Speed launched in May, showed that it would (by that point, of course, SARS-CoV-2 itself had shown up).
Baric also pointed to remdesivir, a treatment for coronaviruses that he identified in 2017. Unfortunately, remdesivir only works intravenously, so it didn't end up being much of a factor in the pandemic. They were trying to come up with an oral formulation that would work, but the pandemic came before they succeeded.
Baric *also* pointed to molnupiravir (sometimes called EIDD-2801), another treatment he identified as likely useful against the next pandemic before the next pandemic actually came. And molnupiravir was an oral treatment. The first documented COVID-19 hospital admissions were on the 16th of December; in view of the deadly peril and dire need, and that molnupiravir had already been proven broadly effective against coronaviruses, the FDA swung into action and issued what they call an Emergency Use Authorization, allowing molnupiravir to be used to treat COVID-19, on the 23rd of December. (Granted, the pandemic started in 2019, and the EUA was in 2021, but that is not Baric's fault.)
On the more blue-sky side, Baric continues to say that it should be possible to get ahead of the variant race and make a vaccine that will be effective against coronavirus variants broadly, by constructing a cocktail of variations ahead of time. He did show (with Martinez et al.) that their chimeric vaccines offered a meaningful improvement over the SARS-CoV-2 vaccines in common use.
It also seems worth remembering that a number of basic facts about SARS-CoV-2 we take for granted, such as the fact it comes from bats, are things we know because of the previous WIV work on SARS-CoV-1 and relatives.
(But I mention this only for completeness. I broadly agree that all the more proactive research was, against this particular pandemic, of only marginal value, nothing we couldn't have lived without. The question is the next pandemic.)
If I follow your formulation about "COVID as lab leak", the conditional probability is something like P(lab leaks happen once per decade | this pandemic was caused by a lab leak). I'm updating my expectation that I should expect a lab leak in the future based on what I learn about this particular pandemic (hypothetically proven to be 100 percent a lab leak vs. not).
But we don't, and perhaps cannot, know whether COVID or any other pandemic was absolutely a lab leak. So we flip the conditional to get a prior on whether this or that pandemic was a lab leak: P(this pandemic was caused by a lab leak | lab-leak-caused pandemics happen once per decade). If I have a higher estimate, then I'm more inclined to think that COVID was a lab leak; lower if my prior is lower. But there is a more nuanced formulation that isn't effectively modeled: P(COVID was a lab leak | the sum total of evidence), where "sum total of evidence" includes my prior on whether a lab-leak pandemic happens once per decade as well as the empirical and circumstantial evidence.
How should we update those priors? It turns out doing so well is epistemically difficult because our "sum total of evidence" is shaped by the news media we read, our skepticism or acceptance of government science, and so forth. I don't know how we can be a naive Bayesian about this and minimize our ideological priors.
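The mechanical part of the update is simple; the hard part, as you say, is choosing the inputs. A minimal sketch in odds form, with purely illustrative numbers (the 10% prior and the likelihood ratio of 3 are assumptions, not estimates anyone has defended):

```python
def posterior(prior, lr):
    """P(lab leak | evidence), given a prior and a likelihood ratio
    LR = P(evidence | lab leak) / P(evidence | natural origin)."""
    odds = prior / (1 - prior) * lr  # convert to odds and apply Bayes
    return odds / (1 + odds)         # convert back to a probability

# Illustrative inputs only: a 10% prior on "this pandemic is a lab leak"
# and a likelihood ratio of 3 for the circumstantial evidence.
print(posterior(0.10, 3.0))  # about 0.25
```

The epistemic problem is that both numbers get smuggled in from "the sum total of evidence": a reader of lab-leak-friendly media will plug in a higher prior and a higher LR, and the formula will dutifully confirm what they already believed.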
Maybe that doesn't change the fundamental point Scott's trying to make here. It might. I'm still thinking it through.
I learned this a long time ago under the guise of "anecdotes are not data." My primary objection to even opening the news is that I want to learn about the statistics, not about isolated events. And btw, by this metric, of course the new media are the worst, Twitter especially.
Any organization of sufficient age and clout has a huge scandal. Yet people are always surprised that org X doesn't live up to its own values. But there is a corollary too. There's also a percent chance of flubbing the response. And there's a percent chance of taking response actions that make "things like this", or some other horrible thing, more likely in the future. This corollary goes a couple levels deep, and when you truly roll all 1s, it's going to be a bad time.
Separately, I just read "The Voice of Saruman" and so calling people "the stupid people" rubbed me the wrong way. For one, it's needlessly condescending - focusing on *their* stupidity, rather than the call to be wise, or one's own stupidity. Two, it doesn't help us deal with the dynamics of living alongside non-distribution-model-having humans. Three, people are stupid, for letting base emotional hits guide them instead of Lady Philosophy and explicit models, but why insist on it?
I hope to cultivate love or at least pity for the madding crowd rather than contempt. Hard as it is to do. Oh, and I want you to help me in this. Your job is to benefit my character too. I have large expectations of you. Sorry.
The "stupid people" thing is uncharacteristically high-handed and charmless but clearly deliberate, on the strength of rhetorical repetition alone. As a near-median person, I eagerly await the next puncture in the equilibrium.
I don't know! I would reconsider talking about "the stupid people demographic" as though it's a group of cattle that need to be prodded by the Wise, lacking agency or intelligence. I know you don't actually believe there is a stable stupid-people demographic who get together and have stupid-people conventions etc. I understand that "stupid people", in this case, is just synecdoche for a real and ever-shifting phenomenon based upon the drama of the day, and that you are not claiming that people who fall into this phenomenon are stupid all things considered, or universally dull, or that they are "bad people." You are naming a phenomenon (but the people who do it are kind of the outgroup?).
I counsel against calling people who routinely err in reasoning this way stupid because, I don't know, this type of stupidity is an emergent phenomenon of many people... and calling them stupid too many times in a row gets you a 'Niceness' penalty on your moral stats sheet.
I'm sympathetic to your approach, at least for this audience, and I don't think I could do *substantially* better.
But I think I would have used "foolish" rather than "stupid". The latter is usually my intuitive first take in cases like this, but "foolish" carries the connotation of "perhaps unwisely didn't think this through" rather than "thought about it and got it wrong because mentally deficient". So, A: less pejorative, and B: probably more accurate in this context.
If you didn’t want to substantially rewrite it I would maybe suggest ‘undereducated people’ instead of ‘stupid people’ since that at least gets rid of the essentialism and suggests a way in which it can be ameliorated in the future.
At the end of Antifragile he suggests that you should avoid the news because among other things it should almost always fail to update your model of the world. Year-in-reviews or Decade overviews should be enough.
These things do matter and the most sophisticated supercomputer probabilities are meaningless for new events. Extrapolation from the past has always been doubtful: even more so now that change has accelerated.
I kinda see what you're doing with the FTX vs OpenAI contrast, but the example falls flat for me. The problem with SBF was not that he was a CEO doing shady things or whether or not his company had a board. The problem was that the entire EA movement went all-in on this one charismatic person, tying its own public reputation to that person, shifting priorities in the direction of things SBF wanted, being unprepared for the FTX future fund being backed by nothing, etc etc.
Ironically, just like SBF himself, EAs turned out to also have rather naive linear utility models and a clear lack of deontology, common sense, due diligence, whatever you want to call it: the thing that makes you not bet your entire movement on one person.
When it's a charity that is chock-full of people convinced they are smarter than the average bear, have all these VUNDERFUL! maths and statistical tools, and is courting/courted by Silicon Valley deep pocket types, I sure the hell do expect that.
Mary Murphy running a cake sale to raise funds for new curtains for the local tennis club building, not so much.
I don't think we actually went all in on him. Will MacAskill said he seemed cool. He did seem cool. Most other people never mentioned him. I don't think I mentioned him, except once in a book review as an example of someone who had weird thoughts about how to do business. Mostly his role was "he gave us money and we accepted it". I think you would need a very high standard of suspicion not to accept $100 million from someone who was giving it to you for free.
Mmmm. I can see why this approach appeals to EA survivors, since I've just read a bunch of comments upthread about how my church is a nest of rapists, and while I can assure you all that I've never raped anybody, what are you going to think when you hear "Catholic"?
But Sam (and his brother, to a larger extent), *were* involved in EA. From Lewis' book (and not the fawning Sequoia Capital article, for a change):
About the early version of Alameda Research:
"The business hadn’t even really been Sam’s idea but Tara’s. Tara had been running the Centre for Effective Altruism, in Berkeley, and Sam, while at Jane Street, had become one of her biggest donors. …Tara was no one’s idea of a crypto trader—before moving to run the Centre for Effective Altruism, she’d modeled pharmaceutical demand for the Red Cross. She had no financial background and no money to speak of and yet was generating tens of thousands in profits trading crypto.
…Her success led Sam to his secret belief that he might make a billion dollars by creating a hedge fund to trade crypto the way Jane Street traded everything.
But he couldn’t do it by himself. Crypto trading never closed. Just to have two people awake twenty-four hours a day, seven days a week, he’d need to hire at least five other traders. He’d also need programmers to turn the traders’ insights into code, so that their trading could be automated and speeded up. Tara had been making a handful of trades a week on her laptop; what Sam had in mind was an army of bots making a million trades a day....
His access to a pool of willing effective altruists was his secret weapon. Sam knew next to nothing about crypto, but he did know how easy it was to steal it. Anyone who started a crypto trading firm would need to trust his employees deeply, as any employee could hit a button and wire the crypto to a personal account without anyone else ever having the first idea what had happened. Wall Street firms were not capable of generating that level of trust, but EA was."
And then it all went kablooey because he was who he was, and the majority of the disgruntled EAs left and bad-mouthed him to the community (allegedly) but clearly not widely or forcefully enough, because while it did put a stop to his gallop for a while, it didn't finish off Alameda Research and him.
To be clear, accepting the $100 million was completely fine. One does have to be careful, as a sudden funding increase with strings attached will have a tendency to warp organizations. In this case, it seems there was a big push towards AI safety based on SBF's preferences (correct me if I'm wrong about this). IMO thinking about AI safety is a worthwhile thing to do and should be funded, but it is absolutely not the central mission of EA. I'm worried (in general) about AI alarmists overtaking EA, because it's a totalizing set of beliefs.
The other issue is that somehow, SBF turned into the poster child for EA in the public's perception. I genuinely don't know how deliberate that was, and which EA orgs helped this stuff along. It might have been himself doing it? I vaguely remember billboards with his face on them associated with EA, 80,000 Hours holding him up as a shining example, and stuff like that.
"A few months ago, there was a mass shooting by a -->far-left<-- transgender person who apparently had a grudge against a Christian school." This has in no way been proven. There was a Manifold market about it, and it resolved NO, because this has not been proven.
I mostly agree, but want to add an important caveat here:
The rule "don't significantly update on dramatic events" only applies if you indeed have a probability distribution that these events fit into.
Otherwise, you get a trapped prior, an awesome excuse not to update on any evidence. The "pretending to be wise" mindset where you fail to notice your own confusion and just pretend that you are one of the cool kids who totally saw the event coming.
It's okay not to expect some things and be surprised. It doesn't make you a stupid person. Personally, I didn't expect the FTX collapse. It was a "loss of innocence" for me. I knew that crypto is scammy, but these considerations were outweighed by the halo effect of EA: obviously good and reasonable people there know what they are doing. So my probability distribution didn't really include this kind of dramatic event. Not that I was literally thinking it was impossible, but I just didn't really think about it, and if I had, I would have put a tiny probability estimate on such a thing happening per year. And so I updated away from my halo effect. Not to a point where I disavow EA, but to a point where I see it as just another community with its own failures and biases. A community that I mostly ideologically agree with, one that I wish I could truly be a part of. But not more than that.
Base rate for FTX should have been the rate at which multibillion dollar companies are fraud, which is low but not zero (eg Enron, Theranos), plus a lot extra for crypto, either plus or minus some extra for the EA connection, I don't know. If you'd asked me beforehand I would have said 1% based on knowing and trusting some of the people who worked there; if not for that, it would have been maybe 5%. I'm updating the degree to which I can trust people, but I'm not sure 5% is wrong for the general class.
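As a side note, the gap between those two numbers (5% without personal knowledge, 1% with it) implies a fairly strong piece of evidence. A quick back-of-the-envelope in odds form, using only the figures stated above:

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

base_rate = 0.05   # big crypto firm is a fraud, absent other information
with_trust = 0.01  # after knowing and trusting some of the people there

# Likelihood ratio implied by the "I know and trust these people" evidence:
lr = odds(with_trust) / odds(base_rate)
print(round(lr, 2))  # about 0.19, i.e. roughly 5:1 evidence against fraud
```

Framing it this way makes the post-FTX update concrete: the question is whether "I know and trust some of the people there" really deserves a likelihood ratio that strong, or whether personal trust should move the base rate much less than 5x.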
Re lab leaks: you can reasonably not have a very high prior on them. You read about the occasional minor case or close call, but without having been there it's hard to know whether it was a real risk or just sensationalized news making something that never really had a chance to go bad sound scary. Maybe it's like that "there's a mass shooting every N days" thing where it turns out they define "mass shooting" very loosely and most of the examples are non-central. And you could try to do your own research, but without having been there or being a biosecurity expert it's genuinely hard to be sure (and you can't just trust the biosecurity experts either, since they have their own agenda).
So having a specific example of a really bad lab-leak pandemic really does provide important evidence that lab leaks are worth worrying about (unlike, say, the SBF thing, where "crypto is full of scams" was already obvious, with many central examples).
Re 9/11: your conclusion of "then after 9/11 we didn't have any mass terrorism" is kind of bad, because it only observes the world where we *did* have a massive response. It's easy to imagine a world where all the terrorists weren't suddenly busy fighting US Marines in Afghanistan and had time to go "wait, you can just crash planes into buildings? Let's go!", and you get a massive terrorism spike (the rise in global terrorism since the withdrawal from Afghanistan gives at least some positive evidence for this). That logic works for true stochastic processes, but terrorism is a game against a human adversary, not a random process. It's the difference between gambling on a horse race and gambling on the stock market.
This is straightforwardly a clash of the different descriptors in use, I think...
I reckon the news originally was a blend of two kinds of events: the unexpected, and the impactful. The unexpected (e.g. violent crime, because most of us live non-violent lives) should make you update because it's unexpected. Impactful (e.g. who won the election) should make you update because it affects your base assumptions.
I would argue that 9-11 was worthy of an update for most people, because most people did not realise that terrorists were capable of organising something that big; or that small numbers of people could weaponise infrastructure against a major city like that. Perhaps we should have known, but I don't think it had ever crossed my mind before 2001.
The word dramatic gives us a clue about why some news is not worthy of an update: it's just rubbernecking. Spectacle. There's plenty of that, particularly as news has become more national, and then more global. For example: a murder in your community is worth thinking about. A murder in your local paper is important. But a murder *somewhere in the USA* is not worth thinking about; however they are now presented to us in the same way. For that kind of news, I think Scott's right - not unexpected, not worthy of our attention or an update of priors.
There's one more problem, which good news organisations had ways of dealing with: slow events. Like, WHO fails to deploy malaria vaccine for another day, 1,000 children die. (For those who didn't know, or who forgot about it, this is very important information and should prompt a change of priors.) Good news outlets had dedicated reporters for that sort of thing, but news is in the middle of a big transformation, so sometimes it's getting lost.
It shows that scientists, politicians and the media (the ones accusing Trump supporters of "misinformation") are perfectly willing to engage in outright lies and propaganda about even scientific issues in the name of ideological agendas.
It means that whenever anyone (rational) hears the term "misinformation" or reasonable explanation for things being called "conspiracy theories" (especially "racist" conspiracy theories), this should be a giant, MASSIVE red flag that the person is engaging in propaganda.
Additionally, the people who engaged in this massive lie suffered ZERO consequences for it, because approximately nobody on the left has any principles other than "in-group good, out-group bad". Maybe the right are no different, but they're not the ones acting high and mighty over such things.
Of course, even if you believe that lab leak shouldn't make us more worried about gain of function, we should still be very very worried about and opposed to gain of function. But doing anything about it would be seen as vindicating Trumpian "conspiracy theorists", and besides, all the scientists whose careers are based on this stuff said there isn't a problem, and you're not an anti-science republican are you?
I think if someone had previously really internalized the lesson of the Armenian genocide that mass genocides were possible, and was under no illusions that European countries were morally superior enough to avoid that, they could have stayed updateless after the Holocaust. I don't think many people had done that, and I'm not even sure we're there now - I totally don't believe a modern European country would commit a Holocaust-level genocide; maybe I'm being foolish.
You're not foolish if your time horizon for large European genocides is short enough or if you rule out large disruptions (war, a really bad plague, unheard-of natural disasters etc.). After a few years of the right kind of instability, even large outgroups would be at risk of being outlawed anywhere, I suppose. Sorry about that.
In civil engineering, probabilistic design is the governing economic rationale, and it is always tied to the potential death toll. Engineering logic applied to terrorism would be to prioritise, by far, the prevention of large incidents over the prevention of smaller ones. Which seems to be the opposite of what we are seeing, in geopolitics too.
Uncertainty is a prerequisite for the evolution of life. Hence our continuous attempts to cheat it :) (to stay dumb? :). So we should use this principle to help guide us in making smarter, future-enabling decisions. Hence my quote: “Human progress can be defined as our ability to create and maintain high levels of future uncertainty.“ Which at first may seem a paradox... but it's not.
I like the idea of harnessing people's reactions to drive policy. Discerning actors should be able to craft opinion pieces so that they are ready to go when the right event happens. I know it's common to have two different pieces ready to go for presidential elections, but I'm curious whether anyone has prepared something for the day terrorists nuke a city. Might be a good time to use public sentiment to drive disarmament.
I'm confused what he's thinking. He knew there were Somali pirates causing minor trouble. He admits that the Houthi trouble will probably be, in the grand scheme of things, minor. So I'm not sure what update he should make between knowing about the Somalis and knowing about the Houthis, except maybe going from n=1 to n=2, which is fair.
Cops kill about 10 unarmed not-directly-attacking-police black people every year; I think this number has been pretty consistent. I agree that if another such killing makes the news tomorrow, this is basically meaningless for the question of how much you should care about this issue, or how many you should expect them to kill next year.
And if there aren't very many killings of black Americans next year, we still should not update and should assume the problem remains. We should still work towards staffing our law enforcement better. (Which probably should involve higher pay to attract better officers.)
This article makes a reasonable case that drastic events don't tell you much about the base rates of said events.
But is that the only thing you can learn from a drastic event? Sure, in the abstract, we knew that flying a plane into a building was possible, but the number of data points for terrorist attacks of that scale was still n=0. The 9/11 attacks were an unprecedented event in terms of the scale of planning and execution. Of course you can learn stuff from it!
You might know, in the abstract, that there are probably sexual harassers or abusers in a large community. You can, and should, put preemptive measures in place to protect against them. But you are still shooting in the dark, knowing nothing. Whereas if an article comes out listing a ton of examples of harassment, suddenly you have a sense of who some of the abusers are, how and when they operated, and how they got around whatever protections you had in place to catch them. If you care about preventing sexual harassment, this is very useful information! It can inform the proactive measures that can be put into place to prevent similar events from occurring in the future.
On the origins question, the problem David Relman described is that the early case data is "hopelessly impoverished". Still, the location, the sampling history in Yunnan and Laos, the lack of secondary outbreaks, the features of the virus (binding affinity to human ACE2, low genetic diversity, FCS, codon usage) and the research proposals all fit with lab origin.
1. Chinese researchers Botao Xiao and Lei Xiao first observed that lab origin was likely, as the nearest bat progenitors are ~1,000 km from Wuhan. The Wuhan Institute of Virology sampled SARS-related bat coronaviruses in those locations - Yunnan, Laos and Vietnam.
3. The features of the virus are consistent with lab origin. SARS-CoV-2 is the only sarbecovirus with a furin cleavage site. It arose well adapted to the human ACE2 receptor, with low genetic diversity indicating a lack of prior circulation - again inconsistent with the claim that it arose via the animal trade. The CGG-CGG arginine codon usage is particularly unusual. The SARS-CoV-2 BsaI/BsmBI restriction map falls neatly within the ideal range for a reverse genetics system of the kind used previously at WIV and UNC. Ngram analysis of codon usage per Professor Louis Nemzer: https://twitter.com/BiophysicsFL/status/1667224564986683422?t=Vh8I9fl3lwj6k6VJ8Kik8Q&s=19
Jesse Bloom, Jack Nunberg, Robert Townley and Alexandre Hassanin have observed that this workflow could have led to SARS-CoV-2. Nick Patterson notes that work often commences before funding is approved and goes ahead anyway. The Wuhan Institute of Virology had separate funding for SARS-related spillover studies from the NIH and CAS.
5. The market cases were all lineage B, but as Jesse Bloom observes, lineage A likely arose first. So *the market cases were not the primary cases*. WHO has also not accepted market origin, as excess death data points to earlier cases. Peter Ben Embarek said there were likely already thousands of cases in Wuhan in December 2019.
6. The evidence for both lineage A and B in the market itself is tenuous. The evidence for lineage A in the market rests on a single sample found on a glove, tested on 1 January 2020, out of 1,380 samples. Liu et al. (2023) note this is a low-quality sample.
7. Bloom found that, in the market samples, genetic material from the candidate host animals is *negatively correlated* with SARS-CoV-2 genetic material. Another Bloom analysis, published 4 January 2024, shows an abundance of other animal CoVs but not SARS-CoV-2. https://t.co/i0HzwvIPeo
8. Lineage A and B are only two mutations apart. François Balloux notes this is unlikely to reflect two separate animal spillovers, as opposed to incomplete case ascertainment of human-to-human transmission.
9. There is a documented sampling bias around the market - something even George Gao, head of the Chinese CDC at the time, acknowledged to the BBC, stating they may have focused too much on and around the market and missed cases on the other side of the city. David Bahry outlines the sampling bias.
Excellent post! I think this instinct to use probabilities/deduction/Bayes/rationality to understand problems like these is the biggest difference between STEM types and humanities types.
After a lifetime of maths, science, engineering and software, I'm doing a degree in the humanities. I have an assignment due next week on Aristophanes and, as far as I can tell, the question has absolutely nothing to do with the text. This happens to me with every single assignment.
Instead of figuring out the answer, I use what feels like a kind of blindsight and I start making stuff up. After a couple of hours, I have a story that kind of makes sense but it doesn't (to rational me) seem to answer the question. It's worked for me so far and I am getting good grades.
Would you say that you are doing this because you are updating towards “humanities types” are all just making stuff up?
Are you lending much probability weight to the theory “I am deeply misunderstanding what is happening here, but I have found a lucky strategy, so I am just sticking with that until it suffers a massive failure, like the kind described in the article?”
I don't think they are just making stuff up (I used to think that!). I think they have some secret way of knowing that I don't understand. I seem to have some instinctive way of knowing what they know — but I am not used to knowing things by instinct. In my field, we know things by figuring them out.
I don't know where this will end up. Will my instincts fail me? Will I learn to understand the way that they understand? Will blindsight work for me forever? I really don't know.
Could you share with us the question and the text you mean? I'm curious because I'm more the humanities than the science type, and while it's perfectly possible that professors do have bees in their bonnet about things, I'd like to see how *every* question has nothing to do with the text.
It may be that you are approaching it in a literal "well there's nothing in the words written down on the page about slaves/cheese sandwiches/best knot to tie sandals" spirit, but that does not mean that "What is Aristophanes' opinion on cheese sandwiches?" can't be relevant to the text or is not discoverable from it.
We're not allowed to share the text of assignments (Sorry! They are really strict about this!) but it is approximately "compare and contrast what Source 1 and Source 2 say about XXX in Athens". Source 1 is about 100 lines from a play and source 2 is a photo from the Parthenon that looks like a bunch of statues with no heads. Neither has anything to do with XXX as far as I can see.
Postscript. I've had my blindsight brainwave! I know what I am going to write now but it requires me to pull in an absolute ton of context from elsewhere. There's absolutely no way I could answer from Source 1 and Source 2 because Source 1 says almost nothing about XXX and Source 2 is a bunch of statues with no heads.
Raggy (if you will permit the familiarity), c'mon now, we can do better than that. Even headless statues can tell us something - are they of men or women? what poses? athletic youths with no clothes on, or demure young women fully robed?
Lemme hit up some online "Parthenon statues" to see what I can see.
So, since the famous ones are the Elgin Marbles, I'm going to assume these are the ones meant, and if I'm wrong, pardon me.
"Pausanias, a Greek geographer, described their subjects: to the east, the birth of Athena, and to the west the quarrel between her and Poseidon to become the tutelary deity of Athens."
Since Athena is the tutelary deity, then of course these do represent the public face of Athens, as it were. We have two accounts of the myths - the ones carved in stone, and (again I'm presuming) the lines from the play.
Do these fit with one another? How is Poseidon represented in his struggle with Athena? Who or what is emphasised more - the goddess, the struggle between gods, the victory for Athens?
Again, these are public statements and are either acceptable or controversial with the Athenian authorities and public of the time. Does the play undercut the heroic reputation of Athens, or a famous Athenian figure? Is it calling on present-day (of its day) Athens to live up to their glorious mythological past, or is it indicating all these stories were lies?
Think of our modern day movies and characters like Captain America and the changes in how he has been perceived; one interpretation could be like the heroic statues on the Parthenon, of the idealised past and the best version of ourselves. Another interpretation could be critical, reminding us of the inequality and racism of the time. Cap started out literally punching Nazis, but does that mean the same thing today? Can he still be a hero for today?
The Parthenon Marbles are saying "Athens! You are so great and so important, two gods fought over who would get to be your deity!" The play may say "Make Athens Great Again!" or it may say "The truth is that Athens was always a dungheap, and the pretty lies we tell about our history just cover it up".
(Since I don't know the play or statues, I'm pulling this out of the air, but it's not necessarily that 'there is nothing in sources about X' because you have to look under the surface).
I'll add another couple of examples from previous courses so I am not completely obscure. There were several questions along the lines of "Study this painting. What does it say about XXX's reputation?"
We had several variants of this question where XXX was Elizabeth I or Cleopatra or the Virgin Mary. I interpreted the question as "Write everything you can think of about this painting and throw in some other stuff that might be relevant."
Well, the portraits of Elizabeth I, for instance, are political and meant to be such. They're not about "this is what the queen looks like", they're about communicating power, authority, and copying an established image that was the ideal of the queen, not the woman as she aged.
In an age where you weren't likely to see the queen unless you were on the routes of one of her regular progresses, or a member of court (which includes servants), these portraits were a means of letting the general public know "Okay, so this is the new queen". They were propaganda, PR, and crisis management responses during times of turmoil.
For example, consider the Armada Portrait and the symbolism it contains, which conveys the message of an imperial right to rule, calmness and power in the face of threat, the opening up of the New World to English as well as Spanish colonisation, and the implication that Divine Providence favours Elizabeth and England, protecting them from the Spanish by wrecking the invasion fleet.
Several versions would be made of the same subject and given as gifts to foreign monarchs, or commissioned by subjects jockeying to show off their loyalty to the crown (in hopes of reward and advancement). These aren't simply random collections of imagery, but carefully worked out to, indeed, bolster reputation.
Look at the complex symbology in this portrait of Elizabeth, especially the eyes and ears on her gown - a warning, as well as reassurance, that the monarch knew all that was going on and was in control. She had a well-established spy network under Sir Francis Walsingham.
Elizabeth may well have learned this from her father, since it is the iconic Holbein portrait of Henry VIII which has shaped our mental image of the monarch, and it was that image which was repeated over and over again even as Henry grew older and weaker.
And things such as colours are also controversial - when Katherine of Aragon died, Henry and Anne Boleyn appeared in yellow garments. Some claim that this was mockery of the death, since they were celebrating instead of being in black mourning. Others claim that yellow was, in fact, the mourning colour in Spain, where Katherine came from. So - were the king and his new queen officially rejoicing over the death of the obstacle, or were they observing correct etiquette on the death of the woman who had been queen? Up to the beholder to decide, since we don't have any records of what Henry intended.
Same with Cleopatra - how the portrait is painted, in what style, and whether it represents her as Seductive Temptress Who Wrecked The Empire, or Loyal Egyptian Queen, or the like, shapes our view of her character and reputation. And naturally the decision to paint her as Temptress or Ruler is coloured by the purpose for which the portrait is commissioned - is it by political rivals trying to blacken her name? Is it a romantic, later portrait that sells itself on the pop culture notion of Cleopatra and not the historical woman?
Think of the recent movie and the decision that "Cleopatra was Black" and the political choices and controversy around that. Movies are probably our modern version of such state portraits, rather than the state portraits and photographs that are produced today.
I'm not educated, I've just read a lot of art books! 😁
There are good resources online, even Youtube videos. Mostly it's because I see a picture (like Elizabeth in a dress covered with eyes and ears), go "what the heck is going on there" and look up explanations.
Struggling artist, man on the make, successful society painter, rebel against the artistic conventions, an old dog trying to learn new tricks (and succeeding or failing) - there's a lot we can learn.
For instance, John Singer Sargent's "Portrait of Madame X" which was a scandal and damaged his reputation at the time, though it may have helped it later on. He was already successful and rapidly becoming known, so why paint this portrait - which was not a commission, but one he wanted to do himself? And why in this style? (He was forced to make some changes to the picture due to the outcry). Did he think his established reputation could protect him, or was this a genuine artistic impulse to create something that was not bought and paid for in advance? Why didn't it wreck his career, and would a less-established painter have survived?
If you're getting the grades, you are doing well and not just pulling crap out of the air, so bravo!
I've been there. I think there are two things going on: one is a lot of background knowledge which the experts have, in the light of which you can actually make reasoned deductions from the texts given. So, for example, in your question about what you can say about painter X given this painting - when you add in the knowledge that the subjects of paintings are aristos, painters are not aristos, painters are commissioned by the aristos, painters' livelihoods depend on the aristos being pleased, beauty standards in that time were PQ&R... then you can see that the painter painted this dude ugly, which implies that he was protected. Or something. There are a lot of assumptions in there, but there is a logic to it.
The other thing is criticism/commentary as creative process, where the source texts are not so much evidence as a source of inspiration for the critic's own creative process. There may not be any logic here, but maybe a train of thought to be followed - for whatever that's worth.
"In 1681 Charles II appointed him ‘painter and picture drawer in ordinary’ and he is said to have produced a portrait of Charles (now lost) that prompted the response: ‘Is this like me? Then, odd's fish, I'm an ugly fellow.’ "
Anecdotes like this are what make me warm to Charles II; you would have to be confident both in your own standing as an artist, and in the likely reaction of your subject, to produce such an image of the king and expect to walk away with your freedom and all limbs intact 😀
Speaking of art restoration, this is a restoration of a portrait which had been later repainted to 'prettify' the subject and fit in better with the beauty standards of that era.
I worry that this essay begs the question a bit. If you already knew enough to know that, say, 10% of people are sex offenders then observing a sex offender should not cause an update. If you thought that 0.1% of people were sex offenders, as is certainly the case for some, then observing one should cause an update.
I think the counterargument is already in the essay when Scott says "I think whether or not it was a lab leak matters in order to convince stupid people" if we interpret 'stupid people' to mean people with miscalibrated priors. Personally, I was not aware of gain-of-function research before the lab leak discourse bloomed, and if asked prior to it would have estimated like a 1% chance per decade of something like that happening, or even less. Now that people are making credible arguments about this I have updated, and this is rational. I would have been more on board if the essay had been titled Against Learning From Dramatic Events When Your Priors Are Well Calibrated.
I think if you thought 0.1% of people were sex offenders, learning about one in a community of ten thousand (which your theory predicted had 10) still shouldn't surprise you. I don't know if anyone thinks the number is so low that it's shocking for there to be at least one in Hollywood.
(I think the number is either 0.1% or 10% depending on what level of sex offender we're talking about).
In principle, even learning about one sex offender in a community of 100 doesn't force you to update - you assumed there would be one in ten such communities, and perhaps this is just that one community that has the sex offender.
It's still the case that larger updates are warranted when your prior differs greatly from reality than when it differs less, and that "should not be updating" is roughly equivalent to "was already well calibrated." Honestly, though, if I'm overestimating my update even in my own numerical example, then that's a point in favor of your view that maybe people are thinking there's more information than there actually is. I do think single observations are given more weight than they should be, and that base rates are the right thing to be thinking about.
That depends on how you learn about the one sexual offender.
If an amazing detective investigated every single person in the community and tells you "there's at least one", then sure, you should barely update at all. (But you might be able to update a lot more if this detective tells you more about their findings: did they find 1 or 10 or 100?)
If you speak with 1 randomly selected person and learn that they are a sex offender somehow, then you could update a lot. You didn't just learn that there's at least 1, you also learned that a random person was a sex offender, which is 10x more likely if there's 10x more of them. If you were [90% on 0.1%] and [9% on 1%] and [1% on 10%], then I think you should go from ~0.3% to ~3% that the next person you talk with is also a sex offender.
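The bracketed numbers above can be checked mechanically. This is my own re-derivation (the three-point prior is the commenter's; the code and framing are not), and it lands nearer 4% than 3% for the post-observation figure:

```python
# Re-derivation of the commenter's example (their priors, my code).
# Prior over the offender rate r: 90% on 0.1%, 9% on 1%, 1% on 10%.
priors = {0.001: 0.90, 0.01: 0.09, 0.10: 0.01}

# Before observing anyone: P(random person is an offender) = prior mean.
prior_mean = sum(r * p for r, p in priors.items())  # ≈ 0.0028, i.e. ~0.3%

# Observation: one randomly selected person IS an offender.
# The likelihood of that under rate r is just r, so Bayes' rule gives:
unnorm = {r: p * r for r, p in priors.items()}
z = sum(unnorm.values())
posterior = {r: w / z for r, w in unnorm.items()}

# Posterior predictive: chance the NEXT random person is an offender.
post_mean = sum(r * p for r, p in posterior.items())  # ≈ 0.039, i.e. ~4%

print(f"before: {prior_mean:.2%}, after: {post_mean:.2%}")
```

The tenfold jump comes entirely from the likelihood step weighting each candidate rate by itself, which shifts most of the posterior mass off the 0.1% hypothesis.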
What about the case where you hear about sex offenders via news or rumors? I guess it's a bit of a mix. You could have a model where each incident has an independent, small probability of making the news/rumors, in which case the number of news/rumors will be proportional to the number of offenders, and it's closer to the random case. Or you might think that someone first decides to do a story about sexual offenses in a community, and then finds corresponding cases. (Though even then, you could update if the number/severity of cases they found was very different from what you'd expected a reporter to find.)
Not sure about that. If your priors are so miscalibrated that you think the event should never have occurred, this certainly requires an update - but I think that's precisely the case which Scott calls "stupid people", or at best "science fiction". If your priors just make such a case very unlikely, observing one doesn't have to cause an update at all.
What you have done seems to me not to have updated based on a dramatic event, but rather used a (possible) dramatic event to learn more about that topic and updated based on that, which seems a sensible thing to do.
I'm confused (and possibly repeating an existing comment, sorry) about the implication that one's prior on lab leaks ought to be blind, due to No Evidence. Haven't there already been several lab leaks over the years, including some big newsworthy whoopses, thus updating towards [Not Actually That Unlikely]%? The footnote does amend this to "lab leak *causing a pandemic*", which makes a bit more sense but also doesn't seem like the actual crux at issue wrt covid origin debates. Either way, I find Zvi's conclusion more persuasive: even if one pegged the chance of a covid lab leak at just 10% or whatever, there are definitely positive trades to be made insuring against that small-but-very-painful possibility in the future. I guess in that sense it "doesn't matter" whether lab or zoonosis - many prophylactic precautions would be the same either way - but the Schelling point still needs to actually happen to get some redeeming value from Holly Hindsight. I very much hope it's not permanently lost to the partisan melee, like so many potential cause areas.
Similarly scratching my head at your prior of one mass shooting by a far-left transgender person every few years. I wasn't aware that had ever actually happened until reading this post*, and just on base population rates for T that seems... really high. We're comorbid for lotsa mental illnesses, sure, but doesn't that cash out far more often in self-directed violence (say, suicide) than in other-directed violence*? And doesn't it similarly tend to be strongly anti-correlated with gun ownership? Maybe it's the same kind of precise definitional thing as the previous paragraph, where you really mean "far-left mass shooting" or somesuch... sadly there's no potentially-clarifying footnote here.
*with some notable exceptions like, iirc, schizophrenia
I don't think it ought to be blind. Like you say, I think there's enough evidence to think it's pretty common. I think there's some room for not being sure how often it will be a pathogen important enough to cause a major pandemic, but even the Russian flu might be an update in that direction.
In terms of transgender shooters - let's say about 2% of shooter-aged people are trans, and there are about 5 mass shootings per year, so you'd expect one per ten years. I'd go slightly up because trans people seem angry at society and have more mental problems, but then down again because they probably own guns less than usual. I'm not too attached to these numbers and could easily change them by maybe a factor of 5 with a tiny amount of evidence.
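The estimate above is simple enough to spell out. The first two numbers are the stated assumptions from the comment; the Poisson step at the end is my own addition, showing what "one per ten years" implies about actually seeing one in a given decade:

```python
import math

# Stated assumptions from the comment above (not my numbers):
trans_share = 0.02       # share of shooter-aged people who are trans
shootings_per_year = 5   # mass shootings per year

expected_per_decade = trans_share * shootings_per_year * 10  # = 1.0

# My addition: treating shootings as independent, the number of trans
# shooters per decade is roughly Poisson with that mean, so the chance
# of seeing at least one in a given decade is:
p_at_least_one = 1 - math.exp(-expected_per_decade)  # ≈ 0.63

print(expected_per_decade, round(p_at_least_one, 2))
```

So even under these assumptions, a decade with zero such shootings would be unremarkable (~37% of decades), which is why a single occurrence carries so little information.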
Fair enough, the conjunctions get really noisy when working with so many fractional factors. I also failed to factor in the recent large rise in transgender identification, partly from an aspirational "don't actually want this to be an exponential" and partly due to the same bias others noted, where they assumed said shooter was biologically male. Shootings (of all degrees) so often involve men that this feels like a similarly unlikely confluence. None spring to mind other than, I guess, the iPhone Backdoor incident.
I don't think pariah status necessarily dovetails with anger at Society which refuses to bend over backwards; that would also predict such malcontentedness decreasing as such accommodation continues apace, which...we don't see, really. But that's getting out of scope. The appropriate small update would be, I think, continued high baserate of firearms means even increasingly unlikely candidates will become shooters, given the opportunity. (Which on reflection seems better than, say, bombings? The devil you know...)
In your lab-leak example, you assumed that in the "rare risk" case, there's a 10% chance of a lab-leak-caused pandemic per decade. That's very high! My guess: If most people thought this was the actual risk, we would no longer have labs.
I'd bet that prior to covid, most people would have assigned a <1% chance per decade to a covid-sized pandemic caused by a lab leak. With this prior, you'd get a much bigger update. That's what the public response, I think, reflects.
Now, you could argue that this prior is stupid, or whatever. But arguing about priors is a difficult thing.
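The asymmetry being described can be sketched with a toy two-world Bayes calculation. The specific rates here are my own illustrative picks, not the commenter's: the likelihood ratio from one observed lab-leak pandemic is identical either way, but the shift in your working probability is far more dramatic when you start near zero:

```python
# Toy two-world model (illustrative rates, mine, not the commenter's):
# in the "risky" world a lab leak causes a pandemic 10% per decade,
# in the "safe" world 0.1% per decade.
P_EVENT = {"risky": 0.10, "safe": 0.001}

def posterior_risky(prior_risky: float) -> float:
    """P(risky world | one lab-leak pandemic observed this decade)."""
    num = prior_risky * P_EVENT["risky"]
    den = num + (1 - prior_risky) * P_EVENT["safe"]
    return num / den

# Same 100x likelihood ratio in both cases, very different felt updates:
print(posterior_risky(0.50))  # ≈ 0.99: from 50% to ~99%
print(posterior_risky(0.01))  # ≈ 0.50: from 1% to ~50%, a dramatic jump
```

In odds terms both updates are the same size; it's the low starting point that makes the second one feel like a revolution in worldview, which is consistent with the public reaction described above.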
This also applies to elections -- if someone you predicted would get 48% of the votes ended up getting 78% (or 18%), that would mean your model of the world is badly broken and you need to fix it, but if they end up getting 51% (or 45%), then that's a completely ordinary occurrence that means no such thing (but only that you'll have to cope with a different president/mayor/whoever than you expected), and other people shouldn't get to feel smug at you about that.
Milei's victory in Argentina is a good deal of the way to your 78%. If I remember correctly, he was projected to lose the election, and ended up with 56% of the votes.
How badly was he predicted to lose? A 6% polling error in a place like Argentina (reasonably developed nation but not as deep and sophisticated a media ecosystem as the US) is probably in the noise, but if it was 16% that's quite significant.
"So if there’s a community of 10,000 people, probably 1,000 of them have sexually harassed someone" -- okay, but in certain communities it does seem to be a lot more common than that -- to the point that people will point out how Keanu Reeves or "Weird Al" Yankovic are not known to have sexually harassed anyone as though that were something particularly noteworthy.
Re: OpenAI, I think it's been sufficiently demonstrated by now that the board was *not* strong - Altman was re-instated and the board was replaced by a new, more 'light touch regulation' friendly board.
I think the lesson from FTX and OpenAI in conjunction is "A weak board is better than no board, but a weak board will be beaten by a strong board/whoever is holding the purse strings, if those are not one and the same". FTX *badly* needed governance and an organisation chart; the book by Michael Lewis, even where he is sympathetic in parts, had several bits where I wanted to slap the face off SBF, e.g. his resistance to appointing a Chief Financial Officer because (according to Lewis) he felt insulted that people thought he didn't know where the money was going. Well, turns out he *didn't* know.
EDIT: If you believe Lewis. If you don't, he didn't want a CFO appointed because that would have revealed all the money he and his family were siphoning off for their own little projects and personal use.
A few thoughts. First, I agree completely about all of your positions on regular events that have a known probability and distribution. I think we update far too much on new (or newly reported) individual events that are part of a pattern that we could have known about for years.
For extremely rare events, I don't agree. There are two kinds of knowledge in a society: things that people can or should know, and things that people actually do know. I'm seeing this more and more as my children get older; I think about how I know something and realize that whatever mechanism taught me, my kids haven't had the same thing happen. They could know the same things, but they're busy learning the things they actually do know - updating not on things that may have been important in the past (and may be again in the future) but on things that seem important to them now. My kids don't need to learn how dial-up internet works, because they're not going to use it. Maybe they should learn about terrorism, but maybe not. The answer depends on information we don't have (future major terrorism), not on prior events.
Prior to 9/11, most people put the chance of a major terrorist attack on US soil at approximately 0.0%. If you asked them directly, they may not have said absolutely no chance, but in practice we lived our lives that way. I'm sure there were people in the US who had lived in countries with major terrorism who were not surprised, but the vast majority of people in the US clearly were. And this is not surprising - there had been no major terror attacks in the US before then, either ever or at least in living memory (depending on how you define "major" and "terrorist").
Similarly, the chance of a nuclear detonation in a major city is not 0%, but most people live their lives as if it is. And this is rational. There is some amount of resources devoted to ensuring that no nuclear weapons are detonated by terrorist groups, and this amount of resources is apparently sufficient to achieve this purpose. There's no distribution of nuclear terrorist attacks because none have happened. This doesn't feel like a fluke because we try very hard to make sure this doesn't happen. As long as our collective efforts are sufficient, there will be no nuclear terrorist attacks. If a nuclear terrorist attack happens, we *should* heavily update. It would mean that our efforts are not or were not sufficient.
Bayesian updates are naturally very difficult in situations with insignificant numbers of data points (particularly zero data points). Most people live their lives categorizing events into two pools - things that they should worry about and things that they should not. They don't have enough data to be more specific, even if they were good Bayesians. As bad Bayesians, they haven't even tried. So when a lab leak happens, they should *heavily* update, because they aren't updating from 20% to 27.5%, but from an approximation of 0% (don't think about it, not worried about it) to a non-negligible percent (think about it, worried about it).
It's how we hold politicians to task. We don't want or need to know every negotiation and consideration. We just want to know that they're taking things seriously enough. That Fauci hired a US company to perform gain-of-function research in Wuhan, China, and then had that same US company's CEO weigh in on the chances of a lab leak is all relevant information. Not in a "this is a thing that provably happened" sense, but in a "this may be a thing where politicians were insufficiently protecting my interests and I need assurance this will not happen again" sense. People, with limited bandwidth to follow all things happening, want to know if they should continue being worried about something or to know that it's been fixed. The biggest thing George Bush did after 9/11 was make a big show about how he and the rest of the government were taking this seriously and working to ensure that it never happened again. A lot of the effort and money spent was inefficient or even pure waste, in terms of preventing future terrorism. But it showed that they took it seriously, and that people could go back to their lives without worrying about continued attacks. (It helps that we're aware of increased terrorism investigations and of things like locked cockpit doors on planes, which will probably fully prevent any repeat attacks even if the TSA turns out to be worthless.) But again, most people aren't even trying to do Bayesian reasoning on terrorist attacks; they're asking "should I worry about this?" The government has clearly answered with a "no, you should not worry about this," and it worked. So long as that proxy for safety exists, people aren't sitting at X% chance of major terrorist attack, but 0%. If another attack happens, it tells people that the government was wrong, that they should worry about it, and this is and should be a major update.
In general, I liked this post. But is it just me, or is mentioning "the stupid people" so many times sort of off-putting? I won't deny that such people exist, but I think it gives the blog a sort of kicking-down/us-against-them attitude that I had previously not found here, and whose absence is one of the reasons I appreciate the blog.
Makes it harder to share with people who are not in the rationalist-sphere too.
I strongly dislike the “stupid people” thing as well. Not just because I don’t think the people Scott refers to are irrational, but also because one of the reasons why I came to like SSC (beyond the great writing and the interesting topics) was that calling other people names was frowned upon. This seems less like “The whole town is center” and more like “I can tolerate anything except the outgroup”.
I really like this post - it harkens back to the glory days of SSC. Sadly I cannot adjust my prediction priors, but a couple more great posts and I could.
I think it’s important to note that this doesn’t mean that the probabilities are static - they do change over time, *and they can be altered by deliberate action*.
For example, nuclear terrorism is very rare (literally never happened yet), but it would be wrong to conclude “we should stop spending so much effort on nuclear security, because look how rare it is”. Because of course, part of the rarity is precisely because we spend a lot of effort making it hard for terrorists to obtain a functional nuke.
On the other hand, if you’re someone who wants nuclear terrorism to be less rare (ie a terrorist), you wouldn’t say “gee successful nuclear attacks are vanishingly rare, may as well give up”, you’d say “maybe I can make it more likely by getting lots of double agent terrorists into key roles at nuclear facilities”.
I guess this is in some sense your “stupid people who think that it’s sci fi just because it never happened before”, but I think it’s more subtle than that - I’m talking about actually altering the likelihood, not just our estimate of it.
And if you're someone who wants nuclear terrorism to be less rare, and you heard that there *was* a successful nuking, or that a plant blew up and some terrorist group claimed credit and no one knew if they were just bullshitting, you wouldn't shrug it off as "well my Bayesian probability already predicted that." You'd want to know more. Did they do it? How? Did they try to get double agents into key roles? Did they succeed in getting double agents into key roles, and if so, were those double agents particularly useful?
But isn't the issue with SBF that this wasn't some sort of major screw up? His actions were the logical outcome of his fundamentalist EA position. It's how he justified his behavior. In that case, we're outside the realm of probability analysis, and in a realm of discussing how These ideas led to This outcome. Arguably, 9/11 is the same thing: it was the logical outcome of US foreign policy. So you'd want to alter that foreign policy in ways that create a different outcome, not "change nothing" because a probability distribution convinces you it would have happened anyway.
- the reaction to the lab leak is something that causes me to update. Lab leaks are a lot more common than we think they are, because whether you actually hear about them is extremely political.
- the criteria for Mother Jones to include an incident as a "mass shooting" are extremely biased. For instance, anything considered "gang-related" is not included, even though a bunch of people still get shot. What this means is that your priors are wrong, and your updates will be wrong (since you're updating based on MJ and the news), so Bayes won't help you there.
- the post-9/11 security updates have mostly been useless, I agree. Locking the cockpit doors is likely more useful than adding another layer of bureaucracy to the security gates, and was a lot cheaper. I do notice that we don't have airplane hijackings anymore, but that was true before 9/11. I think that has more to do with violent movements not getting as much funding for a while.
I think there's a big issue here that you haven't addressed, Scott. Based on my own knowledge and experience, I think that the chance of dangerous AI takeoff is next to zero, much too small to worry about. But, I notice that many people who claim to have studied this closely disagree with me. Should I update my beliefs or not? There are plenty of groups that worry about silly things, e.g. the dangers of vaccines causing autism. So, if we're just going by background rates, the update should be tiny. However, the people worried about AI takeoff also claim to have expertise in evaluating risks, so maybe their claims are worth taking more seriously. The problem with that is that a big risk in the form of SBF was right in their midst and they didn't seem capable of correctly evaluating it at all. Is there some other track record of correct risk evaluation that outweighs this, or are we just back to the background rates of treating them like any other group with weird hobbyhorse beliefs?
I’m not sure if I interpret Scott’s argument correctly, but at least the way I read it, I disagree with most of it.
He seems to suggest that people update a lot from dramatic events, and that it’s a bad thing. While I agree it would be a bad thing, I think that hardly anyone does that. They might *learn* from single dramatic events, but that’s a very different thing, both epistemologically and practically.
I feel that the post fails to differentiate (at least, explicitly) between three types of people who might, say, call for a ban on certain types of biological research if COVID turns out to have a lab origin:
- People who thought a lab-caused pandemic was likely/inevitable, and now see themselves vindicated
- People who thought a lab-caused pandemic was highly unlikely/impossible, but now think it's probable and dangerous
- People who never thought about lab-caused pandemics and now think those are dangerous
The first group is not the target of the post’s criticism; I assume we can ignore them. Let’s look at the second one, which seems to me to be the main target.
I haven't followed the debates, but, based on other topics, I'd venture a guess that only a tiny minority of people who previously thought doing pathogen research was OK will significantly change their opinion; and most people who call for a ban will be from the first and third groups.
Scott writes that he didn’t update after a recent mass shooting by a far-left transgender person. But I bet hardly anyone did! I assume (although I don’t have data) that most of those who thought transgender people were evil treated this in exactly the way Scott described: “we already had 100 cases of transgender people being bad, here’s the 101st one.” And I assume the opposite side also reacted exactly like Scott said one should: “we know people do terrible things sometimes, one instance of a person doing a terrible thing doesn’t change anything.” Perhaps I just misunderstand Scott’s claim that “people fail to consider events that have happened hundreds of times, treating each new instance as if it demands a massive update”, which precisely uses mass shootings as the example; I am under the impression that people hardly update at all, and tend to demand an update from their opponents on the basis of the cumulative evidence, not just the latest instance.
I even think that people often make this argument relatively explicitly. To take examples from Europe, which I’m more familiar with, most right-wingers who demanded action after the 2005 French riots (https://en.wikipedia.org/wiki/2005_French_riots) or the 2015–16 New Year's Eve sexual assaults in Germany (https://en.wikipedia.org/wiki/2015%E2%80%9316_New_Year%27s_Eve_sexual_assaults_in_Germany) said something like “We’ve been warning you about this; the Muslims were breaking laws in small ways all the time, and it was just a matter of time until something big happened.”
So, the point of all these examples is that people generally don’t *update* a lot on dramatic events. If they had prior expectations, they tend to update them only a little or not at all (or to update them in the opposite direction, because the other side is so insulting and loud and arrogant). Coming back to the lab leak issue, if we asked the people who argue about this whether any result would lead them to a major update, I assume they’d say no (it would be interesting to see data on this, of course).
What remains is the third group – people who didn’t really have strong priors because they’ve never given much thought to whether gain-of-function research might be dangerous. I’d say it’s most of the population, and I’d say that their ignorance is not stupid; it’s rational. It’s one of hundreds of difficult topics with complex arguments, complex implications, and experts on both sides of the divide. Why should the average person spend weeks of their time researching the matter, given that they don’t have any way of influencing the developments anyway? Scott does it; but then, I assume Scott enjoys the intellectual challenge, and Scott surely has many thousand times the influence of an average American.
So I deny the “A good Bayesian should start out believing there’s some medium chance of a lab leak pandemic per decade” part – I think you can be a good Bayesian and not start out with any beliefs at all.
I think this group, or at least significant parts of it, *does* learn from dramatic events, for the simple reason (that has been mentioned in the comments like https://www.astralcodexten.com/p/against-learning-from-dramatic-events/comment/47462922 already) that they didn’t really think about the problem before, but it has been thrust into their lives now by the media. And I assume that they do tend to assign a high probability to such events happening again – presumably one that is too high.
If so, this might be systematically wrong, but not as intellectually indefensible as one might think. Of course, the epistemically optimal way to deal with this would be to dive into the biosecurity literature and debates and form an educated opinion based on long-term data and experts' opinions – but, as mentioned above, I think that would be irrational for most people. The rational thing would be just to take what they know and make the best guess based on that. I’m not well-versed in epistemology, but as far as I understand, there are no clear rules on how to assign probabilities based on a single observation. If you think about it, it might make sense that the examples you get to read about are particularly egregious, and probably not representative – but that’s already more effort than most people do. If that’s what Scott criticized, I agree with that point, but it seems to me it wasn’t the main thrust of his argument.
So I don’t think most people *update* too much on dramatic events – but people who didn’t have an opinion probably often overreact, and that’s what might change the public opinion or a society’s attitudes and rules.
I think "a lab leak should only increase your beliefs from 20% to 27%" is downstream of your assumption that even in the common-leak world, leaks only happen 33% of decades!
An alternative toy example: the Common/Rare Lab Leaks worlds have lab leaks occurring in 90%/10% of decades, and my prior is 10%/90% on them, so before covid I expect an 18% chance of a lab leak per decade. One lab leak will update my beliefs to 50/50 between the two world states, which also puts my expected chance of a lab leak per decade at 50%. So an 18% -> 50% is also possible!
An even more extreme example: I only model the world as "Lab Leaks are Impossible/Happen Every Decade", at 80%/20%, so my prior is they should happen 20% of decades, but once I see one I update to 100% chance every decade!
A final example where the probabilities are not discrete: if your prior is "the chance we live in a world where lab leaks happen in a fraction p of decades is proportional to (1-p)^3", an update takes you from 20% to 33%. This isn't as big a jump as the previous examples, but it's still almost doubling your expected harm from a lab leak!
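These toy updates are easy to check numerically. Here's a minimal sketch (mine, not from any of the comments; the world-rates and priors are exactly the ones given in the examples above):

```python
def posterior(rates, priors):
    """Posterior over candidate worlds after observing one decade with a lab leak."""
    joint = [r * p for r, p in zip(rates, priors)]
    total = sum(joint)
    return [j / total for j in joint]

def expected_rate(rates, priors):
    """Expected per-decade leak probability under a distribution over worlds."""
    return sum(r * p for r, p in zip(rates, priors))

# Common/Rare worlds: leaks in 90% / 10% of decades, prior 10% / 90%.
rates, prior = [0.9, 0.1], [0.1, 0.9]
before = expected_rate(rates, prior)                     # ~0.18
after = expected_rate(rates, posterior(rates, prior))    # ~0.50 (worlds now 50/50)

# Impossible / every-decade worlds at 80% / 20%: one leak forces 100%.
rates2, prior2 = [0.0, 1.0], [0.8, 0.2]
after2 = expected_rate(rates2, posterior(rates2, prior2))  # 1.0

# Continuous prior: density proportional to (1 - p)**3 on [0, 1],
# estimated by a midpoint-rule sum.
n = 200_000
ps = [(i + 0.5) / n for i in range(n)]
w = [(1 - p) ** 3 for p in ps]             # unnormalized prior weight
prior_mean = sum(p * d for p, d in zip(ps, w)) / sum(w)           # -> ~0.20
post_w = [p * d for p, d in zip(ps, w)]    # multiply by likelihood p of one leak
post_mean = sum(p * d for p, d in zip(ps, post_w)) / sum(post_w)  # -> ~0.333
```

The last computation reproduces the 20% to 33% jump analytically as well: a (1-p)^3 prior is a Beta(1, 4), whose mean is 1/5; one observed leak makes it a Beta(2, 4), whose mean is 1/3.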
I think you're wrong about the SBF/OpenAI thing. The board there did the right thing initially and only the backlash was incorrect. If anything, what we should learn about the Altman fiasco is that no EA can be trusted who is named Sam.
This was nice, thank you. I guess I must be one of the stupid people, because I mostly don't think about any of this stuff happening (or not happening) beforehand. And so when it does, there's a knee-jerk response, and then maybe some time for reflection. And to be honest I just don't want to spend a lot of time thinking about terrorists getting nukes. (Or any of the other shitty things that could happen.) I figure there is maybe someone in government, or a think tank, or on substack doing my worrying for me, and I thank them for it.
As an aside, I think the most important thing to do now is more nuclear fission (so less regulation, or whatever it takes). And yet this leads right into a term in 'chances of terrorists getting nukes'. And that is a risk I think we must take... more nukes leads to more nuclear 'accidents'.
I think that's precisely the trouble with Scott's argument (and wording). Sure, the "knee-jerk" part might get handled better, but the rest is not just understandable but, IMO, sensible.
There's also an important factor: having an updating strategy that's difficult for others to manipulate. As long as no terrorist has detonated a nuclear weapon, there's no good way to check whether a security agency that wants money to prevent a detonation is presenting you with a correct cost/benefit analysis. Indeed, I think that factor is important relative to a lot of arguments that circulate among highly-educated tech communities: one of the most intense evolutionary needs is to be difficult to deceive by the many, many actors you are interacting with that have an interest in deceiving you. Do not optimize your analytical tools for being *correct* relative to "the environment", optimize them for being *unexploitable* by your cohorts.
Can I assume that "there will be a nuclear terrorism attack once every 100 years" actually means "if I run 100 simulations of the year 2024, one of them will contain a nuclear terrorism attack"? Because obviously the actual passage of time and evolution of tech and society will change so many things that may render such an attack impossible or irrelevant.
In 1970 it would've seemed roughly correct that there'd be a 60 HR season in baseball about every 25 years, but if you simulated the 1970 season 100 times I doubt you'd see 4 (or even 1) 60 HR seasons; the actual leader, Johnny Bench, only hit 45, despite the rare outlier feat of having played nearly every day as a catcher. But a quarter century later, during the Steroid Era, you'd have to simulate 100 seasons to find one that did NOT have a 60 HR hitter, because the game changed.
Doing things to change the frequency of the event therefore seems a LOT more impactful. The danger we want to avoid is not really the 1/100 chance of such an attack in 2024, or the cumulative chance over a decade, the danger is that you could wind up with the equivalent of a Steroid Era of Nuclear Terrorism that your priors treated as nearly impossible but in fact was nearly certain to occur because the tech reached the threshold that made it so.
As others have said, it's not so much the fact that COVID might have been a lab leak, but rather the fact that all the respectable government and media outlets went all-in to smear the lab-leak idea as an ignorant conspiracy theory when we now know that it wasn't. *That* seems like a thing you might reasonably learn from, unless your priors were already weighted quite heavily towards the "respectable government and media outlets are fundamentally untrustworthy" end of the spectrum.
<i>A few months ago, there was a mass shooting by a far-left transgender person who apparently had a grudge against a Christian school. The Right made a big deal about how this proves the Left is violent. I don’t begrudge them this; the Left does the same every time a right-winger does something like this. But I didn’t update at all. It was always obvious that far-left transgender violence was possible (just as far-right anti-transgender violence is possible). My distribution included a term for something like this probably happening once every few years. When it happened, I just thought “Yeah, that more or less matches my distribution” and ignored it.</i>
Similarly, it's not the shooting itself, so much as the reaction to it. The media refused to publish the shooter's manifesto and instead rushed to lecture us all on the dangers of transphobia; Joe Biden made a speech calling transgender children "the soul of our nation". So whilst the shooting probably shouldn't change your priors very much on the likelihood of left-wing transgender violence, it probably should change your priors on the likelihood of the left in general condoning violence when it comes from someone they view as sympathetic -- again, unless your priors are already weighted in that direction.
I agree with the basic sentiment in the article, and I guess I'm jaded enough about the world that I don't have the feeling of updating much when bad stuff gets reported. But as people have hinted in different ways in their responses, there is a big difference between society-wide expectations about the world, and individual ones.
A functioning society makes sure to have people engaged in estimating all kinds of dangers, and figuring out how much it makes sense to prepare for them and/or prevent them. So there is a conversation going on, and institutions get created and specialists do their research, and the occasional interested member of the general public may join in.
Whereas on an individual level, for things like pandemics or major terrorist attacks, where you individually have no role to play either way, it's perfectly rational to just not have or seek a precise prior. You basically know the thing exists, but you don't need to worry about how likely it is because there isn't anything you would do with more accurate information anyway. So you just think, if it happens in my lifetime, I'll notice it right there and then.
Motorcycles are fun, because the safe thing to do is often the opposite of what instinct tells you. If you are too fast in a sharp turn, your brain screams at you to slow down and round off the turn, but the safe thing is to lean in to the turn and roll on the throttle, because it forces the tires into the road and increases traction. Your survival instinct actually increases your chance of dying.
The Wuhan lab thought they were lowering the risk of a pandemic by tracking down potential pandemic pathogens. They were trying to be proactive (hugely overrated), but in the process caused the thing they were trying to prevent.
The operators of Chernobyl were trying to lower the risk of a meltdown, but in the process of running a safety test caused the thing that they were trying to prevent.
Complex systems increase the possibility that higher order effects of precautionary measures will cause more damage than doing nothing. This basically neuters the precautionary principle. People seem to understand this intuitively when it comes to the environment, which is why there is such a strong hesitancy around geoengineering, or efforts to wipe out malarial mosquitos. When it comes to society or the economy we're much more interventionist.
That's what Hayek meant when he said "the curious task of economics is to teach men how little they understand about what they think they can control".
Motorcycles are stupid, because often there is no "if" you'll lay the bike down, it's "when". And 80 percent of accidents end in injury or death. The safe thing is not to ride them, because even the safest cyclist can't always control others, and the penalty for one mistake can be ruinous.
My cousin suffered brain damage from one accident. If anything, people understate how dangerous that kind of complex system can be.
"Forces the tires into the road and increases traction", LOL. Do you actually think this is true? I do ride, although I have much more experience with cars on the track (as a driver and a tuner). The only thing forcing the tires into the road is gravity. On or off throttle definitely changes weight balance, although, if you want a tighter turn, I would expect you would want more weight on the front (meaning deceleration). It could change the geometry (definitely fork extension), but again, this is front vs. rear weight balance. I guess the biggest improvement from not dropping throttle is improved stability (again, through weight balance).
No motorbike produces downforce. In order to produce downforce, you need a wing, and it needs to be (more or less) parallel with the ground. Since motorcycles corner at varying and dramatic lean angles, you would need a wing that moved in opposition to the lean of the motorcycle. You could try to do it with 2 fixed wings, but then you would have no downforce except at a very specific lean angle. This would likely make the bike impossible to ride, since leaning into and out of turns would cause the grip level to rise and plummet abruptly. Even in cars, only the most extreme wings produce actual (net) downforce. The fairly large wings you see on some Porsches, for example, merely cancel out lift (at the rear) to prevent the car becoming dangerously unstable at higher speeds. In other words, they prevent traction reduction at speed rather than increase traction at speed.
[edit: You wouldn't want a car to have a rear wing that increased traction at speed unless you also had a front wing that increased traction. Otherwise, you would find that as you increased speed, the car would gradually lose its ability to turn (since turning ability is defined by the ratio of front to rear grip).]
If no motorbike produces downforce, then why do I get a lot of articles and movies discussing winglets and downforce on motorbikes by simply googling "downforce motorbike"?
Most likely, it is because people will pay for it despite the fact that they do nothing. The ones that I see are over the front wheel. If it was over the rear wheel, you could argue that perhaps in a straight line it improves stability and braking. You definitely don't need more front grip in a straight line. Under braking, every bike already has enough straight line front-end grip to "endo."
An example of downforce calculations for a 4-square-foot (576 sq in) wing (four feet long and one foot wide): the article suggests 37# of downforce at 60MPH and 66# at 80MPH. Considering that bike wings are about 1/100th that area, even if one were positioned at the right angle to help you at full lean, going from 60MPH to 80MPH would gain you only about 0.29# of downforce. For a 500# bike, that is about 0.06%. However, your cornering speed has now increased by 33%.
The effect you are talking about is not speed related, it is acceleration related. Immediately upon increasing throttle input, the bike feels more settled. This is weight transfer to the rear and/or geometry changes. You weren't (presumably) trying to say that 30 seconds later, when you reached 80MPH, you had 0.06% more grip.
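For what it's worth, the scaling arithmetic in this exchange is easy to redo explicitly. A quick sketch (the 37#/66# figures are the cited article's numbers as quoted above, and the 1/100 area ratio is the comment's assumption):

```python
# Downforce scaling check, using the figures quoted in the comment above.
df_60, df_80 = 37.0, 66.0        # lb of downforce for the 4 sq ft wing at 60/80 mph
area_ratio = 1 / 100             # assumed bike-winglet area relative to that wing
bike_weight = 500.0              # lb

# Extra downforce a winglet would gain going from 60 to 80 mph,
# assuming downforce scales linearly with wing area.
gain_winglet = (df_80 - df_60) * area_ratio    # ~0.29 lb
pct_of_weight = 100 * gain_winglet / bike_weight  # ~0.06% of bike weight
speed_increase = 100 * (80 / 60 - 1)           # ~33% faster
```

In other words, the downforce gain is a rounding error compared to the grip demanded by the higher cornering speed, which is the comment's point.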
>A few months ago, there was a mass shooting by a far-left transgender person who apparently had a grudge against a Christian school. The Right made a big deal about how this proves the Left is violent. I don’t begrudge them this; the Left does the same every time a right-winger does something like this. But I didn’t update at all.
Bear in mind that the media and the Twitterati aren't neutral parties. If a right-winger commits a mass shooting, it gets trumpeted far and wide as an example of evil right-wingers. If a far-left transgender person does so, it gets buried, or reported in terms that omit the perpetrator's affiliation.
Given how the media buries misdeeds done by allies, having one such misdeed that's so bad that you managed to hear about it anyway should update your beliefs towards evil leftists a lot more than the corresponding report on the other political side.
That was a very interesting and thought-provoking read. However, I’m skeptical of the OpenAI/SBF comparison. Maybe it’s hindsight (or the outside view, since this blog is basically my only window into EA), but the two affairs seem deeply different – so there’s no reason why the same lessons should apply.
In the OpenAI thing, the board (the “EA side”, I guess?) made the first public move against Altman. He was in an extremely strong position: the CEO of a company whose ground-breaking products had become household names in less than a year. The board firing him, even though it was their right, should have been backed up by comparably strong evidence, which the board never provided. Not that they had to address the public, but not even explaining themselves to other stakeholders (I’d say investors, Microsoft, with which they had a partnership, and enough key employees?) destroyed their credibility. Of course, hindsight is 20/20, but this reads like Rational Debate 101: “if someone’s extraordinary claim is not backed by extraordinary evidence, they’re the unreasonable ones”.
For SBF, it’s very different, in that the EA movement was “reactive”, and only tarred by association. There weren’t any actions as resounding as the one above to justify (or was there?), so I think it would have been safest to weather the Week of Hatred (of EA people) and be mostly silent until they could do some good again. I also doubt that SBF’s position within the movement was as strong (both de jure and in perception) as Altman’s at OpenAI.
Of course, it can be that in both cases, the EA movement (or “side”) didn’t have a rhetorical position they could really defend, and got outplayed by people who were better at power or status games. In which case, the (slightly snarky) lesson would be “welcome to the real world, congrats for making it, now get good”.
Regarding the “deontology vs 5D utilitarian chess” lesson, isn’t this taking it to the extreme? No one (afaik) can play 5D utilitarian chess with any confidence. But many non-EA people can ably play regular 2D chess with a mixture of consequentialism and deontology – so maybe the conclusion is that you should stay at a level where you have strong evidence that you’re not out of your depth?
By the way, I’m sure you must have thought about it (perhaps even blogged about it), but you wrote a while ago that a pandemic risk as high as one percent should have been enough to drive all states to seriously prepare for this scenario. This was, of course, completely correct.
But what would have been a reasonable prior on SBF doing something shady enough to tar EAs by association? Given that he was operating some crypto “stuff” in Bahamas without any corporate control or serious oversight, and that he became a billionaire extremely fast, shouldn’t this baseline probability have been higher than one percent? And given how bad it would be if SBF was a fraud, even if your assessment was low, how come this “tail risk” wasn’t taken into account and prepared against, since it’s so central to the movement’s stated philosophy?
It's fine to update priors and all, but the main reason we want to know whether there was a lab leak is so we can make sure the people responsible are never allowed near a lab or a funding spigot again, and also so the people who lied about it are never in any trusted position ever again.
In particular, Lancet should reject any submission with Peter Daszak as one of the authors, since he was one of the authors of the March 2020 article attacking lab leak theories, it included the claim that the authors had no unrevealed conflicts of interest, and it did not mention that Daszak was the president of an organization that had funded the WIV.
I'm going to try to put your position and my position into a quality control analogy.
You have a part that is supplied to you by several vendors (A, B, C) and you do quality control.
You set your acceptable error rate of failed parts at X, based on a dimension Y, and one day you see that the parts from supplier B came with more failed parts than your acceptable X.
Given this factual observation, there are three possible alternatives:
- Assume that all suppliers are behaving well, treat B's failure as an expected outcome, and simply inform them that you are going to tighten your quality requirement on dimension Y so that you observe fewer failures above rate X.
- Assume that supplier B is misbehaving and take the necessary measures specifically affecting that supplier.
- Or a bit of both
You choose the first option. In my opinion, you do this for ideological reasons, since you are ideologically motivated not to blame China because that would mean agreeing with horrible people like Trump.
This is a classic example of what Taleb calls Mediocristan reasoning: statistics based on a wrongly assumed normal distribution. I ask you to consider that this is not a normal situation where supplier B sent you a part out of spec, so you throw it away, ask for another batch, and wait for the normal distribution to operate, expecting to see everything according to the distribution again.
In this case, your supplier B sent you A HIGHLY CONTAGIOUS VIRUS. It's important to know if supplier B is a complete idiot, a son of a bitch, or both. It's important, and action must be taken accordingly.
You make sense. I have long thought along similar lines when it came to conservatives/libertarians who decided they were less pro free market after the Great Recession. (I am thinking specifically of Richard Posner and Razib Khan.) Don't you guys read economic history? Why didn't the Great Depression make you less pro free market? It was much worse than the Great Recession, and the American economy was way more laissez-faire in the '20s than the '00s.
Please consider replacing the term "stupid people." I'd bet non-statistically literate people make up >95% of the population. Many of these people are brilliant in ways we aren't. I'm not the language police, but outright dismissal of folks who think differently from us isn't likely helping us build the community or future we need. It reads as low-confidence in-group signaling and risks the bottom 5% of your audience (the stupid people?) latching onto this kind of language and thinking. This only stood out to me because of how different it seemed from your typical style.
Love your work. Wishing you all the best in fatherhood :)
Is there evidence that the kinds of stupid people Scott is arguing against actually exist? (I know I know, no limits to stupidity, etc.) Maybe everyone is just engaging in the same sort of coordination game around visible events, however consciously?
E.g. in the Harvey Weinstein case, did most people really update how likely they thought it were to get sexually abused as a young actress in Hollywood? Or did they just think, "This is a visible and upsetting event, and everyone I know is upset about this! We can finally do something about it!"
This relies on everyone having a really good model for everything that could happen, despite there being a million issues that *someone* is saying should be my #1 priority right now.
For example, it seems like everyone's been worried for decades about bee populations. I probably first read an article about this 20 years ago, but I can't quite recall what the upshot of this "crisis" is. For now, I think it's reasonable to not make "bee policy" a major part of my voting or charitable decisions.
If tomorrow, some sort of catastrophe happens due to a lack of bees, that signals to me that I should reorient in a big way on the bee thing. I don't have to be "stupid" to wait for the catastrophe, and then update on it, I just have to have non-infinite capacity for research.
But yes, once I point my attention there, I shouldn't then assume bee stuff is our biggest problem just because it was our biggest problem yesterday.
My major takeaway for the media from this post is to be assiduous about putting events into context especially in terms of frequency and magnitude. Of course in the heat of the moment putting things into context is considered to be somehow betraying the victims.
> So this hypothetical Bayesian, if they learned that COVID was a lab leak, should have 27.5% probability of another lab leak pandemic next decade. But if they learn COVID wasn’t a lab leak, they should have 20% probability.
You talk about lab leak probabilities as if they're handed down from on high, and our job is to figure out the correct number. That is not how this works. The possibility of a Wuhan lab leak is important exactly because of the official response to it. And that response is important exactly because it *causes* future lab leaks to be more likely.
Safety procedures developed via the "some guy sits in an office and brainstorms what procedures a lab should follow and writes up rules" have an extremely poor track record. If we used that method, then the whole system would fall over as soon as a lab encountered real-world considerations like "Hey, what if we're moving to another building and need to pack all our viruses into a van - how do we handle that?" Therefore, that isn't the method we use. Instead, safety is maintained via constantly, constantly updating.
A healthy lab is constantly seeking out times when things didn't go as planned. At a basic level, reports are solicited from individual team members, often monthly or quarterly, and filtered upward. The more serious happenings get a full report written up; different labs call these different things, usually some bland variant on "Incident of Concern". The lessons learned are then socialized throughout the team, with everyone hearing the short version and the people who think they might face similar situations able to read up on the details. Everyone intuitively grasps that the top-down regulations are often only loosely connected to reality, and that it's important to always have information flowing up and out about how the regulations meet the real world.
I cannot stress enough how central this process is to safety in the real world. The top-down regulations are nothing; the feedback process is everything.
An unhealthy lab treats the top-down regulations as perfect and does not seek out information. Any deviation from how the "guy in the office" expected the lab to work results in a punishment for whoever was foolish enough to mention it. Consequently, the gap between what the head-office rules say and how the lab actually works continually grows.
In a *pathological* lab, the entire leadership chain is complicit in fraud from top to bottom.
People who talk a lot about ethics in governance talk about "tone at the top". People at the bottom obviously do not know exactly what the people at the top are doing, but each concentric circle interacts with the circle one step closer. And each gets a sense of what they are expected to do to "play ball" (or, if they fail to learn this, their careers go nowhere).
Binance's Chief Compliance Officer wrote internally in December 2018, "we are operating as a fking unlicensed securities exchange in the USA bro." Assuming that your brain is normal, you do not directly care at all about whether your financial institution is following all the Byzantine U.S. financial regulations, such as the one where they have to file a suspicious-activity report on you if you mention that you know they have a regulation that requires them to file suspicious-activity reports on people who know about the regulation that requires them to file suspicious-activity reports. Nevertheless, I claim that you ought to be concerned about the gap between what Binance's leadership was claiming in public and what they knew in private. Because of tone at the top. As long as Binance's leadership is actively engaged in cover-ups, anyone trying to have a career in Binance will have to learn to cover up any inconvenient facts that their bosses might not like. They will have to learn to do this automatically and implicitly. When everyone in an organization defaults to cover-up mode, this has consequences far beyond the question of complying with the voluminous U.S. regulatory regime.
I am in favor of gain-of-function research. (Just as you do, I use gain-of-function research as a synecdoche for any risky research, such as collecting viruses from the wild and bringing them into labs filled with human researchers. Anything that amounts to proactively seeking out the kinds of things that cause pandemics instead of waiting for them to come to us.) We got lucky this time, in that the pandemic was basically just a standard flu, and standard countermeasures worked fine, so gain-of-function research didn't end up being relevant. (The fancy sci-fi mRNA technology was only needed to overcome the challenge of mass-manufacturing a huge number of vaccines quickly.) I have no confidence we will continue to be so lucky forever.
The next pandemic will come. It is inevitable. The only question is how prepared we'll be when it does.
Gain-of-function research makes us more prepared. But it also (probabilistically) makes the next pandemic happen sooner. That latter makes us *less* prepared. (Not only because we'll have completed less gain-of-function research itself, but also because we'll have less development in other areas, like mRNA manufacturing.) Therefore the safety practices of labs are incredibly important.
That's what I think. I am not going to try to make an argument here in favor of gain-of-function research, tgof137-style. Maybe you don't agree with me. Maybe you want to shut it all down. Maybe you want to murder me in my sleep to stop me polluting the system with my vote. That's irrelevant. The fact is that my side won. (Even if a few programs, like DEEP VZN, become casualties of political spats, much like Donald Trump killing individual Ford Mexico plants Batman-style.) (Yes, granted no war is ever over, but HR5894 has poor prospects in the Senate, and in any case gain-of-function per se is just a synecdoche, not the whole.) We won *seven years ago*, gain-of-function research is being done even as we speak, and therefore YOU SHOULD CARE A WHOLE HECKUVA LOT ABOUT THE ETHICAL PRACTICES OF THE ORGANIZATIONS DOING IT.
What are those ethical practices? Well. In February 2020, the Director of NIAID wrote internally that "we are doing fking gain-of-function research in the Wuhan lab bro." Okay, no, he didn't have quite the way with words that Binance's CCO had, what he wrote was "the scientists in Wuhan University are known to have been working on gain-of-function experiments to determine the molecular mechanisms associated with bat viruses adapting to human infection, and the outbreak originated in Wuhan." Same content. Unlike operating as a fking unlicensed securities exchange in the USA, doing gain-of-function research was not per se illegal. (Because, again, my side won that fight seven years ago.) They could have chosen to tell the truth about what they'd been doing. But it was...suddenly politically inconvenient. And so they lied.
Meanwhile, NIAID was leaning on the scientific establishment to make, and fast-track for publication, false statements.
On January 31, Kristian Andersen, a prominent virologist with Scripps Research, wrote internally that "Some of the features (potentially) look engineered." Separately, Andersen wrote internally that "the lab escape version of this is just so friggin' likely to have happened because they were already doing this type of work and the molecular data is fully consistent with that scenario."
On February 1, the Director of NIAID corralled leading scientists to discuss the question on a conference call. We will never know exactly what was said on that call, but we do know that suddenly they started producing public statements inconsistent with their private statements. Andersen, who privately believed what he called the "lab escape" theory was more likely than not, began dismissing it as "crackpot".
We do know that the Director of NIAID directly brought together Andersen and others to produce what would become "The Proximal Origin of SARS-CoV-2", which appeared in Nature Medicine 17 March 2020. We now know that most of the authors privately believed that a lab leak was at least plausible. However, during the drafting of the paper, the leadership of NIAID and NIH (internally nicknamed the "Bethesda boys" after the NIH and NIAID headquarters in Bethesda) repeatedly pressed the authors to strongly dismiss the possibility.
The authors could have ignored this, but only at their peril. Andersen, for example, had an $8.9 million NIAID grant pending at the time. (The NIAID director signed off on it two months after Andersen's article "Proximal Origin" was published.) The virologists would have to be extremely stupid not to understand that they were expected to deceive and spin the public in order to earn NIAID money. And so that is what they did. The final paper said "we do not believe that any type of laboratory-based scenario is plausible." We now know this to be a lie; they did, in fact, believe a laboratory-based scenario was plausible.
As soon as the paper hit the presses, the Bethesda boys greeted it with feigned pleased surprise, as proof that they were right, while pretending that it had emerged independently from the process of "science" when in truth it was their own press release.
Then they went out, testified before Congress, and lied. Aggressively. In statements they were willing to have their own fingerprints on, they derided the lab-leak possibility as a "conspiracy theory" and "misinformation". (To encapsulate the overall tone of official messaging, on CBS's Face the Nation, the Director of NIAID said "they're really criticizing science, because I represent science. That's dangerous.") That's what they were willing to be quoted on. Meanwhile, using backchannels, they called it racist and demanded that the truth be censored.
Whether any individual directly lied isn't the point. (Even though, again, it is now proven that they did in fact knowingly and directly lie.) What matters is the organizational culture. We would have the same problems even if the leadership team simply intimated what sorts of things their subordinates had better not say in their presence, and carefully siloed knowledge so that no individual person could be proven to have personally directly lied. (Though, again, they absolutely did personally and directly lie, and they got caught, because their practices for *not* getting caught were extraordinarily sloppy, because they weren't all that worried about being caught, because they expected to get away with it if caught, which they did.)
Imagine that you are a low-level "dish-washer" in the Wuhan lab, or any other lab. You see something concerning. Do you say something?
Of course not. But it's more than that. You seize any excuse to destroy the evidence (and in the normal operations of a biolab, there are always plenty of excuses to destroy the evidence). That makes you actively complicit in the cover-up. And so of course you have to destroy any further evidence that might shed light on what you've been up to. A naive person might think that this might open you up to risk of consequences if you are caught. But from within the system, you can easily see that there are no such consequences. Indeed you'd be rewarded, because absolutely everyone, all the way up the chain of leadership, is complicit in the cover-up.
Go back to the discussion of how actual-in-practice lab safety works. This situation is *utterly corrosive* to that process. Given the choice, obviously anyone in the lab would prefer to be actually safe to the extent that that does not conflict with the need to lie. But it does conflict with the need to lie. As long as the situation persists, lab practices get more and more unsafe with each and every year.
The current narrative on the left is that, okay, maybe we can't keep calling it "misinformation" without being laughed out of the room, but it's not like there was some kind of *cover-up*, certainly not one that could easily happen again, it was just a "difference of opinion" or a "disagreement", and shut up shut up shut up.
It's...quite a place to be living.
> Everyone is just going to say we lied to them. We’ll be accused of fraud. That sort of argument just bugged the hell out of Sam. He hated the way inherently probabilistic situations would be interpreted, after the fact, as having been black-and-white, or good and bad, or right and wrong.
> --- from Going Infinite
> Yes, it turns out that if you tell people everything’s fine but you have reason to know it very well might not be fine, often that would constitute fraud. You cannot, in general, simply not mention or account for things that you’d rather not mention or account for.
> --- from Zvi's review thereof
And now Gray Tribe thought leaders pivot to the idea that, look, does any of this really matter, what the hey, can't we all be brothers?
There is some extremely tiny technical sense in which it doesn't matter whether there was a leak from the Wuhan lab. Because in a well-functioning organization, an Incident of Concern is investigated regardless of whether there was a known disaster or not. We are so very far from living in the kind of world where it makes any sense to say that.
Similarly, there is an extremely tiny technical sense in which it doesn't matter whether any given risky bet made by Enron went well or poorly, but if we're still pouring $50 billion into Enron every year, and also the problem is virology safety instead of just money being lost, this is not a situation that we can just shrug off. The situation is not going to get better on its own.
We are currently moving, in a tiny limited area, from the "pathological lab" case to the "unhealthy lab" case. The bosses are currently moving toward throwing the Wuhan lab to the wolves as a sacrifice. Naturally, there is no postmortem, and certainly no looking into how a paper like "The Proximal Origin of SARS-CoV-2" could have happened and how to prevent such a thing from happening again. No John Ray. Enron is still a going concern, ethical practices unchanged. If the leadership of NIAID and NIH decided to do the same thing today, there is absolutely nothing to stop them. If anything, Enron may have been emboldened by learning that no matter what they're caught doing, their political backers won't greet it as a betrayal, but instead swing into action to suppress the truth. (I doubt it. Usually people already have a pretty good idea how their bosses will react, or they don't get very far.)
> So this hypothetical Bayesian, if they learned that COVID was a lab leak, should have 27.5% probability of another lab leak pandemic next decade. But if they learn COVID wasn’t a lab leak, they should have 20% probability.
I don't *care* what the exact correct Bayesian probability would be. The point of investigating is not to get a slightly-more-accurate probability, the point is to make the probability go *down*.
I did the lab leak calculation but am getting different numbers for the observed-leak update. Would someone mind sanity checking? I'm just getting the hang of things here, anyway.
Let A denote the 0.33 decade-rate hypothesis and B the 0.10 decade-rate one. Under our problem context C, observed data D of a single pandemic, and 2 sig figs, the Poisson likelihoods are P(D|A,C) = 0.33e^(-0.33) ≈ 0.24 and P(D|B,C) = 0.10e^(-0.10) ≈ 0.09.
So A:B odds just boils down to straight division of the distributions, i.e. 24:9 = 8:3, or roughly 73:27. This is within rounding error of 74:26, so I'm unsure whether Scott just Spoonerized the 6 and 4 to get 76:24 or I am doing something wrong. That said, the final expected base rate matches up: 0.73×0.33 + 0.27×0.10 = 0.27.
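For anyone who wants to sanity-check the arithmetic, here is a minimal Python sketch of the Poisson version (the 0.33 and 0.10 decade rates and the 50:50 prior are taken from the thread; everything else is mechanical):

```python
import math

# Two hypotheses about the decade-rate of lab-leak pandemics.
lam_a, lam_b = 0.33, 0.10
prior_a = prior_b = 0.5  # equal priors, as assumed in the thread

def poisson_one(lam):
    """Poisson probability of exactly one event in one decade at rate lam."""
    return lam * math.exp(-lam)

la, lb = poisson_one(lam_a), poisson_one(lam_b)  # ~0.24 and ~0.09

post_a = prior_a * la / (prior_a * la + prior_b * lb)
post_b = 1 - post_a

expected_rate = post_a * lam_a + post_b * lam_b
print(round(post_a, 2), round(post_b, 2))  # 0.72 0.28
print(round(expected_rate, 2))             # 0.27
```

The posterior rounds to 72:28, which agrees with the 8:3 ≈ 73:27 odds above to within rounding, and the expected decade rate comes out around 0.27 either way.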
I think your math is formally more correct, but more complicated than what Scott did.
As I read it he merely took it as a binary incident (either lab leak or none), where you have used the full Poisson distribution.
Using the simple model we would have
P(A|D) = P(A)·P(D|A)/P(D) = 0.5×0.33/(0.5×0.33 + 0.5×0.1) ≈ 0.767, so just a rounding error from the given numbers.
However, the fact that you got almost the same result with your more correct model shows that the idea is robust. I tried a model myself with multiple theories, their lab leak chances going from 1% to 50%, and the priors of the theories inversely related to their predicted probabilities. And I also found that having one lab leak did not change the expected probability of future leaks very dramatically.
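The multi-theory experiment can be sketched in a few lines of Python. The 1%-to-50% grid and priors exactly inversely proportional to each theory's predicted probability are my guesses at the setup; the original may have weighted things differently, which changes how dramatic the update looks:

```python
# Hypotheses: per-decade leak probabilities from 1% to 50%.
ps = [k / 100 for k in range(1, 51)]

# Priors inversely proportional to each theory's predicted probability.
weights = [1 / p for p in ps]
z = sum(weights)
priors = [w / z for w in weights]

prior_expected = sum(w * p for w, p in zip(priors, ps))

# Bernoulli update on observing one leak: multiply each prior by P(leak) = p.
unnorm = [w * p for w, p in zip(priors, ps)]
z2 = sum(unnorm)
posteriors = [w / z2 for w in unnorm]

post_expected = sum(w * p for w, p in zip(posteriors, ps))
print(round(prior_expected, 3), round(post_expected, 3))
```

A side effect of this particular prior choice: the likelihood of one observed leak is proportional to p, so prior times likelihood is constant and the posterior is exactly uniform over the theories. With differently shaped priors the update is gentler.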
Ah, beautiful. Thank you very much. It's quite surprising how closely the Bernoulli model coincides with the Poisson one. Have you published the analysis you did? I'd love to read it.
I seem to remember a thread on Less Wrong about (IIRC) something Fermi had said there was a 10% chance of, which turned out to be true, and whenever people in the comments pointed out reasons why that was not an unreasonable figure given what Fermi could possibly have known back then, EY would smugly say something like "pretty argument, how come it didn't work out in reality", which would have been a reasonable reply if Fermi had said something like a 0.1% chance, but which I found pretty ridiculous for 10%.
By the way, already back in mid-2020 I found lab-leak hypotheses much more plausible than the mainstream did, because of the epistemic good luck of simultaneously remembering that ESR had blogged about such a hypothesis back in early February which I had found convincing, and not remembering any of the details of his particular hypothesis, which had been pretty much ruled out by March at the latest.
I'm pretty sure this entire post is rather incoherent rambling. You made up these numbers (sometimes you admit to this, e.g. "fake bayesian math"). There was no objective starting point. This isn't how thinking works, let alone thinking rationally, though I know you disagree because you believe Bayes has ascended to divinity or whatever. But astrology is literally more objective than everything in this entire post. At least one can point to the position of planets in the sky as a starting point before making "mathematically correct" calculations pertaining to them. As an example, I'll briefly discuss your example.
You made an argument that surveys suggest 5% of people admit to having sexually assaulted someone, this is something people would want to lie about, so 10% actually did it? Whatever happened to "Lizardman's Constant Is 4%"? Didn't Bush do North Dakota? Why would you assume anybody would care to answer this question honestly at all, why wouldn't they lie in the other direction to mess with you? Depending on circumstance, I probably would.
Actually, nobody admitted to anything! You can't point to them, you only have an abstraction, a number so low it's not worth anything at all in this case. And your entire post is basically just restatements of this made up math.
Here are some real reasons why one should care if Lab Leak is true. If it's true, it means some people with political and social power knew about it, but didn't tell the public. Not only did they not tell the public, they actively hid the fact. Not only was the fact hidden, an international instantiation of censorship pervaded the Internet, creating a situation where to this day people are concerned on what they can and can not say online regarding this stupid virus. And then this was used by moral busybodies to start attacking people and dismissing concerns, and feel righteous about it, when in fact they didn't know any better than anybody else who had no political power. This isn't even getting into the fact that whatever research was conducted, it seems to have done nothing to mitigate the health, societal, and economic damage from the virus itself.
Now you may note that your real point is some nonsense about future lab pandemics. This is what people call a "straw man". It's true that some people embody the straw man. But the straw man is a straw man because it's a carefully constructed argument that isn't the argument being made. While the possibility of future lab leaked pandemics is a concern, it's not the only, or even the primary concern to most people regarding Covid being a lab leak. Insofar as it is a concern, a normal person, as opposed to a good Bayesian, would weigh the usefulness of funding lab viruses/gain-of-function research in China in the first place, against the possibility of even one leak.
I generally don't think the concept of "burden of proof" is very meaningful when we aren't talking about a court case. In a policy question, we have to estimate the probability of something (or the expected value and the probability distribution), and policy should depend on that estimate. Who provides the evidence our estimate is based on doesn't matter, except for evaluating the trustworthiness of the evidence. If we estimate a significant risk of severe harm from climate change, we should prevent it even at a significant cost. If we estimate the risk as very small, we shouldn't, whether it's those causing the climate change who have proved that the risk is very small, or others.
"Mass shooting" is just so not the liberal rationalist style! The observed tail is "limited stabbing"
That happened *one time*! One time!
I don't know the referenced incident, but I can't help thinking "you stab a bunch of people ONE TIME and they never let you live it down."
It's a reference to this incident
https://www.lesswrong.com/posts/T5RzkFcNpRdckGauu/link-a-community-alert-about-ziz
> In November of 2022, three associates of Ziz (Somnulence “Somni” Logencia, Emma Borhanian, and someone going by the alias “Suri Dao”) got into a violent conflict with their landlord in Vallejo, California, according to court records and news reports. Somni stabbed the landlord in the back with a sword, and the landlord shot Somni and Emma. Emma died, and Somni and Suri were arrested. Ziz and Gwen were seen by police at the scene, alive.
In fairness only one person was stabbed
"Does it matter if COVID was a lab leak?"
Not terribly relevant to the point of your post, but what *I* found most interesting about the Lab Leak hypothesis was how vehement a lot of people were (very early on) about eliminating it as a possibility. Clearly, the chances of this being the case were greater than 0%. And the location of the outbreak being near the lab wasn't exactly evidence AGAINST a lab leak. Still ...
What I find interesting about the Harvey Weinstein affair is that the response from Hollywood seems to be, "Yes, *everyone* knew about it. And it was bad." And my question is, "So who is doing this NOW such that in 20 - 30 years we will be hearing (again), yes, everyone knew it ...?" We'll have to wait 20 - 30 years, though.
About SBF ... high trust groups (e.g. Mormons) are known to be at higher risk of this sort of thing than low trust groups. Because obviously ... It happens.
The relevant prior to update in the lab leak hypothesis isn’t the chance that another lab leak happens. It’s: “should you trust the US government and major corporate news networks when they swear up and down something is true and that no one should disagree.”
'The louder he talked of his honor, the faster we counted our spoons.'
-- Ralph Waldo Emerson
My current trust when 'it is made clear that even ASKING the question makes you a bad person' is approximately 0%. The folks might actually be correct, but I put approximately no weight at all on their claim. Data will have to come from some other source. I'm not inclined to increase this value based on any recent-ish events :-)
'The louder he talked of his honor, the faster we counted our spoons.'
My sentiment exactly. All public moralists are somewhat suspect to me, some exceptions aside where publicity is necessary. I assume that they are, more often than not, the thing they are inveighing against.
I'm not assuming that the folks who were vehement against even CONSIDERING a lab leak were leaking Covid themselves from a lab.
I do suspect (in the general case) that when folks insist that even asking the question makes you a bad person then maybe things aren't as clear as one would like.
I think it was anti-racist ideology, or something adjacent. Trump had his China flu rhetoric, after all. Then Biden comes to power and is also anti-China, and now it's OK to be anti-China.
China is an adversary. It is silly not to be wary of an adversary. But it may be possible to work together to accomplish certain goals, even though it isn't possible to work together to accomplish others. Neither government is trustworthy, so they properly don't trust each other.
> I do suspect (in the general case) that when folks insist that even asking the question makes you a bad person then maybe things aren't as clear as one would like
The main problem with applying this heuristic is that pretty soon you wind up going down the Holocaust revisionism rabbit hole.
Sometimes, being popular is more important than being right.
As I understand it, the big push against even considering a lab leak came with the 19 Feb 2020 open letter in "Lancet", which gave traditional media sources an excuse to effectively declare the matter settled on the basis of scientific consensus.
That letter was co-written and privately circulated by Peter Daszak, who is in fact the person most likely to have been leaking Covid from a lab. Well, directing others to create human-contagious bat coronaviruses in a known-leaky lab while he stayed safely on a different continent.
"Person most likely to" is not the same as "proven to", of course. It's still quite possible that this was a natural zoonotic spillover. But the plausible scenarios involving lab leaks are dominated by ones involving Daszak. So, pretty serious conflict of interest there.
Yes, we're in the weird situation where we know that there was a cover-up, but we don't know if there was anything for them to cover up!
At least some of them had been involved in funding research at the WIV, which gave them a strong reason to persuade people that it was not a lab leak. That is most obviously true of Peter Daszak, president of EcoHealth Alliance, the organization through which money passed from the federal government to fund WIV research, and of Fauci, ultimately responsible for the government end of the same funding. Daszak signed, and probably organized, the March 2020 Lancet piece "Statement in support of the scientists, public health professionals, and medical professionals of China combatting COVID-19", which rejected "conspiracy theories suggesting that COVID-19 does not have a natural origin."
That's too simple a model, but it has a large number of correct predictions. It's just that it also gives you a large number of false positives. It's definitely true that many folks who feel guilty about their desires or actions project those same desires or actions onto others, and also project the guilt.
My estimate is that many moralists feel guilty about their desires, but do not commit the actions that they feel guilty about desiring.
"the faster we counted our spoons"
What?
I've found an idiomatic meaning, something along the lines of "making sure our stuff hasn't been stolen", but I'm not sure that's what is meant. Can you clarify this at all?
Edit: I actually found the passage with more artful digging. Short version: yes.
"See what allowance vice finds in the respectable and well-conditioned class. If a pickpocket intrude into the society of gentlemen, they exert what moral force they have, and he finds himself uncomfortable and glad to get away. But if an adventurer go through all the forms, procure himself to be elected to a post of trust, as of senator or president, though by the same arts as we detest in the house-thief,—the same gentlemen who agree to discountenance the private rogue will be forward to show civilities and marks of respect to the public one; and no amount of evidence of his crimes will prevent them giving him ovations, complimentary dinners, opening their own houses to him and priding themselves on his acquaintance. We were not deceived by the professions of the private adventurer,—the louder he talked of his honor, the faster we counted our spoons; but we appeal to the sanctified preamble of the
messages and proclamations of the public sinner, as the proof of sincerity. It must be that they who pay this homage have said to themselves, On the whole, we don't know about this that you call honesty; a bird in the hand is better."
From Ralph Waldo Emerson's "The Conduct of Life"
Yes. Back in the day a family's silver spoons might be one of their most valuable possessions. This article provides some context:
https://blog.teabox.com/teaspoon-measure-stir-display-steal
"They were a primary target of opportunity for Victorian era burglars. A court record in 1840 lists the thefts of Edward Abbey, sentenced to four years of hard labor: some gold and silver and three teaspoons. Dickens has several instances that show how the spoons were luxury items: when Scrooge dies, in Christmas Carol, his housekeeper strips the sheets from his bed and his night clothes, along with his teaspoons and sugar tongs, to sell to a fence. Fagin in Oliver Twist is sent to prison for two years, for theft of the equivalent of $120 or so – and a silver teaspoon."
And, yeah. Emerson (1803 - 1882) wasn't living in Victorian Era England, but the idea seems to have been current on both sides of the Atlantic.
See also: "born with a silver spoon in his mouth" (= very rich family)
Things have changed.
When I was a teenager (1980 or thereabouts), my parents' house was broken into a few days before Christmas.
They stole the TV, the stereo (though not the record player, even though it was the most expensive component), and some jewelry.
My mother's silverware (sterling!) was untouched. I suppose it would have been a lot more difficult to fence.
Probably too difficult for the burglar(s) to readily identify actual silver, especially when most households would just have flatware.
Precious metals should be some of the easiest things to fence; just melt it down & its source is nigh unidentifiable.
To follow up, it's not just the archetypal night-time sneak-thief burglary. If you invited someone to your house for tea, and put out the good china and the good silver spoons, you might later find out that some of the spoons had "wandered off", which is to say, that your guest turned out to be unscrupulous and had stolen a few of them by slipping them into a pocket. (Presumably the china would be too bulky.) Sometimes this would happen with an elderly-and-forgetful person, or an eccentric-but-harmless person, and everyone would know about them and know how to go about retrieving the items later. But sometimes it would be someone of bad character, stereotypically someone "of bad breeding" posing as a lady or gentleman in order to enrich themselves, either through some complicated social manipulation, or just basic thievery.
And so the situation is that you invited a stranger over for tea, and they started showing "red flags" about not behaving like a proper lady or gentleman, and now you're on guard in case they turn out to be a thief.
I suspect that the Lab Leak Hypothesis was dismissed out of hand because it was seen as playing to Trumpian prejudices.
I think this is exactly correct.
So, going forward ... how do I take into account political leanings in statements about what should be 'truth' (the virus leaked from a lab or it did not)? I was aware of this BEFORE Covid, but I'm wondering if I should increase my weight that various statements from the health establishment are driven by politics rather than data/facts.
If Trump had been pushing masking very early on would the health establishment have decided that most masks did little for average citizens? I don't know.
I don't know how much you weight the possibility that statements from scientific authorities are informed in whole or in part by politics, but from what I can tell of the reproducibility crisis, I would probably increase that weight.
You should. Your enemies are worse than you think.
That’s exactly what happened here in Ireland. Masks were useless for a few weeks. Antigen tests were useless for a year, then they weren’t.
Masks were considered useless in all previous epidemics - there were studies on it (some quoted on this site previously).
They just got downplayed when it was convenient politically.
How do you not _already_ take this massively into account?? Yes you should!
> I'm wondering if I should increase my weight that various statements from the health establishment are driven by politics
Wow dude. Yes. You should set that weight to 1.0.
I went really deep on this during COVID and concluded that statements from the health establishment are entirely driven by politics, in the sense that everything they say is first filtered or transformed by the question "is this compatible with far left collectivism?"
So, if there's a true fact that's compatible, they will report it honestly. If the facts are not compatible, they either won't report it, or will just make shit up, and will then tell you that every expert agrees with them. And they will because everyone who isn't a collectivist has been purged from the health establishment years ago with the exception of private doctors, but they're licensed and so under the thumb.
Examples: how vaccine herd immunity went from expert consensus to "I think it was hope", and how social distancing went to "it just sort of turned up", those are direct quotes from health establishment figures in the USA.
This is a very painful thing because lots of people spent the COVID years telling themselves that as upstanding citizens on the side of science, they would listen to and respect the health establishment, unlike their crazy uncle who gets conspiracy theories off ZeroHedge. But those people were all wrong, and the crazy uncle was correct. Nothing you heard from the health establishment about COVID can be taken at face value - every single claim, no matter how apparently basic or trivial, was tested to ensure its compatibility with advancing collectivist ideology (or harming those who stand against it).
https://www.lesswrong.com/posts/YSP9prWnjKxzwuAKp/rationalwiki-on-face-masks
Thank you for that.
I have a few data points of masks good vs masks bad. This helps.
I still have a problem understanding the "masks bad" side.
I mean, covid is a virus, right? It is spread by tiny droplets that come out of infected people's mouths and noses, right? So... how does putting a physical barrier to these droplets *not* work?
(If the objection is that it doesn't help 100% reliably, I agree with that. It's just that in absence of 100%, I would take even 50% over 0%.)
(If the objection is that other things are more effective, such as staying away from strangers, or meeting people on the street rather than inside a building, I agree with that, too. It's just, sometimes I can't avoid the other people.)
Where I live, the typical objection against masks is that they were designed by evil Americans to suffocate our children. I find this somewhat falsified by the fact that my kids survived. Whom do Americans blame for trying to suffocate their children? Is there any other specific problem with masks? Is it just a generic "you can't make me do X" tantrum?
I think the objections were at root mostly about rules requiring people to wear cloth masks (which I think have protection closer to 5% than 50%). The mask itself becomes a symbol of the mandate/loss of freedom/cowardly submission to authority etc. This motivates exaggerating the minor discomfort into suffocation.
Covid is a virus that is spread by very small droplets, which follow the airflow very closely. And the air mostly follows the path of least resistance. So if you're wearing a typical improvised cloth mask that you bought on Etsy in April 2020, your breath mostly goes out through gaps at the sides of the mask, or where the nose pulls the mask away from your face, so your breath fogs up your glasses while spouting a fountain of virus to waft down on everyone in reach.
If you're inhaling, more of the air will be forced through the mask, and the weave might be tight enough to filter out even few-micron droplets, but there's still a lot of leakage. And anything that *does* get filtered, just accumulates on the mask until you take it off at the end of the day, when some of it smears onto your fingers as you take off the mask, right before you scratch the itchy nose that has been bothering you all day.
A properly-fitted N95 respirator, used once and carefully disposed of afterwards, is designed to deal with these problems and does so quite well. But almost nobody in North America or Western Europe had access to those in early 2020; the East Asians had bought up pretty much the entire world's supply. Even American health-care workers and first responders didn't have *enough* N95s to dispose of them as recommended. And, as Scott has noted from his own medical training, properly fitting an N95 is not an intuitively obvious thing or a comfortable thing, and lots of doctors don't manage it. But an improperly-fitted and oft-reused N95 still offers reasonably good protection.
The CDC, many years before COVID, published a white paper on how to make an improvised cloth mask out of cut-up T-shirts and the like, that would actually do what an N95 is supposed to. Unfortunately I'm at the wrong computer to dig up a reference, but the thing looked like the mutant offspring of an "Alien" facehugger and a laundry hamper. I didn't see anybody wearing one of those in 2020, nor was the CDC telling people to do so even though this was almost exactly the situation they'd designed that mask for.
The sort of improvised cloth masks people were actually wearing, offer protection that is statistically indistinguishable from "entirely useless" and could easily cross over into "worse than useless" with even a little risk compensation.
This might be why it was _repeated_ but the original dismissals were by the people who were responsible for the research happening at WIV--i.e. the ones with the most to lose if lab leak was widely believed to be true.
I tend to agree. Here in Europe we did not pay that much attention to whatever The Donald had to say. But in Germany the "mainstream" was very much anti-lab-leak for a long time. As we listened to our Fauci: Prof. Drosten - a true expert, who developed one of the first good tests. And he said: No leak. https://en.wikipedia.org/wiki/Christian_Drosten - Well, he had published before with Shi Zhengli. Checking for her name I just found on wikipedia: https://en.wikipedia.org/wiki/Wuhan_Institute_of_Virology#Virus_origin_allegations
Oops, still a case of wokepedia. Sad.
Because we didn't want an issue over it with China.
Yeah, that's my guess. I mean, the lab leak hypothesis was denigrated in other countries than the US, too, even though those countries didn't actually have Trump. However, all Western countries had the same interest in cooperating with China and in avoiding a topic that could foul up this cooperation.
Even countries that don't have Trump have people who identify with one side or another of the US culture war/red-blue split.
Taking that as given, that's not exactly a quest for truth without fear or favor.
The west condemns China all the time. That doesn’t make sense.
We freely condemn China for things the Chinese government doesn't care about. They don't want to look incompetent, so we tend to ignore that. (And when we're involved in the incompetence, the tendency becomes a LOT stronger.) The denials always had a strong flavor of CYA...but this didn't even address whether the thing happened or not. (FWIW, I tend to disbelieve the lab-leak hypothesis, but don't consider it important in either case...except that lab protocols definitely need to be strengthened...in either case.)
In a technical sense it did enter the public consciousness via someone playing to Trump's prejudices, because Senator Tom Cotton thought Trump might be more willing to dismiss the reassurances of his friend Xi if Cotton pointed out it might be something that accidentally got out of a Chinese lab.
https://www.slowboring.com/p/the-medias-lab-leak-fiasco
The lab leak (if true (probably is IMO)) can be put down to stupidity/ bad practice/ sloppiness. What is of far more concern (again IMO) is the smoking gun of cover up by Fauci et al, especially as it is from "the elite ®" who now are demanding to have full control over the levers of information dissemination in the name of stopping disinformation.
/tinfoilhat
At first I thought "of course it matters whether Covid was a lab leak! The media has been confidently insisting that it isn't, and this factors into whether you should trust them".
Then I realised the same argument applies to that: the media has been confidently wrong about many things, so also being confidently wrong about this isn't much of a surprise.
What surprised me significantly about the media narrative is that a group of scientists, who in emails specifically stated that they were quite unsure if it was a lab leak or not, chose to publish a Schelling point paper that the narrative coalesced around -
https://open.substack.com/pub/rogerpielkejr/p/why-proximal-origins-must-be-retracted?utm_source=share&utm_medium=android&r=q4k5f
I continue to be surprised at how easily the media swallows anti-lab-leak papers with major flaws (like the one claiming to prove that Covid *did so too* cross over at the market, based on samples concentrated around the market - real "drunk looking for his keys under the lamp-post" stuff), and at how approvingly it continues to quote Kristian Andersen, despite his demonstrated involvement in the Lancet cover-up.
I think this brings up a good point though. Scott mentions that it only matters to convince stupid people. This is wrong. It *also* matters to convince people who aren't paying attention (which is most people most of the time, many of whom are very much not stupid!).
The big event is too big to not notice, and it might cause you to realize that "huh, not *only* was the media confidently wrong about this really big, important thing, they have consistently been confidently wrong about lots of little things for a long time!" That kind of realization warrants a big update.
This article is very much true for things where someone is paying enough attention to be quasi-informed. Big things often bring the attention of lots of people who were, previously, not either of those things, and probably had a default belief that "everything is fine because most things are fine", and must now make a big update to that belief.
-edit- to try and put it more succinctly, the large event causes you to make a large update because it's what causes you to notice the decades of previous information that you were previously unaware of. So you are actually updating on a lot more than just the big event, but the big event is the trigger for you to notice.
>(which is most people most of the time, many of whom are very much not stupid!).
Someone pointed out that Scott recently used "lie" multiple times in an essay in a much looser manner than in his "Very Rarely Lies" post. Similarly, he's likely using a broad (and frankly, obnoxious and offensive) definition of "stupid."
Especially since he's in EA DEFENSE MODE, this isn't Scott at his most charitable and well-crafted.
Edit: And now I find my post even funnier given that just downthread people are calling it terrific and an "instant classic." Which of us has not updated correctly?
That's the wrong update. The right update is "Scientists have been confidently insisting that it isn't". Most people pre-COVID did not have scientists being collectively systematic bullshitters as a very high prior. So that's why it gets a lot of attention: a lot of people are updating hard.
It's just not true that the media has been confidently rejecting lab leak. There is lots of sympathetic coverage of lab leak throughout the media and has been for years. You can just search "lab leak" on google news and find plenty of coverage that treats lab leak as plausible hypothesis, including by the most mainstream sources like AP, NYT, BBC. My update after lab leak controversy is that no amount of positive coverage seems to persuade people that the establishment doesn't actually hate them.
Only when they had no choice (for example when the Energy Dept Z-Division scientists at LLNL shifted to favor lab origin joining the FBI). Otherwise they have pushed any papers that suggested market origin despite the glaring lack of any intermediate host and missing early cases. The NY Times didn't report on the DEFUSE proposal to introduce furin cleavage sites into novel SARS-related bat coronaviruses or the NIH terminating WIV's subaward for continued refusal to share records of their SARS-related bat coronavirus research.
I agree that: 1) media coverage tended to be more pro lab leak after expert assessment supporting lab leak by Energy Dept and FBI 2) media coverage tended to be more pro market origin after prominent papers supporting market origin were published in top scientific journals.
How else could it be? Journalists aren't virologists; they lack the expertise to evaluate origin claims themselves. Of course coverage is going to be driven by expert assessment in the scientific and intelligence communities.
This wasn't the first decade of observations. (Nor the first (possibly) lab-leak-caused pandemic; that was the 1977 Russian flu.)
I redid the calculation with 8 decades, 1 previous lab-leak-pandemic, and original priors of 80% probability on 5% per decade and 20% on 15% per decade. After 7 previous decades and 1 lab-leak-pandemic, my updated prior would be 72% on 5% per decade. Another decade without such a pandemic increases that to 74%, while a second such pandemic decreases it to 46% (with 54% on 15% per decade). The result is an expected value of ~7.5% per decade if COVID wasn't a lab leak, and ~10% if it was: a smaller absolute difference than in Scott's calculation, and a similar relative difference.
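The update described in this comment can be sketched numerically. Note that the priors (80%/20%) and the two hypothesized per-decade rates (5%/15%) are the commenter's illustrative assumptions, not established figures:

```python
# Bayesian update on the per-decade rate of lab-leak pandemics.
# Two hypotheses: rate = 5%/decade (prior weight 0.80) or 15%/decade (0.20).
# Evidence: 7 prior decades with exactly 1 lab-leak pandemic (the 1977 flu),
# then an 8th decade that either did or did not contain a second one (COVID).
from math import comb

def posterior_low(prior_low, rate_low, rate_high, leaks, decades):
    """Posterior weight on the low-rate hypothesis after observing
    `leaks` lab-leak pandemics in `decades` decades (binomial likelihood)."""
    like_low = comb(decades, leaks) * rate_low**leaks * (1 - rate_low)**(decades - leaks)
    like_high = comb(decades, leaks) * rate_high**leaks * (1 - rate_high)**(decades - leaks)
    num = prior_low * like_low
    return num / (num + (1 - prior_low) * like_high)

w = posterior_low(0.80, 0.05, 0.15, leaks=1, decades=7)       # ~0.72
w_no = posterior_low(0.80, 0.05, 0.15, leaks=1, decades=8)    # ~0.74 (no 2nd leak)
w_yes = posterior_low(0.80, 0.05, 0.15, leaks=2, decades=8)   # ~0.46 (a 2nd leak)

# Expected per-decade rate under each scenario.
ev_no = w_no * 0.05 + (1 - w_no) * 0.15    # ~7.5%/decade if COVID wasn't a leak
ev_yes = w_yes * 0.05 + (1 - w_yes) * 0.15  # ~10%/decade if it was
```

This reproduces the comment's numbers: 72% after the first seven decades, 74% after a leak-free eighth decade, 46% after a second leak, and expected rates of roughly 7.5% vs. 10% per decade.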
Might be easier to just use frequentist statistics here, when we don't have a clear prior probability. 1 leak per 8 decades = 12.5% leaks per decade, agreeing with your math of "looking at the evidence moves the probability weight closer to 15% and farther from 5%".
The problem with such calculations is that they are looking only at lab leaks, not on how many labs existed that could have leaked, how well the labs were run, or any of the other relevant metrics.
With the number of actual (known or suspected) leaks being so low, a single new case can change the numbers a lot, especially if there was a successful coverup in the past that we are not including in the calculation.
y'all I'm not going to update my prior that scott is a great writer just because when he has newborn twins he publishes some meandering thing full of reheated Taleb. ;)
Scott, prioritise mate. You don't need to feed the substack maw right now.
Obviously Taleb says outliers are important, but I don't remember him saying we shouldn't update on dramatic events because of them - but I've only read one or two books. Can you direct me to what he has to say about this?
Apart from a missing 'the', I thought it was an extremely well-written post.
I'm sure that he's just making up for one of the "the the"s that he's used over the years.
I too thought this was a good article.
Law of conservation of the "thes".
I liked it; it was a well-expressed laying out of something that's been bothering me about public discourse for a while now, which I found pretty cathartic. Also, my maw ever hungers.
I read it months after, not knowing about Taleb at all. I think it's instant classic.
I thought this post was inspired, and marked it mentally an "instant classic" while I was reading it. That, about evaluating the content.
The fact that Scott managed to write it at all while having twins impressed me (I am a father, and I've passed through that twice - one child at a time). That it turned out to be so good, is nothing short of astounding.
> This is part of why I think we should be much more worried about a nuclear attack by terrorists.
Agreed, but we should worry even more if nukes were known to be accessible.
We did worry about nukes post 9/11 - it was both implied and stated openly. Remember the WMD and Iraq. Iraq had nothing to do with 9/11, of course, but the general population was, I think, genuinely concerned that if something like 9/11 could happen, then an even bigger attack could happen.
A bigger attack could have been one with weapons of mass destruction of course, and even though Saddam had nothing to do with any of this, and even though the neo-conservatives were using the fear post 9/11 to start a war that they had planned anyway, it is somewhat understandable that people’s priors about terrorists using WMD had changed.
"You can think of this as a common knowledge problem. Everyone knew that there were sexual abusers in Hollywood. Maybe everyone even knew that everyone knew this. But everyone didn’t know that everyone knew that everyone knew […] that everyone knew, until the Weinstein allegations made it common knowledge."
My SWAG is that everyone knew that everyone knew that everyone knew that there was a generalized problem, but not everyone knew specifics.
The Casting Couch is not exactly a secret, but it's a jump to go from "directors have been known to use unorthodox methods to audition starlets" to "Director X told Starlet Y on this date in that place the specifics of how she could best secure her big break."
And there's also the issue of consent - sure it's skeezy to use an offer of stardom to get a young girl to have sex with you, but she might have said yes and knowingly did it. I would want that guy out of hiring decisions immediately, but he had real power in the industry (at least his own company making movies). Ultimately, Harvey wasn't convicted of offering jobs in exchange for sex, but for when he crossed the line into rape.
There is no need to limit Weinstein's extracurricular activities to those that led to his conviction.
Right, but if people were aware that he gave out jobs for sex, but thought it was (mostly) consensual, there would be little reason to report that. If he needed a fig leaf, he could have claimed he was dating some pretty unknown actor and also giving her jobs.
I thought the whole idea behind it was that "sex for jobs" isn't really consensual.
That's the question, and it depends on how it's phrased. Nepotism isn't good, but it's a common occurrence that would be hard to stop. In privately owned companies it's a fact of life. If someone with power offers a sexual relationship and *also* offers a job, that's a lot like nepotism. If someone offers a sexual relationship *for* a job, that's at least borderline and worse than nepotism, but really really hard to differentiate. In many cases these women would enthusiastically agree and freely tell everyone how much they love this situation, even if they personally hated it and only did it because of the power differential. They know that getting the job requires pretending that the relationship is consensual and exists outside of the job, even if everyone knows that it's not true. It's a lot like Anna Nicole Smith marrying some ridiculously old guy to get his money. Nobody can tell either of them (legally) that they can't do that, but we can all frown at them a lot for it.
as you are doubtless aware, the law treats "marriage for money" somewhat differently than it treats "sex for money" or "sex for a job"
And, of course, getting back to the original subject, the existence of the Casting Couch is no secret. Just that this time, we have more details beyond rumor.
This is apparently the new definition of "consensual" that starts with "was the sex in question skeezy?", and if the answer is "yes", we find an excuse to insist that the person who knowingly said "yes" because they wanted the deal on offer didn't really "consent".
That's at least defensible when the person in question is a child. For adults, oh hell no. Consent is consent, and "yes, I will do X in return for promised Y" is consent. If you want to ban it or object to it, find another excuse; own up to believing that sometimes sex between consenting adults is wrong, because consent is too useful a concept to throw away over this.
I'd known that it was a historical thing, but I had assumed that it had died out some time around the 70s (except presumably in the case of porn). I was surprised, but not shocked, that it had continued to the present day. I wasn't surprised that so many actresses went along with it in the moment and remained silent during their immediate job. What shocked me was that so many actors and actresses knew about it, remained silent, lied about it in public, and even facilitated it by passing on "fresh meat" to Weinstein. At times I wish Rose McGowan could channel her anger into making people's heads literally explode.
"Harvey Weinstein abusing people in Hollywood didn’t cause / shouldn’t have caused much of an epistemic update. All the insiders knew..."
But to non-insiders this was a huge moment - for them, it was an example of #5 in your list of exceptions, teaching them something horrifying and hitherto unknown about the world. A COVID lab leak is the same way; (if true) it takes a bunch of people who *haven't* formed models about how common lab leaks are (or whose models are just "scientists are smart people and wouldn't be that dumb") and instigates a truly massive update.
This isn't a point of disagreement with your post, probably just a question of emphasis instead. There are a lot of people ignorant about any given risk/phenomenon, nobody can know everything. It makes sense that there are lots of dramatic updates going on after dramatic events, and this needn't be (especially) irrational; and the headline of this article makes sense only within domains where you already have considered opinions.
I was aware that the casting couch was a thing but to learn that Weinstein had a running joke with a well known actress (not A-list but In Lots of Good Movies) because she still hadn't slept with him was a real shock
I don't know how much this is Hollywood specific. Is it that fame is so intensely supply limited?
What I question is my assumption that most harassment has the "really, no" option somewhere there in most cases. It seems it's not just a question of whether you literally need the job to eat but a much broader vulnerability to 'needing to psychologically submit if you feel your belonging to the tribe is under threat'
> I don't know how much this is Hollywood specific. Is it that fame is so intensely supply limited?
It's not all that Hollywood-specific. Right now I guarantee that somewhere out there is a McDonald's employee sleeping with her boss to get better shifts.
Or consider the Vice-President of the United States, who got her political career started by banging Willie Brown, who was twice her age and married.
I think the “Hollywood-specific” part is how ubiquitous it is (or was): it was very common, and was to some degree *the* way that influence and fame was handed out.
I would think this would be most likely in places exactly like Hollywood: getting famous in the movie industry is a winner take all tournament, with vastly more supply of aspirants than demand for A-list actors. And these aspirants are fairly hard to objectively distinguish, so a lot of success comes down to who you know... and what you’re willing to do for them.
Power brokers like Weinstein, who are basically the judges of this tournament, get to set their terms of competition, and if you’re a sleazy dude with thousands of largely indistinguishable desperate actresses coming to your door wanting to be the next hit thing...
Hollywood is not the only “tournament job” - academia is the same way, but we generally assume there is less transactional sex there (although probably lots of transactional other stuff). I think this largely comes down to culture, given that Hollywood fame already involves a degree of commoditized sex (or at least sexiness). When you’re competing to be a public sex symbol, you’re probably already partway down the road of accepting having to do sexy things you wouldn’t do otherwise, making you more vulnerable to Weinstein types acting predatory.
I wouldn't rule out academia running the same way pretty often, maybe as often as Hollywood. I think academia does a better job of hiding it, and the stakes are much lower for individuals involved. I doubt there are many, if any, Harveys in academia, but probably a whole lot of professors having sex with grad students and helping them get positions in colleges.
You’re probably right, particularly on the “common but lower stakes”.
On the other hand I think there are more alternative paths to the top in academia, you don’t *have* to sleep your way there.
Sure, and I would expect it more commonly in Hollywood, but not by a huge amount.
It isn't impossible, but off hand I cannot think of any examples that I have observed in about fifty years in academia. Currently there are pretty strong norms against sexual relationships between professors and their students.
There are now. But in the 2014-2016 period, academic philosophy had its “me too” moment a bit early. John Searle and Colin McGinn and Peter Ludlow all lost their positions over things that were sort of open secrets for a while, and there are well known examples from earlier generations, like Alfred Tarski.
I think politics is another "tournament" area - look at all the interns in Washington.
It may be less obvious only because there's the joke that politicians are the ones too ugly to be actors.
That’s a fair point. On the other hand, politics has more centers of power - there are LOTS of campaigns to be involved in and pols to work for, with a diversity of styles and ideologies, all the way from the local to the federal level. You’ve got Willie Brown, but you’ve also got Mike Pence, and everything in between.
“Big shot Hollywood producer” is more rarefied air than even the Senate, concentrated in one place, with basically a monoculture.
I think it was particularly likely in Hollywood because being sexually attractive is a major asset in the profession, hence women trying to break into it are likely to be women that men giving out roles would like to sleep with.
Sexually attractive *and* at least somewhat willing to parlay their attractiveness into commercial success.
I’m not saying “they’re asking for it” or anything like that, just that I don’t think you find “be a Hollywood A-list star” enticing unless you’re more okay-than-average with being a public symbol of sexiness. It’s, for better or worse, part of the job description.
I think that "really, no"-assumption is what changed in a lot of people around MeToo's peak popularity.
If a lot of people's understanding of sexual misconduct in Hollywood went from "actresses sleeping their way to the top" to "actresses pressured into sex at risk of losing their livelihood," that explains why they saw confronting sexual harassment in Hollywood as suddenly urgent.
I think this is the most accurate interpretation.
The phrase "casting couch" had been known to the public for a long time.
It was common enough knowledge that it was used as a joke in Toy Story 2, all the way back in 1999.
https://youtu.be/94T8VG6YoA0?si=KKSqpWNRsomsp7Gy&t=224
La La Land (2016) has a whole song and dance about Emma Stone’s sexy roommates sleeping their way to the top.
No one cared.
Thinking about it some more, it might just be that everyone knew about it, but almost no one was considering the implications of it. Yes, obviously actresses were sleeping with directors for favors, but if pretty much everyone was doing it... Not sleeping with directors isn't a viable option. It's a pretty basic coordination problem: everyone (except the directors) would be better off if no one slept with directors (or if the director got punished by the law for doing it).
I don't think it's as simple as a coordination problem. Now that it is no longer acceptable, beautiful women are at a relative disadvantage to those with other connections in the industry (e.g., Nepo-babies).
Something something Moloch
"Valley of the Dolls" had it all. "Published in 1966, the book was the biggest selling novel of its year. " Later a hit as a movie. https://en.wikipedia.org/wiki/Valley_of_the_Dolls_(novel)
Casting couch, indeed. Also: The Godfather. A big movie producer can get laid if he really wants it? Who EVER doubted that? And who doubts he can now?
The argument seems to assume my priors are more or less accurate. But that seems unlikely for events that weren't salient enough for me to think about. Re the Covid lab leak specifically, I'm not updating (much) on the likelihood of a lab leak, since I think I had a good enough handle on that before. But I am updating substantially on the probability that the gov't and official health authorities will lie to me. The same problem arises at that level. Was I previously grossly underestimating the probability that the authorities would lie to me? Or have I overcorrected from the Covid lies? It is possible that I underestimated the probability before and am overestimating it now. On the other hand, the dramatic events made something salient that was not salient before. Maybe that has caused me to think carefully about it now, and my current estimate is more accurate than my previous one. So I expect that sometimes it is wrong to update substantially in light of dramatic events, but I doubt it's always wrong. And given that, ex post, I've tried to think carefully about it and make my estimates as accurate as possible, I don't think it makes sense for me to discount my current estimate to account for the possibility that I'm overcorrecting.
There have been a number of comments along these lines, and I'm putting this reply here because it has to go somewhere.
I'm confused as to how there can be a significant update here. I consider myself a person who is more than usually trusting of authorities, but I don't think that all official statements are 100% accurate. It would be very surprising to me if anyone had thought pre-Covid that all official statements were 100% accurate. There have been numerous prior examples of government and official health authorities making false statements, so the same point applies as in the post.
The update is on the strength of the government positions that turned out to not only be false, but likely or provably lies.
A minor position held by a single self-interested politician? People have suspected lies for as long as government has existed. A major position trumpeted from multiple federal agencies in official releases to the public? That's a new one for most people.
Yes, exactly. It is of course possible that Robert himself had a much more realistic appraisal of gov't lying prior to Covid than I did. But my point is that people - well, me anyway - are going to have inaccurate priors about low salience events. If the priors are inaccurate, then it is not necessarily wrong to have a substantial update after a dramatic event.
At least in part, I would think, there should be an included factor for updating to the degree that the dramatic event is likely to affect you, and, for that matter, to how others will react, which is really the big wrench keeping this post from meaning much of anything (as Scott points out with society being imperfect).
If you live out in BFE, even a once-a-century terrorist attack is unlikely to affect you directly (for that matter, for the US, if you're not in NYC, DC, maybe LA and Boston, even other major cities are unlikely targets). How society reacts almost certainly will, though; is there a meaningful difference between you updating on [drastic event] or [reaction to DE]?
A pandemic (regardless of origin) is likely to affect you even in BFE, even if you have (slightly) more warning than NYC. To what degree does the origin matter if we're "due" for a once-a-century plague? What does updating here entail?
Assuming you live in the US, you're used to an unusually stable and relatively low-corruption government; it was easy to underestimate the degree to which they were likely to lie in a delicate scenario. Is it dangerous to overestimate that? Skepticism to the degree that their lies will affect you seems wise, and IMO you have to go pretty far into the tinfoil hat spectrum before the overestimation becomes worse than the risks of underestimation. That said, that skepticism could lead to a certain epistemic anarchy or helplessness, which does have risks.
Ultimately, all the made-up math is a rationalization for following one's preferences anyways (https://blog.ayjay.org/silence-violence-and-the-human-condition/ ctrl+F Parfit, but the whole post is good if you ask me). Applying numbers gives an unwarranted illusion of solidity and reasonableness (does the difference between a 19% and 20% risk of lab leak *mean anything*? What the hell are those percentages? How would you even measure?). Consider, instead, something akin to Michael Pollan's advice on eating: "eat food, not too much, mostly plants." Should you update on drastic events? "Yes, mostly cautious, don't wreck your life."
For most of Scott's readers, there are a couple updates that I would consider nearly required: the government does not have your interests in mind (ideally, they are interested in "the public," which is not the same as interested in *you*), and they are not primarily communicating to people like you. I would also say you should update on Public Health having a major obsession with harm reduction, which is barely even related to things like "the public good" and has basically no concept of tradeoffs, how their messaging on harm reduction affects how people receive everything else they say, etc etc.
Mostly a digression, I'm tempted to call Scott's last paragraph a callout of Kelsey/TUOC a la https://www.datasecretslox.com/index.php/topic,4106.0.html but he's never struck me as that kind of guy.
"I don’t entirely accept this argument - I think whether or not it was a lab leak matters in order to convince stupid people, who don’t know how to use probabilities and don’t believe anything can go wrong until it’s gone wrong before."
Objection! Beware the other kind of stupid people who don't know how to use probabilities and believe every dramatic thing that happened once will happen again and kill them unless it has exactly a 0% chance of happening. For they will turn your argument from drama against you and demand infinite safety requirements for all nice things and then we can't have nice things like nuclear power.
No, that's the same kind of stupid people. Most people in modern industrialized nations eschew quantitative risk assessment and mitigation in favor of an intuitive binary classification where all things are either "perfectly safe" or "unacceptably dangerous". If pressed they will of course acknowledge that e.g. airliners sometimes crash, but the rate is low enough to round to "perfectly safe" and so nothing needs to be done beyond what we're already doing.
Until an airliner or two is observed to crash, and then airliners are "intolerably dangerous", or at least the model that just crashed is intolerably dangerous. At which point, they'll want to wholly eliminate the intolerably dangerous thing from the world they live in, and they won't be satisfied by "we have mitigated this risk to a rationally appropriate degree". They'll insist on something that flips "intolerably dangerous" all the way back to "perfectly safe", and you correctly note the nuclear power industry as an example of what that can look like.
If you want to win this sort of battle, you probably do want to have the "stupid people" on your side. But that means the end state you are fighting for has to be seen as either "perfectly safe" or "intolerably dangerous and thus very very severely regulated".
So, we need to know whether we'd prefer a regulatory environment where e.g. gain-of-function research is allowed with no significant restrictions, or one in which it is nigh unto banned. And we've got sufficiently few data points on that one that it would be genuinely helpful to us to know which side of the line COVID fell on.
I only disagree in calling these people stupid. Given the limitations on how someone can plausibly follow a wide range of very different topics, intuition and heuristics are necessary. Not just expected, but plain necessary.
When Boeing's 737 Max planes started crashing, the intelligent thing to do would be to categorize them as "airplanes" and maybe slightly update on overall airplane safety. The "stupid" heuristic would be to become fearful and avoid flying - either generally or on whatever identifiable aspects seem most dangerous. Obviously overkill to avoid flying in general, but correct in direction that something unusual was wrong and needed correcting. In this case, an automated pilot that could directly crash a plane under certain circumstances.
I agree that intuition and heuristics are necessary, but that's a separate question. It is entirely possible to use intuition and heuristics to define and populate a category, "things that are a bit dangerous but *worth the risk*". Having that category in between "perfectly safe" and "intolerably dangerous" greatly increases the utility of your intuitive heuristic-based risk assessments, and it also gives you an easy way to slide in quantitative risk assessments when appropriate.
Most of the human race for most of history has, I think, been willing and able to use "a bit dangerous but worth the risk" at need, and I think it is still common in working-class American culture. The bit where the middle and upper classes have dispensed with it is perhaps not "stupid", but it definitely seems foolish.
it's not stupidity, its awareness of mortality.
like statistics takes mortality and abstracts it, but the dramatic event escapes the cage of abstraction and forces consideration. risk management is actually a coping strategy, and the dramatic event collapses it, causing overreaction.
i think the real issue might be acceptance of mortality to avoid both overabstraction and overreaction. We'd kind of maybe accept nuclear power but ban motorcycles or go with socialized health care.
The airplane argument is ridiculous. If 99.9% of planes are safe 99.9% of the time and then one plane has repeated problems, you absolutely should be up in arms. We know how to make planes "perfectly safe." This isn't some woke philosophical thing. An airplane that crashes absolutely is "intolerably dangerous." If you (as an airline) put it in the air, it will do irreparable harm to your brand. If you as a government agency continue to let it fly, you will do irreparable harm to your brand.....
Speaking as an experienced pilot and an aerospace engineer, no, we do not know how to make airplanes "perfectly safe". We know how to make airplanes that crash only rarely, and if you want you can round that to "perfectly safe". But if a single crash is enough to push it over into the "intolerably dangerous" category for you, then we don't know how to make airplanes safe enough for you and you should never fly in an airplane.
An aerospace engineer who can't fathom that quotes around "perfectly safe" mean "not literally perfectly safe." And if you are an aerospace engineer, then you should try explaining to your boss that the plane you designed crashes only a bit more frequently than your competitors', so it should be fine. This probably won't adversely affect your career trajectory.
If you're saying that an airplane that crashes even once is "intolerably dangerous", then it certainly seems to me that your definition of "perfectly safe" allows for literally zero crashes.
Otherwise, it's foolish to update significantly on the basis of a single crash.
If something that you believed was infinitesimally probable occurs, you absolutely should update. However, your original statement to which I objected was, "Until an airliner or two is observed to crash." If you thought the odds were near zero and it suddenly happens twice, then not only should you update, you should probably reevaluate your entire method for assessing risk.
Phenomenal post! FWIW I agreed with SBF tweeting through it.
Yeah, that seems like a red herring.
Once you have done the fraud, admitting to the fraud is good actually! It doesn't come close to undoing the damage of course, but admitting to it is better than not admitting to it.
He didn't like admit the fraud though right? Saying FTX was just Lehman Bros was his only chance to survive I think.
Every time someone asked him, "did you commit fraud?" he said no. But he did admit to all of the elements of fraud. This was a de-facto public service to the victims and prosecutors.
Good for what purpose? For your reputation, it's either indifferent (it will be clear to everyone you've committed fraud anyway) or bad (if it wouldn't be clear to everyone without your admission). If you've committed a wrongdoing that doesn't involve dishonesty, admitting it may be good for your reputation for honesty, but that doesn't work for fraud, after which you won't have a reputation for honesty either way.
I kind of agreed with that part. But I disagreed with almost every part of the contrary position w/re Altman vs the OpenAI board. To me, the same principles that applied in the SBF case ought to apply to the Altman case. To a substantially lesser *degree*, because we have compelling proof that SBF was basically a nerdy Bernie Madoff and we don't have proof that Sam Altman was basically Gaius Baltar. But the *sign* should be the same, e.g. we should want strong corporate boards to safeguard against that sort of thing and we should want them to act before it's too late, even if that sometimes does mean having to say "oops, we're sorry!"
But apparently I'm an outlier in that respect, at least in the broad rat-sphere. And I'm not sure why.
Completely agree with this.
Agreed as well. The problem with OpenAI was that the individual board members were either too close to Sam (i.e. employees working under Sam or who helped him start the company) or too inexperienced with running a major company. A more common board would have people who are more experienced with these kinds of things and more independent.
The failure of this board was in its creation, far more than how it did or didn't handle this situation.
Yeah, that part just smacks of siding with the victor because he's the victor. Actually, Sam Altman is an enemy of humanity, and the systems that were supposed to keep him in check failed! When I'm feeling pessimistic, I think that failing to keep him fired was the metaphorical event horizon, i.e., it doesn't feel like the apocalypse but now all possible paths lead towards doom. (I'm often feeling pessimistic.)
"Enemy of humanity" is unproven, to say the very least. But we can go with "insufficiently transparent w/re the alignment of his incentives with humanity's", and given his position that should have been enough for the board to fire him.
But as Mr. Doolittle notes, the board seems to have been set up to fail if it ever came to that, and it did. Possibly that was just carelessness when OpenAI was being founded.
Also, before everyone condemns some community for handling it badly, please give an example of a community that handled it well.
This should indeed be a common answer.
I'm afraid, however, most answers would start with: "Well <MY_GROUP> did so and so..." after which you can be pretty sure that, however <THEIR_GROUP> actually managed it, the person offering them as an example thinks they did stellarly, so...
I feel professional boards handle CEO issues much better, and with minimal publicity (which is part of how they handle it).
That Altman is well-connected and better at massaging publicity doesn't seem like a failing of the board exactly, even if it's an overall failure of the Altman-control system. Certainly, calling them "unprofessional" is ridiculous; it's a Sam Altman propaganda line, not a fact about corporate governance.
Would you prefer "naive and unprepared" to "unprofessional"?
Maybe? The problem is roughly that they assumed professionally discharging their responsibilities would be sufficient. But the point of the entire corporate structure was also that that should be sufficient.
The most crucial aspect of the lab leak hypothesis is the aggressive censorship it faced, with individuals who entertained it being labeled conspiracy theorists. It was evident right from the start that something suspicious was happening.
Secondly, all bioweapons labs should be closed. This should not be a difficult task. Take Germany as an example, which successfully shut down all its nuclear plants, albeit for more or less wrong reasons.
One thing that's missing from this is that dramatic events usually have a great deal more evidential supply than less dramatic ones. For nearly everyone, you don't have close to first hand reports of something like historical lab leaks, you have to rely at best on one or two scientific studies of average quality, for which you probably didn't read the methodology section. And for most categories of event, you haven't even read those reports, you're going by your impression using the availability heuristic. For dramatic events, you have evidence from all directions which you can cross check. So it is a suitable time to evaluate whether the evidential underpinnings of your previous impression are fit for purpose. Of course, in a lot of cases a lot of the evidence is still third hand, but there is still so much more of it that you can get a clearer picture of this particular event.
>>the model airplane building community,<<
LOL!
I agree, I put it similarly here:
"When we’re inferring general patterns from individual events, we put too much weight on events of personal, moral, or political significance.
We focus too much on whether we were harmed by some risky behaviour, relative to what’s happened to other people. Obviously on average there’s nothing special about yourself.
And we put too much evidentiary weight on what happens in large countries, such as the US, relative to small ones. We can learn a lot from events in small countries, even though their practical importance is smaller.
Likewise, we over-generalise from events of great historical significance, like World War II, whereas we neglect less significant events."
https://stefanschubert.substack.com/p/evidence-and-meaning
Robin Hanson also has a post on this issue but I can't find it now.
I think this is the post you're thinking of: https://www.overcomingbias.com/p/big-impact-isnt-big-datahtml
Thank you! That's it.
Your intuition for how much to update your estimates based on samples of power-law distributions might be poor.
I really wish that announcements of dramatic events were routinely accompanied with:
Here's the previous data. Here's the fit to the previous data, and the corresponding exponent.
The exponent really makes a huge difference. If it is less than 1.0, the total damage done by the whole probability distribution is dominated by the single largest event. And, as a probability distribution, if the exponent is less than 1.0, then it can't be normalized, so it must break down somewhere, and _where_ it breaks down matters a lot.
There are also some other critical values for the exponent where other qualitative changes happen:
>A power-law x^(−k) has a well-defined mean over x ∈ [1, ∞) only if k > 2, and it has a finite variance only if k > 3; most identified power laws in nature have exponents such that the mean is well-defined but the variance is not, implying they are capable of black swan behavior.
( from https://en.wikipedia.org/wiki/Power_law )
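To make those thresholds concrete, here is a rough Python sketch (my own illustration, not from the thread) sampling a Pareto distribution by inverse-CDF. Note the convention: α below is the tail exponent of the survival function P(X > x) = x^(−α), so the density exponent k in the Wikipedia quote is k = α + 1; the mean is finite for α > 1 and the variance for α > 2.

```python
import random

def pareto_sample(alpha, n, rng):
    # Inverse-CDF sampling: if U ~ Uniform(0, 1), then (1 - U) ** (-1 / alpha)
    # is Pareto on [1, inf) with survival function P(X > x) = x ** (-alpha).
    return [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]

rng = random.Random(0)
for alpha in (0.8, 1.5, 3.5):
    draws = pareto_sample(alpha, 100_000, rng)
    top_share = max(draws) / sum(draws)  # share of the total from the single largest draw
    print(f"alpha={alpha}: largest draw is {top_share:.2%} of the total")
```

With α < 1 (infinite mean) the single largest draw typically accounts for a sizable fraction of the entire sum, which is the "total damage dominated by the single largest event" regime; for α > 2 (finite variance) the largest draw is negligible.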
Nope, it's extremely clear.
The benefits from gain of function research are near zero and the risk is monumental. It took ONE SINGLE lab leak to kill millions and cost trillions. Nothing gain of function research has produced or likely will ever produce can make up for this singular event, let alone the ones that will almost certainly happen in the future if such research continues to be permitted.
a) This applies more directly to events like mass shootings, where there are plenty of events for statistics.
b) Irrespective of where it came from, Covid is also in the class of epidemics, and, while the statistics on epidemics are sparser than for mass shootings, there are enough of them to make a stab at estimating the statistics and at what a rational policy is for _epidemics_, regardless of policy choices towards gain of function research.
I agree that such announcements should come with context. X's community notes feature seems like it's working for this purpose. Some websites seem to have it. Most places don't, because doing so is at odds with their purpose. The Media wants to sell sensation. Politicians sell a narrative and/or viewpoint.
Many Thanks! I'm slightly confused, what are/where are X's community notes?
I'm not on X, but it's apparently something that can be added to posts by certain community members as a type of fact check, adding context or additional information where needed.
Many Thanks! I'm not on X either, and didn't know of that feature.
> Does it matter if COVID was a lab leak?
My take: no, it's entirely irrelevant, because whether or not it leaked from a lab, it did indisputably, uncontroversially, get deliberately exported from a nation.
We know when China locked down Wuhan to internal travel, and we know when China shut down international air travel out of Wuhan, which was significantly later. That makes the pandemic a bioweapon attack. Whether or not it was developed in a bioweapon lab doesn't matter; a thing is a weapon *if it is used as one,* and China knowingly sending infected people to other countries absolutely counts as using Covid as a weapon.
There was no way that it wouldn’t have spread anyway.
You're probably right, but that doesn't mean that deliberately encouraging the spread is a blameless act because of it. "It is impossible but that offences will come: but woe unto him, through whom they come!"
I don't know that they were deliberately encouraging the spread. Usually I attribute things to incompetence before malice, and China seems like one of those places where they're fine locking the proles down but getting on the wrong side of wealthy travelers and especially international air travel chains might have been harder/scarier.
Oh wait, we're also one of those places.
> Usually I attribute things to incompetence before malice
Generally, in the absence of better information, yes. But malice of this form seems quite consistent with the character of the CCP.
> China seems like one of those places where they're fine locking the proles down but getting on the wrong side of wealthy travelers ... might have been harder/scarier.
Jack Ma would likely disagree.
They've never intentionally released a bioweapon before, so it's out of character.
I don't know who Jack Ma is.
> They've never intentionally released a bioweapon before, so it's out of character.
Perhaps, but they are known to have provided chemical precursors to Mexican drug cartels in sufficient quantity to make enough fentanyl to kill everyone in the USA and then some. That sure sounds a lot like a widespread *chemical* weapon attack to me; is it really that out of character for a regime that would use sneaky means to release one mass-casualty weapon upon geopolitical rivals to use sneaky means to release a different one?
Such actions come straight out of an old Chinese book of war practices known as "The Thirty-Six Stratagems." Often regarded as a companion to Sun Tzu’s The Art of War, the book outlines various stratagems (deceptive tricks or schemes) to win at warfare by fighting dirty. The third stratagem, “kill with a borrowed knife,” involves employing a third party to strike at an enemy when you can’t easily do so yourself. (Drug cartels, for example.) And stratagem #25, “replace the beams with rotten timbers,” refers to disrupting, sabotaging, and interfering with an enemy’s normal ways of doing things so their organization collapses. (Can you even imagine a more perfect example than introducing a germ to an enemy’s populace, introducing the idea of massively societally-disruptive lockdowns to combat it, and then using the pandemic and the lockdowns as an excuse to tighten exports and throw their supply chains into chaos?)
This sort of stuff isn’t what we tend to think of as “war” in the West, but it very much is “war as China understands it.”
How fast and soon it spread matters enormously.
If any one person leaves the country it spreads. As I recall the Chinese were condemned, not praised, for lockdowns. As I also recall a fair number of Americans thought back then that it was all fake. Including Trump. And of course countries can close down airports without the Chinese having to.
Is this... actually true? I don't think it is for covid. We had literally almost nothing we could do for extreme symptom sufferers except put them on a ventilator, which didn't actually help them in any way, and watch them die. That didn't stop us from scrambling to put together as many ventilators as possible, of course, but...
Well. Flattening the curve is very important if your health care industry is actually capable of doing something. But I just don't think it was in this case, so the fact that we overloaded it literally didn't affect anything.
If the spread slows, you reduce the number of people who get sick before the MRNA vaccines are rolled out, so yes, it does matter how fast/soon it spreads.
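Even if hospitals are overwhelmed either way, slowing exponential spread changes how many people are infected by the time vaccines arrive. A toy logistic-growth sketch in Python (made-up growth rates, population, and rollout date, purely illustrative rather than a fitted epidemiological model):

```python
import math

def cumulative_infected(r, day, pop=10_000_000, i0=100):
    # Logistic curve: exponential growth at rate r per day early on,
    # saturating at the full population pop, starting from i0 infections.
    return pop / (1 + (pop / i0 - 1) * math.exp(-r * day))

# Compare an unmitigated vs. a slowed growth rate at a hypothetical
# vaccine-rollout date of day 300.
for r in (0.08, 0.03):
    print(f"r={r}: ~{cumulative_infected(r, 300):,.0f} infected before vaccines arrive")
```

Under these made-up numbers, cutting the growth rate turns near-total infection by the rollout date into a small fraction of the population; everyone else meets the virus with vaccine protection available.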
It's probably always true.
With corona I think each strain got more infectious and less deadly. We only have a sample size of 1 bioterrorism virus, but it's possible that there's a fairly hard law that those evolutionary trade-offs will always happen in the wild.
The longer you wait, the more likely the version you get is a variant that is less deadly. Maybe if this happens again I will consider hiding in a bunker for a year, every time.
Didn’t the ventilators have a survival rate that greatly exceeded the survival rate of people who *would* have been put on a ventilator but couldn’t get one? (Something like 20% vs 10%)
my understanding is that this is not true, the problem never had anything to do with the lung's ability to contract and expand, rather the problem was in the transfer layer where the blood becomes oxygenated. throwing a ventilator at someone whose blood cannot become oxygenated doesn't help them? i think that's what happened anyway, i haven't researched this since 2021, but i *did* research it
and that as this slowly became more known, the push for random geeks to manufacture ventilators in their garages correctly slowed
Depending how you're modeling your distributions, it is often completely correct to drastically overhaul your model in the face of dramatic events. Sure, changing an estimate from 1,000 to 1,001 generally does not require that you update your mean or standard deviation in any way whatsoever. But an update from 0 to 1 can and in many cases should result in a massive update of your model, because if a seven-sigma event happens, your model needs to be thrown into the trashcan.
One instance of something coming to light should often not be updated on as one instance of something occurring. If I believe that the amount of daily shoplifting in a big box store is $200, because there's an inventory check at the end of the day that yields an average $200 discrepancy, and I notice one day that one shoplifter has managed to evade this system and steal $500 of product in one go undetected, I need a massive update to my estimates: there could be anywhere from $200 to tens of thousands of dollars of product being stolen every day.
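The shoplifting arithmetic can be made explicit. A back-of-envelope Python sketch (the $200 figure is from the comment above; the detection fractions are hypothetical):

```python
# A $200/day inventory discrepancy only bounds the *detected* portion of theft.
# If the end-of-day check catches a fraction f of actual theft, the implied
# true loss is observed / f. The fractions below are hypothetical.
observed_daily_discrepancy = 200.0

for f in (1.0, 0.5, 0.1, 0.01):
    implied_true_loss = observed_daily_discrepancy / f
    print(f"fraction caught {f:.0%}: implied true daily loss ${implied_true_loss:,.0f}")
```

A single observed evasion doesn't pin down f, but it does establish f < 1, which is why the range of plausible estimates jumps from "$200" to "anywhere up to tens of thousands."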
Any instance of sexual harassment coming to light can be an update of any number of sigmas. Consider the model airplane community, consisting of 100 chapters of 100 members each. If I learn that one, or two, or three people in the model airplane community were persistently making unwanted advances on co-community members, that's not something that needs updating on, because we know this kind of thing happens everywhere all the time. But if I learn that one, or two, or three chapter leaders engaged in (presumably) rarer forms of sexual harassment like drugging them, assaulting them, then blackmailing them into recruiting more victims, and every time someone tried to blow the whistle it was somehow covered up, that's the kind of update that does in fact require taking a second look at the model airplane community because it implicates a lot more than "three instances of sexual harassment occurred."
"But if I learn that one, or two, or three chapter leaders engaged in (presumably) rarer forms of sexual harassment like drugging them, assaulting them, then blackmailing them into recruiting more victims, and every time someone tried to blow the whistle it was somehow covered up, that's the kind of update that does in fact require taking a second look at the model airplane community because it implicates a lot more than "three instances of sexual harassment occurred.""
I would indeed update, since I REALLY doubt anything like that has ever happened. In Hollywood, sure...
I think this is one thing he would have talked about if he had detailed the mathematics enough. I also instantly think about how the severity of each event should affect the update, and how we have so little data on once-a-century events to update our priors meaningfully.
I think a big thing about "learning from dramatic events" is that everybody else is learning too, which can either dramatically increase, or dramatically decrease, the odds of a similar event happening again, depending on the specifics.
This is a good point.
RB applied this to 9/11 specifically: it increases the odds of similar events happening again quite a lot, but all the other passengers also learned that, which *caused* similar events to be much less likely.
With mass shootings, it goes both ways at the same time: media reports of mass shootings make J. Random Nut more likely to pick that particular method, but also make people more likely to take countermeasures. (Wasn't there a shooting that didn't clear the threshold of four victims because the killer was shot so quickly?)
I think that this article could benefit from more discussion on correlated events. Yes, if one extreme event occurs every t years, independently of other events, then we shouldn't worry too much. Extreme, newsworthy events can actually change the covariance of subsequent data and potentially increase their likelihood. Perhaps the possibility of correlated events, marginalized to only include your most disliked subgroups, is still very low and your point still stands.
Scott, what was your prior estimate on, "expected number of billion-dollar frauds motivated by effective altruism per decade"? If you think a better measure would be, "expected fraction of EA-funds acquired fraudulently," you can express it that way.
I feel like there is a bit of sleight of hand here, where you use math to show how updating a lot on dramatic events is usually bad, but then you don't actually use this math to show why you shouldn't update that much on the specific event you are feeling defensive about.
MIRI had some pretty big financial scandals (before it got renamed from the Singularity Institute). Not at the same scale in absolute numbers obviously, but enough that your model of the Rationalsphere should already have a term for the possibility that some of its highest-profile people and organizations will get involved in major financial scandals.
Could you please be more specific?
I remember that there was an employee (who wasn't a rationalist) who stole money from the organization (and then was fired?). I don't remember other "pretty big financial scandals" (and can't even google them because I get "Singularity University" or "financial singularity" instead).
I don't think SBF should really be considered as "motivated by ea". And "number of massive crypto scams per decade" is obviously way high anyway for any decade that has crypto.
>I don't think SBF should really be considered as "motivated by ea"
He was raised by a relatively prominent utilitarian, Will MacAskill himself nudged SBF into earning to give, and most of his staff came from EA.
It's easy to overestimate the degree he was motivated by EA, sure, but there's a substantial degree of defensiveness that wants to ignore EA's contribution as well.
>"number of massive crypto scams per decade" is obviously way high anyway for any decade that has crypto.
While accurate, EA, being the "effective", smarter-than-everyone, we-know-where-our-money-goes people, probably should've been more skeptical of crypto than they were.
Part of the defensiveness is that the coverage is basically claiming that EAs are *uniquely* vulnerable to billionaire crypto fraud, that you would expect it to be *more* common than fraud in non-EA crypto billionaires.
So maybe the argument comes down to “what’s the denominator?” Personally I would expect the base rate of fraud, or at least some sketchy behavior, to be pretty high among crypto billionaires, whether they are EAs or not.
Now, if you were someone who thought EA billionaires were uniquely *unlikely* to do fraudulent stuff, then SBF probably should make you reassess that prior.
If EA crypto billionaires do fraud at the same rate as non-EA crypto billionaires, but crypto billionaires are more likely to do fraud than non-crypto billionaires, and EA billionaires are more likely to be crypto billionaires than non-EA billionaires are, then you risk controlling for what you are trying to measure. You have to ask *why* so many EA billionaires are crypto billionaires. Which way does the causality flow? Vitalik Buterin got into crypto first, then EA, but SBF seems to have gotten into EA first, then founded Alameda Research as a result of his intention to earn-to-give.
You’ve also got the problem of low sample size. It’s not like we have a million EA crypto billionaires to run statistical experiments on.
Is fraud on the scale of SBF 1 in 10, 100, 1000? With just a couple examples it’s hard to say if it’s common, or we’ve just gotten unlucky.
(On the gripping hand, I don’t get the impression that SBF would have NOT tried to get really rich in crypto if he’d never heard of EA and “earn to give”)
The WHO report on Covid origins said that just before the first reported case, WIV put all their samples in a van and moved them down the road to another building (as part of a planned move). As they say, a leak during such an event when usual containment is disrupted is more likely. The timing is a remarkable coincidence if you don't think it's causal.
I don't understand why banning "gain of function" research would be "going mad" or overreacting. Even on paper and before the first leak it sounds like an idea so deeply terrible and silly that my working assumption is that it's a way for people to do and fund biological weapons research in plain sight without being ostracized for doing so. What is the expected advantage and what's the reasonable rationale for this expectation?
"What is the expected advantage [to gain-of-function research] and what's the reasonable rationale for this expectation?"
The NIH has a page on this. I expect that this is close to the party line on why the research is a good thing.
https://www.ncbi.nlm.nih.gov/books/NBK285583/
"The NIH is happy to report that the people whose careers depend on GoF research funding say that GoF is very important and also safe"
A large majority of people in bio whose careers do not directly depend on "gain-of-function" research also agree with this sentiment.
How do you actually know this? I only know one person who works in this field, and he went from an ordinary bog-standard individual dating girls and playing progressive politics, to moving out into the middle of nowhere and building an airtight hermetically sealed doomsday bunker because of covid and the treatment of the lab leak hypothesis.
I work in the field myself, and followed the debate through various channels (Twitter, reddit, podcasts like the TWIV network). They all match quite well with the chatter I heard from conferences and from colleagues.
Many of the arguments for a lab leak advanced in the popular press also fall away if you have a minimum of molecular biology background. So there were a lot of conversations making fun of the journalists advocating the lab leak hypothesis for misunderstanding basic concepts.
I write in the past tense because nobody seems to care anymore.
One of my friends is a big Noah Smith fan, apparently he's a kind of big name? My friend compared him to Yglesias, a bog-standard neoliberal with perfectly ordinary neoliberal takes, but idk if that's actually accurate
either way, apparently Noah offhandedly made some kind of comment about how something, maybe to do with israel, was sorta like how the establishment deliberately suppressed the lab leak hypothesis for so long and how this really hurt institutional credibility in a way that maybe isn't the skeptic's fault
this was apparently recent, like last month?
and, according to my friend, immediately a bunch of institutional medical types came out of the woodwork and started calling him a crazy right-wing conspiracy theorist, and how all of his crazy fans were just being racist against the chinese, and how the lab leak hypothesis was even crazier than J6 (whatever exactly that means, i'm not sure what the metric looks like)
this surprised me because i thought we'd settled this, that the accepted narrative at this point was "yeah the lab leak hypothesis got suppressed by people who believed it was plausible but wanted to hide that fact, the main tool of the suppression was social status games and ridicule, but later on some establishment people admitted that actually it wasn't totally impossible like the establishment said, even mainstream liberals took note and said stuff like "wow that's really bad, don't do that again", the trust we all had for our institutions continued to crumble but we were all at least impressed that they were willing to come clean."
As far as the actual object-level issue of whether covid was a lab leak, i thought the consensus had settled on "there's not a whole lot of strong evidence in either direction, and the evidence that does exist is very weak. Mostly if you're participating in the debate, the arguments are going to be about the game theory of how we should handle china, not about virology."
I looked up this Noah Smith guy to see if my friend was exaggerating, and it sure looked like my friend was pretty much right. Noah offhandedly mentioned lab leak during a conversation about misinformation in general, some establishment medical folk *leap* on him and equate him with Alex Jones.
This makes me wonder. Is my impression of what happened, the above summary regarding the progression of the lab leak hypothesis over the past 3 years, just conservative propaganda? Have I bought into the bullshit?
Or maybe the medical establishment looked at the current israel/hamas situation, realized "wait, literally not a single damn person cares about the truth and we can lie all we want, maybe let's go back and try to memoryhole the lab leak thing and see if we can push it back into alex jones territory so we don't look so tyrannical anymore"
I just don't know what to believe
(this is all a pretty minor datum, too, compared to the PhD Lab Bio guy I know who left his 7 figure job at big pharma to build a hermetically-sealed doomsday bunker in the middle of nowhere, while screaming all the while that security precautions at virology labs were nowhere near good enough to reach the insanely high difficulty bar of not killing millions of people on accident, and who now won't shut up about the illegal chinese virology lab discovered in Reedley where the CDC simply refused to investigate. I pretty much either have to believe that he had a mental breakdown and is now totally insane, or that he's got a point)
Yeeeaah, but it still seems pretty obvious that the downsides outweigh literally the entire benefits of the field of virology collectively, and are probably not far off from the entirety of medical research combined. There's a sense of scale missing.
No opinion on the first half of that, but as for the second half... well, I haven't died of smallpox, polio, typhus, tuberculosis, dysentery, or the plague, which is pretty nice actually.
If I understand correctly, GoF research has only really been around since 2011, and there was a moratorium on it from 2014 to 2017. I'd say that the accomplishments of all medical research during that time period are not *that* much greater than the losses from a risk of a Covid-level pandemic (~25 million dead, lengthy global shutdown) every, let's take a guess, 10, 15 years?
Sure, but that’s different from “the entire benefits of the field of virology collectively”.
“We think it’s really important to understand how to detect and contain a nuclear disaster, and how exactly it would proceed, so we’re going to deliberately create the conditions for a full scale nuclear disaster and see what happens. In a totally controlled and contained lab environment of course!”
> A few months ago, there was a mass shooting by a far-left transgender person who apparently had a grudge against a Christian school. The Right made a big deal about how this proves the Left is violent. I don’t begrudge them this; the Left does the same every time a right-winger does something like this. But I didn’t update at all. It was always obvious that far-left transgender violence was possible (just as far-right anti-transgender violence is possible). My distribution included a term for something like this probably happening once every few years. When it happened, I just thought “Yeah, that more or less matches my distribution” and ignored it.
I know it's not "the point" of the post, but since you're talking about distributions, the one you imply having here (pretty much no difference between right- and left-wing terror) is wrong, for a few reasons.
1. There are 50-100x as many right-wing people as transgender people (~50% of the country is Republican, i.e. right-wing, and <1% are transgender). Naively, we should expect 50-100x as many mass shooting events where the perp is right-wing vs. trans. Even if you limit "right-wing attacks" to specifically "motivated by right-wing ideology", you're still looking at a large delta between the incidence of far-right extremists and trans people.
2. According to the US government (one source from the DOJ, https://www.ojp.gov/ncjrs/virtual-library/abstracts/comparative-analysis-violent-left-and-right-wing-extremist-groups, you can find others from e.g. the FBI, etc.), right-wing terrorism is a much more significant threat to the US than left-wing terror.
3. In point of fact, there is almost no left-wing terror/mass shootings in the US, whereas right-wing terror attacks are commonplace and rising. Heck, there was a ~~bloody~~ god-damned coup attempt just a few years ago where thousands of right-wing extremists stormed the capitol to overthrow the democratic rule of law... We are in a moment where the violent right is ascendant, and the right in general is getting more violent and less patient with democratic problem-solving.
Edit to strike out "bloody", as it was confusing. I meant "bloody coup" as in "a god-damned coup", just using bloody as an intensifier, but can see how that was confusing given that "bloody coup" means "a coup where many died".
"According to the US government (one source from the DOJ, https://www.ojp.gov/ncjrs/virtual-library/abstracts/comparative-analysis-violent-left-and-right-wing-extremist-groups,"
That source is from 1986. As for more recent ones, I'll note that many of them classify the Taliban and the Nation of Islam as "far-right."
Re the date, sure, but they publish similar reports today; far-right extremism has been the biggest threat for a long time.
And re the Taliban etc., that's accurate, since they are, but even if you only count white right-wingers the point remains.
Then why is it the left that's so against Islamophobia, and the right that is persistently antagonistic?
The definitions of "right wing" that the establishment likes to use are completely worthless. These are people who label a party literally called the National Socialist German Workers' Party as right wing. This party's members literally called each other comrades, but according to our glorious anti-extremism establishment they were the polar opposite of socialist.
I thought this comment was hokey, and then we got to the classic "but but but they have socialist right in the name!".
Read up on it a little bit. That word isn't there because they're socialists.
I've read up on it extensively. That word is absolutely there because they were socialists. It is absurd to argue otherwise given the historical evidence, in fact. The party leadership was always very clear on that point, stressing it repeatedly. That's why retellings of the 1930s never seem to trust the audience with the actual translated texts of Hitler's speeches, showing instead only a few seconds of a shouty man shouting in a foreign language. They don't dwell on what was said back then because senior Nazis can't go more than two minutes without praising socialism, calling people comrades, talking about how socialist they are, how they're going to eliminate class differences etc.
And it was no mere rhetoric! The policy gap between Germany and the USSR was very small. They did many of the same kinds of acts and for the same reasons.
This is really tired right-wing propaganda, and it's honestly kind of sad to see people trotting it out here. Either you're a troll/right-wing propagandist and know full well that you're peddling bullshit, or you're sincere but have completely and utterly misunderstood the topic.
If you have sincerely misunderstood, maybe this is more your reading level: https://www.washingtonpost.com/outlook/2020/02/05/right-needs-stop-falsely-claiming-that-nazis-were-socialists/
How is “right wing” the “polar opposite of socialist”?
What would you say the polar opposite is?
The polar opposite of “socialist” would be something like “individualist”. Both are compatible with right wing or left wing variations. Historically, socialism has most often been motivated by left wing concerns about the poor and minorities, but in several notable cases it has been motivated by right wing concerns about national or racial greatness.
The contemporary left is far more supportive of radical Islam than the contemporary right. Just look at who were the first to call for a ceasefire in Gaza, or who currently opposes military action against the Houthis.
You're confusing unrelated things here.
Radical Islam is, without question, a far-right movement. That stands on its own and is unrelated to the point you make here.
The contemporary "left" has inherited from the traditional left a slant towards pacifism, respect for human rights, etc., and so of course they are calling for cease-fires and for ends to bombing and so on. They did the same during the neoconservative adventures in Afghanistan and Iraq.
The "left" isn't calling for ceasefires because they support radical islam (the modern "left" famously supports women's rights, lgbtq+ rights, democracy, etc. etc. that radical islam is literally violently opposed to), it's because they're against violence and military adventurism in general.
And _even if they did_ support radical islam (which they don't, but just for the sake of argument), that wouldn't make a movement which is by definition far-right any less far-right. It would just make the "leftists" who support them confused, or compromising on some axis for some reason.
Have you already forgotten "what did you think decolonization meant"? The contemporary left has minimal if any slant towards pacifism.
The degree to which Radical Islam is considered far-right just demonstrates the simplistic fecklessness of a basic left-right spectrum; such labeling is grossly propagandistic in an American context.
What do *you* mean by “right wing” or “left wing” if you want to classify radical Islam as closer to the “left wing” than to the “right wing”?
> Heck, there was a bloody coup attempt just a few years ago where thousands of right-wing extremists stormed the capitol to overthrow the democratic rule of law.
Do you guys actually believe that?
...Wait, what's even your objection to that? They didn't even blame it on Trump. If a large group of people attempt to physically stop a transition of power in a democracy to prevent the current president from being unseated, that's effectively an attempted coup, yes?
Right? Like, that's literally just what happened. The people who did it are extremely up-front about what they were trying to do.
No it isn’t. You would need an army or a large group of armed paramilitaries for a “bloody” coup. Protests that have broken into parliament buildings are common enough.
Anyway as a European centrist I have broken my own rule about not talking about American politics anywhere because you are, both sides, utterly bat shit crazy.
Somebody needs to tell the coup-believers to "listen to the experts" on coups :)
https://marginalrevolution.com/marginalrevolution/2021/01/one-or-two-simple-points.html
In British English, "bloody" is an expletive attributive that is commonly used as an intensifier. It is often used to express anger, frustration, or emphasis in a slightly rude way.
The important thing is that you have found a way to say nothing of substance but also proclaim yourself superior to both sides. Bloody good job done, that.
Ohhh, wow, I completely overlooked that and just assumed that memories were getting continually more exaggerated as time went on. Good to know there aren't people suddenly "remembering" corpses lining the streets or anything.
There were five deaths and hundreds of injuries, so "bloody" is hyperbolic but not technically inaccurate.
Yes, exactly, I meant bloody as an intensifier. Not a great choice of words given the potential confusion.
The point isn't just breaking into the parliament, the point is breaking into the parliament with the exact goal of changing an election result to install a non-winning candidate to power. I consider it noncontroversial to claim this was the goal of Jan 6 protesters, at least the more organized section, even if it turned out to be a farce in execution.
But they had free run of the place, literally nobody was stopping them from doing what they wanted to do, and they didn't do anything! They milled about awkwardly, took some selfies, and left!
That's pretty much exactly what would happen, I think, if the cops let in the crazy progressives who like to storm government buildings during protests, and let them 'accomplish their goal'
This is false. They did not get into any of the areas where congressmen or senators were. They did not leave of their own volition, they only left when cops cleared them out. It took physical force to do so.
They also broke into some offices and stole stuff. Their main goal was stopping the certification of the election, which they accomplished for a few hours.
It's hard not to comment on such hilarity. Don't be too hard on yourself.
[Lest I appear to virtue-signal that I'm 'not like other Americans', I should add that wherever you are, your politics probably aren't much better -- just less funny.]
It was a poor choice of words, as @Forrest points out, "bloody" is an intensifier and that's how I meant it, not considering that "bloody coup" is also a term meaning a "coup where many died". I just meant something like "a god-damned coup".
No, that's a disruptive protest. Disruptive protests happen all the time (almost always coming from the left).
When was the last “disruptive protest” that “attempted to physically stop a transition of power in a democracy”? In the United States, I am unaware of any such protests since Jan 6, 2021, but maybe there are some examples I should be thinking more about.
https://en.wikipedia.org/wiki/Brooks_Brothers_riot
> VP Pence, presiding over the joint session (or Senate Pro Tempore Grassley, if Pence recuses himself), begins to open and count the ballots, starting with Alabama (without conceding that the procedure, specified by the Electoral Count Act, of going through the States alphabetically is required).
> When he gets to Arizona, he announces that he has multiple slates of electors, and so is going to defer decision on that until finishing the other States. This would be the first break with the procedure set out in the Act.
> At the end, he announces that because of the ongoing disputes in the 7 States, there are no electors that can be deemed validly appointed in those States. That means the total number of “electors appointed” – the language of the 12th Amendment – is 454. This reading of the 12th Amendment has also been advanced by Harvard Law Professor Laurence Tribe. A “majority of the electors appointed” would therefore be 228. There are at this point 232 votes for Trump, 222 votes for Biden. Pence then gavels President Trump as re-elected.
> Howls, of course, from the Democrats, who now claim, contrary to Tribe’s prior position, that 270 is required. So Pence says, fine. Pursuant to the 12th Amendment, no candidate has achieved the necessary majority. That sends the matter to the House, where “the votes shall be taken by states, the representation from each state having one vote . . .” Republicans currently control 26 of the state delegations, the bare majority needed to win that vote. President Trump is re-elected there as well.
> One last piece. Assuming the Electoral Count Act process is followed and, upon getting the objections to the Arizona slates, the two houses break into their separate chambers, we should not allow the Electoral Count Act constraint on debate to control. That would mean that a prior legislature was determining the rules of the present one – a constitutional no-no (as Tribe has forcefully argued). So someone – Ted Cruz, Rand Paul, etc. – should demand normal rules (which includes the filibuster). That creates a stalemate that would give the state legislatures more time to weigh in to formally support the alternate slate of electors, if they had not already done so.
> The main thing here is that Pence should do this without asking for permission – either from a vote of the joint session or from the Court. Let the other side challenge his actions in court, where Tribe (who in 2001 conceded the President of the Senate might be in charge of counting the votes) and others who would press a lawsuit would have their past position – that these are non-justiciable political questions – thrown back at them, to get the lawsuit dismissed. The fact is that the Constitution assigns this power to the Vice President as the ultimate arbiter. We should take all of our actions with that in mind.
This was what Trump was trying to do on Jan 6, 2021. This is a direct quote from his own lawyer. When he sent the protestors toward the Capitol building, it was to pressure Pence; when he remained notably silent as they breached the Capitol building for hours despite people close to him begging him to act, that was also to pressure Mike Pence. I feel like that is, in fact, thousands of right wingers storming the Capitol to overthrow the democratic rule of law, unless you think that this plan is in accordance with democratic rule of law (particularly when combined with fake electors and storming the Capitol).
Not to mention the reports from many people actually there in the capitol - their intent was to install their preferred candidate as president.
I don't see how encouraging or not discouraging the rioters really put pressure on Pence to try some crazy legal manoeuvre and trash his reputation.
Trump's plan to try use procedural shenanigans to be declared winner doesn't require storming the Capitol. The rioters weren't really trying to seize the Capitol so that Trump's plan could succeed. Sure the rioters wanted Trump to be president, but they surely didn't know all the details of this legal strategy; they seemed to be Trump fans and conspiracy nuts who got overexcited. They were mainly unarmed, so even if they had fully taken over the Capitol temporarily, armed police or the army would have cleared them out soon enough.
"storming the capitol to overthrow the democratic rule of law" would be a fair description if they had been a paramilitary force who could actually take lawmakers hostage and hold off the army.
Surely the really bad thing was Trump refusing to accept the election result and considering crazy legal strategies to try to be declared winner; the riot seems kind of a sideshow to that.
> Sure the rioters wanted Trump to be president, but they surely didn't know all the details of this legal strategy
They were saying "Hang Mike Pence" for a reason.
OK, so they knew at least enough that they were mad at Pence. Did they have any credible capability and intent to actually hang him? Or was this perhaps hyperbole? A lot of people say 'politician X should be jailed/hanged' but that doesn't rise to the level of a dangerous insurgency/revolution unless they can make it happen.
Also, if the plan was to pressure Pence to do procedural shenanigans, what is the use of sending in a bunch of rioters to disrupt proceedings? In the worst case, where they overwhelmed the Capitol security forces and actually hanged Mike Pence, that kind of ruins the plan that depends on Mike Pence's cooperation! I think everyone involved is too stupid and disorganised to call this an actual coup or insurgency attempt.
Thanks for the legal breakdown!
But I think the case for a coup isn't strong. He asked people to protest in front of the Capitol, which is legal. When they broke in, he told them to stand back (after 3 hours, which you can interpret in an infinite number of ways).
> When they broke in, he told them to stand back (after 3 hours, which you can interpret in an infinite number of ways).
I will interpret it in the very obvious way.
> Heck, there was a bloody coup attempt just a few years ago where thousands of right-wing extremists stormed the capitol to overthrow the democratic rule of law...
Of the five people who died on Jan 6, four were natural causes, and the remaining one was a protester shot by the Capitol Police: https://www.factcheck.org/2021/11/how-many-died-as-a-result-of-capitol-riot/
Whatever else you want to say about the Capitol-storming event, "bloody" is simply inaccurate.
Yes, poor choice of words, I meant "bloody" as an intensifier, as in "bloody infuriating", not "bloody" as in "many people died".
>According to the US government , right-wing terrorism is a much more significant threat to the US than left-wing terror.
The wingedness of terrorism is a notoriously contentious question, and, rather like the abuse and confusion over what counts as a "mass shooting," what counts as terrorism of one wing versus the other shifts with who is counting.
Likewise, the amount of Islamic terrorism looks quite different if you start counting from 9/12/2001, 9/10/2001, or 02/25/1993 (https://www.state.gov/1993-world-trade-center-bombing/). Choose any other date as convenient.
If you include the Weathermen and the other Days of Rage, the Long Hot Summer, etc, the degree of right vs left terrorism being more significant will change drastically.
> Before 9-11, we might have investigated the frequency of terrorist attacks. We would have noticed small attacks once every few years, large attacks every decade or so, etc. Then we would have fit it to a power law (it’s always a power law) and predicted a distribution
I disagree with this example; I think that the model we had of terrorism prior to 9/11 was significantly different. Typical pre-9/11 terrorism involved making plausible political demands and using limited quantities of violence to terrorise the civilian population into accepting them. This was the terrorism of the PLO, or the IRA, or the ANC, and there was a rational (if amoral) political calculus behind it.
9/11 was sampled from a different distribution. We called it "terrorism" but it didn't really match that modus operandi, there was no rational political calculus going on (at least not one that we could understand), there were no specific political demands, nobody to negotiate with on those demands, and no sign of restraint. This new form of terrorism seemed to be simply aimed at killing as many people as possible.
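The base-rate approach in the quoted passage can be sketched numerically. This is a toy illustration on synthetic data (the Pareto sample, the threshold, and the exponent are all invented for the example), showing how one would fit a power-law tail by maximum likelihood and then extrapolate the probability of event sizes larger than anything yet observed:

```python
import math
import random

random.seed(0)

# Hypothetical attack-severity data drawn from a Pareto (power-law)
# distribution with true exponent alpha = 2 and minimum severity x_min = 1.
alpha_true = 2.0
data = [random.paretovariate(alpha_true) for _ in range(10_000)]

# Maximum-likelihood (Hill) estimator for the tail exponent:
#   alpha_hat = n / sum(ln(x_i / x_min))
x_min = 1.0
n = len(data)
alpha_hat = n / sum(math.log(x / x_min) for x in data)

# Tail probability P(X > s) = (s / x_min) ** -alpha extrapolates how often
# events far larger than anything in the sample should occur.
p_large = (100 / x_min) ** -alpha_hat
print(f"estimated alpha: {alpha_hat:.2f}")
print(f"P(severity > 100): {p_large:.2e}")
```

The commenter's objection above is exactly that this machinery assumes new events are drawn from the same distribution as the old ones; a 9/11-style attack drawn from a different generating process breaks the extrapolation.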
Yes, this argument Scott makes here has another problem. Sampling from an expected probability distribution only makes sense if the distribution is actually semi-static, which in an adversarial scenario it isn't. In that case it's possible that rare events are rare exactly because each time one happens people freak out and do lots of stuff to ensure it can't happen again, forcing the adversaries to find new tactics.
I'm pretty sure that if the response to 9/11 had been a shrug and people saying "eh we knew it could happen, no big deal" then it would have happened again very quickly. OBL would have realised he'd found a soft underbelly on a peculiarly and irrationally unresponsive enemy, and would have just kept striking in the same way over and over until people realised that this type of rationalist Bayesian-priors argument was just plain wrong.
Fact is, in many scenarios past probability is no guide to future probability. Something can go from never-before-seen to very common, extremely fast.
In the case of COVID this lesson has NOT been learned, and so we would expect viral lab leaks to become a lot more common in future. Safety measures are hard work and virologists frequently ignore them as a consequence. They've now been taught in the strongest way possible that they can do whatever they like, deny it all, and nothing will happen to them. The establishment is so addicted to false narratives about progress that they can just do whatever the hell they like and be completely protected.
The result is that, unsurprisingly, the Chinese are now making new coronaviruses that are 100% lethal in mice due to some sort of terminal brain infection:
https://www.biorxiv.org/content/10.1101/2024.01.03.574008v1.full
> I'm pretty sure that if the response to 9/11 had been a shrug and people saying "eh we knew it could happen, no big deal" then it would have happened again very quickly. OBL would have realised he'd found a soft underbelly on a peculiarly and irrationally unresponsive enemy, and would have just kept striking in the same way over and over
I'm not so sure. I think - and I think it's generally accepted - that the point was to provoke an overreaction. If it had been unsuccessful, maybe they would have repeated similar attacks until they got the desired result (entirely plausible), or maybe they would have looked for some other way of achieving their goal.
On a slight tangent, I find it fascinating how in both this thread and the other Godwin thread where we're discussing, you don't really assign any weight to what people say their motivations are. I'd be curious to drill into that further if you don't mind. Are you carefully considering both these cases, or is it more like repeating things you heard? When you say "generally accepted" who is this, exactly? Academics? TV talking heads? Books?
OBL was always happy to discuss his claimed motivations and as presented they were fairly obvious: he didn't like the US being in the Middle East, supporting Israel, etc. To me, Occam's Razor says to take him seriously when he explains his motivations unless there's some decent evidence that he's trying to mislead about them (if there is such evidence I'm unaware of it). Likewise, to me, a party that calls itself National Socialist should be treated as socialist unless there's some compelling evidence that this was a deliberate lie.
You, on the other hand, see 4D chess behind both adversaries. OBL wasn't really mad about US troops in the Middle East; in reality that was an Al Qaeda cover story. He instead wanted to provoke an overreaction, in order to ... what? Make more people follow him, presumably? To what end? Likewise the Nazis claimed to be socialists publicly, but this was 4D chess. In reality they were ... what? Extremist libertarian capitalists, pretending to be socialists in order to ... what? I'm not sure. Get voter support?
I feel like any explanation of bad-guy behaviour that starts by discarding their stated goals is on thin ice and needs to be treated carefully, especially when those goals are common and have recurred throughout history.
They weren’t libertarians, they were fascists. Read up on the history of the Nazi party and you’ll see that there used to be a socialist wing to the party, but Hitler killed them all in the Night of the Long Knives because they opposed him. This was a necessary step for him to come to power. After he came to power he spent some time killing other socialist factions too. That’s why the ‘first they came for the socialists’ poem starts with the socialists.
What's the opposite of socialists? It's got to be libertarian capitalists, right?
The claim that Hitler wasn't a socialist because he killed other socialists is extremely silly. All socialists kill other socialists. Just ask the ghost of Trotsky. It comes with the territory. If you don't believe in the marketplace of ideas, or any marketplace, then all that's left is the raw exercise of power.
What does it matter what the opposite of socialist is? Fascists can kill socialists. Among the reasons to believe that Hitler was a fascist is that he killed the socialists, but that’s obviously not the only reason. This false dilemma between believing in the marketplace of ideas and being violent obviously doesn’t hold. Obviously not all socialists kill other socialists: just ask the next socialist you meet whether they’ve killed someone. And obviously not all socialists disbelieve in a marketplace: just ask the next market socialist you meet.
If 9/11 was an attempt to provoke an overreaction, then e.g. the attack on the USS Cole was an attempt to provoke an overreaction, by the same people and not too far apart in time. So we *know* what happens when a terrorist "attempt to provoke an overreaction" fails - they look for some other way of achieving their goal, and the obvious approach is to kill a whole lot more people, closer to home.
It wasn't completely new (see for example the Oklahoma City bombing), but what was new was how big, well-funded, and coordinated the group behind it was. Lone-wolf attacks (maybe plus one close friend or so) of the kind you described were a possibility, but if you had asked me (or, I assume, many other folks) the probability that you could get 19 (19!) suicide attackers coordinated together in secret without it leaking, get them into the US and living there for a while, get some trained as pilots, and get them through airport security with weapons and onto 4 planes the same morning, I'd have said the odds were really, really low. It's something out of a not particularly believable action movie, where the supply of mooks who are perfectly loyal, fanatically (even suicidally) devoted to the cause, yet somehow also quite capable and able to pass as normies, is inexhaustible.
As such, I should have expected to see dozens of foiled plots of similar scale, approach, and sophistication for this one success. So many places where something should have gone wrong. Instead, this seems like more or less the first time someone tried something like this. That's a radical update.
At minimum it's an update that these sorts of things are much less likely to be detected and foiled than I had assumed, and if potential terrorists made a similar update then perhaps we would see a lot more of these kinds of attacks in the future.
OKC & 9/11 had very salient differences for countermeasure planning:
- OKC was not a suicide attack, so swift capture, conviction, & execution of the culprit(s) is a deterrent to copycats.
- OKC specifically targeted a Federal building, so had minimal implications for private infrastructure.
- OKC required a supply chain (up to & including getting the truck close to the building) which presents opportunities for monitoring & disruption.
If anything, OKC could have been an object lesson in how *not* to plan future attacks.
> there was no rational political calculus going on (at least not one that we could understand)
I believe the rational political calculus was that a large and unprecedented terrorist attack would provoke an extreme overreaction from the USA (and potentially its allies), which would weaken America's standing in the world, provoke animosity towards the US/the West amongst the global Muslim community, and cost America enormous amounts of blood and treasure in futile adventurism.
And it was _extremely_ successful, achieving all of those goals.
There's no challenge in getting the US Government to waste money. Might as well get ducks to quack.
"It was that, after it happened, we were so surprised that we flung ourselves to the opposite extreme and saw terrorists behind every tree and around every corner. Then we made the opposite kind of failure (believed Saddam was hatching terrorist plots, and invading Iraq)."
The logic here seems to be that launching the war on terrorism was an overreaction because during and after it there wasn't much terrorism?
The issue is that it's possible the war on terror decreased terrorism, both through greater security and government powers, and through removing important bases in Iraq and Afghanistan and putting sanctions on terrorist organisations around the world.
I don't know the relevant timelines, so I don't know how likely this is, but it really seems like since the US retreated from the war on terror, reduced sanctions on Iran, Houthis, and presumably others, and retreated from Iraq and Afghanistan, there DOES seem to have been a big uptick in global terrorism. So maybe the response to 9-11 did something?
I recommend the book Days of Rage about terrorism in the US in the 70s. If I recall correctly in 1976 there was an average of three attacks in the US each day.
In these three months following October 7th, I think we've really seen there is no shortage of terror supporters in the US or around the world.
My objection to treating terrorist attacks as randomly distributed along a power law curve is that the big attacks are not someone randomly deciding to take action. They're sponsored by organizations, which are funded by rich people and protected by state governments. Those actors will look at a once-in-fifty years bodycount and think "That worked really well, we should do some more of that!"
The counter is for the anti-terrorist side to take action. Enough action that the next time someone has a bright idea for killing thousands of Americans or more, the response will be "Shut up! Do you want the Americans hanging around for twenty years making our children attend immoral schools?"
Good point. We are dealing with people here, on both sides of the "events". Not only those who react to the "events", but also those who cause them. The latter may change their behaviors depending on how others react to what they do - thereby changing the frequency of future "events". Therefore I am a bit uneasy about this blog post by Scott. It seems to assume that the statistical distribution of "events" is fixed. That may be the case e.g. with volcanic eruptions, but not with man-made "events".
Let's assume that this isn't preaching to the choir... are you sure that even normal people typically are updating based on the occurrence of a dramatic event and not simply updating on the reaction to it?
A dramatic event is often latched onto by people who stand to gain by making it seem high-probability or high-impact, and the fact that they create drama around it (regardless of the specific logic they put forth) is signal too. If those people are close to your usual sources of world truth (or are signal-boosted by people who are), it makes sense to update your model of world truth.
It's often not a power law! https://arxiv.org/abs/0706.1062
Does this essay's commentary on terrorist attacks begin from the premise that they are... independent events? That seems absurd; it's perfectly reasonable to assume a massive terrorist attack being successful might change the frequency at which terrorists attempt massive attacks! If the goal of terrorists is to create chaos and terror in enemy society, they will try to replicate the tactics which successfully do so.
Maybe the assumption is that the potential terrorists are perfect Bayesians too, and have already incorporated all available data in their strategies and success predictions.
Very true. Humans are social creatures, including the terrorists among us. We learn from experience, from the example of others, and from how others react to whatever we or others do. So do would-be terrorists, would-be mass murderers, those who are tempted to use their local power to get laid more often, and everyone else.
That is why statistics about the frequency of man-made "events" can have a short shelf-life. If something new happens, it may induce behavioral change, making the frequency tables constructed based on past events not valid any more. Bayesians should adjust their priors accordingly.
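To make the first half of that concrete (the numbers are my own illustration): even a textbook Beta-Bernoulli update can double a rare-event estimate on a single new occurrence when the history is short, and the point above is that for man-made events even that can be an underestimate, since the underlying rate itself may have shifted.

```python
# Toy Beta-Bernoulli sketch (illustrative numbers, mine): posterior mean
# for the per-year probability of an event, given a uniform Beta(1, 1)
# prior and counts of event years vs. event-free years.
def beta_mean(hits, misses, a=1.0, b=1.0):
    return (a + hits) / (a + b + hits + misses)

# 50 event-free years, then one year in which the event happens:
before = beta_mean(0, 50)  # 1/52 ~ 0.019
after = beta_mean(1, 50)   # 2/53 ~ 0.038, roughly double
print(before, after)
# And this still assumes a fixed underlying rate; if actors react to the
# event, even the doubled estimate can be too low.
```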
Yeah there is definitely a copy cat effect to consider.
As an example, the fact that school shootings are the big fear and not school bombings or arsons can largely be traced to the historical quirk that two teenagers in Colorado were shitty at building pipe bombs.
Not so sure about that. Guns are fairly easy to acquire in the US, and fairly easy to use.
Acquiring bombs is difficult. Setting buildings on fire in such a way as to create a lot of destruction or kill a lot of people is difficult.
Shooting up a cafeteria is relatively easy.
You're just seeing people take the path of least effort for greatest results.
You’re underselling the degree to which most school shooters have been obsessed with, and deliberately copycatting, Harris and Klebold.
Now, maybe “The Columbine Bombing” wouldn’t have had the same memetic staying power. Maybe it would have evolved into shooting anyway, being easier. Or maybe a different event would have overtaken it and become the prototype.
But I think there is definitely a plausible world where the bombs work and this spawns a couple decades of copycats.
See also https://slatestarcodex.com/2016/01/06/guns-and-states/
> FS/S correlates at a fantastically high 0.62. For some reason, suicidal Southerners are much more likely to kill themselves with guns than suicidal people from the rest of the States, even when you control for whether they have a gun or not.
That is, Southerners who own guns are more likely than non-Southerners *who also own guns* to kill themselves with a gun as opposed to some other method. Convenience cannot possibly explain that. But cultural transmission can.
I strongly agree with the overall point of this article (expressed around the SBF vs. Altman episodes), but not sure about this:
"But terrorist attacks after 9-11 mostly followed the same pattern as before 9-11: every few years, someone set off a bomb and killed some people, at about the same rate as always.... In retrospect, updating any of our beliefs - about Islam, about the extent of the terrorist threat, about geopolitical reality, based on 9-11, was probably a mistake"
I think the argument is that part of the post-9/11 paradigm was the government instituting a ton of security protocols, and collaborating with other friendly nations to do the same. So the rate of terrorist attacks after 9/11 probably reflected a heightened security environment; I disagree with the idea that they're randomly distributed and we just happened to hit a big one in the early aughts. There are a lot of foiled attacks.
Also, there are a lot of *successful* terrorist attacks in Europe by Islamists. It seems to me the rate increased compared to before the 2000s? The Spanish train bombings were not that long after 9/11.
Yeah I was about to write something similar. I got curious and read the wikipedia for "Suicide attack" and "terrorism", and my impression is 9/11 really did cause, or at least coincided with, an upsurge in this type of organised suicidal terrorist attack. Additionally, other late 20th century examples seem to be very tied to specific nationalist/secessionist movements and therefore lower risk to a place like the USA.
Yes, al-Qaeda had struck before, starting in the 90s, and this made something like 9-11 somewhat more predictable. But this kind of structured international terrorism was fairly novel (perhaps comparable to the anarchist movement many decades prior), and was followed by a long list of similar successful and unsuccessful terrorist plots by them or related groups in the West. In hindsight, it does seem reasonable for people to have updated their priors significantly after 9/11 (or at least rapidly between 1998 and 2003 - much faster than the multi-decade power law expectations Scott is suggesting).
Again this is just my impression from wikipedia, would be interested if anyone has studied this more deeply.
>I think the argument is that part of the post-9/11 paradigm was the government instituting a ton of security protocols, and collaborating with other friendly nations to do the same. So the rate of terrorist attacks after 9/11 probably reflected a heightened security environment; I disagree with the idea that they're randomly distributed and we just happened to hit a big one in the early aughts. There are a lot of foiled attacks.
Yes, that's what I was also thinking about. In particular, all the onerous air security measures after 9/11 were specifically instituted to prevent an attack similar to 9/11 from ever happening again. And it worked! Indeed, back when I was a child (in the 90s) hijackings in general were a strong enough thing in cultural memory, at least, that I expected I'd go through at least one of them during my lifetime, and if anything the security measures now mean that that *cultural* memory no longer applies (even though, apparently, there still are some hijackings! https://en.wikipedia.org/wiki/List_of_aircraft_hijackings)
>Also, there are a lot of *successful* terrorist attacks in Europe by Islamists. It seems to me the rate increased from before the 2000s? The Spanish train bombings were not that long after
It increased and then decreased again (see https://en.wikipedia.org/wiki/Islamic_terrorism_in_Europe). The terrorist wave was the strongest in the mid-2010s, concurrently with the existence of IS as an actual force in the Middle East, but started trending down after IS was mostly defeated in those areas.
I. Lab leaks. Another option is recognizing the pandemic for the wakeup call that it was and asking whether we want to continue with the (20%? 2%? 0.2%?) annual risk of repeating a pandemic due to a lab leak. Arguing for continued gain-of-function study is the ultimate luxury belief.
II and III. 9-11, nuking of a single city, and mass shootings (don't worry folks, we predicted something like this would happen, that's just life in the big city). Some people (enough to affect national discourse and some elections) demand political responses to certain events. "We're not changing our response, not updating our priors, and continuing to be guided by Bayesian math," may not be a winning political response, which could lead to being forced to update priors after the next election. Maybe take out bin Laden and his lieutenants but not invade Iraq.
There are several varieties of this, and in some cases I agree with your point more than in others.
1. Exceptional events, perhaps first of their class. In these cases, an occurrence typically provides a non-negligible update on the frequency of the event (even if the correct update may be smaller than many people think).
1.1 Exceptional events with major consequences. E.g. a lab-leak-caused pandemic that kills millions of people, or a nuclear terrorist attack. Policy changes may be warranted.
1.2 Exceptional events where the consequences are minor on a world/national scale. E.g. 9/11.
The US response to 9/11 was an overreaction not because 9/11 wasn't an exceptional event that should cause one to update non-negligibly, but because a few thousand people dying in a country of hundreds of millions is, as sad as it is, too little to warrant major policy changes, even if similar attacks were to happen slightly more frequently than we'd thought.
(In the particular case of 9/11 there is the further consideration that air crews and passengers having learned never to cooperate with hijackers is enough to prevent attacks of this form from being repeated. OTOH that was an important lesson to have learned, with the side benefit of disincentivizing more traditional, hostage-taking hijackings.)
2. Non-exceptional events (say, at least a few dozen have happened already). In these cases, you should update negligibly based on a new occurrence.
2.1 Events infrequent enough that you're likely to hear about all of them: e.g. mass-shootings, air crashes.
2.2 Events (e.g. assaults, instances of sexual harassment, even homicide) more frequent than audiences have the appetite to read about. In these cases, how many you hear about in the media is almost entirely uncorrelated with the actual number of occurrences, and depends entirely on how many stories can still keep up readers' interest; which ones you hear about depends on which ones the media decides to talk about (either deliberately or through a random chain of events).
One problem with the cynical strategy of using crises as organizational opportunities is the strong tendency they have toward becoming left/right coded. Once you've polarized on an issue - or leveraged polarization on that issue - you risk enacting reforms that are too targeted.
An example: Sex scandals involving allegations of the Catholic church covering up abuse have not been in the popular news recently. (At least not the news I consume.) When they were, I remember thinking, "Wow, that's bad. I'm shocked at the reports I'm reading!" I updated some of my understanding of the inner workings of the Catholic church, and thought there must be something uniquely wrong happening there.
Recently, I started looking at other statistics about sexual abuse of minors. It turns out this is very common in public schools as well, including reports that some school districts cover up allegations by transferring teachers, hiding the allegations, and allowing them to continue interacting with students.
As I was reading about this, I reflected on my previous updates about the Catholic church. I thought, "Maybe this phenomenon has nothing to do with religion or with one specific institution gone astray. Maybe when something sufficiently negative threatens to make an institution look REALLY bad, the people in charge respond by trying to hide the abuse." I don't claim to understand what these people are thinking at this point, but it doesn't seem like you need a particularly corrupt/captured institution for cycles of abuse to be hidden by the bureaucracy. Or at least, this isn't the kind of institution that is as highly abnormal/unexpected as I'd thought on reading the initial reports.
I also think that the impetus for abuse of minors by people in positions of authority is much higher than I used to think it was. Given these two updates, I no longer think a solution that specifically targets the Catholic church would do much to curb this phenomenon. Indeed, it might get in the way of good general reform to make a hyper-specific reform targeted at Catholics that does nothing to address the general problem rooted in human nature.
If, instead, we looked at general institutional incentives and tried to shift them to the point where any institution that failed to report abuse suffers, while those that proactively report abuse are rewarded as being forward-thinking and honest brokers (because they're on the lookout for a behavior we expect to find under normal circumstances), we might be able to make meaningful changes. Indeed, we might find abuse that has previously been successfully covered up, allowing us to stop it.
The problem with large updates on single phenomena is that the hyperfocus of the moment can cause us to make changes that are far too specific to address the root problem we care about solving.
Perhaps, although I do think that the celibacy requirement makes Catholic priests uniquely susceptible to these sorts of urges.
When I first heard about the scandal, I had the same thought. Yet you still have the exact same type of abuse in denominations without a celibate clergy. It's a great hypothesis, but based on the evidence I no longer believe it to be a significant factor.
Also married gym teachers. Or teachers of all types. Or those of any stripe who have access to the impressionable.
Since this article is also about 9/11, I should repeat the (factually accurate) joke that if the normal prevalence of pedophiles in the population held for the twin towers, the attack killed 75 of them.
Exactly. The impression you get from the Catholic scandal is that you've some insight into the type of people who might abuse. "It's those celibate types," "it's those religious types," or whatever. It's the details of the one-off situation that create distortions you don't even realize are there. Then countervailing evidence demonstrates that maybe you've spent years with a defective pedophile detector because you updated too hard on a single event.
In this, as in all the other examples, the problem is that we've updated away from sound heuristics in the first place. Thinking that Catholic priests as a class are pedophiles is possible only when we've lost touch with more concrete and immediate cues. We're a highly evolved social species, and body language and facial expressions convey a ton of information (to most of us, of course -- it's a range). If you hear about some abuser being defrocked, imagine a normal Catholic priest that you've known or seen, and then imagine a pedophile. Only then look at the picture of the abuser, and see if he looks more like the former or more like the latter.
Similarly, foreign terrorism and Chinese lab leaks are in a category of risk that we're naturally suspicious of, unless we've learned to label this 'xenophobia' and update away from it. None of our ancestors before the industrial revolution would have been able to understand why unrestricted travel in our communities is a human right that we must extend to our enemies.
If it really is the case that celibacy makes people into paedophiles, maybe women really do have an obligation to have sex with disgusting incels. Wouldn't want them to turn to children instead.
Why would it do that?
Melvin, if a priest wants to break his vows and have sex, he can have affairs with women parishioners (or male ones). "Gosh, I want to have sex, but it's too risky with grown-ups, I'm safer to be fucking kids" isn't really the motivation here.
That was the big excuse used by those who want a more liberal church: if only we could have married clergy/women clergy/gay clergy! Then this would never happen!
And the response when the school sex scandals came out, from this side, was "oh no, if only teachers could marry! if only women could be teachers! if only there were gay teachers! oh, what do you mean that already is the case?" because it's not got very much to do with "can you marry", it's "are you capable of/want a relationship with an adult?"
Certainly, I think celibacy meant that men who could not see themselves/had no interest in heterosexual marriage had an outlet in the priesthood, "John isn't gay, he's got a vocation" was a way of avoiding coming out to family and that was an entire, separate scandal of its own (see Ted McCarrick https://en.wikipedia.org/wiki/Theodore_McCarrick#Abuse_of_seminarians and the rather, um, extravagant allegations made by Archbishop Vigano) and the same for men who may not have understood their own paedophilia.
But the underlying problem is: positions of authority that offer access to children will attract these types of people, whether or not they permit married gym coaches, doctors, teachers, and so on.
The biggest difference between the Catholic Church sex abuse scandal and sex abuse in other organizations is that the hierarchical nature of the Catholic Church meant the cover-ups were far more involved than they would be anywhere else. You could have a Protestant pastor in some storefront church answering to no one. If he commits any sort of sexual impropriety (criminal or otherwise), maybe he gets caught, maybe he gets chased out of town, maybe he gets away with it. In the Catholic Church, every priest answers to a bishop (or some sort of superior in a religious order), and those higher-ups had incentives for covering up crimes. And the Church as a whole is a lot bigger than a single school district, so it was easier to move people around in an attempt to conceal wrongdoing. But I think you're completely right that there's sexual abuse (and cover-ups of that abuse) in a lot more places than just the Catholic Church.
You're probably right insofar as the Catholic Church is one of the largest institutions on the planet. And for a time I thought that elements like their hierarchical nature and their ability to move people around to hide offenses made them uniquely able to perpetrate this kind of abuse.
But school districts have been caught moving teachers around within the city, hiding offenses and keeping teachers from criminal prosecution, etc., similar to what the Catholics did. When I think about it a little more closely, they probably can't move a priest from Boston to Bogotá. Indeed, any move more significant than within the city increases the logistical difficulty. (I don't know much about this aspect of the Catholic Church, so maybe it is as easy as an international transfer?) If most transfers are within the city/region, the nominal size of the church may not be as unique a factor as it at first appears.
Which is part of what I'm trying to say here. When I'm looking at an isolated incident, I've noticed a strong tendency to use unique features of the case to explain the phenomenon, absent any actual data establishing causal relationships. This can lead me to think that "those guys are uniquely bad", which can lead me toward partisan thinking that harms my ability to understand the problem at a more fundamental level. When I stopped thinking of child abuse as a unique or far-off phenomenon, prevention of that same issue within institutions I'm involved in became a more immediate concern. There but for the grace of god go my favorite institutions (religious or secular).
I can't say for sure how often priests were transferred outside of dioceses. Transferring a priest within a diocese (roughly a metro area, perhaps a larger area in a less-dense state) is a very simple procedure. Transferring a priest to another diocese would basically require both bishops and the priest to approve. But after some brief googling and Wikipedia research, it looks like there were some priests (from Orange County, CA, according to Wikipedia) who were transferred to different dioceses and even countries.
And also, even one Catholic archdiocese can be enormous. The Archdiocese of LA has 4-5 million people. Only five other denominations have more than 5 million members in the entire US.
Thanks for this! Sounds like the Catholics definitely leveraged their size to help hide abuse. Indeed, moving someone to another country significantly complicates the process of criminal proceedings against the offender, which is probably why an international reassignment was agreed to.
Not to excuse the crimes, but at the time there was also "we consulted a psychiatrist who told us that therapy would cure Father Dan and now he's fine and safe to be moved to another parish" going on. Some of the higher-ups naturally didn't want scandal to be public, but they did try the Medical Consensus Of The Time, which often was "nah, some therapy and counselling and it'll all wash out".
I'm gonna need to see some kind of source that public schools have both a) the same large number of child sexual abuse cases and b) the same kind of large cover-up involving hundreds (if not more) of people over decades as part of an organized institutional effort.
As far as I can tell, neither are true. The weaker claim "there is also sexual abuse in public schools and sometimes it is covered up" is of course true, but not very interesting.
Edit: basically at a minimum, I would need to see evidence that multiple Secretaries of Education (or at least, very high-ranking members of the Department of Education) were routinely involved in national-scale sex abuse cover-ups, in the same way that the Pope and high-ranking Catholics were involved for decades in these kinds of cover-ups.
And *I'm* gonna want to see some evidence that "the Pope" (which one? the sex abuse scandals go back decades) was routinely involved in such cover-ups.
I can make demands for rigour to fit in with my biases, too!
Anyway, somebody is claiming the Department of Education was complicit in such cover-ups for schools:
https://edworkforce.house.gov/news/documentsingle.aspx?DocumentID=409216
"Today, Education and the Workforce Committee Chairwoman Virginia Foxx (R-NC) made the following statement in response to the Defense of Freedom Institute’s (DFI) new report that uncovered evidence of teachers unions, school districts, and the Department of Education concealing cases of sexual abuse in K-12 public schools:
“This report highlights a pattern of gross misconduct by school officials and the Department of Education—who are beholden to teachers unions—to conceal directly or indirectly sexual abuse in K-12 public schools. Cases of sexual assault have tripled in the last decade, but instead of investigations and terminations, perpetrators are often transferred to another school or school district, or given an administrative job.
“The Department’s Office for Civil Rights (OCR) is complicit too. OCR has worked to reverse the progress made by the prior administration to compel school district officials to take decisive action in cases of sexual abuse.
“In no uncertain terms, this report shows that the Department of Education and teachers unions are putting children in harm’s way. I hope that this report shines a spotlight on the deteriorating state of public schools in America and the consequences of the Left’s radical education agenda. Rest assured, the Committee will continue to hold the Biden administration accountable for not putting students first.”
So what is this report, then?
https://edworkforce.house.gov/uploadedfiles/catching-the-trash-fnl.pdf
Maybe it is only a bunch of nutjob right-wing conspiracy theorists, but that's the claim.
Some local news station in Boston also did investigative reporting into abuse in schools there, and the response (or not) of the authorities:
https://www.boston25news.com/news/local/25-investigates-finds-state-laws-enable-secrecy-over-sexual-abuse-ma-public-schools/YTB4U5XAHRCELAJU5WVSDTKH3Q/
Or California:
https://soe.vcu.edu/news/recent-news/cbs-report-on-sexual-abuse-in-schools-cites-vcu-scholar-as-resource.html
But I guess because Massachusetts and California school boards are not international, or even national, authorities then it's not the same thing at all, at all!
Thanks for providing this. I'd like to return the focus on the original point, which was a specific example of how updating based on dramatic events can cause you to become hyper-specific in your expectations. "This kind of thing happens because of X, Y, and Z features of the Catholic Church", or perhaps, "this report shines a spotlight on the deteriorating state of public schools in America and the consequences of the Left’s radical education agenda".
It seems clear to me that "positions of authority attract this kind of person", and perhaps, "extremely embarrassing and polarizing bad press can cause institutions to defend themselves by hiding misconduct privately, instead of rooting it out publicly" are more reasonable conclusions. The kind of solutions that make sense hinge on what you think the problem is.
I bet if Jeff Epstein was two decades younger he'd have given a lot of money to EA and there'd have been an entirely unnecessary massive controversy when his extracurricular activities got revealed.
I disagree, Epstein was never a big charity guy. He had ample opportunities to donate to charity and never did, and I don't think that inventing a slightly different version of charity would have changed his mind.
I think he donated to a number of non-profits, such as Edge.org
I think that is quite different from the sort of charity donation that might slide into helping poor people.
EA is a big category which includes things other than "helping poor people".
EA is to some degree a non-standard "charity," even if they fund some traditional (but above-average ROI) charities. There's also a fair number of Big Thinkers associated which might have appealed to him, that he'd want to brush shoulders with (like Marvin Minsky apparently did).
Marvin Minsky's "The Society of Mind" thanks Epstein for the funding on one of the first few pages, seems plausible Epstein would fund stuff in the AI x-risk space.
No chance - EA is for the uncool nerdy kids. Epstein was in with the big guys, who categorically do not support EA. EA is, cynically, a way for people to act like they're smarter than everyone else. Other, more trendy causes are for people who want influence and attention.
How do you figure, "EA is, cynically, a way for people to act like they're smarter than everyone else."? I would say EA is a way for people to admit that others are more knowledgeable about how best to distribute donations.
To be fair, people who want to act like they’re smarter than everyone else were a lot of Epstein’s social circle, especially around the MIT Media Lab.
I assume there is a theory for the evolutionary origins of salience bias? If not, maybe it goes something like this:
It's always a power law. All power laws are scaled versions of each other. Hence if one considers the worst event a of type A one has ever observed vs. the worst event b of type B, and b is worse than a, then, everything else being equal, one has reason to believe that the power law for B is a scaled-up version of the power law for A. For example, I am guessing the worst individual predator attack you have ever heard of killed no more than three people. The worst earthquake you have ever heard of, on the other hand...
"Everything equal" does a lot of work here; specifically, the amount of observation done should be equal. But this used to be the case throughout all of prehistory. History, by definition, started when a written record enabled us to preserve certain events for much greater periods of time. And these are reliably the most salient events. Hence we have a written record of the destruction of Pompeii by the eruption of Mount Vesuvius but not one of some random lion attack (unless it killed a very important person). In prehistory, people had their own observations and what other people told them about, which went back maybe three generations before fading into the domain of myth. So in terms of orders of magnitude, all worst events one had observed or heard about could be considered about equally likely.
Now, usually precautions against the worst ever observed event of a given type will also work against lesser events of that type. E.g. sleeping around a fire at night will deter lions but also leopards, hyenas, wild dogs, ...; keeping a safety distance of 50km around a big volcano will also keep one safe from smaller volcanos; and so forth.
So if one scales one's precautions according to the inferred scaling factor deltas of the known worst events, one should assign approximately the right amount of resources to each.
In other words, if during my lifetime I have seen a lion kill a family member and I have also seen a volcano wipe out an entire tribe, it appears rational that I view volcanos as the greater threat and take much greater precautions against them. Here I may misallocate, but any such misallocation is short-term. My great-great-grandchildren will already have forgotten about the volcano, but they will still be aware of the more frequently occurring lion attacks and will have reallocated their resources correspondingly.
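The scaling claim here can be checked with a toy simulation (a sketch with illustrative numbers, not anything from the comment above): if two hazard types follow power laws with the same tail exponent but scales differing by a factor of 10, then with equal observation counts the typical ratio of worst-observed events recovers that scale factor, which is what makes "allocate precautions by worst-ever-observed" roughly calibrated.

```python
import random

def pareto_sample(x_min, alpha, rng):
    # Inverse-CDF sampling from a Pareto distribution:
    # severity = x_min / U**(1/alpha), with U uniform on (0, 1)
    return x_min / rng.random() ** (1.0 / alpha)

rng = random.Random(0)
alpha = 1.5   # same tail exponent for both hazard types (illustrative)
n_obs = 200   # events "remembered" over a few generations (illustrative)

# Type A (say, predator attacks) has scale 1; type B (eruptions) is the
# same power law scaled up by a factor of 10.
ratios = []
for _ in range(500):
    worst_a = max(pareto_sample(1.0, alpha, rng) for _ in range(n_obs))
    worst_b = max(pareto_sample(10.0, alpha, rng) for _ in range(n_obs))
    ratios.append(worst_b / worst_a)

ratios.sort()
median_ratio = ratios[len(ratios) // 2]
# The median worst-B / worst-A ratio sits near the true scale factor (~10),
# though any single lifetime's ratio is noisy.
print(round(median_ratio, 1))
```

Any individual observer can misjudge the ratio badly (heavy tails are noisy), which matches the "I may misallocate, but only short-term" caveat: the median across many observers/generations is what converges.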
Viruses evolve and gain function in nature. What happens in a BSL4 could just as easily have happened in a bat cave. That's why I don't think it matters whether COVID was a lab leak. Restricting virology research does not materially reduce the probability of dangerous viruses existing or entering the human population. It would limit our ability to understand and respond to the situation when a naturally evolved virus does circulate.
"Wild animals attack and kill people all the time, so you should have no concern about the rampaging gorillas that just escaped from the zoo next door..."
Correct. We don't put a moratorium on zoos because of the risk of animal attacks, and it would be ridiculous to advocate this.
It's just an analogy. The point is that the risk already existing in nature is not an argument for artificially increasing it further.
"Zoos sometimes cause wild-animal attacks" does not imply "we should get rid of zoos to protect ourselves from wild-animal attacks" and neither do the analogous statements for virology.
Better legalize drunk driving, then? Want a mode of falsification for "let's accept risks"?
If you don't drive drunk, you don't hit anyone. If you don't research viruses, they still infect you.
The correct analogy here would be: If you drive, you risk hitting people. So why not drive drunk? People will be run over anyways.
A car that is not being driven doesn’t hurt anyone. A car that’s being driven carefully hurts fewer people than one being driven drunk or recklessly. But a pandemic pathogen that exists or can exist in nature is going to cause the same pandemic sooner or later - whether you give it a pathway or it finds its own. The question is, will you be ready? To optimize for survival, we definitely need to put our thumbs on the scale in favor of “later” but we’re also going to need to use that time to do virology research and get ready.
???
There are still car crashes without drunk driving.
But we do expect zoos to take precautions to stop dangerous animals getting out of their cages.
I think your argument needs actual numbers in order to be more convincing.
Given a population of bats doing their usual dirty bat things, how often should we expect a nasty human-adapted virus to emerge? And what's the chances of a human getting infected with it?
Given a research program devoted to making random animal viruses better adapted to humans, how often should we expect a nasty human-adapted virus to emerge? (Very often, that's the point.) And what's the chances of a human getting infected with it? (Depends how foolproof your containment procedures are.)
I could easily be convinced that the existence of GoF research in its current form increases the frequency of nasty pandemics by anywhere between 10% and 1000%.
There are fewer humans around in a bat cave, so there's less selection pressure for a virus to evolve to infect humans.
>Viruses evolve and gain function in nature. What happens in a BSL4 could just as easily have happened in a bat cave. That's why I don't think it matters whether COVID was a lab leak.
No, absolutely not "just as easily". Those are chance events. GoF is literally trying to make this happen as quickly and effectively as possible.
>Restricting virology research does not materially reduce the probability of dangerous viruses existing or entering the human population.
Yes! It does! You're just flat out wrong.
You're increasing the number of enhanced viruses that exist, you're increasing the extent to which those viruses are enhanced, and you're deliberately putting them around people to whom the virus can spread. ALL of these things increase the risk.
>It would limit our ability to understand and respond to the situation when a naturally evolved virus does circulate.
You mean like how the research that caused this pandemic in the first place helped us? Oh, NOPE, it only got people killed.
No GoF research that has been done to date has been worth millions of deaths and trillions of dollars. You're making a huge speculative bet if you claim that such research will, at the absolute bare minimum ever outweigh the cost of the one major pandemic it caused, let alone outweigh the known risk of causing future pandemics too.
THE biggest risk of future pandemics is lab leaks. And nothing about GoF research suggests the risk will ever be worth it.
> What happens in a BSL4 could just as easily have happened in a bat cave
1. You pulled this probability out of your rear end.
2. It's wrong because in labs they do serial passaging in genetically engineered animals and other tricks to massively speed up adaptation to humans, which in nature may have never happened given that humans tend to avoid bat-filled caves.
3. Finally, it's irrelevant anyway because the WIV team wasn't operating in BSL4 labs. They had them, but didn't use them, because they're a pain in the ass. The SARS-CoV-2 work they almost certainly did was done at lower safety levels.
"given that humans tend to avoid bat-filled caves"
How could I resist this?
"average person visits 3 bat-filled caves a year" factoid actually just statistical error. average person visits 0 caves per year. Bat Man, who lives in bat-filled cave & encounters over 10,000 bats each day, is an outlier and should not have been counted
The issue is gain-of-function research on potential pandemic pathogens like SARS. There were specific warnings from Carl Bergstrom and Marc Lipsitch against lifting the moratorium on this research in 2017.
In terms of COVID-19 the nearest relatives to SARS-CoV-2 are found ~1500km away from Wuhan in areas WIV sampled SARS-related bat coronaviruses like Yunnan and Laos. It arose in Wuhan well adapted to human cells, with low genetic diversity indicating a lack of prior circulation, and with a furin cleavage site never seen in a sarbecovirus. WIV was also part of a proposal to add furin cleavage sites into novel SARS-related bat coronaviruses.
> In terms of COVID-19 the nearest relatives to SARS-CoV-2 are found ~1500km away from Wuhan in areas WIV sampled SARS-related bat coronaviruses like Yunnan and Laos.
The closest *known* relatives, but they're not all that close. This paper (which I am completely unqualified to assess) finds that the most recent common ancestor of SARS-CoV-2 and RaTG13 was around 50 years ago: https://academic.oup.com/ve/article/7/1/veaa098/6047024?login=false
“Just as easily” is doing a *lot* of work here. Seems to me that I would say we could “just as easily” do virus research without gain of function, since the same things are going on in bat caves. If this is going to generate any advantage in terms of data or evidence, it must be changing the probabilities of *something*, and it’s on the people doing this to argue that it’s the probabilities of safe learning about viruses, and not the probabilities of people getting infected with novel viruses.
> over learning
... only follows if people changed their minds; people thought intentionally getting animals sick to breed better viruses was stupid before Fauci killed more people than several wars.
Whether "gain of function" research should be banned was an active debate for a decade before 2020. It wasn't a matter of debate whether shoe bombs were a risk before 9/11 and the TSA.
> math, assume this and such numbers, and you get such and such result
Making viruses stronger is bioterrorism research; he was actively evading a law, and it was helping China learn a new type of weapon of mass destruction. I don't see a way to read a situation in which you accept the facts (that Fauci moved money to the Wuhan institute to breed stronger viruses, work people were very, very concerned about and had stopped for fear of this exact outcome) that isn't profoundly stupid, treasonous, or psychopathic.
It's unlikely we'll ever know which, but why not take a common denominator of "bad"?
>But if you would freak out and ban gain-of-function research at a 27.5%-per-decade chance of it causing a pandemic per decade, you should probably still freak out at a 19-20%-per-decade chance. So it doesn’t matter very much whether COVID was a lab leak or not.
Notably, banning gain of function research wouldn't prevent SARS-CoV-2 leaking from a lab. According to the most plausible version of the lab-leak theory, SARS-CoV-2 was a natural virus from a cave, that researchers brought back to Wuhan, from which it subsequently escaped.
If it was a natural virus that escaped, the response should surely be to ban research that collects unknown wild viruses and cultures them in populated areas with inadequate containment.
But isn't one of the arguments for a lab leak that the virus has a furin cleavage site that is unlikely to have evolved naturally and is evidence that SARS-CoV-2 was actually engineered? Even if the virus wasn't genetically modified, passaging it through non-bat lab animals (deliberately or accidentally) might have given it an opportunity to evolve to cross over into humans in a way that wouldn't have happened otherwise.
The furin cleavage site is evidence, but it's not super-strong evidence by itself. See e.g. https://www.sciencedirect.com/science/article/pii/S1873506120304165 . Coincidences do happen. The evidence for "the outbreak was related to gain-of-function research at the lab" is a lot weaker than the evidence for "the outbreak was related to research at the lab".
Of course, if we had functioning institutions, then answering the question of whether the research engineered that specific furin cleavage site would be trivial, since we could Just Ask. But that's a separate problem.
To me one of the arguments *against* the virus being deliberately modified is that the Wuhan Institute of Virology wasn't some super-secret black site: they had openly collaborated with Western scientists. So if they had been doing research to make Sars-Cov-2, it's surprising that they hadn't mentioned this to anyone (although maybe they wanted to keep it secret until publication), or at least that there wasn't some clear evidence that would come to light afterwards. You'd think that the CIA or NSA could hack the systems of an academic institute pretty easily and find a smoking-gun email or spreadsheet. This makes the scenario where a natural virus was collected and then leaked (maybe after passing through lab animals), or just infected someone in the bat caves who brought it back to Wuhan, seem more plausible to me.
On the other hand, it is alleged that the CIA tried to bribe its own analysts to discount a lab origin: https://oversight.house.gov/release/testimony-from-cia-whistleblower-alleges-new-information-on-covid-19-origins/ so maybe the CIA is just suppressing any evidence it has because it doesn't want to endanger cooperation with China.
> if they had been doing research to make Sars-Cov-2, it's surprising that they hadn't mentioned this to anyone
They...they did, though? It wasn't a secret they were interested in spikes. As for specifically inserting novel furin cleavage sites into spikes, they applied for a grant in 2018. That was DARPA, not NIAID, but regardless. They specifically proposed inserting novel furin cleavage sites into bat coronaviruses. Like, the fact that WIV was looking into furin cleavage sites in bat coronaviruses wasn't a secret.
But, I mean...even if they didn't conveniently have an English-language grant application to DARPA. Let's take a step back here.
SARS: The First Pandemic of the 21st Century https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7086556
SARS-CoV-1 *terrified* people, and it was a much bigger deal in China than in the United States. Initially, nobody knew where it came from, or how it worked, or what. Everyone wanted to know. This kind of thing is why people *become* virologists.
The reason we knew so quickly SARS-CoV-2 came from bats was because of previous research on SARS-CoV-1, back when SARS-CoV-1 didn't require "-1" as a modifier. Research done at...the Wuhan Institute of Virology. Funded by EcoHealth Alliance.
SARS-CoV-1 did not have anything special going on with furin cleavage in its spike, as far as I know. But it had long been known that this *could* happen, and people believed (correctly, as SARS-CoV-2 would later prove) that it could make a virus much more dangerous. Artificially inserting a furin cleavage site and watching how the resulting virus infected cells was a fairly obvious thing to try. It's far from the only obvious thing to try, but that's where we go back to "WIV specifically asked for money to do literally this."
If you had suggested in November 2019 that maybe the Wuhan Institute of Virology was inserting furin cleavage sites into coronaviruses, nobody would have even blinked. That kind of thing was what WIV was *for*.
> maybe they wanted to keep it secret until publication
Most research is not known outside the lab until publication. That is what publication is for. Indeed, in the United States it is a surprisingly common pattern for most of the research to basically be done before the grant application is approved; this is something of an open secret, since the whole point is to make the grant proposal more attractive by making it a slam-dunk from the grantmaker's perspective (since there's no actual uncertainty). Labs often push forward with research that has not (yet) been externally funded, in the belief that funding agencies will become interested once enough is certain that results are basically guaranteed. Often, that belief is vindicated. Many labs have a budget item explicitly for incubating projects this way.
You do bring up an important point that argues against the furin cleavage site being artificial:
> there wasn't some clear evidence that would come to light afterwards
If the furin cleavage site was a spontaneous mutation, then even if the virus came from WIV, nobody in WIV would necessarily *know* that the pandemic came from a leak. They wouldn't have to lie about anything; they'd just have to not notice things, and given the stakes I expect them to be very good at not noticing.
Whereas if somebody inserted that particular furin cleavage site on purpose, then as soon as SARS-CoV-2 was sequenced, light bulbs and alarm klaxons came on in *somebody's* head.
Which somebody? Remember there are three basic scenarios.
In the first scenario, a small number of people (fewer than ten, possibly as few as *one*) inside WIV worked alone, without telling anyone about the furin cleavage site they were inserting. This small group may, or may not, have included Ben Hu and/or Yu Ping and/or Yan Zhu.
This would be somewhat weird, but not very weird. I don't know how WIV operates, but in a lot of labs, PIs have wide latitude to try things as long as they're making progress on the moneymakers. If lab leadership later learned about the experiments, the reaction would be "Oh, did you learn anything?" If yes, then it could lead to a publication; if no, then it gets buried in somebody's filing cabinet and forgotten.
In this scenario, after putting together what must have happened, the person or persons deleted everything and never told a soul. (Because they would get in trouble for violating some rule or other, because in unhealthy bureaucratic labs everyone is always violating some rule or other.) WIV leadership, in this scenario, doesn't need to lie about anything; they just need to not notice things, and given the stakes I expect them to be very good at not noticing.
In this scenario, it would be somewhat weird for discoverable electronic records of the experiments to never have existed, but not very weird. The researcher(s) might have kept notes in a shorthand, totally indecipherable except to someone with both deep fluency in the Hankou dialect *and* deep knowledge of the SARS work at WIV, intending to write up a more coherent description later. In any case, the researcher(s) tried very hard to delete any electronic records that existed. Alternatively, some PIs are old school and keep research notes in paper notebooks; these could be *very* well-organized and readable, but undiscoverable.
In the second scenario, WIV leadership (meaning at least one person in a position of authority, such as Deputy Director Shi Zhengli, or Peter Daszak, the guy bankrolling the SARS work) gave the green light to use lab resources to try some exploratory work on the furin-cleavage insertion without the external funding.
In the second scenario, there would be an electronic paper trail showing the planned work. The principals would have tried very hard to scrub it, but put it this way: if you were one of the low-level researchers involved in such a conspiracy, would you really delete everything, or would you save a copy in a safe place in case the leadership later decided to throw you under the bus?
In the third scenario, WIV actually *did* get an external grant to do the work, such as from the Chinese Center for Disease Control and Prevention. Only in this third scenario would there definitely be *piles and piles* of electronic paper-trail, far too much for anyone to hide.
Even in the third scenario, note that the grant application would almost certainly not mention the *specific* furin cleavage site they intended to insert. It *would* talk about producing a chimeric virus by replacing the spike of one virus with a spike from another virus. Of course, we already know they were doing things in that vein, because some of the results they *did* publish.
> You'd think that the CIA or NSA could hack the systems of an academic institute pretty easily and find a smoking gun email or spreadsheet.
Leaving aside the conception of "computer hacker" as "evil wizard", any time U.S. spooks reveal documents obtained by hacking, that compromises the methods they used to Chinese spooks. They'd need a strong incentive.
And honestly, why would they want to? Like, at all? To embarrass the PRC? I mean, I'm totally willing to believe the U.S. government is willing to do dumb things to embarrass the PRC, but given the research in question was paid for by the U.S. government, I don't see how they could walk away with a "win" from revealing the whole thing, nor do I think they're stupid enough to *expect* to.
It would also be contrary to the approach the executive branch has taken so far. The White House's stance on WIV is...complicated, and evolving, and they may yet throw WIV under the bus, but as things stand now, I think it's fair to say that any spooks who tried to prove a lab leak would be starting a feud with the President. And while they were absolutely up for feuding with the last President, they do not feud with most Presidents and have shown no inclination to feud with the current one.
Compare to the situation in the United States. The Director of NIAID testified that they weren't doing gain-of-function research at WIV, while his internal emails said the opposite, because the cover-up was sloppy, because everyone involved correctly expected that there would be no consequences if they were caught. As you suggest, these emails were obtained by the FBI investigating the allegations and hacking into his...oh wait, says here that never happened. The emails were obtained via Freedom of Information Act lawsuits. The People's Directive on Transparency, the Chinese equivalent of the Freedom of Information Act, would technically not apply to communications with WIV, because no such thing exists and I just made it up. The PRC is just...really not big on the whole "freedom of information" thing.
Even if the spooks wanted to hack in (which they wouldn't), and even if the relevant notes weren't kept in old-school notebooks (which they may have been), and even if the relevant documents hadn't been deleted (which they may have been), the spooks wouldn't necessarily have been able to understand anything they found if it was in shorthand, since people with deep idiomatic Mandarin fluency are notoriously thin on the ground over there, to say nothing of the problem of understanding the biology. And they wouldn't want to compromise their sensitive op to outside experts without even knowing whether they'd found anything.
It remains true that in every scenario, it is at least *possible* that documents exist and would leak, and so the absence of such documents is Bayesian evidence against the furin cleavage site being deliberately inserted.
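How much the missing paper trail should move you can be made concrete with a toy odds-form Bayes update. Every probability below is my own illustrative guess, not a figure from anyone in this thread; the point is only the mechanics, that evidence strength is the likelihood ratio of "no documents surfaced" under each hypothesis.

```python
# Toy Bayes update in odds form: hypothesis D = "furin site deliberately
# inserted" vs. N = "furin site arose naturally". All numbers illustrative.
prior_odds = 1.0  # start at even odds (purely for illustration)

# Guessed likelihoods of observing *no* leaked documents:
p_no_docs_if_deliberate = 0.5   # records likely existed but could be hidden/deleted
p_no_docs_if_natural    = 0.95  # nothing to document in the first place

likelihood_ratio = p_no_docs_if_deliberate / p_no_docs_if_natural
posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 3))  # ~0.345: a modest, not decisive, update against D
```

Under these made-up numbers the absence of documents shifts even odds down to roughly a one-in-three chance of deliberate insertion: real evidence, but weak, exactly because documents plausibly stay hidden even in the deliberate-insertion scenarios described above.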
Thanks for your detailed response. By 'doing research to make Sars-Cov-2' I meant 'doing the particular project that made that exact virus', not 'doing research in the general area of gain-of-function of coronaviruses', which we do know they were doing. I was thinking that there would surely be some record of the project.
But I agree with your point that they could easily have done it outside a formal process (I have read that making a reverse genetic system to make custom coronaviruses is pretty simple and not a groundbreaking project that would need lots of funding), or with mainly paper notes, or just managed to delete everything.
> "Most research is not known outside the lab until publication. That is what publication is for."
Since WIV collaborated with outside labs, I was thinking that before official publication they might have emailed or casually communicated with an outside group and mentioned that they were working on a project to add a furin cleavage site. But it does seem plausible that they just did the work first to check they could do it without telling anyone.
I don't think it would be that difficult for a national security agency to remotely penetrate the network of an academic institute (since outside observers were worried about the standards of biosecurity at the lab, it seems at least possible that they were also slack about keeping everything patched, not to mention the possibility that their software had vulnerabilities that aren't widely known outside the NSA etc). But you're right that if the CIA has a 'smoking gun', they wouldn't just immediately leak it revealing their access, and Western countries wouldn't want to start a fight with China. In fact they allegedly tried to pressure their experts to find against a lab leak: https://oversight.house.gov/release/testimony-from-cia-whistleblower-alleges-new-information-on-covid-19-origins/
So on reflection I think the lack of public documents is pretty weak evidence against the deliberate insertion scenario. I think it's definitely possible that intelligence agencies have relevant evidence about what WIV was doing that they're keeping secret, but as you point out, whatever internal notes they could recover might not be a clear 'smoking gun' anyway.
> By 'doing research to make Sars-Cov-2' I meant 'doing the particular project that made that exact virus', not 'doing research in the general area of gain-of-function of coronaviruses', which we do know they were doing. I was thinking that there would surely be some record of the project.
Yes, and I agree if anyone inserted that specific furin cleavage site, there would likely be documents mentioning inserting that specific furin cleavage site. Though of course in each of the scenarios there exist ways that those documents might not be comprehensible, or might have been successfully deleted, or might remain undiscovered.
And I think it *is* relevant that they were known to be doing research in this area. If we were talking about some random biolab, then it would be *extremely weird* if, for example, some PI randomly decided to play a hunch and messed with a virus without telling anyone. That's just...not within the range of things that happens. If we were talking about a lab where this kind of work was not routine, then I would absolutely expect to see piles and piles of documentation, not least because they'd be using physical resources that they would have never used in the job they were supposed to be doing. But in a lab where this was routine...then not necessarily. Maybe, yes. But not *necessarily*.
> I have read that making a reverse genetic system to make custom coronaviruses is pretty simple and not a groundbreaking project that would need lots of funding
This is also my impression. This kind of work *used* to be incredibly expensive, but the new methods invented by UNC (which WIV was known to use and be competent with) are...not *cheap*, but potentially within the range of what a PI could use without talking to anybody. (If WIV was the sort of lab where that kind of thing happened. I don't know that they were, but the fact they were so *successful* (everyone seems to agree they discovered an unusual quantity of true facts about SARS relative to their budget, and it wasn't because they had a particularly convenient location, a thousand miles away from the bat caves) suggests to me that WIV was the kind of place where moving fast was valued and researchers might have routinely taken initiative.) It also uses basically the same resources regardless of what specific virus you're working on, so nobody would notice that a different experiment had taken place unless WIV had very unusually strict accounting, which they probably didn't.
> Since WIV collaborated with outside labs, I was thinking that before official publication they might have emailed or casually communicated with an outside group and mentioned that they were working on a project to add a furin cleavage site.
Uh, I mean, obviously that is a thing that they could have done that they demonstrably didn't do, which technically makes it further evidence against the furin cleavage site being artificial, but...I can't really picture that conversation. Like, just...emailing someone out of the blue "Hi, I'm about to do an experiment which I will now describe in an unusual amount of technical detail. I don't have any interesting results to share with you, because I haven't actually done it yet. Just...FYI." I suppose if someone had already been planning it when they went to a conference, then they might have happened to mention it when chatting over coffee. But they also might not have mentioned it. And of course I don't think anyone's assuming that the experiment, if there was an experiment, was planned for a long time in advance (except in the third scenario where they actually *did* get an outside grant from somebody we don't know about like the Chinese Center for Disease Control and Prevention), so there was probably no conference it would have coincided with.
This is just based on the way Baric described it, but it didn't sound like they collaborated all that closely. They obviously went to the same conferences, and they gave talks, and they chatted afterward. But it's not like people were regularly flying back and forth between WIV and UNC, or anything. Collaborating, but only in the sense that all virologists are collaborating. The only outside institution I know of with deep visibility into WIV was EcoHealth Alliance. I do agree that if there was a cover-up involving WIV leadership (the second scenario, where the process was internal-but-formal using funds explicitly set aside to incubate small projects without external funding), then *someone* in EcoHealth Alliance was probably in on the cover-up.
Put another way, it seems unlikely to me that anyone would have communicated with an outsider about the planned experiment *in enough detail to specifically identify it* unless that person, "outside" or not, was fully collaborating on that particular experiment (and therefore had their own reasons to fear for their own career, and so would also shut up about it).
> an academic institute (since outside observers were worried about the standards of biosecurity at the lab, it seems at least possible that they were also slack about keeping everything patched, not to mention the possibility that their software had vulnerabilities that aren't widely known outside the NSA etc)
I definitely don't think the NSA would ever willingly use vulnerabilities that aren't widely known outside the NSA this way. You, uh, you may have heard that the PRC has been making regular military feints at an American ally, any of which would turn into a real invasion very quickly, because the PRC would really really *really* like to just grab Taiwan and present it as a fait accompli. And consequently the U.S. would really really *really* like to have visibility into the Chinese military's true intentions, just in case Xi does decide "fuck it, YOLO". They have *much* more important uses for zero-days, is what I'm saying.
But you do bring up a good point that an academic institute in general, and WIV in particular, probably is, or rather was, unusually vulnerable to hacking. Like, "gateway server still running Windows 7" vulnerable. In which case hackers could easily get in using only vulnerabilities that *are* already widely known outside the NSA, which they wouldn't hesitate to use. (Of course, *after* the pandemic happened and everyone started looking at WIV, every WIV server is probably a honeytrap with every spook in China watching it, not because they're complicit in a coverup (if there even was a coverup) but simply because a chance to watch U.S. spooks in action doesn't come along every day.)
I don't really think anything about the CIA thing, because it's so vague. Apparently at some point the CIA convened a team of seven people. Uh, okay. I would venture to guess the CIA had a lot more than seven people looking into a global pandemic, but maybe there was something special about these seven? Then they offered "significant monetary incentives". What does that even mean? Does the CIA not *usually* pay its analysts?
Ultimately the CIA had to make a public statement. And I mean, obviously the content of that statement was going to be decided by CIA leadership, for political reasons. The actual evidence wasn't totally irrelevant, sure. Even if, for example, the CIA had really liked Trump (in fact they hated him) and Trump really wanted to say it was a lab leak and the U.S. hadn't funded the work so everyone expected the U.S. to be able to get a "win" that way...if they had reason to believe it *wasn't* a lab leak, they might have resisted out of worry for long-term career consequences for eventually being publicly proven wrong. The statement the CIA ultimately made was a maximally-vague statement that didn't commit them to any faction. Which was a logical choice for the CIA given almost *any* facts.

The particular nuance of how CIA leadership gets their peons in line just doesn't strike me as especially important. Or rather, it's not important *to me*; I'm totally willing to believe that someone involved violated some federal rule or other. Management is totally allowed to hand out bonuses up to 25% of an employee's salary without needing permission from anyone, and I don't think anyone would be surprised analysts working on an important-somehow COVID team got a bonus that year. There's a currently-active proposal to allow up to 50%: https://federalnewsnetwork.com/pay/2023/11/agencies-would-have-an-easier-time-approving-pay-bonuses-under-opm-proposal

And I doubt a CIA analyst would be too stupid to understand that their bonus, to say nothing of their long-term career prospects, depended on making their bosses happy. But it's possible that someone violated whatever the rules are and/or were at the time.
>Thanks for your detailed response. By 'doing research to make Sars-Cov-2' I meant 'doing the particular project that made that exact virus', not 'doing research in the general area of gain-of-function of coronaviruses'
" If we knew what it was we were doing, it would not be called research, would it?"
- A. Einstein
There's no such thing as "doing research" to make an exact specific virus. There might someday be engineering projects along those lines, but I don't think we're there yet. The most you can do is e.g. try to see if you can make a bat coronavirus into something deadly and highly contagious among humans by deliberately inserting cleavage sites and doing serial passage through increasingly human-like tissue cultures.
If someone does that, which we know Ecohealth and WIV were trying, and a deadly bat-coronavirus highly contagious among humans shows up near their lab, the proper response is NOT, "Nah, that must be a coincidence, we don't have documents showing they were trying to produce that exact specific virus".
Also, those documents were sequestered and/or shredded by the Chinese government. Which doesn't tell us much, because the Chinese government covers things up almost reflexively whether they've done anything wrong or not. But they're quite good at it, and they have the ability to throw anyone in China into an oubliette on a whim if they try to embarrass the Chinese government. So "...but no plucky WIV research assistant has blown the whistle with the documents showing that exact virus!", doesn't tell you very much either.
There’s good arguments for restricting gain of function research *and* scouting out remote caves for unknown viruses *and* hunting wild animals for sale at wet markets.
Yes, I guess this reinforces Scott's point. Whatever the actual origin, we know there is a risk from gain-of-function research, we know that a field researcher could get infected accidentally and we know there are risks of wet markets. Whatever the particular scenario was with Covid, all of these things should be restricted.
Apparently wildlife has been officially banned from wet markets in China since 2003 ( https://en.wikipedia.org/wiki/Wet_markets_in_China ), but now the restrictions are tighter and are actually being enforced. Which seems like a good idea, even if Covid was really a lab leak.
But Indonesia still has a 'thriving bat trade': https://www.abc.net.au/news/2023-08-01/indonesia-wet-market-bans-dog-meat-but-bat-trade-poses-risk/102659758
And somehow they have decided to partly ban dog meat but keep going with the bats - it's fine though because they're doing lab tests every 3 months... and of course everyone knows the problem was *Chinese* wet markets so why worry about what's going on in Indonesia?
That's fair, I incorrectly used "gain-of-function research" as a gesture at "doing dangerous virology". I'll correct it.
You can update on individual events, as you discuss, but you can also update on models, along the lines of Popper.
One might have a model that "Gain-of-function research has arisen for reasons of institutional benefit. It has very little benefit for the actual science of virology, and the risk of lab leaks is quite high on a recurring basis." A confirmation that COVID was a lab leak might still produce only a small increase in belief in that model, but the practical import of the increase might be very high.
It could be that the update coming from SBF and OpenAI is not the carefully formulated list of fine-grained beliefs that supposedly point in the opposite directions, but rather an increase in the single belief that "EAs aren't very good at reasoning about or acting on institutions."
Excellent way to put it, succinct to boot. Thank you.
Hard cases make bad law.
Every time rationalists or EAs complain about PR stuff like this, I think of https://xkcd.com/1112/. If you understand the game well enough to critique it, what's stopping you from winning?
policy people say "Never let a good crisis go to waste."
For a robust Bayesian analysis of the lab leak hypothesis, see physicist Michael Weissman's post: https://michaelweissman.substack.com/p/an-inconvenient-probability-v40
For a shorter and less expert but more easily understood version, see:
https://daviddfriedman.substack.com/p/the-lab-leak-theory
Very well written. Made me change my mind and accept lab leak as the likeliest source back when I first read it a few months ago.
As I have argued elsewhere, in a context where you can't trust people there is much to be said for calculations simple enough that the reader can check them for himself, even at the cost of a less sophisticated analysis.
https://daviddfriedman.substack.com/p/land-gained-and-lost
Scott, Scott, Scott…
This is annoying in exactly the way you are most frequently annoying…so right in theory, so clueless in practice.
Despite your awareness of exactly this problem, you and so many commenters are STILL being a bunch of stupid quokkas who allow your basic factual background beliefs to be manipulated by psychopaths using the specific tool most effective against YOUR community of quokkas, namely taking advantage of a deep aversion to anything “right-coded” (an aversion which was itself cultivated in you by similar psychopaths ) to get you to not look at places you ought to be looking to get the necessary perspective.
It WAS a lab leak and NO ONE with
1) common sense who is
2) not a motivated reasoner and
3) understands molecular biology at the level of someone with a bachelors degree in the subject
thinks it was of natural origin, because of the three (3) smoking guns “human-infection-optimized furin cleavage site”, “Daszak and Fauci and Baric confirmed lies about funding gain of function research in Wuhan specifically including adding furin cleavage sites”, “Coincidence of Wuhan location”.
The reason it’s IMPORTANT is not anything to do with “updating on the likelihood of pandemics and therefore changing policy on gain of function research”, the reason it is IMPORTANT is the REVELATION that our public health establishment and our government in general are run by psychopaths who would *purposely HINDER response to a pandemic by hiding everything they knew about the germ that caused it*.
Quokkas need to update VERY VERY HARD on that.
If it was a lab leak, why did all the first cases happen around a wet market that was a 30 minute drive away from the lab in question? One of the scientists just really liked going to that market and coughing on people?
If it wasn't a lab leak, why did China do literally everything in their power to prevent a proper and full investigation into the virus that would have vindicated the natural spread claims?
Because China prevents investigation into everything, they're notoriously paranoid.
Because they don’t want outsiders clamping down on their wet markets.
They didn’t. It’s very obvious that the psychopaths in the public health establishment and government (in this case, in China) allowed only counting cases around the wet market, and purposefully hid to their best ability information about cases that happened earlier and elsewhere, including by jailing scientists that tried to report such information for “panic-inducing disinformation” and other transparent crap like that.
You should trust that what the Chinese government and its scientists (which are still under its power) say about Covid is the full truth about as much as you should trust it about what happened during the 1989 Tiananmen Square protests.
The doctor guy who was jailed (and later died) was jailed by the Wuhan local authorities and he was jailed for talking about the outbreak in the market. So that doesn’t work.
Also why was the US keen to deny the jab theory?
Do you mean the lab theory? If so, because they were funding the GoF work being done at WIV.
Also during the pandemic, they probably didn't want to accuse China of being responsible because they needed China's cooperation and didn't want to endanger the supply of PPE, vaccine vials etc.
Yep, that was no doubt a consideration!
The first cases appeared along the railway line linking the WIV with Wuhan Airport. The market was a red herring caused by ascertainment bias: in the early days having been at the market was one of the diagnostic criteria for being classed as having the new disease.
All the market cases were lineage B but as Jesse Bloom observes lineage A arose first. The market cases aren't the primary cases.
What's harder to explain is why it arose in Wuhan when the nearest relatives are ~1500 km away in Yunnan and Laos, both locations where the Wuhan Institute of Virology sampled SARS-related bat coronaviruses. Patrick Berche notes it arose in Wuhan well adapted to human cells, with low genetic diversity indicating a lack of prior circulation, and a furin cleavage site never seen before in a sarbecovirus. WIV was part of a proposal to add furin cleavage sites into novel SARS-related bat coronaviruses.
I thought they showed the market had both lineage A and lineage B, including in the samples from the raccoon-dog cages.
That's my understanding as well, though "raccoon-dog cages" is grasping at straws; AFAIK neither lineage A nor lineage B nor any sort of proto-Covid was ever found in raccoon-dogs in the wild.
There is good evidence for the first superspreader event occurring at the seafood market, and no good way to confirm or deny reports of earlier solitary cases. But that market would have been a prime location for a superspreader event even if it had been selling e.g. jewelry. The question is, if Covid came out of the seafood market, how did it get *in* to the market? Through the back door, in an infected animal, or through the front door, in an infected customer?
If it was a natural zoonotic spillover, why did the first cases occur across town from the big virology lab but a thousand miles away from the habitat of the original host? Both scenarios involve a freaky coincidence, and "lab tech wanted to buy some fresh seafood that day, instead of a hundred other things he could have done after work" is not hugely more coincidental than "wild animal trader sent the infected bats to Wuhan and not one of a hundred other larger or closer cities".
If that were all we knew, I think the evidence would lean toward yes, the wild animal trader did just happen to put his infected bats in the shipment to the distant city with the virology lab. But not as a slam-dunk many-nines certainty; I think 90/10 was about right a priori. Then we started learning other things, that caused rational and well-informed people to make substantial updates.
I'm a bit worried that "It doesn't matter if COVID was a lab leak" is going to be read as "It's not useful to talk about whether COVID was a lab leak".
I agree that the object-level question doesn't matter, but it's useful to talk about WIV doing gain-of-function research in a BSL-2 lab. My priors for lab leaks in general were lower than Scott's because I assumed virology labs doing anything remotely risky would be similar to the BSL-4 lab near where I live. Even if COVID hadn't happened at all I'd still update on the knowledge that people study animal respiratory diseases in a lab that doesn't require respirators.
I mean, discussing whether COVID was a lab leak is a pretty low priority thing considering everything that's going on right now.
No, absolutely not. We're already being told we need to be worried about the next pandemic, and the best way to avoid another pandemic is to minimize the chance of another lab leak to as close to zero as practical.
>It’s hard to define “mass shooting” in an intuitive way, but by any standard there have been over a hundred by now. You can just look at the list and count how many were by Your Side vs. The Other Side.
The source you reference is, in my opinion, itself biased in order to push a certain narrative. Per their website, they exclude shootings motivated by armed robbery and gang violence, and as a result end up with a list that retains a left-wing "hard on far right extremism" outlook but ignores support for a right-wing "hard on crime" perspective.
That's technically true, but there is an important qualitative difference between crime-related mass shootings and ideologically-driven mass shootings: who the victims are. If you're in a gang and get shot, even the most compassionate of us struggle to find any tears. But terrorism could happen at your place of worship, your kid's preschool, the mall foodcourt where your teen daughter works.
This is a sub-type of what Scott discusses re: abortion and getting stabbed in an alley both being called 'murder'. The central category for a murder is a stranger kills you, an adult. You don't have to worry about being aborted. You also don't have to worry about gang violence in Chicago.
Which is exactly the problem with typical reporting about mass shootings (possibly not Scott's particular source): the statistics usually include gang violence in Chicago, but are discussed as if Columbine is the central example.
It’s true that including the organized-crime related cases changes some features of the overall statistics. But it doesn’t change the ratio of right-wing ideological terrorism to left-wing ideological terrorism to religious ideological terrorism to eco ideological terrorism or whatever.
That's not how Bayesian updating works. You're not supposed to have a single number for how many lab leaks you expect to happen in a decade (the parameter lambda of the Poisson distribution). If you only had a point estimate, you wouldn't be able to update it at all. You're supposed to have an entire probability distribution that is your credence for the value of lambda. Then you update your probability distribution for lambda. Doing the actual maths generally requires some statistical software, but the point is that a single event can have a very large effect on your posterior if you start with a wide enough prior (which you probably should). There is a big difference between observing 0 lab leak pandemics in 20 years and observing 1 lab leak in 20 years in terms of what your posterior will end up looking like.
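As a concrete sketch of this (with entirely made-up prior parameters): a Gamma prior is conjugate to the Poisson rate, so the update has a closed form and doesn't even need statistical software.

```python
# Gamma-Poisson conjugate update for lambda = lab-leak pandemics per decade.
# The prior Gamma(a, b) is an illustrative assumption, not anyone's actual credence:
# a = 0.5, b = 1.0 gives a wide prior with mean 0.5 events/decade.
# After observing k events in t decades, the posterior is Gamma(a + k, b + t).

def posterior_mean(a, b, k, t):
    """Posterior mean of lambda after observing k events in t decades."""
    return (a + k) / (b + t)

a, b = 0.5, 1.0   # assumed wide prior
t = 2.0           # 20 years of observation = 2 decades

no_leak = posterior_mean(a, b, 0, t)   # zero lab-leak pandemics observed
one_leak = posterior_mean(a, b, 1, t)  # one observed

# With this wide prior, a single event triples the posterior mean rate.
print(no_leak, one_leak)
```

With a tight prior (large a and b), the same single observation barely moves the posterior, which is exactly the crux: how wide your prior on lambda was to begin with.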
> There is a big difference between observing 0 lab leak pandemics in 20 years and observing 1 lab leak in 20 years in terms of what your posterior will end up looking like.
This is true, but I think Scott’s point was that we already observed 20 lab leaks in 20 years, and observing the 21st should not change your posterior about “how frequent lab leaks are” by much.
(Reposted from Reddit): The Bayesian model which Scott adopts is generally the correct one. However, there is an implicit assumption that the occurrence of an event does not change the distribution of future events. If that happens, then a larger update is required.
For example, the US has always had mass shootings even prior to the Columbine shooting. Yet the US has seen a gradually escalating rate of mass shootings since 1999 ([see chart entitled "How has the number of mass shootings in the U.S. changed over time?"](https://www.pewresearch.org/short-reads/2023/04/26/what-the-data-says-about-gun-deaths-in-the-u-s/)). This chart is not adjusted for population size but clearly the growth in mass shootings exceeds the growth in population.
The reason is the copycat effect. There are always psychotic potential murderers out there, but Columbine set out a roadmap for those murderers to go from daydreaming about killing their classmates to actually doing so. So a rationalist ought to update their model because Columbine itself changed the probability distribution.
Another example where an event changes the likelihood of future events is where the event has such emotional salience that strong countermeasures are enacted. To take the gun example, after Australia's Port Arthur massacre, the government introduced strong and effective gun control. Thereafter, the rate of mass shootings plummeted.
The same applies to 9/11. Prior to 9/11, there was always the chance of terrorist attacks. After all, Al Qaeda had previously bombed the WTC. But Osama inspired other jihadis to attack westerners in the west. It made future attacks more likely. But the world also enacted strong airline security measures, so the likelihood of 9/11-style airline attacks decreased. But it's harder to stop e.g. train bombings, so it shifted the likelihood of terrorist attacks away from planes to trains. Hence the London tube bombing. So a rationalist ought to have updated their priors to think that planes are safer and that other public areas are less safe.
PS: I really dont want to get into a debate about gun control. The Australian solution won't work in the US, but clearly it affected the Australian probability distribution. Please constrain your arguments to whether Columbine changed the statistical likelihood of a mass shooting.
Right. There is a difference between events appearing in nature, e.g. volcanic eruptions, and man-made events. The frequency of the latter may change, depending on the copycat effect, including how would-be copycats interpret how other people react to the events.
It’s not totally clear whether this is a copycat event following from Columbine, or whether Columbine itself was a symptom of some underlying change that made people more likely to act out in this way. I’ve seen suggestions that “serial killers” were much more common in the 1970s than they are today, and that some of the rise in mass shootings might just be a result of something leading would-be serial killers to become mass shooters instead (or to put it equivalently, something in the 1970s that led would-be mass shooters to become serial killers instead).
Pre-Columbine, there was definitely a recognized trend of mass shootings, but we called it "going postal" because the modal archetype was a frustrated postal worker shooting up his workplace (which IIRC did actually happen multiple times).
The bit where mass shootings started frequently happening at schools, definitely seems to have been a copycat effect following Columbine. Possibly also the mass shootings occurring at places that are neither schools nor tedious dehumanizing workplaces, but that's less clear.
Might add that information availability and existence of opinions are potential missing factors. It's not that a single, hypothetical, highly-informed Bayesian is marginally updating existing beliefs; it's that people who hadn't thought about an issue at all (priors NaN) are suddenly forming beliefs at the same time. I suppose one could argue that everyone has an implicit, very weak prior about everything, but I don't really buy that. But if we suppose that's true, weak priors update a lot more dramatically than strong ones.
Re: lab leak: I don't think many people had any idea that there was an active area of research built around deliberately making dangerous viruses more dangerous to humans. Let alone that the field has minimal practical value (I haven't seen a good defense of it yet, particularly relative to the risks). Or that people involved were operating out of a lab in China with shoddy safety procedures, with some US government funding.
Awareness that such a risk is out there rationally should cause a dramatic updating of beliefs; far more than the incremental updating in your example. Colloquially, from "scientists probably study diseases, government funds it, that's reasonable enough I guess" to "wait, they do what now?".
To some extent that falls under the coordination problems and stupid people buckets, but think stupid is unfair. There are a lot of things in the world, and most people (including smart people) don't have opinions, let alone subjective probabilities, about most of them.
Gain-of-function research was legally restricted in the US (and Richard Ebright played a role in getting those regulations in place). That's part of why funding was being funneled into Wuhan, which wasn't so much a legal loophole as a way to avoid notice of skirting the law.
I may be missing the point of the article (which I largely agree with!), but... if it was a lab leak, knowing what caused the leak could be very important. Lab leaks are relatively rare and I assume the folks who run these labs try very hard to avoid leaks. Knowing how a leak occurred would be useful information that could help make future leaks less likely. Likewise 9/11 probably shouldn’t change your estimation of how likely a significant terrorist attack is, but in a very short time frame passengers (apparently) learned that being passive in the face of a hijacking is not the ideal response and it led to locked cockpit doors. Both responses probably should reduce your estimation of the probability of an airliner being flown into a building or populated area again. (The less said about TSA and security theater the better). Overall I agree that dramatic events shouldn’t necessarily cause you to dramatically update your priors, but that shouldn’t mean the truth doesn’t matter and that we can’t learn from dramatic events.
Would you agree with the summary that in both cases "The point isn't *that* it happened, the point is *how* it happened"?
In particular, the passengers of United Flight 93 did not update their probabilities on whether they would be hijacked based on the news that two other planes had been hijacked. (After all, they already knew that their plane had been hijacked: the probability was already 100%.) They had previously chosen to do nothing about this, based on their prior knowledge about hijackings. After they learned about what happened to the two previous planes, they adjusted their probability distribution on exactly how hijackings go. They adjusted their expectations quite suddenly and dramatically, in fact, and they took radical action in response to their revised expectations. Their response strikes me as obviously correct. If the news media had virtuously relegated the attacks to tenth-page news, then the passengers on United Flight 93 would not have changed their behavior in response.
We know that this kind of information matters, because it did in fact matter, literally the same day.
> Knowing how a leak occurred would be useful information that could help make future leaks less likely.
Bingo.
"Lab leaks are relatively rare and I assume the folks who run these labs try very hard to avoid leaks. "
This appears to be disturbingly false. Presumably *some* labs are well and conscientiously run, but not all of them and probably not even most of them so we can expect to be getting many lab leaks of *something* every year. Which make it rather important to understand what the labs are working with in the first place.
How is 20% a reasonable prior for "lab-leak pandemic"? Has there ever been one, other than possibly covid? Shouldn't this be way under 1%?
https://en.m.wikipedia.org/wiki/1977_Russian_flu
"The World Health Organization, however, ruled out a laboratory origin in 1978 after discussions with researchers in the Soviet Union and China"
Seems like a questionable data point.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4542197/
But it looks like you're right on that particular data point, since it looks a lot more like a vaccine trial gone wrong.
Regardless, most of my probability mass is from https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9274012/ (71 events in 40ish years) as well as anecdotal reports about how lax biosecurity was in order for lab leaks to happen
I hear you, but individual exposure is a far cry from global pandemic. Definitely something to worry about but I still have to point to "zero known instances of this happening in 70+ years of pathogen research" and reiterate my original point.
I thought the evidence that it might have been a lab leak came much later than that.
FWIW, a physicist did the Bayesian analysis of the lab leak hypothesis, put it out there, got a bunch of feedback, and updated it a few times: https://michaelweissman.substack.com/p/an-inconvenient-probability-v30
Just FYI, if you actually want to see the deep dive on this.
I've definitely always thought of 9-11 as "the day the terrorists won." Not only did we waste hundreds of billions of dollars, and thousands of soldiers' lives over it, we instituted flat-out stupid "security" measures that are purely theater.
The increased waits at airports over the ensuing 22 years have almost certainly cost more American lives (in US life-hours wasted) than 9-11 itself! My back of envelope estimate has 240k US lives lost to security theater, vs 3k in 9-11 itself. The terrorists *really* won on 9-11, authoritatively. The biggest self-own we've ever had.
Amen! I really wish somebody would "sunset" the TSA, because there's multiple proven TSA failures any time they're tested with intentional agency-led red teaming, and massive negative impact to US hours / lives wasted.
But no, bureaucratic inertia is apparently infinite, and never to be challenged by any candidate ever.
What’s remarkable is the lack of technological change to speed things up after 20 years. This could be something that image recognising AI might help with in future though.
Even if it's correct to say the US lost, the terrorists didn't win: they didn't come closer to achieving their goal, whether it was to get the US to end involvement in the Middle East (even with all the war-weariness, it's still more involved than before 9/11), or to provoke the US into wars that inspire the residents of Muslim countries to rise up against US-aligned governments, ending up with most of the Muslim world under Islamist rule (it did drag the US into wars, which made some Muslims angrier at it than before, but the rest didn't follow).
As for your calculation, with ~750 million air passengers a year spending 20 minutes more at airports, I get ~8000 lifetimes, equivalent to twice as many terrorism deaths if the average terrorism victim dies half-way through what would otherwise be his life, and assuming that wasted time is equivalent to time spent dead.
Thank you for the separate calculation, I had an error in my math. I'm still getting 40-60k US lives wasted by security theater. It didn't paste very well below, but if I copy and paste the text below into a google sheet document, it pastes flawlessly and you can infer the simple spreadsheet calculation I used myself.
The biggest difference I immediately see is I assume a 1 hour delay, which seems *conservative* if anything to me - before 9-11 you could literally arrive at the terminal a half hour before your flight, and now every official source recommends you get to the airport either 2 or 3 hours before your flight to accommodate security theater's time wasting.
- Annual passengers: 750,000,000
- Average flights per passenger: 1.7
- Delay in transit due to security theater (hours): 1
- Years since 9-11: 22
- Hours wasted annually: 1,275,000,000
- Hours wasted in TSA waits: 28,050,000,000
- Avg awake hours in a year: 5,840
- Average awake years wasted equivalent: 4,803,082
- Avg US lifespan: 76
- Lifetimes equivalent: 63,198
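For what it's worth, the spreadsheet reduces to a few lines of arithmetic; all inputs are the thread's assumptions (1-hour average delay, 16 awake hours per day, etc.), not measured data:

```python
# Back-of-envelope: US lifetimes spent in post-9/11 security lines.
# Every input below is an assumption from the comment above, not a measurement.
passengers = 750_000_000        # annual US air passengers
flights_per_passenger = 1.7
delay_hours = 1                 # assumed extra delay per trip
years = 22                      # years since 9-11
awake_hours_per_year = 5_840    # 16 hours/day * 365 days
lifespan_years = 76             # average US lifespan

hours_per_year = passengers * flights_per_passenger * delay_hours
total_hours = hours_per_year * years
awake_years = total_hours / awake_hours_per_year
lifetimes = awake_years / lifespan_years

print(round(lifetimes))  # → 63198
```

The result is dominated by the assumed delay: cutting it from 1 hour to the 20 minutes assumed upthread scales the answer down by the same factor of three.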
And don't forget the TSA themselves, literally wasting their lives in a purely net-negative endeavor that both they AND their victims hate! Although that has *only* wasted an additional ~250 US lifetimes.
I don't know why people sometimes talk as if airport security didn't exist prior to 9/11. It did exist and wasn't fundamentally different -- you'd get your bag x-rayed and you'd go through a metal detector -- only the details have changed.
Airport security was introduced in the early 1970s, for a damn good reason -- in the period 1968 to 1972 there was more than one aircraft hijacking per week. In 1969 alone there were eighty-six aircraft hijacked, which is more than we've had in the last 25 years put together.
Source: https://aviation-safety.net/statistics/period/stats.php
I don't know, I definitely see a difference between a half hour and 2-3 hours of average waits. It's a difference of 4-6x, and beyond the personal inconvenience, that's quite significant multiplied over 750M emplanements a year.
While I've definitely seen some stupid security lines in the US, I've never seen 2-3 hours.
But anyway the problem isn't so much the procedures as the implementation. Countries that aren't the US can do the exact same procedure efficiently, meaning I rarely have to wait more than a few minutes to pass through security in any other country.
It's just some specific features of the way the procedure is implemented in the US that makes it especially painful; possibly a deliberate go-slow by the TSA, who apply the usual US public sector logic of sucking as badly as possible so that they can get more funding.
Yeah, I'm totally with you - I actually enjoy flying most international airlines (ANA, JAL, Korean Air, Singapore, Emirates, or Virgin especially), because not only is the actual flying experience much better, they have their shit together in the terminals too, and don't make you do ridiculous things like taking off belts and shoes, or explicitly remove your laptop so it can be all by itself.
I even have Global Entry and TSA Pre and Clear and all that stuff, and I STILL have to take my shoes and belt off half the time domestically, we're literal savages here.
Why god, why can't we ever improve, reduce, or eliminate pointless bureaucracies?? Every road to improvement just seems impossible in the US.
Does the average air traveler really arrive at the airport more than 90 minutes before the flight? I think there are a lot of people who fly once every few years, and arrive this early. But the people who fly every week are usually calculating whether they can arrive 55 minutes before the flight or 45, and this group makes up the majority of emplanements.
That was actually one of the daily trivia questions for my staff, who are frequent (though not weekly) business travellers. Almost all of us were in the 60-90 minutes category, because an extra half hour at the airport sucks *way way* less than missing your flight by ten minutes.
But my staff is a team of professional experts in risk mitigation, so maybe not representative of typical travelers.
Most of the remaining difference comes from these:
- I assumed the 750 million "emplanements" to already be the number of flights, and the number of TSA checks (actually the latter may be slightly less due to airside transfers)
- I assumed 20 minutes wasted by a security check, you assumed 1 hour
- You calculated with awake lifetime, I didn't take that into account and calculated with total lifetime
Makes sense, thanks for the triangulation. I was doing the 1.7 because of this definition:
An "enplanement" in the context of air travel is defined as a passenger boarding an aircraft. It is a term commonly used in the aviation industry to quantify the number of passengers flying from an airport. Each time a passenger boards an aircraft, regardless of the purpose of the flight, it counts as one enplanement. This term is distinct from a flight, which refers to the aircraft's journey from one point to another, and it's also separate from passenger count, which could include both arrivals and departures.
On awake lifetime, I thought it was fair - they're definitely "awake" hours being wasted, and awake hours are the important ones, the literal stuff of living and experiencing.
And on 20min vs 1 hour, I still think even 1 hour might be conservative, but taking an average of 1.5 total (1 delta with TSA) with official guidance at 2-3 hours made sense to me - I couldn't find any quantified before and after numbers, unfortunately.
What is the 1.7 figure? Why does one have to go through security check 1.7 times on average to board an aircraft once?
It's the ratio of emplanements to passengers - ie passengers more often board more than once on a given journey than fly one-way, so a given journey will involve that same person going through security 1.7 times on average; ie the time in security theater is not just wasted once at the *passenger* level, it's wasted ~1.7 times.
Now that you've brought that up, it might be double counting - the total emplanements figure should capture both ends of US-US flights, but not both ends of international flights, which will involve customs coming back (which does have heightened TSA security theater / times post 9-11).
Searching for international v domestic splits, I'm seeing at the Bureau of Transportation Statistics that we actually have 1.1B passengers across domestic and international, with an 800 / 200 split roughly. That's passengers rather than emplanements, though.
I'm not sure how this would cleanly translate, but playing with various combos of the numbers (1.25 vs 1.7, 1.1B passengers at 1 v 1.7, 1.1B passengers with average 1.7 security stops) I'm getting between 45k and 90k total lifetimes wasted. Still all 10x 9-11 (on the high end, 30x), and a huge net-negative waste either way.
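Playing with those combos in code (holding the other inputs fixed - the assumed 1-hour delay, 22 years, 5,840 awake hours/year, 76-year lifespan) gives roughly that same ~46-93k spread:

```python
# Sensitivity sketch over the passenger-count / multiplier combos mentioned above.
def lifetimes_wasted(passengers, checks_per_journey, delay_hours=1,
                     years=22, awake_hours_per_year=5840, lifespan_years=76):
    total_hours = passengers * checks_per_journey * delay_hours * years
    return total_hours / awake_hours_per_year / lifespan_years

for passengers, mult in [(750e6, 1.25), (750e6, 1.7), (1.1e9, 1.0), (1.1e9, 1.7)]:
    print(f"{passengers:,.0f} passengers x {mult}: "
          f"{lifetimes_wasted(passengers, mult):,.0f} lifetimes")
```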
Huh from the wind-up (especially the point on hyperstitional cascades) I really thought this would end with either a re-affirmation on how you would be sticking to the Substack platform or a gesture towards looking at an alternate.
FWIW I hope you at least consider moving
I agree, but less because of the nazis, more because this site is horrible to navigate. I have no idea why it takes so long to load, there must be a huge amount of javascript cruft.
One interesting dramatic revelation that DIDN'T result in any updates, debates, or changes at all: for years, computer geeks had been assuming / jokingly saying that the NSA was spying on them and everyone on the internet. They were called crazy, paranoid, and unstable for years. If the government were spying on all of us, we wouldn't take it as a society! The tree of liberty must be watered, etc...
It was of course, true, and it came out in public in a big way ten years ago. And the result?? Absolutely nothing. Not a peep. Nobody cared that every single thought, text, email, search query, the slightest dribble of text or ascii, all monitored, all stored forever in your Permanent Record (or whatever the NSA calls it).
And anyone who said they'd water the tree of liberty? Not a peep, nary a protest, nary a change.org campaign, not even any sternly worded emails or entirely cosmetic changes in Cabinet officials or administrators. We're all apparently happy about it, zero updating any direction. Why is that?
I had the opposite reaction, I was amazed by the number of people who updated their world view significantly on the revelation that the NSA spies on people.
Like, of course the NSA spies on people! What the heck did you think the NSA does?
Really? What were some of the specific actions they took based on the world view update? Because almost nobody I knew did *anything* (one person began taking the battery out of their smartphone regularly, but that's pretty much it, and who knows how long they kept that up).
And presumably, they thought the NSA spies *on the rest of the world, in the service of US interests and US citizens* rather than on *literally every single person in the US, for their own nefarious purposes.* I mean, there IS a difference between those two.
...What the hell are you supposed to do? Taking measures to not get spied actually makes you more suspicious, as this xkcd brilliantly illustrates: https://xkcd.com/1105/ The best thing you can do is simply not do anything to bring attention to yourself. If you are going to do something reckless, having a fake, benign internet presence to mask your actual anonymous presence would be the best idea, but frankly you should be doing that anyways because employers do background checks via social media.
Whatever they did in all these other cases of overreaction - presumably writing their Congress critters, protesting, doing change.org petitions, trying to overrun the state Capitol, whatever. SOMETHING. Of which, they did literally nothing, which I always found funny / horrifying.
America, the bastion of the supposedly "free," the ones so proud of their supposed rights to free speech and their rights to own 700 guns per person (to prevent *tyranny,* you see) - but no, the instant we learn the government is literally panopticonning every US citizen, a tyrant stomping on all our faces and internal thoughts forever, we all just quietly rolled over instead of doing literally ANYTHING, as we DID do in every one of these other cases Scott mentions. If ever a case *warranted* overreaction, this was it, in my mind. But instead, not a peep. It really made me update my priors on how much of the media might be literal and explicit propaganda channels.
Your advice is the archetypical case! Instead of saying "well, we shouldn't have to take this, let's vote the bums out, get our voices heard, it's clearly unacceptable for the government to literally be spying on everything we do every second of the day," the advice is instead "oh, just try not to bring attention to yourself (THEY HEAR EVERYTHING YOU KNOW)."
I was never more happy to be an expat when all the PRISM stuff came out (and yes, I know I'm being spied on by the NSA no matter what country I'm in).
On this note, I'm actually genuinely surprised Trump (or some similarly ethically-challenged "outside the inner circle" politician with high-level permissions) hasn't used whatever kompromat the NSA has on all his rivals / enemies to try to cut them down / nullify any challenges he might have.
He / they might have and it's just under the radar, I guess. But even if he hasn't, the ability is now THERE, for literally any sufficiently unethical politician with access. Are we happy about that? Apparently. Team America, hell yeah! :-D
I may have a reason why? If you record everything, it’s kind of useless. You’re drowning in noise. Very hard to find anything specific.
Regarding the Effective Altruism movement: I don't agree that it's that hard to draw more specific lessons from the two disasters you mentioned. However, even granting that it is, by analogy to the regular school shootings, terrorist attacks, or cases of sexual harassment, it seems that one should update toward expecting regular disasters of a similar magnitude.
Perhaps you already had a distribution that assigned somewhere in the range of a 10-90% chance of this level of disaster occurring at this frequency; but I did not.
Dramatic events are about as un-bayesian as it's possible to be though. Usually if an event is a big one-off we didn't see it coming, or had very little idea about its likelihood and no real way to estimate a prior ahead of time.
Lab leaks (not pandemics) would be an example of this. No lab leak had led to a global pandemic before, and any attempt to predict their likelihood is full of unknown unknowns that mix notoriously poorly with Bayesian thinking. Once you've observed the first instance of an event, I'd say you should update massively, if nothing else on the fact that the event can plausibly happen.
With most dramatic events, I'd say nobody is truly thinking about the event at all beforehand. They're implicitly in the process of deciding whether the event is worth thinking about, which would be more like deciding between
"Lab leaks common" (3%), "Lab leaks rare"(2%), and "Lab leaks don't matter" (95%)
So tautologically, the only time you shouldn't update from events is when you think relevant events were already predictable, and were able to assign well founded priors to them ahead of time.
> You can think of this as a common knowledge problem. Everyone knew that there were sexual abusers in Hollywood. Maybe everyone even knew that everyone knew this. But everyone didn’t know that everyone knew that everyone knew […] that everyone knew, until the Weinstein allegations made it common knowledge.
But this is obviously false. The infinite descent of public knowledge was clearly established; popular culture was full of acknowledgments of the phenomenon. The musical of The Producers includes the lyric "I want to be a producer / with a great big casting couch"!
Then what's your explanation?
Are you suggesting that, in the absence of a convincing explanation for some phenomenon, even an explanation which is known to be false should be accepted as correct?
No matter what explanation I gave, it would be no worse than yours. It isn't possible to do worse than yours.
Maybe it's a co-ordination problem. "Trading movie parts for sexual favours is a common practice" is a bit vague, and even if people agree that vague bad things are bad, it's hard for enough people to work up enough passion to stop them. Conversely, "This producer made this actress have sex with him in return for getting her a part in this movie" is specific; people can easily picture it, and even imagine it happening to them or their loved ones. It's much easier to get people worked up about specific things like this; it's the same reason why charity ads almost always include a specific example of someone suffering from whatever it is they want to fix, instead of just saying "This is a problem, help us fix it."
The explanation of #MeToo? Feminism existed before it and after it, don't think many people changed their minds.
Whereas gain-of-function research was known to very few before COVID, it really did lead people to update in the direction of "this s*** is messed up."
MeToo had 3 specific ideas (I'm not endorsing these points just stating what I took away as the initial goals of the movement):
1. These specific people did bad things at varying levels of criminality (Harvey Weinstein, Louis CK, Bill Cosby). Because people have trouble separating art from artist, they have an irrationally high bar for changing their opinions of famous people.
2. Bad things were done to lots of people so you shouldn't feel ashamed instead speak out to hold your abusers to account.
3. Practices that you thought were ok aren't and you should update to consider more things harassment.
>(it’s always a power law)
Power laws can be expected sometimes, but can also be surprising in some other contexts.
Here is a conceptual toy example: small-scale terrorist attacks that require only a handful of people to execute are much "easier" than large-scale operations, which require a larger organization and more resources. However, once an organization has been founded and organized, it has a steady income, permanent resources, and recruitment. Why wouldn't it be able to carry out large-scale attacks at a constant rate, instead of attack sizes following a power law? The power law is less surprising if one notes that terrorist organizations are opposed. Every moment a prospective organization spends preparing an attack and building a larger network gives the authorities more opportunities to stop it. Additionally, after a successful attack, the organization is often disrupted and the security apparatus steps up its threat model -> further action of similar scale requires either gathering a similar amount of resources again or coming up with a novel attack that enforcement is not prepared for (another form of gathering resources).
Coincidentally, this is more or less the toy model proposed by Clauset, A., Young, M., & Gleditsch, K. S. (2007), pages 76-78 (though they conceptualize it as "planning time" rather than "gathering resources"). Their final model is x^(-alpha), where the exact shape of the power law depends on the constant alpha = 1 - kappa/lambda, with kappa/lambda reflecting the relation between the filtering effect (by state action) during planning and the increased severity of attack due to planning.
Point being, in a competitive environment each novel "attack" must be matched by increases in "defense", or the kappa/lambda ratio changes (and so does the distribution). Seems plausible that the dramatic reactions to dramatic events would be a driver of this.
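That filtering mechanism is easy to simulate. Below is my own minimal sketch of the planning-vs-interception story (not the exact model from the paper; the rates are invented): planning time gets cut short at a constant interception rate, potential severity grows exponentially with planning time, and the surviving attacks come out power-law distributed:

```python
import math
import random

random.seed(0)
lam = 1.0      # rate at which authorities intercept a plot (assumed)
kappa = 0.5    # exponential growth rate of severity with planning time (assumed)

# Each plot plans for an Exp(lam)-distributed time before interception/launch;
# its severity grows like exp(kappa * t). The tail then scales as x**-(lam/kappa).
severities = [math.exp(kappa * random.expovariate(lam)) for _ in range(100_000)]

for x in (2, 4, 8):
    frac = sum(s > x for s in severities) / len(severities)
    print(f"P(X > {x}) ~ {frac:.4f}  (power-law prediction {x ** -(lam / kappa):.4f})")
```

The empirical tail fractions land close to the x^-2 prediction, which is the continuous version of "longer planning means both bigger attacks and more chances to get caught."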
I'm unfamiliar with the e/acc movement, what's the problem with them?
They want to accelerate AI development with relatively few safeguards in place.
I predict that Scott will be the first liberal rationalist mass shooter
You could probably also predict specific details, by thinking: "what would make a better pun?"
Tried really hard to work "the Schelling has stopped" into other comment, but it'd be an appropriate news headline for such an act. Or maybe New Study Confirms Lead-Crime Correlation. Probably it'd also happen in the Bay State. But the article would be kept appropriately sparse, since Everybody Knows about infohazards here. Just a bulleted list, maybe.
[Paranoid disclaimer about morbid humour as mere coping mechanism for tragic coordination failures, as many ill-advised attempts at levity are]
The thing about mass shootings (using the Mother Jones definition, and not some of the broader "four or more shot, regardless of deaths" definitions) is that they're actually pretty close to a representative racial sample of the country, even though they're seemingly always portrayed in the media (and even in this post) as "crazy white guy" or maybe "Muslim terrorist."
Out of 149 shootings on the list, there's 10 Asian, 26 black, 12 Latino, 3 Native American, 80 white, and 18 "other," "unclear," or with no race listed. At least 6 of that last group (and two of the white people) have some sort of name connected to an Arabic/Muslim-majority country.
(What really baffles me is that the media, as much as they love to talk about white male mass shooters, never fixated anywhere near as much as I thought they would've on the 2022 Buffalo shooting, perhaps the most obviously anti-black racist mass murder on the list. Why is George Floyd a household name, but no one can name any of the ten victims in Buffalo?)
> Why is George Floyd a household name, but no one can name any of the ten victims in Buffalo?
Sounds like the kind of thing that Scott wrote about in The Toxoplasma of Rage -- since nobody has anything to say in defence of the Buffalo shooter (Payton Gendron, I had to look up the name and I don't think I've ever even heard it before) there's no agitation in the news cycle, everyone just agrees that this is a terrible thing done by a terrible person and then they forget about it.
A more cynical explanation is that there were no riots in May 2022 because the powers that be didn't want there to be riots in May 2022; George Floyd was part of a very deliberate campaign to stoke up racial tensions in advance of the 2020 election.
You may recall there was a string of heavily reported dumb racial-conflict incidents in the weeks leading up to the George Floyd thing. Two weeks earlier it was the "jogger" who got shot by neighbourhood watch. One week earlier it was an argument about an off-leash dog in Central Park which got breathlessly reported across the world.
>Two weeks earlier it was the "jogger" who got shot by neighbourhood watch.
I thought it was pretty clearly established that Arbery was, indeed, just jogging, so there's little need to put that in quotes except if one is consciously trying to present the shooting as justified.
In addition to the other notable conflicts before the Floyd case, it should also be obvious that it became a huge thing because there happened to be a highly evocative and memorable photo of Chauvin's leg on Floyd's neck taken and presented in the news. Sometimes it just goes like that.
The other big thing about it was that it happened at the end of the serious lockdown period (start of good weather) so people were still free to go to protests and looking for opportunities to get out. By 2022 people were back to normal so it's a much higher opportunity cost to go to a protest.
>Why is George Floyd a household name, but no one can name any of the ten victims in Buffalo?
Related to the Toxoplasma of Rage explanation, it's easier to canonize one person rather than ten. Easier to focus on, lots of controversy involved, easier to remember one name. I suspect most people naturally have Great Man tendencies regarding history, and it's much easier for them to sympathize/demonize individuals rather than systems.
Modern progressives try to focus on systems, but ultimately they often end up with Floyd-like avatars acting as synecdoches for those systemic problems. Easy to pin Floyd on the widespread problems of police; despite the popularity of demonizing young white males as neofascists, it's still not as easy to smear the whole population of them like it is with police.
Mostly unrelated to Toxoplasma, Floyd became a cause celebre due to the lockdowns and vast amounts of people looking for any excuse to disrupt the pandemic news cycle and the pandemic boredom. Absent that tension, would it have exploded like it did? I highly doubt it; maybe isolated protests like after Trayvon Martin, but not the widespread ones.
Likely also a "hangover" or backlash- everyone was so burnt-out by the time of the Buffalo shooting there was no energy left.
This misses an important challenge with extreme events: they may, or may not, be governed by the mechanisms that control the more mundane events. Observational evidence of extreme events updates *on the physical limits* of the system.
It is notoriously sketchy to extrapolate power laws to predict the tail of the distribution. Power laws work until they don't (if things really were power laws all the way out, arbitrarily extreme events would carry non-negligible probability).
Extreme events are likely probing the edge of the power law; in this sense they are a rare bit of information about what actually governs the tail.
I’ll give an example (one that I am quite familiar with): what is the biggest possible earthquake?
I could fit a power law to the small events- this turns out to fail quite dramatically in many cases, for many reasons that I won’t get into.
I could figure out what limits the size of the earthquake. Ok, but this is not very empirical. And note that this does not depend on the things that control small earthquakes. This is sometimes used for building codes.
I could update A LOT on the largest earthquakes that we have in longer term records. This is a decent approach that is used for building codes.
The key here is that if we know of ~10^6 earthquakes in California (for the most part tiny). A new large earthquake is NOT a 10^6 + 1 update.
To bring this back to some of the examples in the text: in a world without nukes or bioweapons, I would update A LOT if I learned about a terrorist attack that killed an entire city. This is because my model of the world placed physical limits on the scale of terrorism, and new extreme evidence significantly changes my model of those limits.
I think this is a special case of "if you previously assigned near zero probability to something, you should make a very large update when it happens".
I don't think we necessarily disagree. 'What is the shape of the distribution in the main part?' and 'where does the distribution end?' are sometimes (non-obviously) separate questions. It's worth being careful when deciding which of those two questions prior observations inform. My 10^6 observations make me very confident in my general shape of the distribution, but I need to not forget that I only have single-digit observations on its boundaries and should update those more easily.
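A toy version of that asymmetry (exponent, cutoff, and sample size all invented for illustration): a million draws from a power law with a hard physical cap pin down the bulk exponent very precisely, yet a naive extrapolation of the fitted law still assigns probability to events the cap rules out:

```python
import math
import random

random.seed(0)
a = 1.0          # true bulk exponent (assumed)
xmax = 1000.0    # hard physical cap on event size (assumed)

def sample_capped_power_law():
    # Inverse-CDF sampling from a Pareto(a) truncated at xmax, with xmin = 1.
    u = random.random()
    return (1 - u * (1 - xmax ** -a)) ** (-1 / a)

xs = [sample_capped_power_law() for _ in range(1_000_000)]

# Hill-style maximum-likelihood estimate of the exponent from the bulk.
a_hat = len(xs) / sum(math.log(x) for x in xs)
print(f"fitted exponent: {a_hat:.3f} (true: {a})")

# Extrapolating the fit past the cap still gives a nonzero probability,
# even though events above xmax are physically impossible in this toy world.
print(f"extrapolated P(X > {10 * xmax:g}): {(10 * xmax) ** -a_hat:.1e} (true: 0)")
```

The 10^6 observations make the bulk fit look rock-solid, while telling you almost nothing about where the distribution actually ends.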
When people say “THIS CHANGES EVERYTHING!”, the charitable case is: “Are there mechanisms that could limit such a dramatic event from happening (e.g. laws, social norms, economic pressure, coordination challenges, etc.), and does this new extreme event significantly update our understanding of said mechanisms?”
> A few months ago, there was a mass shooting by a far-left transgender person who apparently had a grudge against a Christian school.
I would have put extreme odds that this was someone assigned male at birth. I know that lesbians commit domestic violence at relatively high rates, but I heard in my head JK Rowling saying "these are not our crimes". This post has many extremely valid points - among many, "when you accept risk, don't overreact to risk paying off" - but I'm still viscerally shocked by losing so many internal Bayes points.
Well, you know, JK Rowling has admitted that her critiques come from a place of trauma, so she's not exactly coming to it from a logical and rational basis.
Does it matter how most fires start?
We know that some fires start by arson and some start by other means.
Let's assume a prior: there's a 20% chance per decade of a deadly fire being set by firefighters who are secretly arsonists.
The cause of one very famous fire is disputed, but even if it were caused by arsonist firefighters, that would only update our priors to 27% odds.
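One way to recover numbers in this ballpark (my own reconstruction - the exact prior behind the 20%-to-27% move isn't stated here): put a Beta(1, 4) prior on the per-decade arson rate, whose mean is 20%, and count the disputed fire as arson with 50% credence:

```python
# Beta(1, 4) prior on the per-decade rate of arsonist-firefighter fires (assumed).
alpha, beta = 1, 4
prior_mean = alpha / (alpha + beta)               # 1/5 = 20%

# If the famous fire were confirmed arson, the posterior mean would be:
mean_if_arson = (alpha + 1) / (alpha + 1 + beta)  # 2/6 ~ 33%

# With only 50% credence that it was arson, mix the two cases:
credence_arson = 0.5
posterior_mean = credence_arson * mean_if_arson + (1 - credence_arson) * prior_mean

print(f"{prior_mean:.0%} -> {posterior_mean:.1%}")  # -> 20% -> 26.7%
```

A disputed event moves the estimate only part of the way toward the confirmed-arson posterior, which is why the jump is modest.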
Either way, we should have heavy restrictions on firefighters, because 20% and 27% are both alarmingly large numbers. Because we don't know which firefighters are arsonists, and they refuse to all confess to the alarming criminality among their profession, we should call many of them before congress to testify.
And, since non-arson fires are too boring to talk about, let's pretend those don't exist and make absolutely zero changes to reduce the rate. After all, hiring more firefighters would probably just cause more arson, right?
That argument should sound absurd, but it's roughly where the public conversation is, with regards to viruses and virology.
So, what's wrong with your argument?
Well, your prior odds might be reasonable. There is perhaps one research related pandemic in history, the 1977 flu, which some people think could be the result of some kind of vaccine trial. It's not proven, no one even knows which lab would be to blame, but let's just assume that's real, and there's 1 in 50 years. That's 2% odds per year, or 20% per decade.
Okay, but there was nothing exceptionally bad about the 1977 flu. The number of people that died was about the same as every other flu year. So in 50 years, 49 of the flus were natural and one was possibly research related. The natural ones are 98% of the problem.
And during the same time, nature also brought us HIV, Ebola, several novel coronaviruses, and lots more diseases. So the natural diseases are well over 99% of the problem. A 50% reduction in the risk from natural viruses would have a much higher impact than a 50% improvement in lab safety.
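The back-of-envelope version of that comparison, using the shares from the text (natural spillover ~98-99% of the harm, research-related the remainder):

```python
natural_share = 0.98   # share of flu-pandemic harm from natural spillover (from text)
lab_share = 0.02       # share possibly attributable to research (the 1977 flu case)

averted_by_halving_natural = 0.5 * natural_share   # fraction of total harm averted
averted_by_halving_lab = 0.5 * lab_share

print(round(averted_by_halving_natural / averted_by_halving_lab))  # -> 49
```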
Even if the virologists were like the firefighters above (and some are arsonists), you'd still have a net positive effect from hiring more virologists, just as you'd have a net positive effect from hiring more firefighters.
For some reason, people keep making this mistake, again and again.
With vaccines, we focus on the small rate of side effects, not the large rate at which they save lives.
With police officers, we focus on the small amount of police brutality, and not the large extent to which policing save lives.
If virologists have a 2% chance per year of making a flu no worse than the average flu, then focusing on the labs and not nature is a waste of time.
I suppose covid could be different, if we're talking about a chance of labs making something worse than typically comes from nature. And in that case, perhaps you're going to have to come up with different priors -- you can't use 1977 anymore. There has never been a gain of function pandemic in history, so it's hard to know what the prior odds are.
In practice, I'm not sure this matters. Covid is simply not a lab leak. The congressional hearings came up with no evidence. Rootclaim came up with no evidence. Covid came from nature, with at least 99% certainty. The next pandemic will almost certainly come from nature, as well.
And our society's obsession with a false lab leak theory will only make it more likely that we are unprepared for that next pandemic, because we've focused on a hypothetical 2% per year risk of a future lab accident, but have done very little to reduce the much higher annual risk of a natural pandemic. We've started cancelling viral surveillance programs because of the popularity of the lab leak theory and we've lost good communication with scientists in China.
It's not even clear that we've done anything to reduce lab risks -- if you're worried about lab safety in China, we now have even less transparency than ever as to what's happening in Chinese labs.
And if you consider the (low but real) annual risk of biowarfare, or warfare in general, having the world's two largest powers blame each other for creating covid certainly doesn't lower those risks (American people think it started in a Chinese lab, but Chinese people think covid started in an American lab).
AFAICT your argument is mostly that it matters how high you think the risk of GoF lab leaks is, and not that figuring out whether or not COVID was a lab leak is super important, which seems like it's not really disagreeing with Scott.
Regarding the point you're making: I think it would be a big mistake to stop hiring virologists, but the thing I see is a call for defunding / banning gain of function research, which I take to be a small subset of virology that isn't obviously helpful: in hindsight, if DEFUSE was funded and had happened in like 2015 or something, would that have helped COVID much? My impression is no.
Since there's never been a gain of function lab leak, the error bars are very wide, and figuring out if Covid is one could change the odds dramatically. That makes the two things inseparable.
The result of the lab leak theory has not just been to defund GoF research. We've also cancelled other things like viral surveillance in nature. Here's an article on one project that got cancelled because of the changing politics:
https://protagonistfuture.substack.com/p/us-virus-hunting-grant-quietly-canceled
Sampling of bat viruses in caves is extremely safe, relative to gain of function research. It's hard to even culture the samples, let alone infect yourself with them. The WIV collected 20,000 samples and only successfully cultured 3 sarbecoviruses.
We've also harmed collaboration with China, raised political tension with China, and harassed the western virologists investigating covid and other diseases.
Do I think that exact work proposed in DEFUSE would have prevented this pandemic?
No.
But I can think of many ways in which listening to virologists and other intelligent people would have helped. Let me list a few.
First, I'm reminded of Bill Gates' Ted Talk, from 2015, arguing that we're not ready for the next pandemic?
https://www.youtube.com/watch?v=6Af6b_wyiwI
No one really listened to him. Instead of looking back and thinking he was prescient, many people now just make conspiracy theories where he was involved in planning the pandemic.
Second, I think there was some important work being done on therapeutics, at the WIV and a few labs in other Chinese cities. Shi Zhengli and others were working on a drug (I want to say it was a fusion peptide inhibitor) that would treat coronavirus infection. Had the US or China invested more money into studying which drugs can treat sarbecovirus infections, and found something that was even 50% effective, that could have saved 5-10 million lives around the world when the pandemic hit. And many of the steps involved in doing those experiments could be classified as "gain of function", depending on how you regulate it.
As is, Shi Zhengli recommended chloroquine, based on a few preliminary SARS cell experiments, but that didn't work well.
Third, we could have listened to virologists about how pandemics actually start. Remember Eddie Holmes' trip to Wuhan in 2014? His Chinese colleagues took him to the Huanan market to show him an example of a place where a future zoonosis could occur. While walking through the building, he stopped to take a picture of a raccoon dog in one shop. He took that picture because he knew raccoon dogs were a dangerous animal to sell: they were susceptible to the original SARS virus.
In 2021, Eddie looked back at his photo and discovered something shocking -- it was from the same shop where we now think the pandemic started.
And China noticed that shop was dangerous, as well -- it was fined for selling illegal wildlife earlier in 2019, one of only 3 in Wuhan that got fined that year. They were given a $30 fine, but continued operating.
In another world where scientists are better respected, perhaps decision makers would listen to them when they say, "we should not sell these animals at wet markets". Politicians might ban those practices that increase the risk of a pandemic. The wildlife trade might be regulated better than with $30 fines.
In another world, perhaps we'd think about how to get more feedback from people like Eddie Holmes. As is, we have conspiracy theories where he is covering up the origin of covid. The US congress subpoenaed him to ask him the "hard questions about proximal origins".
In another world, we might look at this pandemic and ask how we could have prevented this, or how we could have reacted better after it started. Aside from the key step of regulating the wildlife trade, we could ask how to do disease surveillance, how to test and approve drugs faster, how to get vaccines out faster.
Instead, about half the world fell for a conspiracy theory that scientists created the pandemic. And we've decided to listen to scientists less, not more.
My own takeaway from this controversy has not been "virologists bad" but rather "wow, GOF on PPPs (1) is a thing, that (2) may apparently-still-plausibly have just killed 30MM people; maybe we SHOULD be reevaluating the risk/benefit of this specific research activity."
Except no raccoon dogs have been found with SARS-CoV-2, and there is little evidence they can transmit the virus. Eddie Holmes also privately noted the furin cleavage site was unlikely to have arisen given the small number of animals in the market cages. The nearest relatives to SARS-CoV-2 are found in bats in Yunnan and Laos (areas the WIV sampled), ~1500 km from Wuhan -- bats unlikely to have contact with raccoon dogs, which come from north of Wuhan.
It arose in Wuhan with low genetic diversity indicating a lack of prior circulation. The FCS codon usage is also unusual. At this stage the raccoon dog origin claims seem speculative at best.
> Since there's never been a gain of function lab leak, the error bars are very wide, and figuring out if Covid is one could change the odds dramatically. That makes the two things inseparable.
This is potentially a good point, although I'd think of the relevant uncertainty as "how many lab leaks do we get of viruses of X level of pandemic potential" rather than thinking of GoF viruses as a special category.
Regarding your other points, I feel like the vast majority of the projects you mention are a good idea even if you're certain that COVID was a lab leak - so it seems weird to link those to the question. Instead I'd have the model that a bunch of people hate pompous scientists and doctors telling them what to do, and that's why they believe in lab leak and also want to defund harmless virology research.
The one thing on your list that popped out to me was therapeutic development that arguably involves GoF research - no chance you have a link I could read about that?
> although I'd think of the relevant uncertainty as "how many lab leaks do we get of viruses of X level of pandemic potential" rather than thinking of GoF viruses as a special category
Hmmm, unless GoF provides more "surface area" for leaking because of multiple passaging / having to handle the virus for longer to get it to do what you want? I guess this is a place where it would be helpful to understand what's actually involved in GoF research, and probably different subtypes will be different levels of risky.
"how many lab leaks do we get of viruses of X level of pandemic potential" is also mostly inseparable from GoF research.
Like, we had several SARS lab leaks, but none caused a pandemic. That's because SARS is not a pandemic worthy virus, and none of the natural or unnatural introductions could make it one. If SARS had been pandemic worthy, it would have been all over the world before it ever leaked from a lab.
Same thing with a lab studying, say, Ebola. That lab could leak the virus 100% of the time and Ebola would not become a pandemic.
A lab needs GoF research to make a natural virus into something pandemic worthy (or perhaps there are weird examples of reviving a frozen virus that's no longer circulating, like smallpox, but I'd be happy to lump that in with GoF since most labs would have to recreate smallpox somehow).
With regards to GoF bans, I think they sound great in principle. There are experiments that I've read about that seem clearly too dangerous to me -- in one US experiment, scientists recreated Spanish flu and infected monkeys with it. I struggle to see how you can possibly justify the risk of that experiment. It makes perfect sense to me to ban things like that, and the Obama admin did call for a GoF pause back in 2014.
I believe that the problem actually comes with the drafting of the laws, where it's really hard to spell out the language as to what's dangerous and what's not -- there's no consensus on what is and isn't gain of function and some scientists think the current bills would restrict most of what they do. I'm a bit short on time today, but I'll try to dig up some of the letters that scientists have written criticizing the new GoF bans.
My biggest concern is not that we'll over-regulate virology, though. It's just that we'll ignore the actual natural risks. To actually reduce pandemic risk, we could do things like regulate the wildlife trade in Asia or maybe ban mink farming in the west.
If there are sensible policies like that which could, say, cut future pandemic risk by 50%, then that would ultimately save tens of millions of lives over the next few decades. But when a majority of the general public thinks that scientists created the pandemic, it's hard to get support for those kinds of policies, and it's easy to instead regulate the scientists and ignore their advice.
This reminds me a bit of why I first got interested in Covid misinformation. My friends and family were buying ivermectin from internet pharmacies instead of getting vaccinated. It's not that I was particularly worried that the ivermectin would harm them -- my friends are likely worm free now. It's that I thought the drug had little to no effect, while the vaccines they did not get worked pretty well to prevent serious illness and death.
Likewise, if you rely on a pandemic prevention method that has a 0-5% chance of preventing the next pandemic, while ignoring the methods with a 50% chance, the net result is much worse.
I may be misunderstanding the figures, but I was under the impression that the 1977 Russian flu was one of the worst pandemics of the century, with 700,000 deaths vs. about 50,000 in an average year.
(I also don't know how to think about it if it "only" caused the normal 50K deaths. Does that mean it took the place of another flu that would have evolved naturally that year, and so caused 0 deaths on net, or that it is indeed responsible for 50,000 deaths?)
I think if something causes either 50,000 or 700,000 deaths, that's a pretty big mistake that deserves important efforts to stop. Although I agree that virology is great in general, my understanding is that experts don't think the two highest risk activities - gathering new viruses from weird caves, and gain-of-function research - do much good (maybe we should add "experimenting with preserved historic flu viruses" to that list). Certainly the fact that Wuhan Institute successfully discovered some COVID precursors years before didn't seem to help much with COVID, and I can't think of any examples of where something like that did help (I could be missing some).
We did not discuss priors much in the verbal debate, but I questioned Rootclaim's priors a bit in the written portion. See here, starting at slide 53:
https://drive.google.com/file/d/1N2IKOelaTz9c1unWGQ2VjXcFSwAPm5Xx/view
Slides 57, 59, and 60 show annual flu deaths in the US. 1977 does not appear different than any other year. I could not find comparable world-wide data, but it seems unlikely that the flu would be significantly more deadly, world-wide, but also be normally deadly in the US.
I'm arguing that the moral panic about gain of function research will definitely reduce our natural pandemic preparedness in a variety of ways.
If you start with a prior that 99% of pandemics are natural, making a small reduction to our preparedness for natural pandemics is certain to be worse than whatever gains you make from criticizing virology.
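The arithmetic behind this claim can be made explicit with a toy expected-harm comparison (all numbers here are my own illustrative assumptions, not figures from the comment):

```python
# Back-of-envelope sketch (assumed numbers): if 99% of pandemic risk is
# natural, a small loss of natural-pandemic preparedness can outweigh
# even a large cut to lab-origin risk.
natural_share, lab_share = 0.99, 0.01  # assumed split of pandemic risk
prep_loss = 0.05      # assume a 5% worse response to natural pandemics
lab_risk_cut = 0.50   # assume a GoF crackdown halves lab-origin risk

# Change in expected harm (in units of "share of total pandemic risk")
delta_harm = natural_share * prep_loss - lab_share * lab_risk_cut
print(delta_harm)  # positive -> net increase in expected harm
```

With these placeholder inputs the preparedness loss dominates; the conclusion flips only if you think the lab share of risk is much larger than 1%, which is exactly the disputed prior.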
To be clear, I don't think we actually know what percentage of future pandemics will be natural. I thought a little bit more, last night, about how to calculate that number, but there's a very wide range of uncertainty since a gain of function pandemic is something that's never happened before:
https://manifold.markets/chrisjbillington/will-bsp9000-win-the-rootclaim-chal#mpJlG68nXRAehKkMtXN3
Because there's such a wide range of uncertainty on how dangerous gain of function research is, the annual odds of a GoF pandemic could be anywhere from, say, 5% down to nearly 0%. Going from zero known GoF pandemics to one would produce a large update to that number, not a small one.
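The "large update" point can be made concrete with a minimal Bayesian sketch; the log-uniform prior, the 0-to-5% range, and the 50-year observation window below are all my own assumptions, chosen only to illustrate the shape of the argument:

```python
import numpy as np

# Grid prior on the annual probability p of a GoF-caused pandemic,
# spread log-uniformly over the wide range discussed above (~0% to 5%).
p = np.logspace(-5, np.log10(0.05), 1000)
prior = np.ones_like(p) / len(p)
years = 50  # assumed window of modern GoF-style research

def posterior_mean(n_pandemics):
    # Binomial likelihood of observing n_pandemics in `years` years at rate p
    likelihood = p**n_pandemics * (1 - p)**(years - n_pandemics)
    post = prior * likelihood
    post /= post.sum()
    return float((p * post).sum())

print(posterior_mean(0))  # zero known GoF pandemics
print(posterior_mean(1))  # one (e.g. if Covid were shown to be one)
```

Even with these made-up inputs, the posterior mean jumps several-fold when the observed count goes from zero to one -- which is the sense in which a single confirmed case would move the estimate a lot.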
If expected future GoF pandemic risk is comparable or greater than future natural pandemic risk, then the emphasis on prevention would of course have to be on labs. Ergo, it does matter how Covid started.
> the annual odds of a GoF pandemic could be anywhere from, say, 5% annually down to nearly 0%
You have identified a lot of true facts, but you're thinking about this entire situation wrong.
We will have another pandemic. It all comes down to this: https://ourworldindata.org/urbanization
The question of when the next pandemic will happen is a probability distribution over years. Does virology research shift that probability distribution earlier? Of course it does. It would be ludicrous to claim otherwise. Is that a bad thing? Of course it is.
...but why is it bad?
If some virus in a weird cave is going to cause a pandemic, then it's going to cause a pandemic eventually. (That isn't really how any of this works, because it's really a question of mutations, but "go out and look for the virus instead of waiting for it to come to you" is a simple little picture story that encapsulates the issues involved in proactive research.) If we do nothing, maybe the next pandemic comes in 20 years. (There's *some* Everett branch in which that's the right number.) If we do something, maybe the next pandemic comes in just 18 years. But we'll know more when it does. How much is that worth? Hard to say. Covid turned out to be pretty bog-standard as flus go (the hard part was just manufacturing enough vaccines quick enough), so research into potential pandemics turned out to be irrelevant. On the other hand, sometimes new diseases emerge that are radically unlike what we've seen before (see e.g. AIDS), and there's no rule that says the next pandemic can't be something very different.
We could have a regime where literally 100% of all pandemics emerge from labs, while it is simultaneously true that the labs are a good idea.
Suppose that 100% of suicides happened in psychiatric hospitals immediately after the Pre-Suicide Unit identified that person as a suicide risk and ordered their hospitalization. Therefore, shut down the Pre-Suicide Unit?
Suppose that almost 100% of murderers have previous contact with police. Therefore, the police cause murder? I mean, there's an extremely narrow technical sense in which, yes, the police probably did cause the particular murder, in the sense that if the police had never done anything, that specific person would not have been murdered on that specific date in that specific way.
Proactively seeking out the cause of the next pandemic *obviously* makes it happen sooner. But that only matters to the extent that we're *less prepared* as a result of the pandemic happening sooner. Lots of other things can also make us less prepared, such as pointedly refusing to proactively seek out the cause of the next pandemic.
You're so close to understanding this. You mentioned urbanization. You mentioned AIDS. It spilled over into humans about 100 years ago. Why only just then? Because that's when cities started forming in Africa. Before that, it could spill over, but the odds would be very low of it infecting more than a few people in one village.
Now think about SARS. SARS was found in farmed animals in a few places (including Hubei province), but we never recorded a human infection in a farmer. Probably there were some, but if you live on a rural farm and get infected, you're not going to pass it to many people. Maybe your family gets it.
We did find SARS cases a thousand miles away from those farms, in the middle of big cities. We found them in markets and restaurants where people ate those animals. In that case, the human density was high enough to sustain transmission, and an epidemic started.
With Covid, we also found the first cases at a market in the middle of a big city.
The fact that some bat disease exists in some distant cave does not make a pandemic inevitable -- these bat diseases have existed in those caves for millions of years. What causes a pandemic is farming susceptible intermediate host animals and then bringing them into dense urban population centers.
Those practices are preventable. Making changes to the wildlife trade between SARS and Covid could have dramatically cut down the odds of the Covid pandemic ever happening -- it's possible the two diseases even share the same intermediate hosts. But no one is talking much about fixing these things, because they're too focused on some imaginary lab leak.
Ehhhh...I mean, I certainly don't want to discourage anyone from trying to clean up meat markets, but...
There is one intervention that would definitely work to solve the problem: make everyone rich. (This would also solve a lot of other problems.) The problem is that we haven't managed to do that.
As human population increases, my default expectation is more human encroachment on (insert any given location on Earth). Are there improvements to be made on the margins? Sure, maybe, but you're talking 1% improvements, 2% improvements.
At the end of the day what you're saying is that you want to stop humans messing around with animals, which...I don't think you can fix *that* without fixing *everything*. Which is totally possible: rich countries are rich. But it's not like we're not already trying to make everyone rich. What you're talking about is a *targeted* intervention, where we can skip the "make everyone rich" step, and that...this isn't like malaria, you know? With malaria, yeah, we just have to interrupt the cycle, intercept the carriers. Tricky, but possible. But what you're talking about...this isn't one disease we're talking about here, it's every disease that can potentially mutate and jump to humans. I'm skeptical that we can meaningfully reduce contact between poor humans and animals without either making everyone rich (which we already want to do anyway) or massive human-rights abuses.
It's not like the CCP hasn't been trying to stamp out wet markets. They've been trying for decades. The educated classes are 100% on board. But they haven't been willing to take the drastic steps necessary to actually succeed, nor, frankly, *should* they be willing to. When they engage in horrible abuses to maintain their own power, outside observers are rightly horrified. Stomping on the poor to the degree necessary to actually enforce something as hard-to-enforce as "don't mess with animals" would be bad, however noble the intentions.
Your problem is the 67-year-old nobody in Guangdong who thinks eating a bat is protective against cancer. (To be fair, this isn't any more ridiculous than the listicles' ongoing quest to classify all substances in the universe as causing or preventing cancer.) (One could also make the case that he isn't exactly *wrong*, per se...)
Closing wet markets is like whack-a-mole: they just pop up somewhere else, in poorer, less surveilled areas. And of course, the harder you stomp, the less the authorities know about what's really going on on the ground (because no one will talk to the authorities, Stop Snitchin', etc), which brings its own problems. You can get past that with extreme enough measures, but...how to put this.
Another big part of the problem is China's population density, of course. Other places also have poor people, but have *fewer* poor people. If we reduced China's poor population, it would produce such diseases at the same rates as those other places. But I think the ethicists would have a few quibbles with that plan. This is pretty much how I think about prospective bans, too. I can definitely think of enforcement tactics that would make it stick, but I would not approve of those tactics and neither should you.
I mean, heck, what do you even want? Shut down all the wet markets? On paper, *we did that*. They were banned in February of 2020. They were "reopened" two months later. In reality, of course, they never went away at all, and the authorities correctly made the call that better visibility was more important than reducing volume.
They tried in 2003, too, when they were scared of SARS-CoV-1. It failed. They tried again in 2013, when they were scared of H7N9. It failed. "Hey, what if we just stopped poor people from messing with animals" is not some kind of new idea! Every so often there's some huge disaster and the authorities say enough is enough and try to shut it all down...but it never sticks, and they eventually (wisely) give up.
Heck, go back further. Authorities have been trying to get rid of nasty, dirty, smelly places frequented by poor people essentially as long as there has been human civilization. The upper classes' visceral reaction of "Ew, get rid of it" predates germ theory by thousands of years. Success rates are holding steady at 0%.
Even if you think it would be a good idea to try again (I don't), you have the problem of getting the CCP to do it. The governments of Australia and the United States at least, probably others, *have* leaned on the CCP to do what you want. The CCP has ignored them, because...lemme see here...ah yes, it doesn't like them in the first place and has no particular inclination to do what they say.
If you have some clever idea to make the wet markets slightly cleaner, or to discourage them, I'll give you a hearty slap on the back on a "Go get 'em, tiger." That stuff is good to do, sure. But the maximum plausible benefit is ultimately very small relative to the size of the problem. It's like having "I know, let's have everyone wash their hands!" as your pandemic-prevention strategy. You can't make a meaningful dent in the number of dangerous interactions between humans and animals. You can make a big dent in the number of *officially reported* dangerous interactions between humans and animals, but that would be a bad thing, not a good thing. It makes a big difference how quickly a new pandemic comes to the attention of the authorities. (After it's noticed, we still have to actually *do* something while we still have the initiative, but that's a separate problem.)
> it's possible the two diseases even share the same intermediate hosts.
Random aside, but some research suggests SARS-CoV-1 might not have had or needed any intermediate host at all. SARS-like coronavirus WIV1 was able to use human ACE2 directly, suggesting that SARS-CoV-1 might have made the jump straight from bats to humans. See https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5389864
Nature. 2013; 503(7477): 535-538.
> Most importantly, we report the first recorded isolation of a live SL-CoV (bat SL-CoV-WIV1) from bat faecal samples in Vero E6 cells, which has typical coronavirus morphology, 99.9% sequence identity to Rs3367 and uses ACE2 from humans, civets and Chinese horseshoe bats for cell entry. Preliminary in vitro testing indicates that WIV1 also has a broad species tropism. Our results provide the strongest evidence to date that Chinese horseshoe bats are natural reservoirs of SARS-CoV, and that intermediate hosts may not be necessary for direct human infection by some bat SL-CoVs. They also highlight the importance of pathogen-discovery programs targeting high-risk wildlife groups in emerging disease hotspots as a strategy for pandemic preparedness.
> Our findings have important implications for public health. First, they provide the clearest evidence yet that SARS-CoV originated in bats. Our previous work provided phylogenetic evidence of this5, but the lack of an isolate or evidence that bat SL-CoVs can naturally infect human cells, until now, had cast doubt on this hypothesis. Second, the lack of capacity of SL-CoVs to use of ACE2 receptors has previously been considered as the key barrier for their direct spillover into humans, supporting the suggestion that civets were intermediate hosts for SARS-CoV adaptation to human transmission during the SARS outbreak24. However, the ability of SL-CoV-WIV1 to use human ACE2 argues against the necessity of this step for SL-CoV-WIV1 and suggests that direct bat-to-human infection is a plausible scenario for some bat SL-CoVs.
If that turns out to be the case, having multiple kinds of animals in proximity might be as irrelevant as bad odors. But of course, we can't know for sure without more gain-of-function research into how viruses jump species.
"Slides 57, 59, and 60 show annual flu deaths in the US. 1977 does not appear different than any other year. I could not find comparable world-wide data, but it seems unlikely that the flu would be significantly more deadly, world-wide, but also be normally deadly in the US."
The flu in question is called the *Russian* flu, and it's called that for a reason. It really did cause 700,000 or so deaths -- in China and Russia, which in 1977 were on the far side of the Iron Curtain, so that strain did not spread to the United States. About the ordinary number of Americans died of non-Russian flu in 1977; that's not what we are talking about.
If correct, that's less than 2 times the death toll for an average flu year: normally the flu kills ~400,000 people worldwide, with a lot of annual variation:
https://ourworldindata.org/influenza-deaths
And I'm pretty skeptical that the iron curtain stopped viruses from travelling worldwide. Can you find statistics from those 2 countries?
Wikipedia also says that the 1977 flu was relatively mild for older people (because of its "frozen" nature, resembling a 1950's strain), so it seems odd that a strain which was milder for old people would cause an abnormally high death total.
> the fact that Wuhan Institute successfully discovered some COVID precursors years before didn't seem to help much with COVID, and I can't think of any examples of where something like that did help (I could be missing some).
They did propose a treatment, but it didn't pan out.
Inhibition of SARS-CoV-2 (previously 2019-nCoV) infection by a highly potent pan-coronavirus fusion inhibitor targeting its spike protein that harbors a high capacity to mediate membrane fusion https://www.nature.com/articles/s41422-020-0305-x
Fusion mechanism of 2019-nCoV and fusion inhibitors targeting HR1 domain in spike protein https://www.nature.com/articles/s41423-020-0374-2
However, for what it's worth, Ralph Baric (another gain-of-function researcher, not at WIV) has argued that he deserves some credit for Operation Warp Speed correctly going "all in" on mRNA vaccines very quickly. It wasn't a priori obvious that the mRNA method would work, but a series of Baric's experiments in 2018 and 2019 (funded by the Vaccine Research Center at NIH), plus more in early 2020 before Operation Warp Speed launched in May, showed that it would (by that point, of course, SARS-CoV-2 itself had shown up).
Baric also pointed to remdesivir, a treatment for coronaviruses that he identified in 2017. Unfortunately, remdesivir only works intravenously, so it didn't end up being much of a factor in the pandemic. They were trying to come up with an oral formulation that would work, but the pandemic came before they succeeded.
Baric *also* pointed to molnupiravir (sometimes called EIDD-2801), another treatment he identified as likely useful against the next pandemic before the next pandemic actually came. And molnupiravir was an oral treatment. The first documented COVID-19 hospital admissions were on the 16th of December; in view of the deadly peril and dire need, and that molnupiravir had already been proven broadly effective against coronaviruses, the FDA swung into action and issued what they call an Emergency Use Authorization, allowing molnupiravir to be used to treat COVID-19, on the 23rd of December. (Granted, the pandemic started in 2019, and the EUA was in 2021, but that is not Baric's fault.)
On the more blue-sky side, Baric continues to say that it should be possible to get ahead of the variant race and make a vaccine that will be effective against coronavirus variants broadly, by constructing a cocktail of variations ahead of time. He did show (with Martinez, etc) that their chimeric vaccines showed a meaningful improvement over the SARS-CoV-2 vaccines in common use.
Chimeric spike mRNA vaccines protect against Sarbecovirus challenge in mice https://pubmed.ncbi.nlm.nih.gov/34214046
It also seems worth remembering that a number of basic facts about SARS-CoV-2 we take for granted, such as the fact it comes from bats, are things we know because of the previous WIV work on SARS-CoV-1 and relatives.
(But I mention this only for completeness. I broadly agree that all the more proactive research was, against this particular pandemic, of only marginal value, nothing we couldn't have lived without. The question is the next pandemic.)
If I follow your formulation about "COVID as lab leak", the conditional probability is something like P(lab-leak pandemics happen once per decade | this pandemic was caused by a lab leak). I'm updating my expectation of lab leaks in the future based on what I learn about this particular pandemic (proven hypothetically to be 100 percent a lab leak vs. not).
But we don't really know, and perhaps cannot know, whether COVID or any other pandemic was absolutely a lab leak. So we flip the conditional to get a prior on whether this or that pandemic was a lab leak, such that P(this pandemic is caused by a lab leak | lab-leak-caused pandemics happen once per decade). If I have a higher estimate, then I'm more inclined to think that COVID was a lab leak; lower if my prior is lower. But there is a more nuanced formulation that isn't effectively modeled: P(Covid was a lab leak | the sum total of evidence), where "sum total of evidence" includes my prior on whether a lab-leak pandemic occurs once per decade as well as the empirical and circumstantial evidence.
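The bookkeeping in this formulation can be written out as an odds-form Bayes update; the 20% prior and the likelihood ratio of 3 below are placeholders of my own, not estimates of anything:

```python
# Odds-form Bayes update: posterior odds = prior odds * likelihood ratio.
# The prior encodes P(lab leak) from base rates (e.g. "lab-leak pandemics
# happen once per decade"); the likelihood ratio summarizes how much the
# empirical and circumstantial evidence favors the lab-leak hypothesis.
def posterior_odds(prior_prob, likelihood_ratio):
    prior_odds = prior_prob / (1 - prior_prob)
    return prior_odds * likelihood_ratio

def to_prob(odds):
    return odds / (1 + odds)

# Placeholder numbers: 20% prior, evidence favoring lab leak 3:1
print(to_prob(posterior_odds(0.20, 3.0)))
```

A higher base-rate prior scales the answer up directly; the epistemic difficulty the comment identifies is that the likelihood ratio itself is shaped by which media and institutions one trusts.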
How should we update those priors? It turns out doing so well is epistemically difficult because our "sum total of evidence" is shaped by the news media we read, our skepticism or acceptance of government science, and so forth. I don't know how we can be a naive Bayesian about this and minimize our ideological priors.
Maybe that doesn't change the fundamental point Scott's trying to make here. It might. I'm still thinking it through.
I learned this a long time ago under the guise of, anecdotes are not data. My primary objection to even open the news is that I want to learn about the statistics, not about isolated events. And btw by this metric of course the new media are the worst, Twitter especially.
Any organization of sufficient age and clout has a huge scandal. Yet people are always surprised that org x doesn't live up to its own values. But there is a corollary too. There's also a percent chance of flubbing the response. And there's a percent chance of taking response actions that make "things like this" more likely, or some other horrible thing more likely, in the future. This corollary goes a couple levels deep, and when you truly roll all 1s, it's going to be a bad time.
Separately, I just read "The Voice of Saruman" and so calling people "the stupid people" rubbed me the wrong way. For one, it's needlessly condescending - focusing on *their* stupidity, rather than the call to be wise, or one's own stupidity. Two, it doesn't help us deal with the dynamics of living alongside non-distribution-model-having humans. Three, people are stupid, for letting base emotional hits guide them instead of Lady Philosophy and explicit models, but why insist on it?
I hope to cultivate love or at least pity for the madding crowd rather than contempt. Hard as it is to do. Oh, and I want you to help me in this. Your job is to benefit my character too. I have large expectations of you. Sorry.
The "stupid people" thing is uncharacteristically high-handed and charmless but clearly deliberate, on the strength of rhetorical repetition alone. As a near-median person, I eagerly await the next puncture in the equilibrium.
How would you have recommended addressing that topic?
I don't know! I would reconsider talking about "the stupid people demographic" as though it's a group of cattle that need to be prodded by the Wise, and lacking agency or intelligence. I know you don't actually believe there is a stable stupid people demographic who get together and have stupid people conventions etc. I understand that "stupid people", in this case, is just synecdoche for a real and ever shifting phenomenon based upon the drama of the day, and you are not claiming that people who fall into this phenomenon are stupid all things considered, or universally dull, or that they are "bad people." You are naming a phenomenon (but the people who do it are kind of the outgroup?).
I counsel against calling people who routinely err in reasoning in this way stupid, because, I don't know, this type of stupidity is an emergent phenomenon of many people... and calling them stupid too many times in a row gets you a 'Niceness' penalty on your moral stats sheet.
I'm sympathetic to your approach, at least for this audience, and I don't think I could do *substantially* better.
But I think I would have used "foolish" rather than "stupid". The latter is usually my intuitive first take in cases like this, but "foolish" carries the connotation of "perhaps unwisely didn't think this through" rather than "thought about it and got it wrong because mentally deficient". So, A: less pejorative, and B: probably more accurate in this context.
If you didn’t want to substantially rewrite it I would maybe suggest ‘undereducated people’ instead of ‘stupid people’ since that at least gets rid of the essentialism and suggests a way in which it can be ameliorated in the future.
At the end of Antifragile he suggests that you should avoid the news because among other things it should almost always fail to update your model of the world. Year-in-reviews or Decade overviews should be enough.
"Antifragile" author who literally cannot handle a single jot of criticism whatsoever
True. He feels so very misunderstood, perhaps deliberately!
These things do matter and the most sophisticated supercomputer probabilities are meaningless for new events. Extrapolation from the past has always been doubtful: even more so now that change has accelerated.
I kinda see what you're doing with the FTX vs OpenAI contrast, but the example falls flat for me. The problem with SBF was not that he was a CEO doing shady things or whether or not his company had a board. The problem was that the entire EA movement went all-in on this one charismatic person, tying its own public reputation to that person, shifting priorities in the direction of things SBF wanted, being unprepared for the FTX future fund being backed by nothing, etc etc.
Ironically, just like SBF himself, EAs turned out to also have rather naive linear utility models, a clear lack of deontology, common sense, due diligence, whatever you want to call it. The thing that makes you not bet your entire movement on one person.
As Yudkowsky says: If professional investors couldn't work out SBF defrauding people, why would you expect charities to?
When it's a charity that is chock-full of people convinced they are smarter than the average bear, have all these VUNDERFUL! maths and statistical tools, and is courting/courted by Silicon Valley deep pocket types, I sure the hell do expect that.
Mary Murphy running a cake sale to raise funds for new curtains for the local tennis club building, not so much.
I don't think we actually went all in on him. Will MacAskill said he seemed cool. He did seem cool. Most other people never mentioned him. I don't think I mentioned him, except once in a book review as an example of someone who had weird thoughts about how to do business. Mostly his role was "he gave us money and we accepted it". I think you would need a very high standard of suspicion not to accept $100 million from someone who was giving it to you for free.
Mmmm. I can see why EA survivors take this approach, since I've just read a bunch of comments upthread about how my church is a nest of rapists, and while I can assure you all that I've never raped anybody, what are you going to think when you hear "Catholic"?
But Sam (and his brother, to a larger extent), *were* involved in EA. From Lewis' book (and not the fawning Sequoia Capital article, for a change):
About the early version of Alameda Research:
"The business hadn’t even really been Sam’s idea but Tara’s. Tara had been running the Centre for Effective Altruism, in Berkeley, and Sam, while at Jane Street, had become one of her biggest donors. …Tara was no one’s idea of a crypto trader—before moving to run the Centre for Effective Altruism, she’d modeled pharmaceutical demand for the Red Cross. She had no financial background and no money to speak of and yet was generating tens of thousands in profits trading crypto.
…Her success led Sam to his secret belief that he might make a billion dollars by creating a hedge fund to trade crypto the way Jane Street traded everything.
But he couldn’t do it by himself. Crypto trading never closed. Just to have two people awake twenty-four hours a day, seven days a week, he’d need to hire at least five other traders. He’d also need programmers to turn the traders’ insights into code, so that their trading could be automated and speeded up. Tara had been making a handful of trades a week on her laptop; what Sam had in mind was an army of bots making a million trades a day....
His access to a pool of willing effective altruists was his secret weapon. Sam knew next to nothing about crypto, but he did know how easy it was to steal it. Anyone who started a crypto trading firm would need to trust his employees deeply, as any employee could hit a button and wire the crypto to a personal account without anyone else ever having the first idea what had happened. Wall Street firms were not capable of generating that level of trust, but EA was."
And then it all went kablooey because he was who he was, and the majority of the disgruntled EAs left and bad-mouthed him to the community (allegedly) but clearly not widely or forcefully enough, because while it did put a stop to his gallop for a while, it didn't finish off Alameda Research and him.
To be clear, accepting the $100 million was completely fine. One does have to be careful, as a sudden funding increase with strings attached will have a tendency to warp organizations. In this case, it seems there was a big push towards AI safety based on SBF's preferences (correct me if I'm wrong about this). IMO thinking about AI safety is a worthwhile thing to do and should be funded, but it is absolutely not the central mission of EA. I'm worried (in general) about AI alarmists overtaking EA, because it's a totalizing set of beliefs.
The other issue is that somehow, SBF turned into the poster child for EA in the public's perception. I genuinely don't know how deliberate that was, and which EA orgs helped this stuff along. It might have been himself doing it? I vaguely remember billboards with his face on it associated to EA, 80k hours holding him up as a shining example, and stuff like that.
"A few months ago, there was a mass shooting by a -->far-left<-- transgender person who apparently had a grudge against a Christian school. " This has in no way been proven. There was a manifold market about it and it resolved NO, because this has not been proven.
https://manifold.markets/johnleoks/will-it-be-revealed-that-the-nashvi-57950dca88ed
> I think same mechanism
Typo, missing "the".
I mostly agree, but want to add an important caveat here:
You are supposed to not significantly update on dramatic events only if you indeed have a probability distribution that these events fit into.
Otherwise, you get a trapped prior, an awesome excuse not to update on any evidence. The "pretending to be wise" mindset where you fail to notice your own confusion and just pretend that you are one of the cool kids who totally saw the event coming.
It's okay not to expect some things and be surprised. It doesn't make you a stupid person. Personally I didn't expect the FTX collapse. It was a "loss of innocence" for me. I knew that crypto is scammy, but these considerations were outweighed by the halo effect of EA - obviously good and reasonable people there know what they are doing. So my probability distribution didn't really include this kind of dramatic event. Not that I was literally thinking it was impossible, but I just didn't really think about it, and if I had, I would have put a tiny probability estimate on such a thing happening per year. And so I updated away from my halo effect. Not to the point where I disavow EA, but to the point where I see it as just another community with its own failures and biases. A community that I mostly ideologically agree with, one that I wish I could truly be a part of. But not more than that.
Base rate for FTX should have been the rate at which multibillion-dollar companies are frauds, which is low but not zero (eg Enron, Theranos), plus a lot extra for crypto, either plus or minus some extra for the EA connection, I don't know. If you'd asked me beforehand I would have said 1%, based on knowing and trusting some of the people who worked there; if not for that, it would have been maybe 5%. I'm updating the degree to which I can trust people, but I'm not sure 5% is wrong for the general class.
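For what it's worth, the "5% base rate, adjusted down to 1% by insider trust" reasoning above can be sketched as an odds-form Bayes update. All the numbers here are illustrative assumptions, including the likelihood ratio assigned to the "I know and trust people there" evidence:

```python
# Toy sketch of an odds-form Bayes update; numbers are assumptions for
# illustration, not estimates anyone actually endorsed.

def update_odds(prior_prob: float, likelihood_ratio: float) -> float:
    """Multiply prior odds by a likelihood ratio; return posterior probability."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

base_rate = 0.05         # assumed base rate: large crypto firm is a fraud
trust_evidence_lr = 0.2  # assumed: "I know and trust insiders" is 5x more
                         # likely to be observed if the firm is honest
posterior = update_odds(base_rate, trust_evidence_lr)
print(round(posterior, 3))  # roughly 0.01, i.e. about the 1% figure
```

The point of the odds form is just that a single piece of soft evidence (trusting insiders) enters as a multiplier on the base rate rather than replacing it.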
Yes, this sounds about right.
In retrospect, my own mistake was giving too much of a discount for the EA connection, and I updated against it.
Re lab leaks: you can reasonably not have a very high prior on them. You read about the occasional minor case or close call, but without having been there it's hard to know whether it was a real risk or just sensationalized news making something that never really had a chance to go bad sound scary. Maybe it's like that "there's a mass shooting every N days" thing where it turns out they define "mass shooting" very loosely and most of the examples are non-central. And you could try to do your own research, but without having been there or being a biosecurity expert it's genuinely hard to be sure (and you can't just trust the biosecurity experts either, since they have their own agenda).
So having a specific example of a really bad lab-leak pandemic really does provide important evidence of lab leaks being worth worrying about (unlike say the SBF thing, where "crypto is full of scams" is something that was already obvious with many central examples).
Re 9/11, your conclusion of "then after 9/11 we didn't have any mass terrorism" is kind of weak because it only observes the world where we *did* have a massive response. It's easy to imagine a world where all the terrorists weren't suddenly busy fighting US Marines in Afghanistan and had time to go "wait, you can just crash planes into buildings? Let's go!", and you'd get a massive terrorism spike (the sudden rise in global terrorism since withdrawing from Afghanistan gives at least some positive evidence for this). Your logic works for true stochastic processes, but terrorism is a game against a human adversary, not a random process. It's the difference between gambling on a horse race and the stock market.
This is straightforwardly a clash of the different descriptors in use, I think...
I reckon the news originally was a blend of two kinds of events: the unexpected, and the impactful. The unexpected (e.g. violent crime, because most of us live non-violent lives) should make you update because it's unexpected. Impactful (e.g. who won the election) should make you update because it affects your base assumptions.
I would argue that 9-11 was worthy of an update for most people, because most people did not realise that terrorists were capable of organising something that big; or that small numbers of people could weaponise infrastructure against a major city like that. Perhaps we should have known, but I don't think it had ever crossed my mind before 2001.
The word dramatic gives us a clue about why some news is not worthy of an update: it's just rubbernecking. Spectacle. There's plenty of that, particularly as news has become more national, and then more global. For example: a murder in your community is worth thinking about. A murder in your local paper is important. But a murder *somewhere in the USA* is not worth thinking about; however, all three are now presented to us in the same way. For that kind of news, I think Scott's right - not unexpected, not worthy of our attention or an update of priors.
There's one more problem, which good news organisations had ways of dealing with: slow events. Like, WHO fails to deploy malaria vaccine for another day, 1,000 children die. (For those who didn't know, or who forgot about it, this is very important information and should prompt a change of priors.) Good news outlets had dedicated reporters for that sort of thing, but news is in the middle of a big transformation, so sometimes it's getting lost.
Lab leak matters enormously.
It shows that scientists, politicians and the media (the ones accusing Trump supporters of "misinformation") are perfectly willing to engage in outright lies and propaganda about even scientific issues in the name of ideological agendas.
It means that whenever anyone (rational) hears the term "misinformation", or hears reasonable explanations for things dismissed as "conspiracy theories" (especially "racist" conspiracy theories), it should be a giant, MASSIVE red flag that the speaker is engaging in propaganda.
Additionally, the people who engaged in this massive lie suffered ZERO consequences for it, because approximately nobody on the left has any principles other than in group good out group bad. Maybe the right are no different, but they're not the ones acting high and mighty over such things.
Of course, even if you believe that lab leak shouldn't make us more worried about gain of function, we should still be very very worried about and opposed to gain of function. But doing anything about it would be seen as vindicating Trumpian "conspiracy theorists", and besides, all the scientists whose careers are based on this stuff said there isn't a problem, and you're not an anti-science republican are you?
Can we all agree then that the Holocaust is of absolute political irrelevance?
I think if someone had previously really internalized the lesson of the Armenian genocide that mass genocides were possible, and was under no illusions that European countries were morally superior enough to avoid that, they could have stayed updateless after the Holocaust. I don't think many people had done that, and I'm not even sure we're there now - I totally don't believe a modern European country would commit a Holocaust-level genocide; maybe I'm being foolish.
You're not foolish if your time horizon for large European genocides is short enough or if you rule out large disruptions (war, a really bad plague, unheard-of natural disasters etc.). After a few years of the right kind of instability, even large outgroups would be at risk of being outlawed anywhere, I suppose. Sorry about that.
In civil engineering, probabilistic design is the governing economic rationale, and it is always related to the potential death toll. Engineering logic applied to terrorism would prioritise, by far, the prevention of large incidents over the prevention of smaller ones. Which seems to be the opposite of what we are seeing, also in geopolitics.
Uncertainty is a prerequisite for the evolution of life. Hence our continuous navigation to cheat it:) (to stay dumb:)?. So we should use its principle to help guide us in making smarter future-enabling decisions. Hence my quote: “Human progress can be defined as our ability to create and maintain high levels of future uncertainty“. Which at first may seem a paradox...but it's not.
I like the idea of harnessing the reactions of people to drive policy. Discerning actors should be able to craft opinion pieces so that they are ready to go when the right event happens. I know it's common to have two different pieces ready to go when it comes to presidential elections, but I'm curious whether anyone has prepared something for the day terrorists nuke a city. Might be a good time to use public sentiment to drive disarmament.
This post on LessWrong seems to partly be arguing the opposite side of this: https://www.lesswrong.com/posts/TgKGpTvXQmPem9kcF/notice-when-people-are-directionally-correct
I'm confused what he's thinking. He knew there were Somali pirates causing minor trouble. He admits that the Houthi trouble will probably be, in the grand scheme of things, minor. So I'm not sure what update he should make between knowing about the Somalis and knowing about the Houthis, except maybe going from n=1 to n=2, which is fair.
Would this logic also apply to the once in a while killing of black American males by the police?
Cops kill about 10 unarmed not-directly-attacking-police black people every year; I think this number has been pretty consistent. I agree that if another such killing makes the news tomorrow, this is basically meaningless for the question of how much you should care about this issue, or how many you should expect them to kill next year.
Okay, your answer rightly anticipated my next question.
And if there aren't very many killings of black Americans next year, we still should not update and should assume the problem remains. We should still work towards staffing our law enforcement better. (Which probably should involve higher pay to attract better officers.)
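The "another such killing tomorrow is basically meaningless" point above can be made concrete with a toy Gamma-Poisson sketch of a yearly event rate. The ~10/year rate and the five years of assumed prior data are illustrative, not real policing statistics:

```python
# Toy Gamma-Poisson model of a yearly event rate. When years of data already
# pin the rate near 10/year, one more observed event barely moves the estimate.
# All numbers are illustrative assumptions.

def posterior_rate(prior_events: float, prior_years: float,
                   new_events: int, new_years: float) -> float:
    """Posterior mean of a Poisson rate under a Gamma(prior_events, prior_years) prior."""
    return (prior_events + new_events) / (prior_years + new_years)

# Assume we've already seen ~10 events/year for 5 years.
before = posterior_rate(50, 5, 0, 0)       # 10.0 per year
# One more event tomorrow (1/365 of a year of new observation):
after = posterior_rate(50, 5, 1, 1 / 365)
print(before, round(after, 2))  # 10.0 vs ~10.19: barely an update
```

The same arithmetic shows the flip side: a surprisingly quiet year should also move a well-anchored estimate only modestly.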
It almost certainly came from a lab.
https://michaelweissman.substack.com/p/an-inconvenient-probability-v40#%C2%A7worobey
This article makes a reasonable case that drastic events don't tell you much about the base rates of said events.
But is that the only thing you can learn from a drastic event? Sure, in the abstract, we knew that flying a plane into a building was possible, but the number of data points on terrorist attacks of that scale was still n=0. The 9/11 attacks were an unprecedented event, in terms of scale of planning and execution. Of course you can learn stuff from it!
You might know, in the abstract, that there are probably sexual harassers or abusers in a large community. You can, and should, put preemptive measures in place to protect against them. But you are still shooting in the dark, knowing nothing. Whereas if an article comes out listing a ton of examples of harassment, suddenly you have a sense of who some of the abusers are, how and when they operated, and how they got around whatever protections you had in place to catch them. If you care about preventing sexual harassment, this is very useful information! It can inform the proactive measures that can be put into place to prevent similar events from occurring in the future.
On the origins question, the problem, as David Relman described it, is that the early case data is "hopelessly impoverished". Still, the location, the sampling history in Yunnan and Laos, the lack of secondary outbreaks, features of the virus (binding affinity to human ACE2, low genetic diversity, FCS, codon usage), and research proposals all fit with lab origin.
1. Chinese researchers Botao Xiao and Lei Xiao first observed that lab origin was likely, as the nearest bat progenitors are ~1,000km from Wuhan. The Wuhan Institute of Virology sampled SARS-related bat coronaviruses in these locations - Yunnan, Laos and Vietnam.
2. Patrick Berche, DG at Institut Pasteur in Lille, notes you would expect secondary outbreaks if it arose via the live animal trade (screenshots below) https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10234839/
3. The features of the virus are consistent with lab origin. SARS-CoV-2 is the only sarbecovirus with a furin cleavage site. It arose well adapted to human ACE2 cells with low genetic diversity, indicating a lack of prior circulation - again inconsistent with the claim it arose via the animal trade. The CGG-CGG arginine codon usage is particularly unusual. The SARS-CoV-2 BsaI/BsmBI restriction map falls neatly within the ideal range for a reverse genetics system of the kind used previously at WIV and UNC. Ngram analysis of codon usage per Professor Louis Nemzer https://twitter.com/BiophysicsFL/status/1667224564986683422?t=Vh8I9fl3lwj6k6VJ8Kik8Q&s=19
https://www.biorxiv.org/content/10.1101/2022.10.18.512756v1
4. The Wuhan Institute of Virology was part of a proposal to add furin cleavage sites into novel SARS-related bat coronaviruses. https://www.pnas.org/doi/10.1073/pnas.2202769119
Jesse Bloom, Jack Nunberg, Robert Townley, and Alexandre Hassanin have observed this workflow could have led to SARS-CoV-2. Nick Patterson notes work often commences before funding is approved and goes ahead anyway. The Wuhan Institute of Virology had separate funding for SARS-related spillover studies from the NIH and CAS.
5. The market cases were all lineage B but as Jesse Bloom observes lineage A likely arose first. So *the market cases were not the primary cases*. WHO has also not accepted market origin as excess death data points to earlier cases. Peter Ben Embarek said there were likely already thousands of cases in Wuhan in December 2019.
https://academic.oup.com/mbe/article/38/12/5211/6353034
See also Kumar et al (2022) https://academic.oup.com/bioinformatics/article/38/10/2719/6553661
6. The evidence for both lineage A and B in the market itself is tenuous. The evidence for lineage A in the market is based on a single sample found on a glove, tested on 1 January 2020, out of 1,380 samples. Liu et al. (2023) note this is a low-quality sample.
7. Bloom found the market samples are *negatively correlated* with SARS-CoV-2 genetic material. Another Bloom analysis, published 4 January 2024, shows an abundance of other animal CoVs but not SARS-CoV-2. https://t.co/i0HzwvIPeo
https://academic.oup.com/ve/article/9/2/vead050/7249794
8. Lineage A and B are only two mutations apart. François Balloux notes this is unlikely to reflect two separate animal spillovers as opposed to incomplete case ascertainment of human to human transmission.
9. There is a documented sampling bias around the market - something even George Gao, Chinese CDC head at the time, acknowledged to the BBC, stating they may have focused too much on and around the market and missed cases on the other side of the city. David Bahry outlines the sampling bias.
https://journals.asm.org/doi/10.1128/mbio.00313-23
10. Wuhan was actually used as a control for a 2015 serological study on SARS-related bat coronaviruses due to its location.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6178078/
11. Superspreader events were also seen at wet markets in Beijing and Singapore (Xinfadi and Jurong).
12. The Wuhan Institute of Virology has refused to share their records with the NIH who terminated their subaward last year. This year the Biden Admin issued a wider suspension over historic biosafety concerns and refusal to share records. https://www.cnn.com/2023/07/18/politics/biden-admin-suspends-wuhan-lab-funding/index.html
Excellent post! I think this instinct to use probabilities/deduction/Bayes/rationality to understand problems like these is the biggest difference between STEM types and humanities types.
After a lifetime of maths, science, engineering and software, I'm doing a degree in the humanities. I have an assignment due next week on Aristophanes and, as far as I can tell, the question has absolutely nothing to do with the text. This happens to me with every single assignment.
Instead of figuring out the answer, I use what feels like a kind of blindsight and I start making stuff up. After a couple of hours, I have a story that kind of makes sense but it doesn't (to rational me) seem to answer the question. It's worked for me so far and I am getting good grades.
Fingers crossed for next week!
Would you say that you are doing this because you are updating towards “humanities types” are all just making stuff up?
Are you lending much probability weight to the theory “I am deeply misunderstanding what is happening here, but I have found a lucky strategy, so I am just sticking with that until it suffers a massive failure, like the kind described in the article?”
I don't think they are just making stuff up (I used to think that!). I think they have some secret way of knowing that I don't understand. I seem to have some instinctive way of knowing what they know — but I am not used to knowing things by instinct. In my field, we know things by figuring them out.
I don't know where this will end up. Will my instincts fail me? Will I learn to understand the way that they understand? Will blindsight work for me forever? I really don't know.
Could you share with us the question and the text you mean? I'm curious because I'm more the humanities than the science type, and while it's perfectly possible that professors do have bees in their bonnet about things, I'd like to see how *every* question has nothing to do with the text.
It may be that you are approaching it in a literal "well there's nothing in the words written down on the page about slaves/cheese sandwiches/best knot to tie sandals" spirit, but that does not mean that "What is Aristophanes' opinion on cheese sandwiches?" can't be relevant to the text or is not discoverable from it.
We're not allowed to share the text of assignments (Sorry! They are really strict about this!) but it is approximately "compare and contrast what Source 1 and Source 2 say about XXX in Athens". Source 1 is about 100 lines from a play and source 2 is a photo from the Parthenon that looks like a bunch of statues with no heads. Neither has anything to do with XXX as far as I can see.
Postscript. I've had my blindsight brainwave! I know what I am going to write now but it requires me to pull in an absolute ton of context from elsewhere. There's absolutely no way I could answer from Source 1 and Source 2 because Source 1 says almost nothing about XXX and Source 2 is a bunch of statues with no heads.
"a bunch of statues with no heads."
🤦♀️
Raggy (if you will permit the familiarity), c'mon now, we can do better than that. Even headless statues can tell us something - are they of men or women? what poses? athletic youths with no clothes on, or demure young women fully robed?
Lemme hit up some online "Parthenon statues" to see what I can see.
So, since the famous ones are the Elgin Marbles, I'm going to assume these are the ones meant, and if I'm wrong, pardon me.
"Pausanias, a Greek geographer, described their subjects: to the east, the birth of Athena, and to the west the quarrel between her and Poseidon to become the tutelary deity of Athens."
Since Athena is the tutelary deity, then of course these do represent the public face of Athens, as it were. We have two accounts of the myths - the ones carved in stone, and (again I'm presuming) the lines from the play.
Do these fit with one another? How is Poseidon represented in his struggle with Athena? Who or what is emphasised more - the goddess, the struggle between gods, the victory for Athens?
Again, these are public statements and are either acceptable or controversial with the Athenian authorities and public of the time. Does the play undercut the heroic reputation of Athens, or a famous Athenian figure? Is it calling on present-day (of its day) Athens to live up to their glorious mythological past, or is it indicating all these stories were lies?
Think of our modern day movies and characters like Captain America and the changes in how he has been perceived; one interpretation could be like the heroic statues on the Parthenon, of the idealised past and the best version of ourselves. Another interpretation could be critical, reminding us of the inequality and racism of the time. Cap started out literally punching Nazis, but does that mean the same thing today? Can he still be a hero for today?
The Parthenon Marbles are saying "Athens! You are so great and so important, two gods fought over who would get to be your deity!" The play may say "Make Athens Great Again!" or it may say "The truth is that Athens was always a dungheap, and the pretty lies we tell about our history just cover it up".
(Since I don't know the play or statues, I'm pulling this out of the air, but it's not necessarily that 'there is nothing in sources about X' because you have to look under the surface).
I'll add another couple of examples from previous courses so I am not completely obscure. There were several questions along the lines of "Study this painting. What does it say about XXX's reputation?"
We had several variants of this question where XXX was Elizabeth I or Cleopatra or the Virgin Mary. I interpreted the question as "Write everything you can think of about this painting and throw in some other stuff that might be relevant."
It's working for me so far!
Well, the portraits of Elizabeth I, for instance, are political and meant to be such. They're not about "this is what the queen looks like", they're about communicating power, authority, and copying an established image that was the ideal of the queen, not the woman as she aged.
In an age where you weren't likely to see the queen unless you were on the routes of one of her regular progresses, or a member of court (which includes servants), these portraits were a means of letting the general public know "Okay, so this is the new queen". They were propaganda, PR, and crisis management responses during times of turmoil.
For example, the Armada Portrait and the symbolism it contains which conveys the message of imperial right to rule, calmness and power in the face of threat, the opening up of the New World to English as well as Spanish colonisation, and the implication that Divine Providence favours Elizabeth and England, protecting them from the Spanish by wrecking the invasion fleet:
https://en.wikipedia.org/wiki/Armada_Portrait
Several versions would be made of the same subject and given as gifts to foreign monarchs, or commissioned by subjects jockeying to show off their loyalty to the crown (in hopes of reward and advancement). These aren't simply random collections of imagery, but carefully worked out to, indeed, bolster reputation.
Look at the complex symbology in this portrait of Elizabeth, especially the eyes and ears on her gown - a warning, as well as reassurance, that the monarch knew all that was going on and was in control. She had a well-established spy network under Sir Francis Walsingham.
https://christopherpjones.medium.com/why-queen-elizabeths-dress-is-covered-with-eyes-and-ears-8fef793fbd71
Elizabeth may well have learned this from her father, since it is the iconic Holbein portrait of Henry VIII which has shaped our mental image of the monarch, and it was that image which was repeated over and over again even as Henry grew older and weaker.
And things such as colours are also controversial - when Katherine of Aragon died, Henry and Anne Boleyn appeared in yellow garments. Some claim that this was mockery of the death, since they were celebrating instead of being in black mourning. Others claim that yellow was, in fact, the mourning colour in Spain where Katherine came from. So - were the king and his new queen officially rejoicing over the death of the obstacle, or were they observing correct etiquette on the death of the woman who had been queen? Up to the beholder to decide, since we don't have any records of what Henry intended:
https://www.englandcast.com/2022/01/henry-viii-anne-boleyn-yellow/
Same with Cleopatra - how the portrait is painted, in what style, and whether it represents her as Seductive Temptress Who Wrecked The Empire or Loyal Egyptian Queen shapes our view of her character and reputation. And naturally the decision to paint her as Temptress or Ruler is coloured by the purpose for which the portrait was commissioned: is it by political rivals trying to blacken her name? Is it a romantic, later portrait that sells itself on the pop-culture notion of Cleopatra and not the historical woman?
Think of the recent movie and the decision that "Cleopatra was Black" and the political choices and controversy around that. Movies are probably our modern version of such state portraits, rather than the state portraits and photographs that are produced today.
I wish I had known you three years ago, Deiseach!
I'm not educated, I've just read a lot of art books! 😁
There are good resources online, even Youtube videos. Mostly it's because I see a picture (like Elizabeth in a dress covered with eyes and ears), go "what the heck is going on there" and look up explanations.
Another:
"What can THIS PAINTING tell us about the reputation of the artist during the period that he painted it?
I am getting good grades so I am doing something right!
Struggling artist, man on the make, successful society painter, rebel against the artistic conventions, an old dog trying to learn new tricks (and succeeding or failing) - there's a lot we can learn.
For instance, John Singer Sargent's "Portrait of Madame X" which was a scandal and damaged his reputation at the time, though it may have helped it later on. He was already successful and rapidly becoming known, so why paint this portrait - which was not a commission, but one he wanted to do himself? And why in this style? (He was forced to make some changes to the picture due to the outcry). Did he think his established reputation could protect him, or was this a genuine artistic impulse to create something that was not bought and paid for in advance? Why didn't it wreck his career, and would a less-established painter have survived?
If you're getting the grades, you are doing well and not just pulling crap out of the air, so bravo!
I've been there. I think there are two things going on: one is a lot of background knowledge which the experts have, in the light of which you can actually make reasoned deductions from the texts given. So, for example, in your question about what you can say about painter X given this painting - when you add in the knowledge that the subjects of paintings are aristos, painters are not aristos, painters are commissioned by the aristos, painters' livelihoods depend on the aristos being pleased, beauty standards in that time were PQ&R... then you can see that the painter painted this dude ugly, which implies that he was protected. Or something. There are a lot of assumptions in there, but there is a logic to it.
The other thing is criticism/commentary as creative process, where the source texts are not so much evidence as a source of inspiration for the critic's own creative process. There may not be any logic here, but maybe a train of thought to be followed - for whatever that's worth.
"then you can see that the painter painted this dude ugly, which implies that he was protected."
I have to share this anecdote, which I got off an art restorer's Youtube channel, about an artist named John Riley:
https://www.youtube.com/watch?v=0lZ5I1Mczmc
https://artuk.org/discover/artists/riley-john-16461691
"In 1681 Charles II appointed him ‘painter and picture drawer in ordinary’ and he is said to have produced a portrait of Charles (now lost) that prompted the response: ‘Is this like me? Then, odd's fish, I'm an ugly fellow.’ "
Anecdotes like this are what make me warm to Charles II; you would have to be confident both in your own standing as an artist, and in the likely reaction of your subject, to produce such an image of the king and expect to walk away with your freedom and all limbs intact 😀
Speaking of art restoration, this is a restoration of a portrait which had been later repainted to 'prettify' the subject and fit in better with beauty standards of that era:
https://www.youtube.com/watch?v=TFhKZv-fgXs
I worry that this essay begs the question a bit. If you already knew enough to know that, say, 10% of people are sex offenders then observing a sex offender should not cause an update. If you thought that 0.1% of people were sex offenders, as is certainly the case for some, then observing one should cause an update.
I think the counterargument is already in the essay when Scott says "I think whether or not it was a lab leak matters in order to convince stupid people" if we interpret 'stupid people' to mean people with miscalibrated priors. Personally, I was not aware of gain-of-function research before the lab leak discourse bloomed, and if asked prior to it would have estimated like a 1% chance per decade of something like that happening, or even less. Now that people are making credible arguments about this I have updated, and this is rational. I would have been more on board if the essay had been titled Against Learning From Dramatic Events When Your Priors Are Well Calibrated.
I think if you thought 0.1% of people were sex offenders, learning about one in a community of ten thousand (which your theory predicted had 10) still shouldn't surprise you. I don't know if anyone thinks the number is so low that it's shocking for there to be at least one in Hollywood.
(I think the number is either 0.1% or 10% depending on what level of sex offender we're talking about).
In principle, even learning about one sex offender in a community of 100 doesn't force you to update - you assumed there would be one in ten such communities, and perhaps this is just that one community that has the sex offender.
It's still the case that larger updates are warranted when your prior differs greatly from reality than when it differs less, and that "should not be updating" is roughly equivalent to "was already well calibrated." Honestly though, if I'm overestimating my update even in my own numerical example then that's a point in favor of your view that maybe people are thinking there's more information than there actually is. I do think it's the case that single observations are given more weight than they should be, and that base rates are the right thing to be thinking about.
That depends on how you learn about the one sexual offender.
If an amazing detective investigated every single person in the community and tells you "there's at least one", then sure, you should barely update at all. (But you might be able to update a lot more if this detective tells you more about their findings: did they find 1 or 10 or 100?)
If you speak with 1 randomly selected person and learn that they are a sex offender somehow, then you could update a lot. You didn't just learn that there's at least 1, you also learned that a random person was a sex offender, which is 10x more likely if there's 10x more of them. If you were [90% on 0.1%] and [9% on 1%] and [1% on 10%], then I think you should go from ~0.3% to ~3% that the next person you talk with is also a sex offender.
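The update described in this comment is easy to check numerically. A minimal sketch, assuming the comment's own prior (90% on a 0.1% offender rate, 9% on 1%, 1% on 10%); the exact posterior predictive comes out just under 4%, in the ballpark of the "~3%" above:

```python
# Prior over three hypotheses about the offender base rate,
# taken from the comment above: 90% on 0.1%, 9% on 1%, 1% on 10%.
hypotheses = [(0.001, 0.90), (0.01, 0.09), (0.10, 0.01)]

# Prior predictive: probability a randomly selected person is an offender.
prior_pred = sum(rate * weight for rate, weight in hypotheses)
print(f"before observation: {prior_pred:.2%}")  # ~0.28%

# Observe that one random person IS an offender; reweight each hypothesis
# by its likelihood (the rate itself) and renormalize.
posterior = [(rate, rate * weight) for rate, weight in hypotheses]
total = sum(w for _, w in posterior)
posterior = [(rate, w / total) for rate, w in posterior]

# Posterior predictive: probability the NEXT random person is an offender.
post_pred = sum(rate * weight for rate, weight in posterior)
print(f"after observation: {post_pred:.2%}")  # ~3.92%
```

Note that observing one random offender multiplies each hypothesis's weight by its rate, which is exactly the "10x more likely if there's 10x more of them" intuition in the comment.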
What about the case where you hear about sex offenders via news or rumors? I guess it's a bit of a mix. You could have a model where each incident has an independent, small probability of making the news/rumors, in which case the number of news/rumors will be proportional to the number of offenders, and it's closer to the random case. Or you might think that someone first decides to do a story about sexual offenses in a community, and then finds corresponding cases. (Though even then, you could update if the number/severity of cases they found was very different from what you'd expected a reporter to find.)
Not sure about that. If your priors are so miscalibrated that you think the event should never have occurred, this certainly requires an update - but I think that's precisely the case which Scott calls "stupid people", or at best "science fiction". If your priors just make such a case very unlikely, observing one doesn't have to cause an update at all.
What you have done seems to me not to have updated based on a dramatic event, but rather used a (possible) dramatic event to learn more about that topic and updated based on that, which seems a sensible thing to do.
I'm confused (and possibly repeating existing comment, sorry) about the implication that one's prior on lab leaks ought to be blind, due to No Evidence. Haven't there already been several lab leaks over the years, including some big newsworthy whoopses, thus updating towards [Not Actually That Unlikely]%? The footnote does amend this to "lab leak *causing a pandemic*", which makes a bit more sense but also doesn't seem like the actual crux at issue wrt covid origin debates. Either way, I find Zvi's conclusion more persuasive, that even if one pegged the chance of covid lab leak at just 10% or whatever, there's definitely positive trades to be made insuring against that small-but-very-painful possibility in the future. I guess in that sense it "doesn't matter" whether lab or zoonosis - many prophylactic precautions would be the same either way - but the Schelling still needs to actually happen to get some redeeming value from Holly Hindsight. I very much hope it's not permanently lost to the partisan melee, like so many potential cause areas.
Similarly scratching head at your prior of one mass shooting by a far-left transgender person every few years. I wasn't aware that had ever actually happened until reading this post*, and just on base population rates for T that seems...really high. We're comorbid for lotsa mental illnesses, sure, but doesn't that cash out far more often in self-directed violence (say, suicide) than in other-directed violence*? And similarly tend to be strongly anti-correlated with gun ownership? Maybe it's the same kind of precise definitional thing as previous paragraph, where you really mean "far-left mass shooting" or somesuch...sadly there's no potentially-clarifying footnote here.
*with some notable exceptions like, iirc, schizophrenia
I don't think it ought to be blind. Like you say, I think there's enough evidence to think it's pretty common. I think there's some room for not being sure how often it will be a pathogen important enough to cause a major pandemic, but even the Russian flu might be an update in that direction.
In terms of transgender shooters - let's say about 2% of shooter-aged people are trans, there are about 5 mass shootings per year, so you'd expect one per ten years. I'd go slightly up because trans people seem angry at society and have more mental problems, but then down again because they probably own guns less than usual. I'm not too attached to these numbers and could easily change them by maybe a factor of 5 with a tiny amount of evidence.
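For what it's worth, the back-of-envelope above is trivial to reproduce; a tiny sketch, with the caveat that both inputs are the comment's own guesses rather than measured data:

```python
# Rough expected rate of mass shootings by trans shooters, using the
# comment's own guesses (neither number is from real data).
trans_share = 0.02        # guess: ~2% of shooter-aged people
shootings_per_year = 5    # guess: mass shootings per year, Mother Jones-style count

expected_per_year = trans_share * shootings_per_year
print(f"~1 every {1 / expected_per_year:.0f} years")  # ~1 every 10 years
```

Since the comment admits a factor-of-5 uncertainty on the inputs, the output honestly spans anywhere from "one every couple of years" to "one every fifty years".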
Fair enough, the conjunctions get real noisy when working with so many fractional factors. I also failed to factor in the recent large rise in transgender identification, partly from an aspirational "don't actually want this to be an exponential" and partly due to the same bias others noted where they assumed said shooter was biologically male. Shootings (of all degrees) so often involve men that it feels like a similarly highly unlikely confluence. None spring to mind other than I guess the iPhone Backdoor incident.
I don't think pariah status necessarily dovetails with anger at Society which refuses to bend over backwards; that would also predict such malcontentedness decreasing as such accommodation continues apace, which...we don't see, really. But that's getting out of scope. The appropriate small update would be, I think, continued high baserate of firearms means even increasingly unlikely candidates will become shooters, given the opportunity. (Which on reflection seems better than, say, bombings? The devil you know...)
In your lab-leak example, you assumed that in the "rare risk" case, there's a 10% chance of a lab-leak-caused pandemic per decade. That's very high! My guess: If most people thought this was the actual risk, we would no longer have labs.
I'd bet that prior to covid, most people would have assigned a <1% chance per decade to a covid-sized pandemic caused by a lab leak. With this prior, you'd get a much bigger update. That's what the public response, I think, reflects.
Now, you could argue that this prior is stupid, or whatever. But arguing about priors is a difficult thing.
This also applies to elections -- if someone you predicted would get 48% of the votes ended up getting 78% (or 18%), that would mean your model of the world is badly broken and you need to fix it, but if they end up getting 51% (or 45%), then that's a completely ordinary occurrence that means no such thing (but only that you'll have to cope with a different president/mayor/whoever than you expected), and other people shouldn't get to feel smug at you about that.
Great point - I wrote https://slatestarcodex.com/2016/11/07/tuesday-shouldnt-change-the-narrative/ about this a few years ago, and thought about including it here, but figured it was too long already.
Milei's victory in Argentina is a good deal of the way to your 78%. If I remember correctly he was projected to lose the election, but ended up with 56% of the votes.
How should we update?
How badly was he predicted to lose? A 6% polling error in a place like Argentina (reasonably developed nation but not as deep and sophisticated a media ecosystem as the US) is probably in the noise, but if it was 16% that's quite significant.
"So if there’s a community of 10,000 people, probably 1,000 of them have sexually harassed someone" -- okay, but in certain communities it does seem to be a lot more common than that -- to the point that people will point out how Keanu Reeves or "Weird Al" Yankovic are not known to have sexually harassed anyone as though that were something particularly noteworthy.
Seems wrong; you should update on everything.
Re: OpenAI, I think it's been sufficiently demonstrated by now that the board was *not* strong - Altman was re-instated and the board was replaced by a new, more 'light touch regulation' friendly board.
I think the lesson from FTX and OpenAI in conjunction is "A weak board is better than no board, but a weak board will be beaten by a strong board/whoever is holding the purse strings, if those are not one and the same". FTX *badly* needed governance and an organisation chart; the book by Michael Lewis, even where he is sympathetic in parts, had several bits where I wanted to slap the face off SBF, e.g. his resistance to appointing a Chief Financial Officer because (according to Lewis) he felt insulted that people thought he didn't know where the money was going. Well, turns out he *didn't* know.
EDIT: If you believe Lewis. If you don't, he didn't want a CFO appointed because that would have revealed all the money he and his family were siphoning off for their own little projects and personal use.
Once is chance. Twice is coincidence. Three times is enemy action.
The fourth and subsequent times, those ones are your own darn fault if you weren't expecting them.
Also: this is basically the Chinese Robber Fallacy applied to small sample size, yes?
A few thoughts. First, I agree completely about all of your positions on regular events that have a known probability and distribution. I think we update far too much on new (or newly reported) individual events that are part of a pattern that we could have known about for years.
For extremely rare events, I don't agree. There are two kinds of knowledge in a society. Things that people can or should know, and things that people actually do know. I'm seeing this more and more as my children get older and I think about how I know something and realize that whatever mechanism that was, my kids haven't had the same thing happen. They could know the same things, but they're busy learning the things that they actually do know and aren't updating on things that may have been important in the past (and may be again in the future) but instead on things that seem important to them now. My kids don't need to learn how dial-up internet works, because they're not going to use it. Maybe they should learn about terrorism, but maybe not. The answer depends on information we don't have (future major terrorism) and not prior events.
Prior to 9/11, most people put the chance of a major terrorist attack on US soil at approximately 0.0%. If you asked them directly, they may not have said absolutely no chance, but in practice we lived our lives that way. I'm sure there were people in the US who had lived in countries with major terrorism who were not surprised, but the vast majority of people in the US clearly were. And this is not surprising - there had been no major terror attacks in the US - either ever or at least in living memory (depending on how you define "major" and "terrorist").
Similarly, the chance of a nuclear detonation in a major city is not 0%, but most people live their lives as if it is. And this is rational. There is some amount of resources devoted to ensuring that no nuclear weapons are detonated by terrorist groups, and this amount of resources is apparently sufficient to achieve this purpose. There's no distribution of nuclear terrorist attacks because none have happened. This doesn't feel like a fluke because we try very hard to make sure this doesn't happen. As long as our collective efforts are sufficient, there will be no nuclear terrorist attacks. If a nuclear terrorist attack happens, we *should* heavily update. It would mean that our efforts are not or were not sufficient.
Bayesian updates are naturally very difficult in situations with insignificant numbers of data points (particularly zero data points). Most people live their lives categorizing events into two pools - things that they should worry about and things that they should not. They don't have enough data to be more specific, even if they were good Bayesians. As bad Bayesians, they haven't even tried. So when a lab leak happens, they should *heavily* update, because they aren't updating from 20% to 27.5%, but from an approximation of 0% (don't think about it, not worried about it) to a non-negligible percent (think about it, worried about it).
It's how we hold politicians to task. We don't want or need to know every negotiation and consideration. We just want to know that they're taking things seriously enough. That Fauci hired a US company to perform gain-of-function research in Wuhan, China and then had that same US company's CEO weigh in on the chances of a lab leak is all relevant information. Not in a "this is a thing that provably happened" way, but in a "this may be a thing where politicians were insufficiently protecting my interests and I need assurance this will not happen again" way. People, with limited bandwidth to follow all things happening, want to know if they should continue being worried about something or to know that it's been fixed.

The biggest thing George Bush did after 9/11 was make a big show about how he and the rest of the government were taking this seriously and working to ensure that it never happens again. A lot of the effort and money spent was inefficient or even pure waste, in terms of preventing future terrorism. But it showed that they took it seriously, and that people could go back to their lives without worrying about continued attacks. (It helps that we're aware of increased investigations of terrorism and of things like locked cockpit doors on planes that will probably fully prevent any repeat attacks even if the TSA turns out to be worthless.)

But again, most people aren't even trying to do Bayesian reasoning on terrorist attacks, but asking "should I worry about this?" The government has clearly answered with a "no, you should not worry about this" and it worked. So long as that proxy for safety exists, people aren't sitting at X% chance of major terrorist attack, but 0%. If another attack happens, it tells people that the government was wrong, that they should worry about it, and this is and should be a major update.
In general, I liked this post. But is it just me, or is mentioning "the stupid people" so many times sort of off-putting? I won't deny that such people exist, but I think it gives the blog a sort of kicking-down/us-against-them attitude that I had previously not found here; its absence is one of the reasons I appreciate the blog.
Makes it harder to share with people who are not in the rationalist-sphere too.
I strongly dislike the “stupid people” thing as well. Not just because I don’t think the people Scott refers to are irrational, but also because one of the reasons why I came to like SSC (beyond the great writing and the interesting topics) was that calling other people names was frowned upon. This seems less like “The whole town is center” and more like “I can tolerate anything except the outgroup”.
I really like this post - it harkens back to the glory days of SSC. Sadly I cannot adjust my prediction priors, but a couple more great posts and I could.
I think it’s important to note that this doesn’t mean that the probabilities are static - they do change over time, *and they can be altered by deliberate action*.
For example, nuclear terrorism is very rare (literally never happened yet), but it would be wrong to conclude “we should stop spending so much effort on nuclear security, because look how rare it is”. Because of course, part of the rarity is precisely because we spend a lot of effort making it hard for terrorists to obtain a functional nuke.
On the other hand, if you’re someone who wants nuclear terrorism to be less rare (ie a terrorist), you wouldn’t say “gee successful nuclear attacks are vanishingly rare, may as well give up”, you’d say “maybe I can make it more likely by getting lots of double agent terrorists into key roles at nuclear facilities”.
I guess this is in some sense your “stupid people who think that it’s sci fi just because it never happened before”, but I think it’s more subtle than that - I’m talking about actually altering the likelihood, not just our estimate of it.
And if you're someone who wants nuclear terrorism to be less rare, and you heard that there *was* a successful nuking, or that a plant blew up and some terrorist group claimed credit and no one knew if they were just bullshitting, you wouldn't shrug it off as "well my Bayesian probability already predicted that." You'd want to know more. Did they do it? How? Did they try to get double agents into key roles? Did they succeed in getting double agents into key roles, and if so, were those double agents particularly useful?
But isn't the issue with SBF that this wasn't some sort of major screw up? His actions were the logical outcome of his fundamentalist EA position. It's how he justified his behavior. In that case, we're outside the realm of probability analysis, and in a realm of discussing how These ideas led to This outcome. Arguably, 9/11 is the same thing: it was the logical outcome of US foreign policy. So you'd want to alter that foreign policy in ways that create a different outcome, not "change nothing" because a probability distribution convinces you it would have happened anyway.
typo ... and if the answer was yes, *spent* the money on counter-terrorism beforehand.
*spend
- the reaction to the lab leak is something that causes me to update. Lab leaks are a lot more common than we think they are, because whether you actually hear about them is extremely political.
- the criteria for Mother Jones to include an incident as a "mass shooting" are extremely biased. For instance anything considered "gang-related" is not included, even though a bunch of people still get shot. What this means is that your priors are wrong, and your updates will be wrong (since you're updating based on MJ and the news), so Bayes won't help you there.
- the post-9/11 security updates have mostly been useless, I agree. Locking the cockpit doors is likely more useful than adding another layer of bureaucracy to the security gates, and was a lot cheaper. I do notice that we don't have airplane hijackings anymore, but that was true before 9/11. I think that has more to do with violent movements not getting as much funding for a while.
I think there's a big issue here that you haven't addressed, Scott. Based on my own knowledge and experience, I think that the chance of dangerous AI takeoff is next to zero, much too small to worry about. But, I notice that many people who claim to have studied this closely disagree with me. Should I update my beliefs or not? There are plenty of groups that worry about silly things, e.g. the dangers of vaccines causing autism. So, if we're just going by background rates, the update should be tiny. However, the people worried about AI takeoff also claim to have expertise in evaluating risks, so maybe their claims are worth taking more seriously. The problem with that is that a big risk in the form of SBF was right in their midst and they didn't seem capable of correctly evaluating it at all. Is there some other track record of correct risk evaluation that outweighs this, or are we just back to the background rates of treating them like any other group with weird hobbyhorse beliefs?
I’m not sure if I interpret Scott’s argument correctly, but at least the way I read it, I disagree with most of it.
He seems to suggest that people update a lot from dramatic events, and that it’s a bad thing. While I agree it would be a bad thing, I think that hardly anyone does that. They might *learn* from single dramatic events, but that’s a very different thing, both epistemologically and practically.
I feel that the post fails to differentiate (at least, explicitly) between three types of people who might, say, call for a ban on certain types of biological research if COVID turns out to have a lab origin:
- People who thought a lab-caused pandemic was likely/inevitable, and now see themselves vindicated
- People who thought a lab-caused pandemic was highly unlikely/impossible, but now think it's probable and dangerous
- People who never thought about lab-caused pandemics and now think those are dangerous
The first group is not the target of the post’s criticism; I assume we can ignore them. Let’s look at the second one, which seems to me to be the main target.
I haven't followed the debates, but, based on other topics, I'd venture a guess that only a tiny minority of people who previously thought doing pathogen research was OK will significantly change their opinion; and most people who call for a ban will be from the second and third groups.
Scott writes that he didn’t update after a recent mass shooting by a far-left transgender person. But I bet hardly anyone did! I assume (although I don’t have data) that most of those who thought transgender people were evil treated this in exactly the way Scott described: “we already had 100 cases of transgender being bad, here’s the 101st one.” And I assume the opposite side also reacted exactly like Scott said one should: “we know people do terrible things sometimes, one instance of a person doing a terrible thing doesn’t change anything.” Perhaps I just misunderstand Scott’s claim that “people fail to consider events that have happened hundreds of times, treating each new instance as if it demands a massive update”, which precisely uses mass shootings as the example; I am under the impression that people hardly update at all, and tend to demand an update from their opponents on the basis of the cumulative evidence, not just the latest instance.
I even think that people often make this argument relatively explicitly. To take examples from Europe, which I’m more familiar with, most right-wingers who demanded action after the 2005 French riots (https://en.wikipedia.org/wiki/2005_French_riots) or the 2015–16 New Year's Eve sexual assaults in Germany (https://en.wikipedia.org/wiki/2015%E2%80%9316_New_Year%27s_Eve_sexual_assaults_in_Germany) said something like “We’ve been warning you about this; the Muslims were breaking laws in small ways all the time, and it was just a matter of time until something big happened.”
So, the point of all these examples is that people generally don’t *update* a lot on dramatic events. If they had prior expectations, they tend to update them only a little or not at all (or to update them in the opposite direction, because the other side is so insulting and loud and arrogant). Coming back to the lab leak issue, I assume that, if we asked the people who argue about this whether any result would lead them to a major update, they’d say no (it would be interesting to see data on this, of course).
What remains is the third group – people who didn’t really have strong priors because they’ve never given much thought to whether gain-of-function research might be dangerous. I’d say it’s most of the population, and I’d say that their ignorance is not stupid; it’s rational. It’s one of hundreds of difficult topics with complex arguments, complex implications, and experts on both sides of the divide. Why should the average person spend weeks of their time researching the matter, given that they don’t have any way of influencing the developments anyway? Scott does it; but then, I assume Scott enjoys the intellectual challenge, and Scott surely has many thousand times the influence of an average American.
So I deny the “A good Bayesian should start out believing there’s some medium chance of a lab leak pandemic per decade” part – I think you can be a good Bayesian and not start out with any beliefs at all.
I think this group, or at least significant parts of it, *does* learn from dramatic events, for the simple reason (that has been mentioned in the comments like https://www.astralcodexten.com/p/against-learning-from-dramatic-events/comment/47462922 already) that they didn’t really think about the problem before, but it has been thrust into their lives now by the media. And I assume that they do tend to assign a high probability to such events happening again – presumably one that is too high.
If so, this might be systematically wrong, but not as intellectually indefensible as one might think. Of course, the epistemically optimal way to deal with this would be to dive into the biosecurity literature and debates and form an educated opinion based on long-term data and experts' opinions – but, as mentioned above, I think that would be irrational for most people. The rational thing would be just to take what they know and make the best guess based on that. I’m not well-versed in epistemology, but as far as I understand, there are no clear rules on how to assign probabilities based on a single observation. If you think about it, it might make sense that the examples you get to read about are particularly egregious, and probably not representative – but that’s already more effort than most people put in. If that’s what Scott criticized, I agree with that point, but it seems to me it wasn’t the main thrust of his argument.
So I don’t think most people *update* too much on dramatic events – but people who didn’t have an opinion probably often overreact, and that’s what might change the public opinion or a society’s attitudes and rules.
"I’m part of the effective altruist movement. The biggest disaster we ever faced was the Sam Bankman-Fried thing."
The biggest disaster you ever faced YET
I think "a lab leak should only increase your beliefs from 20% to 27%" is downstream of your assumption that even in the common-leak world, leaks only happen 33% of decades!
An alternative toy example: the Common/Rare Lab Leaks worlds have lab leaks occurring in 90%/10% of decades, and my prior is 10%/90% on them, so before covid I expect an 18% chance of a lab leak per decade. One lab leak will update my beliefs to 50/50 between the two world states, which also puts my expected chance of a lab leak per decade at 50%. So an 18% -> 50% is also possible!
An even more extreme example: I only model the world as "Lab Leaks are Impossible/Happen Every Decade", at 80%/20%, so my prior is they should happen 20% of decades, but once I see one I update to 100% chance every decade!
A final example where the probabilities are not discrete: if your prior is "the chance we live in a world where lab leaks happen p% of decades is proportional to (1-p)^3", an update takes you from 20% to 33%. This isn't as big a jump as the previous examples, but it's still almost doubling your expected harm from a lab leak!
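All three toy updates can be verified numerically. A quick sketch, using exact fractions; the priors are the ones stated in the examples above, and the continuous case follows from the fact that a density proportional to (1-p)^3 is a Beta(1, 4) distribution:

```python
from fractions import Fraction as F

def predictive(hypotheses):
    """hypotheses: list of (leak_prob_per_decade, prior_weight).
    Returns (prior predictive, posterior predictive after observing one leak)."""
    prior = sum(p * w for p, w in hypotheses)
    # Observing a leak reweights each hypothesis by its leak probability.
    post = sum(p * p * w for p, w in hypotheses) / prior
    return prior, post

# Common/Rare worlds: leaks in 90%/10% of decades, prior weights 10%/90%.
print(predictive([(F(9, 10), F(1, 10)), (F(1, 10), F(9, 10))]))
# (9/50, 1/2): an 18% -> 50% jump

# Impossible/Every-decade worlds, prior weights 80%/20%.
print(predictive([(F(0), F(4, 5)), (F(1), F(1, 5))]))
# (1/5, 1): a 20% -> 100% jump

# Continuous prior proportional to (1-p)^3 is Beta(1, 4), mean 1/5 = 20%.
# Observing one leak gives Beta(2, 4), mean 2/6 = 1/3 = 33%.
```

The general pattern: the wider the spread between your hypotheses, the bigger the update a single observation can justify, which is exactly the point these examples are making against the 20%-to-27% framing.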
I think you're wrong about the SBF/OpenAI thing. The board there did the right thing initially and only the backlash was incorrect. If anything, what we should learn about the Altman fiasco is that no EA can be trusted who is named Sam.
This was nice, thank you. I guess I must be one of the stupid people, because I mostly don't think about any of this stuff happening (or not happening) beforehand. And so when it does, there's a knee jerk response, and then maybe some time for reflection. And to be honest I just don't want to spend a lot of time thinking about terrorists getting nukes. (Or any of the other shitty things that could happen.) I figure there is maybe someone in government, or a think tank, or on substack doing my worrying for me, and I thank them for it.
As an aside, I think the most important thing to do now is more nuclear fission. (so less regulation or whatever it takes.) And yet this leads right into a term in 'chances of terrorists getting nukes'. And that is a risk I think we must take... more nukes leads to more nuclear 'accidents'.
I think that's precisely the trouble with Scott's argument (and wording). Sure, the "knee-jerk" part might get handled better, but the rest is not just understandable but, IMO, sensible.
I don't have any trouble with the post, but there is the sense of "Don't let a crisis go to waste." in big institutions (governments.)
Sure, people will use a dramatic event to execute on their agenda; I guess that's what Scott meant with "coordination"...
There's also an important factor, having an updating strategy that's difficult for others to manipulate. As long as no terrorist has detonated a nuclear weapon, there's no good way to check whether a security agency that wants money to prevent a detonation is presenting you with a correct cost/benefit analysis. Indeed, I think that factor is important relative to a lot of arguments that circulate among highly-educated tech communities: one of the most intense evolutionary needs is to be difficult to deceive by the many, many actors you are interacting with that have an interest in deceiving you. Do not optimize your analytical tools for being *correct* relative to "the environment", optimize them for being *unexploitable* by your cohorts.
Can I assume that "there will be a nuclear terrorism attack once every 100 years" actually means "if I run 100 simulations of the year 2024, one of them will contain a nuclear terrorism attack"? Because obviously the actual passage of time and evolution of tech and society will change so many things that may render such an attack impossible or irrelevant.
In 1970 it would've seemed roughly correct that there'd be a 60 HR season in baseball about every 25 years, but if you simulated the 1970 season 100 times I doubt you'd see 4 (or even 1) 60 HR seasons; the actual leader Johnny Bench only hit 45, despite the rare outlier feat of having played nearly every day as a catcher. But a quarter century later, during the Steroid Era, you'd have to simulate 100 seasons to find one that did NOT have a 60 HR hitter, because the game changed.
Doing things to change the frequency of the event therefore seems a LOT more impactful. The danger we want to avoid is not really the 1/100 chance of such an attack in 2024, or the cumulative chance over a decade, the danger is that you could wind up with the equivalent of a Steroid Era of Nuclear Terrorism that your priors treated as nearly impossible but in fact was nearly certain to occur because the tech reached the threshold that made it so.
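The "run 100 simulations of the year" reading can be sketched directly; a minimal Monte Carlo with an illustrative 1-in-100 annual rate (not a real estimate of anything):

```python
import random

random.seed(0)

# Illustrative only: a "1-in-100-years" event rate, treated as constant.
ANNUAL_P = 0.01
N_SIMS = 100_000

# "Run many simulations of the year": the fraction of simulated years
# containing at least one such event should come out near ANNUAL_P.
hits = sum(random.random() < ANNUAL_P for _ in range(N_SIMS))
per_year = hits / N_SIMS

# If the rate really were static, the chance of at least one event over a
# decade would be:
per_decade = 1 - (1 - ANNUAL_P) ** 10

print(round(per_year, 3), round(per_decade, 3))
```

The catch, per the Steroid Era point: the whole calculation hinges on `ANNUAL_P` staying constant. If the underlying rate jumps because the technology or the game changed, a `per_decade` computed from the old rate is simply wrong, no matter how many simulations you run.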
<i>Does it matter if COVID was a lab leak?</i>
As others have said, it's not so much the fact that COVID might have been a lab leak, but rather the fact that all the respectable government and media outlets went all-in to smear the lab-leak idea as an ignorant conspiracy theory when we now know that it wasn't. *That* seems like a thing you might reasonably learn from, unless your priors were already weighted quite heavily towards the "respectable government and media outlets are fundamentally untrustworthy" end of the spectrum.
<i>A few months ago, there was a mass shooting by a far-left transgender person who apparently had a grudge against a Christian school. The Right made a big deal about how this proves the Left is violent. I don’t begrudge them this; the Left does the same every time a right-winger does something like this. But I didn’t update at all. It was always obvious that far-left transgender violence was possible (just as far-right anti-transgender violence is possible). My distribution included a term for something like this probably happening once every few years. When it happened, I just thought “Yeah, that more or less matches my distribution” and ignored it.</i>
Similarly, it's not the shooting itself, so much as the reaction to it. The media refused to publish the shooter's manifesto and instead rushed to lecture us all on the dangers of transphobia; Joe Biden made a speech calling transgender children "the soul of our nation". So whilst the shooting probably shouldn't change your priors very much on the likelihood of left-wing transgender violence, it probably should change your priors on the likelihood of the left in general condoning violence when it comes from someone they view as sympathetic -- again, unless your priors are already weighted in that direction.
I agree with the basic sentiment in the article, and I guess I'm jaded enough about the world that I don't have the feeling of updating much when bad stuff gets reported. But as people have hinted in different ways in their responses, there is a big difference between society-wide expectations about the world, and individual ones.
A functioning society makes sure to have people engaged in estimating all kinds of dangers, and figuring out how much it makes sense to prepare for them and/or prevent them. So there is a conversation going on, and institutions get created and specialists do their research, and the occasional interested member of the general public may join in.
Whereas on an individual level, for things like pandemics or major terrorist attacks, where you individually have no role to play either way, it's perfectly rational to just not have or seek a precise prior. You basically know the thing exists, but you don't need to worry about how likely it is because there isn't anything you would do with more accurate information anyway. So you just think, if it happens in my lifetime, I'll notice it right there and then.
Motorcycles are fun, because the safe thing to do is often the opposite of what instinct tells you. If you are too fast in a sharp turn, your brain screams at you to slow down and round off the turn, but the safe thing is to lean into the turn and roll on the throttle, because it forces the tires into the road and increases traction. Your survival instinct actually increases your chance of dying.
The Wuhan lab thought they were lowering the risk of a pandemic by tracking down potential pandemic pathogens. They were trying to be proactive (hugely overrated), but in the process caused the thing they were trying to prevent.
The operators of Chernobyl were trying to lower the risk of a meltdown, but in the process of running a safety test caused the thing that they were trying to prevent.
Complex systems increase the possibility that higher order effects of precautionary measures will cause more damage than doing nothing. This basically neuters the precautionary principle. People seem to understand this intuitively when it comes to the environment, which is why there is such a strong hesitancy around geoengineering, or efforts to wipe out malarial mosquitos. When it comes to society or the economy we're much more interventionist.
That's what Hayek meant when he said "the curious task of economics is to teach men how little they understand about what they think they can control".
Motorcycles are stupid, because often there is no "if" you'll lay the bike down, it's "when", and 80 percent of accidents end in injury or death. The safe thing is not to ride them, because even the safest rider can't always control others, and the penalty for one mistake can be ruinous.
My cousin suffered brain damage from one accident. If anything, people understate how dangerous that kind of complex system can be.
"Forces the tires into the road and increases traction", LOL. Do you actually think this is true? I do ride, although I have much more experience with cars on the track (as a driver and a tuner). The only thing forcing the tires into the road is gravity. On or off throttle definitely changes weight balance, although, if you want a tighter turn, I would expect you would want more weight on the front (meaning deceleration). It could change the geometry (definitely fork extension), but again, this is front vs. rear weight balance. I guess the biggest improvement from not dropping throttle is improved stability (again, through weight balance).
If the motorbike (or car) produces downforce, then wouldn't higher speed force the tires into the ground and cause more traction?
No motorbike produces downforce. In order to produce downforce, you need a wing and it needs to be (more or less) parallel with the ground. Since motorcycles corner at varying and dramatic lean angles, you would need a wing that moved in opposition to the lean of the motorcycle. You could try to do it with 2 fixed wings, but then you would have no downforce except at one very specific lean angle. This would likely make the bike impossible to ride, since leaning into and out of turns would cause the grip level to rise and plummet abruptly. Even in cars, only the most extreme wings produce actual (net) downforce. The fairly large wings you see on some Porsches, for example, merely cancel out lift (at the rear) to prevent the car becoming dangerously unstable at higher speeds. In other words, they prevent traction reduction at speed rather than increase traction at speed.
[edit: You wouldn't want a car to have a rear wing that increased traction at speed unless you also had a front wing that increased traction. Otherwise, you would find that as you increased speed, the car would gradually lose its ability to turn (since turning ability is defined by the ratio of front to rear grip).]
If no motorbike produces downforce, then why do I get a lot of articles and videos discussing winglets and downforce on motorbikes by simply googling "downforce motorbike"?
Most likely, it is because people will pay for it despite the fact that they do nothing. The ones that I see are over the front wheel. If it was over the rear wheel, you could argue that perhaps in a straight line it improves stability and braking. You definitely don't need more front grip in a straight line. Under braking, every bike already has enough straight line front-end grip to "endo."
I found this video on the subject to be illuminating: https://m.youtube.com/watch?v=Y3nEbwOTN3g
An example of downforce calculations for a 4-square-foot (576 sq in) wing (that's four feet long and one foot wide). The article suggests 37# of downforce at 60 MPH and 66# at 80 MPH. Considering that bike wings are about 1/100th that area, if one were positioned at the right angle to help you at full lean, then going from 60 MPH to 80 MPH would gain you about 0.29# of downforce. For a 500# bike, that is about 0.06%. However, your cornering speed has now increased by 33%.
The effect you are talking about is not speed related, it is acceleration related. Immediately upon increasing throttle input, the bike feels more settled. This is weight transfer to the rear and/or geometry changes. You weren't (presumably) trying to say that 30 seconds later, when you reached 80 MPH, you had 0.06% more grip.
[ edit: forgot the link https://occamsracers.com/2021/02/05/calculating-wing-downforce/ ]
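As a sanity check on those figures, assuming the standard approximation that downforce scales with the square of speed and linearly with wing area (the 37# and 66# figures come from the linked article; the 1/100 area ratio is the commenter's assumption):

```python
# From the linked article: a 4 sq ft wing making 37 lb of downforce at 60 MPH.
full_wing_60 = 37.0

# Downforce ~ v^2: predict the 80 MPH figure from the 60 MPH one.
full_wing_80 = full_wing_60 * (80 / 60) ** 2   # ~65.8 lb, matching the quoted 66 lb

# A winglet with ~1/100th the area gains ~1/100th of the extra downforce.
winglet_gain = (full_wing_80 - full_wing_60) / 100   # ~0.29 lb

# As a fraction of a 500 lb bike:
fraction = winglet_gain / 500

print(round(full_wing_80, 1), round(winglet_gain, 2), round(fraction * 100, 3))
```

The v-squared scaling reproduces the article's 80 MPH number almost exactly, which supports reading its figures that way; the winglet's gain then comes out as a fraction of a pound.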
>A few months ago, there was a mass shooting by a far-left transgender person who apparently had a grudge against a Christian school. The Right made a big deal about how this proves the Left is violent. I don’t begrudge them this; the Left does the same every time a right-winger does something like this. But I didn’t update at all.
Bear in mind that the media and the Twitterati aren't neutral parties. If a right-winger commits a mass shooting, it gets trumpeted far and wide as an example of evil right-wingers. If a far-left transgender person does so, it gets buried, or reported in terms that omit the perpetrator's affiliation.
Given how the media buries misdeeds done by allies, having one such misdeed that's so bad that you managed to hear about it anyway should update your beliefs towards evil leftists a lot more than the corresponding report on the other political side.
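That "buried unless extreme" argument is a selection effect, and it can be sketched with Bayes-style conditioning; the reporting filter below is entirely made up for illustration (no number here is an estimate of anything):

```python
# Toy model: each side produces events with severity s uniform on (0, 1),
# but the probability a story reaches you depends on whether the media
# buries that side's misdeeds. The filter shapes are illustrative only.
def p_hear(buried_by_media, severity):
    if buried_by_media:
        return severity ** 3   # buried unless extreme
    return severity            # reported roughly in proportion to severity

def expected_severity_given_heard(buried_by_media, steps=1000):
    # Discretize severities on a midpoint grid and condition on "you heard".
    grid = [(i + 0.5) / steps for i in range(steps)]
    weights = [p_hear(buried_by_media, s) for s in grid]
    return sum(s * w for s, w in zip(grid, weights)) / sum(weights)

# Conditional on hearing about it at all, a buried-side event was, on
# average, more severe -- so it warrants the larger update.
print(round(expected_severity_given_heard(True), 2),   # ~0.8
      round(expected_severity_given_heard(False), 2))  # ~0.67
```

The exact filter shape doesn't matter for the qualitative point: any filter that suppresses mild stories on one side makes the stories that do surface from that side stronger evidence.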
That was a very interesting and thought-provoking read. However, I’m skeptical of the OpenAI/SBF comparison. Maybe it’s hindsight (or the outside view, since this blog is basically my only window into EA), but the two affairs seem deeply different – so there’s no reason why the same lessons should apply.
In the OpenAI thing, the board (the “EA side”, I guess?) made the first public move against Altman. He was in an extremely strong position: the CEO of a company whose ground-breaking products had become household names in less than a year. The board firing him, even though it was their right, should have been backed up by comparable evidence, which the board didn't provide. Not that they had to address the public, but not even explaining themselves to other stakeholders (I'd say investors, Microsoft with which they had a partnership, and enough key employees?) destroyed their credibility. Of course, hindsight is 20/20, but this reads like Rational Debate 101: “if someone's extraordinary claim is not backed by extraordinary evidence, they're the unreasonable ones”.
For SBF, it's very different, in that the EA movement was “reactive”, and only tarred by association. There weren't any actions as resounding as the one above to justify (or were there?), so I think it would have been safest to weather the Week of Hatred (of EA people) and stay mostly silent until they could do some good again. I also doubt that SBF's position within the movement was as strong (both de jure and in perception) as Altman's at OpenAI.
Of course, it can be that in both cases, the EA movement (or “side”) didn’t have a rhetorical position they could really defend, and got outplayed by people who were better at power or status games. In which case, the (slightly snarky) lesson would be “welcome to the real world, congrats for making it, now get good”.
Regarding the “deontology vs 5D utilitarian chess” lesson, isn't this taking it to the extreme? No one (AFAIK) can play 5D utilitarian chess with any confidence. But many non-EA people can ably play regular 2D chess with a mixture of consequentialism and deontology – so maybe the conclusion is that you should stay at a level where you have strong evidence that you're not out of your depth?
By the way, I’m sure you must have thought about it (perhaps even blogged about it), but you wrote a while ago that a pandemic risk as high as one percent should have been enough to drive all states to seriously prepare for this scenario. This was, of course, completely correct.
But what would have been a reasonable prior on SBF doing something shady enough to tar EAs by association? Given that he was operating some crypto “stuff” in Bahamas without any corporate control or serious oversight, and that he became a billionaire extremely fast, shouldn’t this baseline probability have been higher than one percent? And given how bad it would be if SBF was a fraud, even if your assessment was low, how come this “tail risk” wasn’t taken into account and prepared against, since it’s so central to the movement’s stated philosophy?
It's fine to update priors and all, but the main reason we want to know whether there was a lab leak is so we can make sure the people responsible are never allowed near a lab or a funding spigot again, and also so the people who lied about it are never in any trusted position ever again.
In particular, Lancet should reject any submission with Peter Daszak as one of the authors, since he was one of the authors of the March 2020 article attacking lab leak theories, it included the claim that the authors had no unrevealed conflicts of interest, and it did not mention that Daszak was the president of an organization that had funded the WIV.
That's academic fraud.
I'm going to try to put your position and my position into a quality control analogy.
You have a part that is supplied to you by several vendors (A, B, C) and you do quality control.
You set your acceptable error rate of failed parts at X, based on a dimension Y, and one day you see that the parts from supplier B came with more failed parts than your acceptable X.
Given this factual observation, there are three possible alternatives:
- Assume that all suppliers are behaving well, treat B's failure as an expected outcome, and simply inform them that you are tightening your quality requirement on dimension Y so that you observe fewer failures above X.
- Assume that supplier B is misbehaving and take the necessary measures specifically affecting that supplier.
- Or a bit of both
You choose the first option. In my opinion, you do this for ideological reasons, since you are ideologically motivated not to blame China because that would mean agreeing with horrible people like Trump.
This is a classic example of what Taleb calls wrong, normal-distribution-based, “Mediocristan” statistical reasoning. I ask you to consider that this is not a normal situation where supplier B sent you a part out of spec: you throw it away, ask for another batch, wait for the normal distribution to operate, and expect to see everything according to the distribution again.
In this case, your supplier B sent you A HIGHLY CONTAGIOUS VIRUS. It's important to know if supplier B is a complete idiot, a son of a bitch, or both. It's important, and action must be taken accordingly.
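The choice between those alternatives is a standard acceptance-sampling question, and it can be sketched with an exact binomial tail test; the lot size and rates below are hypothetical:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing k or more
    failures in a lot of n parts if the true failure rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: shared acceptable failure rate X = 2%, lot of 500.
baseline_rate = 0.02
lot_size = 500

# Supplier B's lot arrives with 20 failed parts (4%).
b_failures = 20

p_value = binom_tail(lot_size, b_failures, baseline_rate)

# A tiny p_value means "B is just unlucky under the shared distribution"
# is hard to sustain, and supplier-specific action (the second alternative)
# looks better than raising the requirement for everyone.
print(p_value < 0.01)
```

The point of the sketch: the first alternative only makes sense when B's failure count is plausible under the common baseline; the data can discriminate between the alternatives before ideology has to.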
You make sense. I have long thought along similar lines when it came to conservatives/libertarians who decided they were less pro free market after the Great Recession. (I am thinking specifically of Richard Posner and Razib Khan.) Don't you guys read economic history? Why didn't the Great Depression make you less pro free market? It was much worse than the Great Recession, and the American economy was way more laissez-faire in the '20s than the '00s.
Please consider replacing the term "stupid people." I'd bet non-statistically literate people make up >95% of the population. Many of these people are brilliant in ways we aren't. I'm not the language police, but outright dismissal of folks who think differently from us isn't likely helping us build the community or future we need. It reads as low-confidence in-group signaling and risks the bottom 5% of your audience (the stupid people?) latching onto this kind of language and thinking. This only stood out to me because of how different it seemed from your typical style.
Love your work. Wishing you all the best in fatherhood :)
Is there evidence that the kinds of stupid people Scott is arguing against actually exist? (I know I know, no limits to stupidity, etc.) Maybe everyone is just engaging in the same sort of coordination game around visible events, however consciously?
E.g. in the Harvey Weinstein case, did most people really update how likely they thought it was to get sexually abused as a young actress in Hollywood? Or did they just think, "This is a visible and upsetting event, and everyone I know is upset about this! We can finally do something about it!"
This relies on everyone having a really good model for everything that could happen, despite there being a million issues that *someone* is saying should be my #1 priority right now.
For example, it seems like everyone's been worried for decades about bee populations. I probably first read an article about this 20 years ago, but I can't quite recall what the upshot of this "crisis" is. For now, I think it's reasonable to not make "bee policy" a major part of my voting or charitable decisions.
If tomorrow, some sort of catastrophe happens due to a lack of bees, that signals to me that I should reorient in a big way on the bee thing. I don't have to be "stupid" to wait for the catastrophe, and then update on it, I just have to have non-infinite capacity for research.
But yes, once I point my attention there, I shouldn't then assume bee stuff is our biggest problem just because it was our biggest problem yesterday.
Interesting post
But
Yes, it matters if it was a lab leak because obviously JFC
That argument is a great example of so-smart-you-are-stupid
Great response by Curtis Yarvin: https://graymirror.substack.com/p/you-will-probably-die-of-a-cold
You know it's bad when Yarvin feels the need to write a direct response article to someone
My major takeaway for the media from this post is to be assiduous about putting events into context, especially in terms of frequency and magnitude. Of course, in the heat of the moment, putting things into context is considered to be somehow betraying the victims.
Most media seem to be more concerned with generating engagement, and less concerned with sober analysis and putting things in context, unfortunately.
True. The incentives on the margin for it aren’t great.
> So this hypothetical Bayesian, if they learned that COVID was a lab leak, should have 27.5% probability of another lab leak pandemic next decade. But if they learn COVID wasn’t a lab leak, they should have 20% probability.
You talk about lab leak probabilities as if they're handed down from on high, and our job is to figure out the correct number. That is not how this works. The possibility of a Wuhan lab leak is important exactly because of the official response to it. And that response is important exactly because it *causes* future lab leaks to be more likely.
Safety procedures developed via the "some guy sits in an office and brainstorms what procedures a lab should follow and writes up rules" method have an extremely poor track record. If we used that method, then the whole system would fall over as soon as a lab encountered real-world considerations like "Hey, what if we're moving to another building and need to pack all our viruses into a van - how do we handle that?" Therefore, that isn't the method we use. Instead, safety is maintained via constantly, constantly updating.
A healthy lab is constantly seeking out times when things didn't go as planned. At a basic level, reports are solicited from individual team members, often monthly or quarterly, and filtered upward. The more serious happenings get a full report written up; different labs call these different things, usually some bland variant on "Incident of Concern". The lessons learned are then socialized throughout the team, with everyone hearing the short version and the people who think they might face similar situations able to read up on the details. Everyone intuitively grasps that the top-down regulations are often only loosely connected to reality, and that it's important to always have information flowing up and out about how the regulations meet the real world.
I cannot stress enough how central this process is to safety in the real world. The top-down regulations are nothing; the feedback process is everything.
An unhealthy lab treats the top-down regulations as perfect and does not seek out information. Any deviation from how the "guy in the office" expected the lab to work results in a punishment for whoever was foolish enough to mention it. Consequently, the gap between what the head-office rules say and how the lab actually works continually grows.
In a *pathological* lab, the entire leadership chain is complicit in fraud from top to bottom.
People who talk a lot about ethics in governance talk about "tone at the top". People at the bottom obviously do not know exactly what the people at the top are doing, but each concentric circle interacts with the circle one step closer. And each gets a sense of what they are expected to do to "play ball" (or, if they fail to learn this, their careers go nowhere).
Binance's Chief Compliance Officer wrote internally in December 2018, "we are operating as a fking unlicensed securities exchange in the USA bro." Assuming that your brain is normal, you do not directly care at all about whether your financial institution is following all the Byzantine U.S. financial regulations, such as the one where they have to file a suspicious-activity report on you if you mention that you know they have a regulation that requires them to file suspicious-activity reports on people who know about the regulation that requires them to file suspicious-activity reports. Nevertheless, I claim that you ought to be concerned about the gap between what Binance's leadership was claiming in public and what they knew in private. Because of tone at the top. As long as Binance's leadership is actively engaged in cover-ups, anyone trying to have a career in Binance will have to learn to cover up any inconvenient facts that their bosses might not like. They will have to learn to do this automatically and implicitly. When everyone in an organization defaults to cover-up mode, this has consequences far beyond the question of complying with the voluminous U.S. regulatory regime.
I am in favor of gain-of-function research. (Just as you do, I use gain-of-function research as a synecdoche for any risky research, such as collecting viruses from the wild and bringing them into labs filled with human researchers. Anything that amounts to proactively seeking out the kinds of things that cause pandemics instead of waiting for them to come to us.) We got lucky this time, in that the pandemic was basically just a standard flu, and standard countermeasures worked fine, so gain-of-function research didn't end up being relevant. (The fancy sci-fi mRNA technology was only needed to overcome the challenge of mass-manufacturing a huge number of vaccines quickly.) I have no confidence we will continue to be so lucky forever.
As far as I am concerned, the number that matters for the next pandemic is this one: https://ourworldindata.org/urbanization
The next pandemic will come. It is inevitable. The only question is how prepared we'll be when it does.
Gain-of-function research makes us more prepared. But it also (probabilistically) makes the next pandemic happen sooner. That latter makes us *less* prepared. (Not only because we'll have completed less gain-of-function research itself, but also because we'll have less development in other areas, like mRNA manufacturing.) Therefore the safety practices of labs are incredibly important.
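That trade-off can be put in a toy expected-value model; every number below is made up purely for illustration, not an estimate of anything:

```python
# Toy model of the trade-off above: preparedness grows over time, a
# pandemic's harm shrinks with preparedness, and gain-of-function (GoF)
# work adds a preparedness bonus while pulling the arrival date earlier.
# All parameters are illustrative placeholders.
def expected_harm(arrival_year, prep_rate=0.03, gof_bonus=0.0, base_harm=100.0):
    preparedness = min(1.0, prep_rate * arrival_year + gof_bonus)
    return base_harm * (1.0 - preparedness)

without_gof = expected_harm(arrival_year=20)               # later arrival, no bonus
with_gof = expected_harm(arrival_year=12, gof_bonus=0.10)  # sooner arrival, plus bonus

print(without_gof, with_gof)
```

The point of the sketch is not the outputs but the structure: how much GoF research shifts the arrival date dominates the result, which is why the safety practices of the labs doing it matter so much.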
That's what I think. I am not going to try to make an argument here in favor of gain-of-function research, tgof137-style. Maybe you don't agree with me. Maybe you want to shut it all down. Maybe you want to murder me in my sleep to stop me polluting the system with my vote. That's irrelevant. The fact is that my side won. (Even if a few programs, like DEEP VZN, become casualties of political spats, much like Donald Trump killing individual Ford Mexico plants Batman-style.) (Yes, granted no war is ever over, but HR5894 has poor prospects in the Senate, and in any case gain-of-function per se is just a synecdoche, not the whole.) We won *seven years ago*, gain-of-function research is being done even as we speak, and therefore YOU SHOULD CARE A WHOLE HECKUVA LOT ABOUT THE ETHICAL PRACTICES OF THE ORGANIZATIONS DOING IT.
What are those ethical practices? Well. In February 2020, the Director of NIAID wrote internally that "we are doing fking gain-of-function research in the Wuhan lab bro." Okay, no, he didn't have quite the way with words that Binance's CCO had, what he wrote was "the scientists in Wuhan University are known to have been working on gain-of-function experiments to determine the molecular mechanisms associated with bat viruses adapting to human infection, and the outbreak originated in Wuhan." Same content. Unlike operating as a fking unlicensed securities exchange in the USA, doing gain-of-function research was not per se illegal. (Because, again, my side won that fight seven years ago.) They could have chosen to tell the truth about what they'd been doing. But it was...suddenly politically inconvenient. And so they lied.
Meanwhile, NIAID was leaning on the scientific establishment to make, and fast-track for publication, false statements.
On January 31, Kristian Andersen, a prominent virologist with Scripps Research, wrote internally that "Some of the features (potentially) look engineered." Separately, Andersen wrote internally that "the lab escape version of this is just so friggin' likely to have happened because they were already doing this type of work and the molecular data is fully consistent with that scenario."
On February 1, the Director of NIAID corralled leading scientists to discuss the question on a conference call. We will never know exactly what was said on that call, but we do know that suddenly they started producing public statements inconsistent with their private statements. Andersen, who privately believed what he called the "lab escape" theory was more likely than not, began dismissing it as "crackpot".
We do know that the Director of NIAID directly brought together Andersen and others to produce what would become "The Proximal Origin of SARS-CoV-2", which appeared in Nature Medicine 17 March 2020. We now know that most of the authors privately believed that a lab leak was at least plausible. However, during the drafting of the paper, the leadership of NIAID and NIH (internally nicknamed the "Bethesda boys" after the NIH and NIAID headquarters in Bethesda) repeatedly pressed the authors to strongly dismiss the possibility.
The authors could have ignored this, but only at their peril. Andersen, for example, had an $8.9 million NIAID grant pending at the time. (The NIAID director signed off on it two months after Andersen's article "Proximal Origin" was published.) The virologists would have to be extremely stupid not to understand that they were expected to deceive and spin the public in order to earn NIAID money. And so that is what they did. The final paper said "we do not believe that any type of laboratory-based scenario is plausible." We now know this to be a lie; they did, in fact, believe a laboratory-based scenario was plausible.
As soon as the paper hit the presses, the Bethesda boys greeted it with feigned pleased surprise, as proof that they were right, while pretending that it had emerged independently from the process of "science" when in truth it was their own press release.
Then they went out, testified before Congress, and lied. Aggressively. In statements they were willing to have their own fingerprints on, they derided the lab-leak possibility as a "conspiracy theory" and "misinformation". (To encapsulate the overall tone of official messaging, on CBS's Face the Nation, the Director of NIAID said "they're really criticizing science, because I represent science. That's dangerous.") That's what they were willing to be quoted on. Meanwhile, using backchannels, they called it racist and demanded that the truth be censored.
Whether any individual directly lied isn't the point. (Even though, again, it is now proven that they did in fact knowingly and directly lie.) What matters is the organizational culture. We would have the same problems even if the leadership team simply intimated what sorts of things their subordinates had better not say in their presence, and carefully siloed knowledge so that no individual person could be proven to have personally directly lied. (Though, again, they absolutely did personally and directly lie, and they got caught, because their practices for *not* getting caught were extraordinarily sloppy, because they weren't all that worried about being caught, because they expected to get away with it if caught, which they did.)
Imagine that you are a low-level "dish-washer" in the Wuhan lab, or any other lab. You see something concerning. Do you say something?
Of course not. But it's more than that. You seize any excuse to destroy the evidence (and in the normal operations of a biolab, there are always plenty of excuses to destroy the evidence). That makes you actively complicit in the cover-up. And so of course you have to destroy any further evidence that might shed light on what you've been up to. A naive person might think that this might open you up to risk of consequences if you are caught. But from within the system, you can easily see that there are no such consequences. Indeed you'd be rewarded, because absolutely everyone, all the way up the chain of leadership, is complicit in the cover-up.
Go back to the discussion of how actual-in-practice lab safety works. This situation is *utterly corrosive* to that process. Given the choice, obviously anyone in the lab would prefer to be actually safe to the extent that that does not conflict with the need to lie. But it does conflict with the need to lie. As long as the situation persists, lab practices get more and more unsafe with each and every year.
The current narrative on the left is that, okay, maybe we can't keep calling it "misinformation" without being laughed out of the room, but it's not like there was some kind of *cover-up*, certainly not one that could easily happen again, it was just a "difference of opinion" or a "disagreement", and shut up shut up shut up.
It's...quite a place to be living.
> Everyone is just going to say we lied to them. We’ll be accused of fraud. That sort of argument just bugged the hell out of Sam. He hated the way inherently probabilistic situations would be interpreted, after the fact, as having been black-and-white, or good and bad, or right and wrong.
> --- from Going Infinite
> Yes, it turns out that if you tell people everything’s fine but you have reason to know it very well might not be fine, often that would constitute fraud. You cannot, in general, simply not mention or account for things that you’d rather not mention or account for.
> --- from Zvi's review thereof
And now Gray Tribe thought leaders pivot to the idea that, look, does any of this really matter, what the hey, can't we all be brothers?
There is some extremely tiny technical sense in which it doesn't matter whether there was a leak from the Wuhan lab. Because in a well-functioning organization, an Incident of Concern is investigated regardless of whether there was a known disaster or not. We are so very far from living in the kind of world where it makes any sense to say that.
Similarly, there is an extremely tiny technical sense in which it doesn't matter whether any given risky bet made by Enron went well or poorly, but if we're still pouring $50 billion into Enron every year, and also the problem is virology safety instead of just money being lost, this is not a situation that we can just shrug off. The situation is not going to get better on its own.
We are currently moving, in a tiny limited area, from the "pathological lab" case to the "unhealthy lab" case. The bosses are currently moving toward throwing the Wuhan lab to the wolves as a sacrifice. Naturally, there is no postmortem, and certainly no looking into how a paper like "The Proximal Origin of SARS-CoV-2" could have happened and how to prevent such a thing from happening again. No John Ray. Enron is still a going concern, ethical practices unchanged. If the leadership of NIAID and NIH decided to do the same thing today, there is absolutely nothing to stop them. If anything, Enron may have been emboldened by learning that no matter what they're caught doing, their political backers won't greet it as a betrayal, but instead swing into action to suppress the truth. (I doubt it. Usually people already have a pretty good idea how their bosses will react, or they don't get very far.)
> So this hypothetical Bayesian, if they learned that COVID was a lab leak, should have 27.5% probability of another lab leak pandemic next decade. But if they learn COVID wasn’t a lab leak, they should have 20% probability.
I don't *care* what the exact correct Bayesian probability would be. The point of investigating is not to get a slightly-more-accurate probability, the point is to make the probability go *down*.
I did the lab leak calculation but am getting different numbers for the observed-leak update. Would someone mind sanity checking? I'm just getting the hang of things here, anyway.
Let A denote the 0.33 decade-rate hypothesis and B the 0.10 decade-rate one. Under our problem context C, observed data D of a single pandemic, and 2 sig figs, we have:
p(A|C) = p(B|C) = 0.5
p(D|C) = 0.5×0.33 + 0.5×0.10 = 0.22 (our prior: round to nearest even)
p(D|AC) = 0.33^1 e^-0.33 / 1! = 0.24 (Poisson distribution)
p(D|BC) = 0.10^1 e^-0.10 / 1! = 0.09 (Poisson distribution)
p(A|DC) = 0.24 × 0.5÷0.22 = 0.54 (Bayesian update)
p(B|DC) = 0.09 × 0.5÷0.22 = 0.20 (Bayesian update)
So the A:B odds just boil down to straight division of the likelihoods, i.e. 24:9 = 8:3, or roughly 73:27. This is within rounding error of 74:26, so I'm unsure whether Scott just Spoonerized the 6 and 4 to get 76:24 or I am doing something wrong. That said, the final expected base rate matches up: 0.73×0.33 + 0.27×0.10 = 0.27.
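The arithmetic above is easy to sanity-check without intermediate rounding; a minimal sketch in Python (keeping full precision, the posterior odds come out closer to 72:28, so some of the discrepancy is just rounding of the likelihoods to two figures):

```python
import math

# Sanity check of the calculation above: two hypotheses about the decade
# rate of lab-leak pandemics (0.33 vs 0.10), equal priors, one observed
# pandemic (k = 1), Poisson likelihoods.
rate_a, rate_b = 0.33, 0.10

def poisson(k, lam):
    """Poisson probability of observing k events at rate lam."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

like_a = poisson(1, rate_a)  # ~0.237
like_b = poisson(1, rate_b)  # ~0.090

# With equal priors, the posterior is just the normalised likelihood.
post_a = like_a / (like_a + like_b)
post_b = 1 - post_a

print(f"posterior A:B = {post_a:.3f}:{post_b:.3f}")  # ~0.724:0.276
print(f"expected rate = {post_a * rate_a + post_b * rate_b:.3f}")  # ~0.266
```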
I think your math is formally more correct, but more complicated than what Scott did.
As I read it, he merely treated it as a binary event (either a lab leak or none), whereas you used the full Poisson distribution.
Using the simple model we would have
P(A|D) = P(A) P(D|A)/P(D) = 0.5*0.33/(0.5*0.33 + 0.5*0.1) ≈ 0.767, so just a rounding error from the given numbers.
However, the fact that you got almost the same result with your more correct model shows that the idea is robust. I also tried a model with multiple theories, with lab-leak chances going from 1% to 50% and the priors of the theories inversely related to their predicted probabilities. There too I found that having one lab leak did not change the expected probability of future leaks very dramatically.
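The multi-theory model described here can be sketched as follows. This is my reconstruction, not the commenter's actual code, and it assumes one specific reading of "inversely related": priors proportional to 1/p. The size of the update depends heavily on that choice, so the numbers need not match the linked writeup.

```python
# Reconstruction of the multi-theory model: fifty theories predicting
# per-decade leak probabilities of 1% .. 50%, with priors taken to be
# inversely proportional to each theory's predicted probability.
probs = [k / 100 for k in range(1, 51)]
priors = [1 / p for p in probs]
total = sum(priors)
priors = [w / total for w in priors]

def expected_rate(weights):
    """Probability-weighted average leak rate across all theories."""
    return sum(w * p for w, p in zip(weights, probs))

# Update on one observed leak: each theory's likelihood of the observation
# is simply p (Bernoulli), so with 1/p priors the posterior is exactly
# uniform over the fifty theories.
post = [w * p for w, p in zip(priors, probs)]
total = sum(post)
post = [w / total for w in post]

print(f"before: {expected_rate(priors):.3f}")  # ~0.111
print(f"after:  {expected_rate(post):.3f}")    # 0.255
```

With this particular prior scheme the update is actually fairly large; flatter priors over the theories shrink it, which is presumably closer to the scheme the commenter used.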
Ah, beautiful. Thank you very much. It's quite surprising how closely Bernoulli coincides with Poisson here. Have you published the analysis you did? I'd love to read it.
Oh, thanks, actually I just ran it for myself, but now I felt like writing it out a bit more. Here it is
https://christianzr.substack.com/p/on-updating-from-dramatic-news
I seem to remember a thread on Less Wrong about (IIRC) something Fermi had said there was a 10% chance of, which turned out to be true, and whenever people in the comments pointed out reasons why that was not an unreasonable figure given what Fermi could possibly have known back then, EY would smugly say something like "pretty argument, how come it didn't work out in reality" — which would have been a reasonable reply if Fermi had said something like a 0.1% chance, but which I found pretty ridiculous for 10%.
By the way, already back in mid-2020 I found lab-leak hypotheses much more plausible than the mainstream did, because of the epistemic good luck of simultaneously remembering that ESR had blogged about such a hypothesis back in early February which I had found convincing, and not remembering any of the details of his particular hypothesis, which had been pretty much ruled out by March at the latest.
"This strategy is mathematically correct..."
I'm pretty sure this entire post is rather incoherent rambling. You made up these numbers (sometimes you admit to this, e.g. "fake bayesian math"). There was no objective starting point. This isn't how thinking works, let alone thinking rationally, though I know you disagree because you believe Bayes has ascended to divinity or whatever. But Astrology is literally more objective than everything in this entire post. At least one can point to the position of planets in the sky as a starting point before making "mathematically correct" calculations pertaining to them. As an example, I'll briefly discuss your example.
You made an argument that surveys suggest 5% of people admit to having sexually assaulted someone, that this is something people would want to lie about, and so 10% actually did it? Whatever happened to the "Lizardman's Constant Is 4%"? Didn't Bush do North Dakota? Why would you assume anybody would care to answer this question honestly at all, why wouldn't they lie in the other direction to mess with you? Depending on circumstance, I probably would.
Actually, nobody admitted to anything! You can't point to them, you only have an abstraction, a number so low it's not worth anything at all in this case. And your entire post is basically just restatements of this made up math.
Here are some real reasons why one should care if Lab Leak is true. If it's true, it means some people with political and social power knew about it, but didn't tell the public. Not only did they not tell the public, they actively hid the fact. Not only was the fact hidden, an international campaign of censorship pervaded the Internet, creating a situation where to this day people are worried about what they can and cannot say online regarding this stupid virus. And then this was used by moral busybodies to start attacking people and dismissing concerns, and to feel righteous about it, when in fact they didn't know any better than anybody else who had no political power. This isn't even getting into the fact that whatever research was conducted, it seems to have done nothing to mitigate the health, societal, and economic damage from the virus itself.
Now you may note that your real point is some nonsense about future lab pandemics. This is what people call a "straw man". It's true that some people embody the straw man. But the straw man is a straw man because it's a carefully constructed argument that isn't the argument being made. While the possibility of future lab leaked pandemics is a concern, it's not the only, or even the primary concern to most people regarding Covid being a lab leak. Insofar as it is a concern, a normal person, as opposed to a good Bayesian, would weigh the usefulness of funding lab viruses/gain-of-function research in China in the first place, against the possibility of even one leak.
User banned for this comment.