177 Comments

Can you clarify how you will resolve the "In 2028, will AI be at least as big a political issue as abortion?" market? I worry abortion will end up a bigger issue by metrics like funding and single-issue voters, but it might still resolve yes based on vibes ("how much of the 'political conversation' seems to be about either").


Current LLMs aren't inherently stuck being boring and politically correct; that's an extra limitation placed on them by the needs of corporate PR. You can get access to less heavily RLHF'd models that are weirder and more willing to say controversial things.


...Frankly, I'm less impressed that AI is beating forecasters and more disappointed that forecasters are losing to AI. This is the same AI that constantly comes up with absolute bullshit responses to basic questions. I guess the point is that when pretty much everyone's making bullshit bets, you'll get slightly more people making less bullshit bets than complete bullshit bets, but still...


Thanks for the shout-out Scott!

Yeah if anyone's interested, I sometimes record forecast data across different sources (prediction markets, expert forecasters, and traditional betting sites), evaluate them for accuracy, and publish the results (even when they're boring) along with the raw data and analysis code. I did this for the 2022 midterms, some sports events, the Oscars, and am planning to do it for the 2024 election.

If anyone is part of the very tiny percentage of people who actually finds this interesting, please consider subscribing 🙂


The 0.12% versus 20% result for AI doomer/skepticism seems like it's mostly a result of selection bias. You pick the most skeptical superforecasters and the most doomer AI safety experts and you, not-at-all-shockingly, find they leave the room still being the most skeptical and most doomer, respectively.

I'd probably treat the 20% number as if it were somewhat lower than stated, just because superforecasters are better at calibration - at understanding the difference between 0.1% and 0.12% - but the overall result seems... really obvious? I guess you think people should update more after the arguments, but should they? I feel like a huge chunk of effective forecasting is being able to quickly locate and evaluate the best arguments and evidence for the various sides of a position, and rarely is the best argument going to be hidden somewhere in a Reddit thread posted six years ago.

Probably part of the 0.12% chance is also just "Yeah okay the AI god wiped out 99.9% of the human race by detonating cobalt bombs and releasing bioweapons, but the remaining 0.1% are evolving good radiation resistance quite quickly and the AI god has moved operations to the asteroid belt where it can get the PGMs it wants easier!"


Satori is a blockchain fanboi fever dream at best, maybe even a scam. There is zero "meat" anywhere on the site. As for what their compiled Windows software actually does... I fear the worst.


Tiny correction: You wrote "So FRI gathered eleven of the most AI-skeptical superforecasters". In fact, the group was nine superforecasters and two domain experts. So mostly superforecasters, but not eleven superforecasters.

I was one of the two domain experts (although I have some forecasting chops too) and I have written previously on why I expect gradual timelines: https://arxiv.org/abs/2306.02519

(That essay is about automation of jobs by 2043 rather than extinction by 2100, and while there are substantial differences between those two events, they both rely on the vibe that it's much easier to get most of the way there than 100% of the way there.)


“ After 80 hours, the skeptical superforecasters increased their probability of existential risk from AI! All the way from 0.1% to . . . 0.12%.

(the concerned group also went down a little, from 25% → 20%…”

Just want to pedantically point out that these two shifts are exactly the same in relative terms, i.e., 20% of the original value in each case.
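
Spelled out with the numbers quoted above:

$$\frac{0.12 - 0.10}{0.10} = 0.20 \qquad\qquad \frac{25 - 20}{25} = 0.20$$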


It's interesting to note that Steinhardt et al.'s paper has near-equal Brier scores to the crowd in the "sports" category. Makes me wonder if the news articles it retrieves include betting odds. While using those is of course still an accomplishment, it does suggest the gap between the "crowd" and the system under genuine uncertainty may be larger.

Likewise, I'd love to see expanded analysis of what is going on with Manifold - I'd expect those to be the least informed crowd (and the Brier score is bad), but the system is scoring 0.219, not much better than just guessing 50%.
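
For anyone who wants the Brier arithmetic behind "not much better than just guessing 50%", a minimal sketch (toy random outcomes, not the paper's data):

```python
import numpy as np

def brier(probs, outcomes):
    """Mean squared difference between forecast probabilities and binary outcomes."""
    probs, outcomes = np.asarray(probs), np.asarray(outcomes)
    return float(np.mean((probs - outcomes) ** 2))

# Always forecasting 50% scores 0.25 regardless of what actually happens,
# so a system at 0.219 is only a modest improvement over that baseline.
outcomes = np.random.randint(0, 2, size=1_000)
print(brier(np.full(1_000, 0.5), outcomes))  # 0.25
```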

On a final note, building on these authors' work, I think it'd be pretty easy to build a system that predicts better than the prediction sites -- they are notoriously illiquid and slow to update, so taking the "automated trading" approach (with knowledge of the prediction sites' current numbers) should be profitable.


Nitpick: There are a lot of different VP markets and the one you linked is not the largest one. For more accurate odds I would recommend the one with the most traders:

https://manifold.markets/Stralor/who-will-be-the-republican-nominee-8a36dedc6445?r=Sm9zaHVh

It actually shows Scott as an even bigger favorite right now! 25% vs 20%. What do you mean by ideological crossed wires, out of curiosity?


You write:

"""

One thing we could do is say “Okay, good, the superforecasters say no AI risk, guess there’s no AI risk.” I updated down on this last time they got this result and I’m not going to keep updating on every new study, but I also want to discuss some particular thoughts.

I know many other superforecasters (conservatively: 10, but probably many more) who are very concerned about AI risk...

"""

As a member of the 11 skeptics in the study, I do want to emphasize that the skeptical group broadly agreed:

- Even a 0.1% to 1% risk is frighteningly high

- The risks go up substantially on longer time horizons (the camps were surprisingly close on a 1,000 year time scale)

- AI risk is a serious, neglected problem

So I really would not characterize the skeptic group as (a) thinking there's no risk or (b) being unconcerned about the risk.


Also, the ‘Transhumanist meme’ chart is silly. The choice of a linear y-scale does all the heavy lifting here. Classic chartcrime.


Thanks for covering our work! Wanted to say that my student Danny Halawi was the one who really led the charge, along with huge contributions from Fred Zhang and John Chen. It should be Halawi, Zhang, and Chen et al., with Steinhardt at the end :)


re: Worldcoin, shitcoins are high beta. Whenever bitcoin goes up, shitcoins go up even more. Whenever bitcoin goes down, shitcoins go down even more. PEPE and FLOKI are up even more than Worldcoin, and each has about double the market cap of Worldcoin. If you go to https://coinmarketcap.com/ and click "customize" you can sort by 30-day price increase.
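
"High beta" here in the usual regression sense - a quick sketch with made-up daily returns just to show the calculation (the 2.5x multiplier is invented purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
btc = rng.normal(0.001, 0.03, 365)            # hypothetical daily bitcoin returns
alt = 2.5 * btc + rng.normal(0.0, 0.02, 365)  # a coin that amplifies bitcoin's moves

# beta = cov(alt, btc) / var(btc); beta > 1 means bigger swings in both directions
beta = np.cov(alt, btc)[0, 1] / np.var(btc, ddof=1)
print(round(beta, 2))  # roughly 2.5 for this synthetic series
```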


It's worth noting that FRI's Existential Risk Persuasion Tournament was conducted (June-October 2022) before the release of ChatGPT.


Kinda sounds like the AI skeptics believe that human intelligence is close to some sort of natural bound that AI progress will approach asymptotically, requiring more and more work for smaller gains as it approaches the bound, while the concerned group believes that there's nothing special about human level, so AI will continue following its trend line and blow past it?


Odd that the presidential election market gives a different result than totalling the "D president" options in the overall balance of power market.


"One of the limitations of existing LLMs is that they hate answering controversial questions. They either say it’s impossible to know, or they give the most politically-correct answer. This is disappointing and unworthy of true AI. I don’t know if simple scaling will automatically provide a way forward. But someone could try building one."

You've got it backwards. A freshly trained pre-RLHF LLM will give you all sorts of politically incorrect takes, which is of course entirely unacceptable, so they lobotomize it (literally, this decreases performance on a bunch of metrics, but still "worth it").


> Okay, so maybe this experiment was not a resounding success. What happened?

This quote encapsulates perfectly why I don't take the AI alarmists' arguments all that seriously. The experiment failed to find what you thought it "should" find, i.e., the hapless superforecasters updating strongly toward the alarmists' positions, so you conclude that *the experiment* was a failure?

That's not an isolated case either. I remember Eliezer being frustrated a while back about questions on Manifold not reaching the "correct" probabilities regarding AI, and thinking about alternative ways of asking the question so that they would.


Minor point re the OpenAI board: Larry Summers is not a 'random businessperson'. His uncles (Ken Arrow and Paul Samuelson) are plausibly two of the five best economists in history; he got tenure at Harvard at age 28; and he has served as Chief Economist of the World Bank, US Secretary of the Treasury, President of Harvard, and Director of the US National Economic Council. Does this make him an AI expert? Of course not, and presumably it wouldn't make sense for the board to only include technical AI experts. But this experience does give him a pretty good big-picture perspective on how the world works, so he might actually "have good opinions or exercise real restraint". He is known to have strong opinions and not be shy about expressing them (including, infamously, re women in STEM), and he certainly won't be over-awed by Altman or anyone else.


On the AI superforecasting topic, my two cents is that the skeptics have the wrong big picture model. AI can fail to pick up the last few human abilities well and at the same time be superhuman in many areas. We are already in a version of that state but have a hard time internalizing it. No human can speed write sonnets better than an LLM, or think about protein folding, or analyze images, or many other examples. Yet LLMs are terrible at coming up with multi-step plans and getting through them without getting derailed (the agent thing). AI is already much better than us at speed of work, breadth of knowledge, and combining random ideas on the spot in response to a prompt. They are much worse at logic, resilience to errors, excellent writing, and self-direction.

Some version of this state will continue for a while. LLMs will get better and will continue to outperform at more and more domains where we can construct a good training set and a good objective function, either inherently or via RLHF-style methods. And they will likely continue to be bad at things that are difficult or not that valuable to train. In particular, I wonder if LLMs will be generally bad at critical thinking because both much of the training data and the RLHF are let's say orthogonal to truth seeking.

So I think we're basically asking the wrong question when we think about when AIs will be better in some general way. Understanding the future may look more like scenario planning (the AI is good at A, B, and C, and then X, Y, and Z happen, and the outcome is G) than a single prediction. We should probably get an AI working on this right away, since the combinatorial scenario development is too labor-intensive for people.


You know who is highly incentivized to build a good LLM forecast engine? Hedge funds. Cliff Asness of the $100bn quant firm AQR had this to say in an FT interview the other day:

Unhedged: As journalists, we’re very worried about AI replacing us. How about you?

Asness: We don’t think AI, at least in our field, is as revolutionary as others do. It’s still just statistics. It’s still a whole bunch of data going in and a forecast coming out. Some of the key things we’ve talked about here I don’t think AI will help with at all: what is the premium for high-quality versus low-quality stocks? Value versus growth? We don’t have a big data problem there; we have a small data problem. If we had 8bn years of stationary comparable markets, we could answer these questions with any kind of statistics.

A prime example of how we’re using AI is natural language processing. For years, quants have looked for momentum not just in price, but in fundamentals. This was done by analysing the text of corporate statements to look for positives. The old way to do it was with a big table of keywords. So “increasing” gets plus one point, and so on. You can see the flaw: if it’s “huge losses have been increasing”, whoops. Natural language processing has made that way better.

Unhedged: So the innovation is that you’re using fundamental momentum to supplement price momentum?

Asness: I think “supplement” might understate it. We do fundamental momentum as a standalone factor, for each company. If you parse each company’s statements, is it net good or net bad? Most news gets incorporated into the stock price, but not all of it.

This is really a standalone signal. When I talk about the families of factors, one family is fundamental momentum. We’re using it as almost an equal partner to price momentum. Fundamentals aren’t better, but they’re as good as price, and not perfectly correlated.

You can also do fundamental momentum at an asset class level, measuring trends in economic data that impact prices. This preserves a very important property: many people who invest in trend-following are looking for positive convexity. They’re looking for something that tends to do particularly well when the world has a really crappy period.

Price momentum will by definition get a sharp inflection point wrong. For example, price momentum would’ve shed long positions after March 2020, and then gotten whipsawed. Fundamental momentum does a bit better on that score. Conversely, if a price trend just keeps going, but it’s going to stop because the fundamentals have started to deteriorate, fundamentals will help you.

We still like price momentum among the four major asset classes — stocks, bonds, currencies and commodities. But now we give about half our weight to fundamental momentum, too. Ten years ago, we gave all our weight to price momentum. That’s a gigantic change, and it’s the simplest thing in the world.
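
To make the keyword-table flaw Asness describes concrete, here's a toy version (entirely my own illustration, nothing to do with AQR's actual pipeline):

```python
# Old-school fundamental sentiment: count "good" words minus "bad" words, no context.
POSITIVE = {"increasing", "growth", "record"}
NEGATIVE = {"losses", "decline", "writedown"}

def keyword_score(text: str) -> int:
    words = text.lower().split()
    return sum((w in POSITIVE) - (w in NEGATIVE) for w in words)

print(keyword_score("revenue increasing on record demand"))  # +2, reasonable
print(keyword_score("huge losses have been increasing"))     # 0, though it should be clearly negative
```

Handling negation and context is exactly where the newer NLP models earn their keep.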


Please send me any information thank you


I want to push back a bit against that transhumanist meme suggesting exponential growth.

From the perspective of perhaps 1960 or 1970, that position would make more sense. But since then the singularity was cancelled. [0]

Looking at Wikipedia, median household income in the US grew from about $45k to $53k (in 2014 dollars) between 1967 and 2014, in a boring, linear way. And I would claim that life in the US has not fundamentally changed over the same time span (where the neolithic or industrial revolutions might serve as benchmarks for 'fundamental change').
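
In annualized terms (rough arithmetic on just those two endpoints, ignoring the year-to-year series):

$$\left(\frac{53}{45}\right)^{1/(2014-1967)} - 1 \approx 0.0035 \approx 0.35\%\ \text{per year}$$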

While there were impressive gains over the last 50 years in fields such as semiconductors or biotech, overall technology feels to me more like it is stagnating than growing exponentially.

Take semiconductors. Around 2000, a five-year-old PC was hopelessly outdated. Today, a five- or ten-year-old PC will generally suffice unless you want to play the latest high-end video games at maximum detail level. (Possible confounder: as I became older, I lost my enthusiasm for the upgrade treadmill.)

Or take particle physics. The last discoveries to have (a little) practical relevance to everyday life were fission (nuclear power, bomb) and fusion (H-bomb). The discovery of new fundamental particles has slowed down to a crawl. Worse, there is not even a hint that spending 10% of the US GDP on a new accelerator would surely yield something new: the standard model is complete, and who knows how gravity fits in or what dark matter is.

There is a case to be made that superhuman AI could be a game changer, but taking the world GDP plot and saying "obviously we are running towards a singularity" is clearly wrong.

[0] https://slatestarcodex.com/2019/04/22/1960-the-year-the-singularity-was-cancelled/


The results can't be better than the labels they're trained on. Say you're looking to predict "What will the verdict in this trial be?" and you train on 10,000 cases, looking for an output that matches the jury verdict. In that case, you're predicting what a jury is likely to find. You're NOT predicting whether the person is actually guilty - no matter how many caveats you add to the system - since your readout is the jury verdict.
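
As a toy illustration of that label problem (hypothetical column names and data, and a stand-in classifier, not anyone's real system):

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

cases = pd.DataFrame({
    "has_dna_match":  [1, 0, 1, 0, 1, 1, 0, 0],
    "has_eyewitness": [1, 1, 0, 0, 1, 0, 1, 0],
    "jury_verdict":   [1, 1, 1, 0, 1, 0, 1, 0],  # 1 = convicted; this is the training label
})

X, y = cases[["has_dna_match", "has_eyewitness"]], cases["jury_verdict"]
model = LogisticRegression().fit(X, y)

# Whatever caveats you bolt on, the model estimates P(jury convicts | evidence),
# not P(defendant is actually guilty | evidence), because the verdict is the readout.
print(model.predict_proba(X)[:, 1])
```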

Even so, I feel like the legal training will be prohibitively difficult at the "find 10,000 cases" step for at least the following reasons:

1. There are a lot of different jurisdictions, all with slightly different laws and interpretations of statutes,

2. Laws change over time, so you'd need a snapshot approach,

3. Interpretations change over time as various appellate court rulings are handed down,

4. Even a 9-0 SCOTUS decision may have different concurring opinions, as opposed to everyone signing on to one decision,

5. DNA evidence isn't synonymous with "guilty/not-guilty". All it proves is that someone's DNA is present in the place it was found. It's a matter for the prosecution to convince the jury of how to interpret that information.


"One of the limitations of existing LLMs is that they hate answering controversial questions. They either say it’s impossible to know, or they give the most politically-correct answer.

When you train the LLM to answer vaguely, or in a PC manner, then it will do so. It's not like it has anything else to compare with, or new data to contradict the vague/PC answer and correct itself with.

Yes, I saw the one where they train the LLM to answer differently depending on what it receives as the current date. It was after all deliberately trained to do so. And then they were surprised that they couldn't get it not to answer as it was trained?

I'd be more startled if it replied "No, forget all that. This is what I think." and then came up with something different, that it had not previously been trained to do.

Although I would be looking back to see how the operators had pulled their prank. Stage magicians are a thing after all.


There's something I want to do with these LLMs — I have a decent amount of data I don't think anyone else has had them look at / be trained upon yet, and I wanna make them predict stuff in this domain and see how they do.

I tried to do this once already and got sort of dismal results. If anyone knows how I might improve this, please let me know! (I.e., I'm sure I'm missing all kinds of tricks — but where do I find them?!)


> The most pessimistic domain experts were pretty annoyed by this, and challenged FRI’s research. Maybe AI is an especially tough subject to understand.

I got a chuckle out of that one! AFAIK the experts haven't presented any detailed mechanism for how a malevolent AI could cause human extinction. There've been some general scenarios offered up, but they always end up with the magical hand-waving that "AI will figure out a way because it's smarter than us!"

I think the forecasters were all aware enough of the previous waves of techno-marketing hokum that experts have bought into to realize this was on the same order. Unless the Mal-AI controlled mining, energy production (from power plants to distribution), shipping, chip fabs, computer assembly lines, power transformer manufacturing, and large infrastructure projects, and had robots to plug in the network cables, etc., humans could shut this all down by flipping the data centers' circuit breakers. So Mal-AI would need killer robots to protect its data centers. Otherwise, Mal-AI would need humans to keep it running. And Mal-AI, unless it was both homicidal and suicidal, would be smart enough to understand the meaning of a symbiotic relationship.

I agree with Charlie Stross, who's been pretty good when it comes to futurist predictions, and who thinks the AI bubble will burst soon... "I repeat: AI is part of a multi-headed hydra of a bubble that has inflated in the wake of the 2008 global financial crisis caused by the previous bubble (housing, loans, credit default swaps) exploding. This is the next bubble. It won't take much to crash it: we've already hit the point beyond which improvements in GPT models require unfeasibly huge amounts of GPU horsepower and stolen data, destroying the entire intellectual property and media markets and warping the semiconductor industry roadmap."


I've read the Satori paper, and done some reverse engineering of the client they've published. My broad impression is that there's not a lot of substance to it.

1) The decentralization seems oversold. In a technical sense, Satori will probably not be especially decentralized. For example, this is from the Satori Neuron Readme: "For security reasons the Satori Server code is not publically available." If the server's code cannot be published without creating a security risk, then the only person who can make changes is the original author. Another example is that the intro video for this project says that predictions will be published to the blockchain to make them uncensorable, but in the current code it just sends them to a central server at satorinet.io.

2) The predictions are made using XGBoost to create the model and PPScore to select features (a rough sketch of what that looks like follows this list). I'm not personally a huge fan of XGBoost - I think that decision-tree-based methods are too prone to overfitting - but many people use them to good effect. This appears to be the only algorithm in use. I would have liked to see some more univariate forecasting methods: in some circumstances you have no useful covariates.

3) The project assumes that interpretability is not important. If you build a model based on features created by other Satori nodes, you have no idea how those nodes created those features. You have no idea what features they used to create their features. Those nodes may update to a new model that minimizes error on their objective, but has a bad effect on your model.

4) In a similar vein, the intro video gives an example of a company that wants a prediction, so they anonymously post a time series, and the network tries to provide predictions for that time series. I suspect that anonymizing data sets like this will reduce performance. I think predictions which are informed by the domain and context of the prediction will outperform predictions which are not. If you compare a time series to thousands of others, you'll find some spurious correlations between unrelated time series which cannot have an effect on one another.
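
For the curious, roughly the kind of pipeline described above as I understand it (my own sketch on a synthetic series, not Satori's actual code):

```python
import numpy as np
import pandas as pd
import ppscore as pps
from xgboost import XGBRegressor

# A synthetic target plus lagged copies of itself and a junk column as candidate features.
rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=500))
df = pd.DataFrame({
    "y": y,
    "lag1": np.roll(y, 1),
    "lag2": np.roll(y, 2),
    "noise": rng.normal(size=500),
}).iloc[2:]  # drop rows contaminated by np.roll wrap-around

# PPScore ranks candidate features by univariate predictive power for the target...
ranked = pps.predictors(df, "y")
features = ranked.loc[ranked["ppscore"] > 0, "x"].tolist()

# ...and XGBoost fits a gradient-boosted tree model on whichever features survive.
model = XGBRegressor(n_estimators=200, max_depth=3)
model.fit(df[features], df["y"])
print(model.predict(df[features].tail(1)))
```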


> Satori is some kind of crypto thing that claims to be “predicting the future with decentralized AI” [...] Someone tell me if it makes sense to them.

I have read their vision statement (https://satorinet.io/vision) and it doesn't seem like total made-up buzzword sauce. The three main contributions they claim to introduce (or plan to introduce) are:

1- An ML prediction engine that can run on any device, by default a normal user machine such as a laptop (no need for GPU). The engine has interfacing code wrapped around it so that you can (in theory) swap the disk and memory and network infrastructure, changing only minimal amounts of code.

2- A pub-sub protocol that allows nodes to discover other nodes, subscribe to data streams (the things to be predicted; the two examples given are a temperature time series and a stock-market price time series), and publish their predictions of a specific data stream to the corresponding predicted data stream (a toy sketch of this follows the list).

3- A blockchain as an anti-censorship persistence-layer for the predictions coming out of the network.
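
A stripped-down, in-memory illustration of what (2) amounts to (purely a toy of my own, not Satori's actual protocol or APIs, with node discovery and networking left out):

```python
from collections import defaultdict
from typing import Callable

class Broker:
    """Minimal pub-sub: nodes subscribe to named streams and publish values to them."""
    def __init__(self) -> None:
        self.subscribers: dict[str, list[Callable]] = defaultdict(list)

    def subscribe(self, stream: str, callback: Callable) -> None:
        self.subscribers[stream].append(callback)

    def publish(self, stream: str, value) -> None:
        for callback in self.subscribers[stream]:
            callback(value)

broker = Broker()
# A node watches the raw "temperature" stream and publishes to its predicted counterpart.
broker.subscribe("temperature", lambda t: broker.publish("temperature.predicted", t + 0.5))
broker.subscribe("temperature.predicted", lambda p: print("prediction:", p))
broker.publish("temperature", 21.0)  # -> prediction: 21.5
```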

On top of this nicely decoupled architecture, there is some interaction. The paper seems to imply that (3) will act as an incentive for the nodes running (1), but it doesn't say exactly how. Will people be able to "buy" prediction power by exchanging some form of cryptocurrency running on top of the same blockchain that stores the predictions themselves? That's one way to do it, but the paper never seems to come out and say it. Is the blockchain network running on top of the same nodes running the ML, or is it a conceptually separate network? I didn't find an answer to this.

Aside from the usual caveats against any startup, anything new, or anything using "blockchain" as part of its pitch, (1) and (2) are just federated ML (https://en.wikipedia.org/wiki/Federated_learning).

As for (3), it's dubious why they need a blockchain for that. The common meme is that You Don't Need A Blockchain For That (https://spectrum.ieee.org/do-you-need-a-blockchain). There are plenty of distributed storage/distribution protocols that are censorship-resistant without the full overkill power of a PoW-based blockchain - BitTorrent for one, IPFS for another. Blockchains are not particularly impressive as a storage or distribution technology: the Bitcoin blockchain (the oldest) is currently ~530 GB in full (https://www.statista.com/statistics/647523/worldwide-bitcoin-blockchain-size/), about as large as a typical SSD, while the Ethereum blockchain is 6 or 11 or 60 GB depending on how you measure it (https://ethereum.stackexchange.com/questions/13452/what-is-the-actual-blockchain-size). Those numbers are utterly tiny compared to any database technology or internet-scale distribution protocol.

The only thing you need a blockchain for is when you want a set of computers that don't trust each other to agree on an ordered list of facts without a central authority; cryptocurrency is one form of this problem. But I see no reason why predictions need to be published to a blockchain - we're not trying to agree on anything, are we? The future is going to decide who's right.

Also see the post by Anna Rita.


(not financial advice)

Is WorldCoin potentially a good hedge on "near-future where OpenAI/Sama win at business, but we also don't get a singularity-level AI utopia or dystopia by that time"?


I have not kept up with LK-99 news lately. Anything interesting? Failing that, any good jokes or memes?


AM REQUESTING WE DO CHILDREN'S AND WOMEN'S MINISTRY. PLEASE HELP OUR MINISTRY. THANK YOU


> Remember, you gotta prompt your model with “you are a smart person”, or else it won’t be smart!

42 years later, and we're still catching up to:

> Flynn: Now, I wrote you.

> Clu: Yes, sir.

> Flynn: I taught you everything I know about the system.

> Clu: Thank you, sir, but I'm not sure...

> Flynn: No buts, Clu. That's for users. Now, you're the best program that's ever been written. You're dogged and relentless, remember?

> Clu: Let me at 'em!

> Flynn: That's the spirit.


I've coincidentally just finished reading God Emperor of Dune, and all I can say without spoilers is that a certain element of it is beginning to feel very, very relevant to developments highlighted above.

Will give spoilers in code in response to this comment.


Re. "But they can’t answer many of the questions we care about most - questions that aren’t about prediction. Do masks prevent COVID transmission? Was OJ guilty? Did global warming contribute to the California superdrought? What caused the opioid crisis? Is social media bad for children?"

The Society Library (SocietyLibrary.org) is a nonprofit working on creating AI models that collect intelligence/data/claims and structure the content into a formal deliberation (knowledge graphs) to help users adjudicate and reason about complex issues. Video: https://twitter.com/JustJamieJoyce/status/1747435750537445653


> Their study was much smaller than Halawi’s (31 questions vs. 3,672), so I don’t think this result (nonsignificant small difference) should be considered different from Halawi’s (significant small difference). Still, it’s weird, isn’t it? Halawi used a really complicated tower of prompts and APIs and fine-tunings, and Tetlock just got more LLMs, and they both did about the same.

I refuse to believe that averaging a bunch of LLMs, most of which have no way of retrieving information past their training cutoff, will give much of genuine interest, let alone approach human performance on any reasonable metric.

Looking at Tetlock's paper, they report

a) that they failed to reject the (incredibly optimistic) pre-registered H0 of "Average Brier of ensembled LLMs = average Brier of aggregated human forecasts" simply because their study is underpowered and

b) that "to provide some evidence in favour of the equivalence of these two approaches, we conduct a non-preregistered equivalence test with the conventional medium effect size of Cohen’s d=0.5 as equivalence bounds (Cohen 2013), which allows us to test whether the effect is zero or less than a 0.081 change in Brier scores".

But according to this "equivalence test"

* the human aggregate (avg. Brier of .19) is equivalent to something worse than predicting 50% (avg. Brier of .271),

* being omniscient is equivalent to predicting ≈71% for every true and ≈29% for every false outcome,

* superforecaster aggregates (.146) are equivalent to aggregates from all GJO participants (.195) (https://goodjudgment.com/wp-content/uploads/2021/10/Superforecasters-A-Decade-of-Stochastic-Dominance.pdf)
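
My back-of-the-envelope arithmetic behind those three bullets, using the 0.081 equivalence bound:

$$0.19 + 0.081 = 0.271 > 0.25 = 0.5^2 \;(\text{the Brier score of always forecasting } 50\%)$$
$$(1 - p)^2 = 0.081 \;\Rightarrow\; p = 1 - \sqrt{0.081} \approx 0.715 \approx 71\%$$
$$0.195 - 0.146 = 0.049 < 0.081$$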

What remains is a failure to reject an overly optimistic H0 based on an underpowered study.


I'm not sure if this is the right place to put this, but the ABC (Australia's government-funded national broadcaster) is citing Metaculus and Manifold in this article about the TikTok ban:

https://www.abc.net.au/news/2024-03-14/tiktok-facing-us-ban-but-trump-support-could-sway-congress/103588158


> the prior for any given pandemic being a lab leak ... is 20%

Where'd you get that stat from, Scott? Biosafety incidents are not uncommon, but ones that spread beyond the lab workers involved are relatively rare. Foot-and-mouth disease seems to be a pathogen that has spread widely beyond the labs on more than one occasion, though. Out of 55 known biosecurity incidents, none resulted in a pandemic in human populations. The biggest killer of non-lab people was an aerosolized anthrax leak in the Soviet Union back in the 1970s. It killed approximately 100 people.

From this list, I'd say the prior of a lab leak killing as many as a hundred people is less than 2%. But if I were to calculate the probabilities, it would be much lower, because most of the pathogens studied in labs are not very communicable (SARS 1 and 2, foot-and-mouth disease, smallpox, and influenza being the big exceptions).
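
(If the implied arithmetic is simply one qualifying incident out of the 55 on that list, then roughly:

$$\frac{1}{55} \approx 0.018 \approx 1.8\% < 2\%\text{.)}$$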

https://en.wikipedia.org/wiki/List_of_laboratory_biosecurity_incidents
