338 Comments
Comment deleted
Expand full comment

buy all the toilet paper

Expand full comment

We got through fine without TP. Selling stocks was more profitable though.

Expand full comment

I actually stocked up plenty on toilet paper in March of 2020, with the reasoning "apart from taking up space in a way that doesn't matter to me and the minuscule opportunity cost of paying now rather than later, this costs me nothing as the toilet paper will be used up _anyway_ eventually, but ending up without toilet paper is a reasonably big deal".

Some still remains, but it's not like that was ever a problem.

Expand full comment

That's nice for you. Now look at the bigger picture.

When people started buying toilet paper in bulk, the supermarkets struggled to keep their shelves stocked. This caused other people to be unable to buy toilet paper when they went shopping. The negative consequences for them ranged from the mild inconvenience of having to go again at a later time, through the mild fear that they might run out before getting some, to the very "reasonably big deal" that you wanted to avoid for yourself.

Expand full comment

That actually didn't happen here - a benefit of living in a major paper-producing country, I guess. Supply thinned out a bit briefly (purely because of logistics), but shelves never went empty.

Further, securing my supply made for a minuscule, negligible part of the entire picture. Martyrdom - especially over something like toilet paper - doesn't seem very satisfying.

Expand full comment

Okay, let's look at the even bigger picture. Part of the TP shortage came from people spending more time at home and less time at restaurants. So there was more demand for the toilet paper that people used at home and less demand for the type that people used in restaurants. But if you weren't picky about the type of toilet paper that you got, you could order TP from a restaurant supply store and get your rolls. This situation was, weirdly, not as hard to solve as people made it out to be.

Yes, stockpiling contributed to the problem. But it wasn't the primary cause.

Also, TP manufacturers could have ramped up production. But that would have meant ramping production down below normal later, which would have cost extra money overall that anti-gouging laws made it illegal to recoup. So anti-gouging laws also contributed to the shortage. And a general willingness to use the law to control costs, even if such control resulted in shortages, contributed to the shortages.

So part of the 'big picture' is, arguably, that the public favors controlling costs over preventing shortages.

Expand full comment

This is worth generalizing to *all* non-perishable goods.

When downstream business owners complain about the fragility of JIT economics in their upstream supply chain, I ask them "well, why don't YOU hold stock in YOUR warehouse to smooth over the glitches?" Usually the answer is "I can't afford that!" Me: "Neither can your suppliers."

At the end-customer level, always stay fully stocked up on nonperishables. It's cheap. Stop treating the local grocery store as your personal goods-storage warehouse. If you say "But I don't have room..." - move someplace cheaper.

Expand full comment

I'd like to see amendments to anti-gouging laws that acknowledge prices will understandably increase during a shortage. Room in a warehouse is cheaper than room in a residence, so it makes perfect sense for people to want businesses to do their stockpiling for them - but we have a system that often makes it illegal for anyone but the end customer to stockpile.

Expand full comment

>It’s better to miss the chance at making an early fortune on Bitcoin than to lose what you do have chasing other hot new investments that fail.

Remember that investment is asymmetric. You can have an arbitrarily high profit (someone who invested in Bitcoin in 2011 could have realized a 50,000x return!), while at worst you'll lose only 100% of what you put in.

This is what angel investors do: they seek out opportunities for huge multiples, and have deep enough pockets to ride out the failures.

Failure for an angel investor doesn't mean 70% of your portfolio falling flat. Failure means missing out on the next Amazon.
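
To make the asymmetry concrete, here is a toy portfolio calculation (a sketch with purely illustrative numbers, not taken from the comment):

```python
# Toy angel portfolio: 100 equal bets, 99 total losses, one 50,000x winner.
# Losses are capped at -100% of each bet; wins are not capped at all.
bets = [0.0] * 99 + [50_000.0]     # multiple returned by each investment
portfolio_multiple = sum(bets) / len(bets)
print(portfolio_multiple)          # 500.0 - one hit dwarfs every failure
```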

Expand full comment

Not always true, actually. There are investments (well, speculation) where you're on the hook for unlimited losses but a capped return. See shorting GameStop.

Expand full comment

If you knew about COVID a month early, you could stock up and maybe get some home repairs done-- not a lot, but something.

Expand full comment

N. N. Taleb wrote a book about this.

Expand full comment
author

I read the book but I don't think it was about the thing where the existence of experts correlated with a heuristic causes people to update away from the heuristic. Maybe I missed it.

Expand full comment

I think the book itself was an update away from the expert-endorsed "assume stability" heuristic.

Expand full comment

Taleb's point was that lots of experts set rocks all the time by assuming away tail probabilities. "Almost everything happens within 2 standard deviations, so we'll set our risk tolerance at 3 standard deviations and assume nothing will ever happen outside that window." Except that guarantees that when something DOES happen outside the window, which it will eventually, you won't be prepared.

Expand full comment
Removed Feb 8, 2022 · edited Feb 8, 2022
Comment removed
Expand full comment

I’m starting to miss the young woman who likes to be photographed nude.

Expand full comment
Feb 9, 2022·edited Feb 9, 2022

I'd say it's less about ignoring tail probabilities than about using the wrong probability distribution. Making a Rumsfeld analogy: the tails of a distribution are known unknowns, whereas Black Swans are unknown unknowns.

Expand full comment

I think Scott's exercise is one of converting known unknowns into unknown unknowns. How do you take something you already know and 'unknow' it? You ignore information you have about a frequency distribution and convert it into a simple heuristic whose approximation has less information than you started with.

Expand full comment
Feb 9, 2022·edited Feb 9, 2022

All heuristics are approximations that reduce information. That's the goal of generalizing. When I want to drive somewhere, I look at a map, not a 1:1 scale model.

The trouble comes from ignoring important information, the value from ignoring irrelevant information.

Expand full comment

The normal distribution has exceptionally-thin tails, so assuming 6-sigma events won't happen in a normal distribution is actually quite safe (1 in 500 million). As such, this is a severe mischaracterisation.

Taleb's *actual* point comes earlier in the reasoning chain than that. You can't observe the standard deviation of a whole class of entities, after all; you *derive* it from observations of a *sample* and from *assumptions*. One of the more common assumptions is "this class follows a normal distribution". The problem is that a lot of things in the real world do not follow a normal distribution, and your predictions both about "the standard deviation of the class" and the more relevant "likelihood of an event past a certain point in the tail" are going to be wrong if you put in that assumption and it's untrue. To take a particularly-obvious example, the real distribution might be Cauchy. The Cauchy distribution has infinite standard deviation, so using the standard deviation of your sample (which is finite) to approximate the standard deviation of the whole class won't work (https://upload.wikimedia.org/wikipedia/commons/a/aa/Mean_estimator_consistency.gif). Six-sigma events don't happen in Cauchy distributions, either - but 6*(what you would derive as sigma if you falsely thought it was normal and took a small-ish sample) happens orders of magnitude more often than "1 in 500 million" and thus you've set yourself up to fail.
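
A quick simulation makes this failure mode concrete. This is a sketch (not from the comment), assuming a standard Cauchy distribution and a sample of 1,000 observations:

```python
import math
import random
import statistics

random.seed(0)

def cauchy() -> float:
    # Inverse-CDF draw from a standard Cauchy distribution.
    return math.tan(math.pi * (random.random() - 0.5))

# Estimate "sigma" from a small-ish sample, as if the data were normal.
# (The estimate varies wildly from run to run - itself a warning sign.)
sample = [cauchy() for _ in range(1_000)]
pseudo_sigma = statistics.stdev(sample)

# Under a true normal, |x| > 6*sigma is roughly a 1-in-500-million event.
draws = 1_000_000
exceedances = sum(abs(cauchy()) > 6 * pseudo_sigma for _ in range(draws))
print(f"pseudo-sigma: {pseudo_sigma:.1f}")
print(f"draws beyond 6*pseudo-sigma: {exceedances:,} of {draws:,}")
```

Whatever pseudo-sigma comes out, the exceedance count lands orders of magnitude above the normal-theory expectation of about 0.002 per million draws.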

Expand full comment

I didn't say anything about 6-sigma, and Scott's original example wasn't even to 4-sigma (except in the recursive example, where experts used their own reasoning to artificially inflate their own confidence), so I'm not sure where you're getting the "severe mischaracterization".

I think you have a good point about people assuming a random sampling is normally distributed, and that their sample exactly fits that normal distribution. I think you can make that point without attacking what I was saying, which was also true.

Taleb makes a lot of points about statistical distributions, especially about the way people make errors when considering how to treat the tails of the distribution. It's true that he makes the point you called out. But it's inaccurate and limiting to say that Taleb has one point in this area. One of his points is what I was talking about, where experts assume tail distributions don't matter because they're infrequent, but as Taleb points out that matters a lot if the infrequent - but very real - event wipes you out.

Another point he makes is that you can't assume the game will act independently from your own actions. In the case of the security guard, you're much more likely to be robbed if people see a rock sitting in the security station chair - especially if that rock is painted with the words "THERE ARE NO ROBBERS". In that case, your probability that it's just the wind drops from 99.9% (using Scott's hypothetically-omniscient-for-the-sake-of-the-example numbers) to something much lower, because your actions told the robbers to come to you.

Another problem is that even 6-sigma can fail, depending on how many times you go back to the probability well. A 1-in-500-million chance of a civilization-destroying meteor this YEAR might as well be zero. Take the same probability every SECOND instead of every year, and you have something to worry a lot about, because there are enough seconds to make the event likely within your lifetime.

It's a nuanced subject, with a lot of points that could be made. The point I was making is that when someone assumes a nominally low probability equals no probability, that assumption guarantees they're unable to consider any of those nuances about the tail end of the distribution. It effectively assumes away much of the distribution - including your observation that the distribution may not be normal. Once you simplify away the statistics, you become impervious to all statistical arguments.
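
The per-year versus per-second arithmetic is easy to check (a sketch using the 1-in-500-million figure from upthread):

```python
p = 1 / 500_000_000                  # per-trial probability of catastrophe
years = 80                           # roughly one human lifetime
seconds = years * 365.25 * 24 * 3600

for label, trials in [("one trial per year", years),
                      ("one trial per second", seconds)]:
    at_least_once = 1 - (1 - p) ** trials
    print(f"{label}: {at_least_once:.3g}")
# one trial per year:   ~1.6e-07 (might as well be zero)
# one trial per second: ~0.99    (near-certain within a lifetime)
```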

Expand full comment

Fair.

Expand full comment
founding

I don't think you made a very good argument on that part, tbh. There's already a lot of noise around, so a genuine consensus of experts is unlikely. It's much more likely if it's just a bubble and just their public opinions. Like the set of famous virologists converging early on "labs cannot possibly be blamed". Or public policy converging on... anything. Yes, then it applies.

Expand full comment

I think you should be thinking of these scenarios as occurring in Extremistan, where most of the "weight" is concentrated in a single event - e.g., all the "no eruption" days have relatively small positive value while "eruption" has a huge negative value. Taleb has argued that this asymmetry is what makes the "99.9% accuracy" label meaningless, since the whole value of prediction lies in getting the 0.1% correct!

Expand full comment

I guess you are making sort of an opposite point. Taleb was saying that experts mostly consult the rocks while prudent, down-to-earth regular people get the risk analysis right.

Expand full comment


I think there's some overlap but Scott did it better.

Expand full comment

No. Joseph Henrich wrote a book about this.

Expand full comment

Indeed. The underlying concept in every example stated in this article is ergodicity.

Expand full comment

A problem is that in some of these examples, there is a cost being incurred for a putative benefit (detect the ultra-rare event) and in other cases no cost is being incurred. For example, the security guard is paid a salary and could be employed in productive work. But the only cost the skeptic incurs is the time it takes them to post mean things on Twitter (and the hurt feelings of the people insulted).

I don't think your litany of examples establishes that the "worse" outcome of these heuristics is the "false confidence" (what if it's not false?) rather than "expending resources on something useless".

Expand full comment

My impression of the stories is that I thought something different about replacing the expert with the rock in each of these cases. There are some where we absolutely should just use the rock rather than the expert, but some where the costs of missing the real event are high enough that we should definitely consult the expert who tells us the volcano is erupting 10% of the time (provided that they almost always catch the real eruptions).

Expand full comment

The mayor of New Orleans has a rock like this. The state of Florida, in charge of deciding whether to evacuate the Keys, does not.

Part of the difference is not the likelihood of the event happening, but the cost of reacting - for instance, if a nursing home evacuates (and that's the population that you absolutely don't want trapped in a drowning city), some of the residents will almost surely die. (And they would not have died if they had stayed in place.) It doesn't take too many false alarms to encourage people to break out the rock. Same with the risk of looting, or lost business, or kids' schooling.

'We should do this just in case it really is a crisis this time' is far easier said when one doesn't appreciate the cost of 'this'.

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

I've been quite addicted to following fusion power news for months, and I have noticed there are people who say "we will have fusion in 10 years" and there are the skeptics who say "fusion has been 30 years away for the last 50 years" (and they've been saying roughly that for probably a good 20 years).

The skeptics have a real effect on social dynamics. Government investment in fusion is hard because even the optimistic scenario says it won't pan out until your political career is likely over. In this case the only tangible incentives you have relate to the reactions of the booster voters and the skeptical voters, and the effect their opinions have on the general opinion of voters for or against fusion investment.

At this point in the US, despite a climate-change-is-a-big-problem administration being in power, it appears that private investment in fusion now rivals or exceeds government investment, despite 10 years being the likely minimum before knowing if you have a potential success. It could be argued that the skeptic viewpoint has reduced the governmental time horizon to below the private-investor time horizon, in a way that it shouldn't have.

Expand full comment

Why should government have a longer time horizon than private parties?

Planning over long time horizons is hard. Government is hard. Trying to do both is even harder.

I appreciate that sewer socialism (https://en.wikipedia.org/wiki/Sewer_socialism) can work for some well defined and understood problems, like sewers. But trying to pick winners in technology or business for the long run is not something government does well.

Btw, climate change concerns aren't really any reason to look into fusion power. From that point of view, fission power already does everything we could want.

(Fusion is interesting for other reasons. And mostly over much longer time horizons than our climate change concerns.)

The only thing I'd want the government to do about climate change is perhaps institute a CO2 tax (or a cap-and-trade regime), and then get out of the way.

Expand full comment

I agree government would do best by putting in a CO2 tax and getting out of the way. I would add a bridge policy on the way to higher mid-century CO2 taxes: a market-like payout for carbon reductions. So regarding fusion, if you innovate a design that comes onto the market, this government quasi-market system pays you a mid-century-carbon-tax-level benefit for tonnes of CO2 avoided as the technology rolls out. Focusing incentives on the maximum reduction in carbon at a fixed and rational cost, rather than immediate reductions at variable and higher costs, would be great.

My argument isn't necessarily that government should be expected to have a longer time horizon, but that at minimum it could be longer than it is, depending on the heuristic dynamic at work among voters. Does it make more sense to spend another billion a year on fusion for a muted voter interest, or should we increase peanut subsidies (which outstrip fusion spending) with our limited budget, which will have quite a strong interest for those voters whom it concerns? Government certainly does things with a very unclear time horizon, like particle accelerators, that would be either billionaire pet projects or non-existent otherwise - so government has a larger maximum, if not average, time horizon than private entities, and this could be applied to advancing technology to where it enters private horizons.

I'd like to believe that fission could be applied as much as it should, but it seems too uncertain to rely on. Germany, with all their reputation for maximum carbon reduction, replaced their nuclear with largely coal. The deployment potential for fusion is vastly higher than fission based on proliferation risk alone - richer countries might choose to fund fusion plants in developing countries where they never would with regular nuclear. So I find fusion interesting on the 'climate change and air quality in this century' horizon, aside from being incredibly interesting in itself.

Expand full comment

"Why should government have a longer time horizon than private parties?"

Because the maximum time horizons of most private parties are too short for some socially valuable projects.

The private parties that are capable of having longer time horizons tend also to be very large concentrations of wealth, even monopolies (think AT&T's Bell Labs), and these have their own socially undesirable effects.

Expand full comment

> Because the maximum time horizons of most private parties are too short for some socially valuable projects.

I am not sure that's true? Many people manage to save for retirement just fine.

And the stock market was perfectly happy with eg Amazon and Tesla taking their time to return money to shareholders, and still have big valuations. (You can find more examples, especially in the tech sector.)

See also how stock market valuations recovered very early in the pandemic, when the coronavirus was still raging; because markets could look beyond the current predicaments into the future.

(To be clear: I am not saying that all private parties have long time horizons. Just that many private parties are quite capable of having long time horizons.)

To add some speculation: what is often seen as a short term bias in eg the stock market is just an expression of investors not trusting managers:

For an investor it can be hard to judge how good a manager or her plans are. It's easy for a manager to defend vanity projects with some vague talk of longer term benefits. One way for investors to keep managers honest is to ask for steady quarterly results. Those are hard to fake and easier to judge than longer running plans.

Even easier to judge and harder to fake are steady quarterly dividends kept up over a long time. And if a company misses their expected dividend payment, that can be a sign that something is going wrong; even if you don't care about the short term at all, and invest purely for the long term.

As an analogy: if you are working at a company (especially a tech company), and they always used to provide free snacks to employees, but are now cutting back; that might be a strong signal that you should be jumping ship. Even if you don't care about eating snacks at all. It means things are going badly enough that they have to cut back on what used to be seen as trivial expenses.

Expand full comment

Someone saving for retirement is not anything like investment in long-term research projects. (Not to mention that many people didn't successfully do so because it was in the hands of companies that could not meet their long-term pension obligations.)

I mentioned Bell Labs. Look at what they did from around the beginning of the last century through the middle of it, and see if you can find examples of other companies that have done similar things over several-decade terms.

Your example of it being easier for investors to insist on continuously good quarterly results seems to support my arguments.

Expand full comment
founding

When a private party spends or invests, they pretty much have to consider whether they're going to have enough money to put their kids through college in ten years, retire in twenty, etc. When the government spends or invests, the people actually making the decision can't afford to consider much of anything beyond how this will impact the next election cycle.

Expand full comment

That's a reason to want government to have a long time horizon, not why it will. You are using "should" for "would be desirable to be true" not "can be expected to be true." I should remain in good health for at least another century, but I don't make plans on the assumption that I will.

If you give government tasks that require a long time horizon when it has a short one, it will use the authority to achieve its short-run goals. For a real example, the long-run task of slowing climate change was used as the justification for a biofuels program that converted something over ten percent of the world output of maize into alcohol, our contribution to world hunger — because that raised the market price of maize, benefitting farmers. Al Gore, to his credit, admitted that he supported the program because he wanted to be nominated for president and there was an Iowa primary.

We continue to do it long after environmentalists figured out that it doesn't actually reduce climate change — because it does still get farm votes.

Expand full comment

Governments should be expected to have shorter time horizons than private actors. Long time horizons require secure property rights — you have to be reasonably certain that the slow growing hardwood trees you plant today will still belong to you, or someone you sold them to when they are ready to be harvested.

Politicians have insecure property rights in their political power. If you take a political hit today for a policy that will pay off in twenty years, you know that when it pays off the political benefit will go to whoever is in office then.

Expand full comment

That seems plausible in general.

Though luckily the real world isn't necessarily quite as bleak. Eg if a politician does something now that only pays off in twenty years, but real estate prices or stock prices reflect that today, the politician can potentially get their political payout straight away.

In this case, the politician needed only a short time horizon, but was piggybacking on the longer horizon of markets.

Expand full comment

I interned at a fusion lab in the late '90s; can confirm, the skeptics were saying exactly the same thing back then.

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

Economists call this the Peso Problem. https://en.wikipedia.org/wiki/Peso_problem_(finance)

The key here is that the price of an asset looks too low (say) because there is a tiny probability of a complete collapse. So what looks like a good trade (buy the asset) isn't really right, because the catastrophe (which no one really predicts, since there is no data to reliably predict it, so everyone relies on the heuristic of no collapse) happens every once in a while.

Expand full comment

Indeed, a really good example of this is that the risk models for mortgage-backed securities in 2007 all contained a rock that said REAL ESTATE PRICES NEVER FALL.

Expand full comment

That was a really, really stupid rock though, because prices had fallen plenty of times before. It wasn't like we came out of a 100-year period with only rising or flat real estate prices.

It was the establishment being arrogant that NOW they had figured it out, and the trend would actually be broken.

Expand full comment

I think it was technically "US average residential real estate prices have not fallen since the Great Depression, ie for 70+ years" (and we have changed our statistical measurements since then so we can say "ever", when we actually mean "for as long as our stats have been collected").

The problem with this is that both Japanese and UK real estate fell in recent times - Japan in the 1980s, Britain in the 1990s - and there was no principled reason for believing that the US market was different enough not to expect it to happen there eventually.

The separate UK market crash was almost entirely down to the very risky practices of one major UK lender (Northern Rock), which was run by someone who was high on crystal meth at the time; he managed to persuade lots of other lenders to take on some of Northern Rock's debt, and they also adopted less risky but still too-risky practices to compete with NR.

So Northern Rock was lending 120% mortgages; competitors were lending 105% mortgages to compete.

Expand full comment

>Northern Rock

Name checks out.

Expand full comment

I've heard a lot of bad things about Matt Ridley, but nothing about drugs. Are you sure you didn't mean Paul Flowers of the co-op?

Expand full comment

Oh, damn, yes I got them mixed up.

Expand full comment

Not really. Or rather, the real estate slowdown happened long before the recession, and wasn't such a big deal.

Later on, the Fed (and in Europe the ECB) let nominal GDP collapse. And a minor slowdown turned into the Great Recession.

It's a no-brainer that if total nominal spending collapses, loans will default.

So the rock said something like 'the Fed is half-way competent'.

Compare the Fed's much more proactive policy in the corona recession. Instead of a big long recession, we got a steep recovery and a bout of inflation. Compare also how Israel and Australia largely escaped the Great Recession thanks to competent central banks.

Expand full comment

My favorite version of this effect/question is "Why are there market orders?" That is, when I want to sell a share of stock, I have the choice of telling my broker "sell it ASAP at the current market price" (market order) or "sell it as soon as the market price rises to X" (limit order). If I place a limit order with some value of X that's, say, 0.1% above the current market price, I can be pretty confident that that order will execute quickly, just because of normal price volatility. That might make the extra 0.1% seem like "free money", and it might seem like market orders are just leaving that free money on the table. However, the problem with always using limit orders is that you're exposed to a large, rare downside: If you happen to place your order right before the price crashes, it might never get filled, and you'd end up riding the crash. In expectation, these rare losses should balance out the free money that you leave on the table by using just market orders. (Within some error bars related to transaction costs? I don't know how this actually works in practice.)
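
Out of curiosity, here is a toy simulation of that balance (a sketch under strong assumptions - a driftless random walk, no spreads, no fees - not a model of real market microstructure):

```python
import random

random.seed(1)

def price_path(p0=100.0, steps=390, vol=0.0005):
    # One trading day of per-minute prices as a driftless random walk.
    prices = [p0]
    for _ in range(steps):
        prices.append(prices[-1] * (1 + random.gauss(0, vol)))
    return prices

runs = 20_000
market_total = limit_total = 0.0
for _ in range(runs):
    path = price_path()
    market_total += path[0]                        # market order: sell now
    target = path[0] * 1.001                       # limit order: +0.1%
    filled = any(p >= target for p in path[1:])
    limit_total += target if filled else path[-1]  # else ride out the day

print(f"avg market-order proceeds: {market_total / runs:.3f}")
print(f"avg limit-order proceeds:  {limit_total / runs:.3f}")
```

The two averages come out nearly identical: the occasional unfilled order that rides a down day eats almost exactly the 0.1% of "free money", which is what the optional stopping theorem predicts for a martingale price process.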

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

This is also the explanation for why the Roulette strategy of "bet on red, then bet 2x on red if you lose, and so on" doesn't work. Most of the time you will make a small win, but a tiny percent of the time you will take an enormous loss (as there actually isn't an arbitrarily large amount of money in the world to bet, let alone in your checkbook).
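
You can watch this happen in a toy simulation (a sketch assuming a single-zero wheel and a 1,000-unit bankroll - the numbers are illustrative):

```python
import random

random.seed(2)

def martingale_session(bankroll=1000.0, base_bet=1.0):
    # Bet on red, double after each loss, stop after the first win
    # (or when the bankroll can no longer cover the next doubled bet).
    bet = base_bet
    while bet <= bankroll:
        if random.random() < 18 / 37:   # red on a single-zero wheel
            return bankroll + bet       # the win recoups all losses + base_bet
        bankroll -= bet
        bet *= 2
    return bankroll                     # wiped out mid-doubling

sessions = 100_000
results = [martingale_session() for _ in range(sessions)]
small_wins = sum(r > 1000 for r in results)
print(f"sessions ending in the small win: {small_wins / sessions:.2%}")
print(f"average final bankroll: {sum(results) / sessions:.2f}")
# Typically ~99.75% of sessions win 1 unit, yet the average ends below
# 1000: the rare ~511-unit wipeouts outweigh the frequent 1-unit gains.
```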

Expand full comment

So in Ancient Persia or India or some other ancient place the king asked the inventor of chess what reward he would like.

“All I want is one grain of wheat on the first square of my invention, two on the second, four on the third…”

Expand full comment

I don't see why the expected gains and losses from limit versus market orders should exactly balance. There doesn't seem to be any "efficient market" reason for this. Why couldn't it be that market orders are best in some situations and limit orders are best in others?

A downside of market orders is that a temporary lack of liquidity in the market might lead to you getting a bad price by insisting on trading immediately.

Expand full comment

I wonder if there's such a thing as "secretary problem orders"? Something like:

1. Set a limit order to sell at the highest price that's been seen in the last X minutes.

2. If X more minutes pass without filling that order, replace it with a market order.
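
As a sketch, the rule is only a few lines (the per-minute `prices` series and `window` parameter here are hypothetical, purely for illustration):

```python
def secretary_sell(prices, window):
    # Step 1: place a limit order at the best price of the last `window` minutes.
    limit = max(prices[:window])
    # Step 2: if `window` more minutes pass unfilled, fall back to market.
    for p in prices[window:2 * window]:
        if p >= limit:
            return limit             # limit order filled
    return prices[2 * window]        # market order at the prevailing price

print(secretary_sell([100, 101, 99, 100, 98, 97, 96], window=3))  # -> 96
```

(In this example the limit of 101 never fills in minutes 3-5, so the fallback market order executes at 96.)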

Expand full comment

GTC, or Good 'Til Cancelled. Notionally, such orders could run forever and never get filled, but the practice is to cancel (or the broker flags it up to you) within the same session. Or confirm it's good for the next. Or whatever.

Expand full comment

Strictly speaking "trade when market price reaches X" is a stop-limit order, i.e. the market price reaching the limit is just the trigger for a trade action, and you further specify what price you want to buy/sell at, be it market or some other value.

>"In expectation, these rare losses should balance out the free money that you leave on the table by using just market orders."

Most people are harmed more by a large loss than they are helped by a large gain. "These rare losses" are exactly what Taleb is referring to with his ergodicity/black swan model - the problem is asymmetric and 'balance' is not a terribly straightforward concept.

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

This is also a little like the "economists have predicted nine of the last five recessions" joke. Being able to confidently advise governments and businesses that we're falling into a recession, so that businesses can try to make choices that protect them (though those tend to actually encourage the recession to accelerate), or so governments can loosen fiscal and monetary policy and head off the recession, would be really good! And this is actually a case where sometimes being wrong is probably OK. If you have good coordination of fiscal and monetary policy, then you ought to be able to run at full employment with a smooth path of NGDP growth, and no recessions. You get the odd burst of above-target inflation when you over-react to a signal that falsely makes you think you need to protect against recession. You balance it out by running slightly tighter policy for a few months to get the long-term average back down. It's not _perfect_, but it's much less costly in the long run than the ruined lives from allowing a recession to take hold.

Australia probably has the world record for doing this right -- before the Covid crisis, they hadn't had a recession in _decades_.

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

I also referred to that one. :-) That's the opposite, though, constantly predicting the rare event because no-one cares when you get it wrong, rather than predicting the common outcome because the rare one comes up so rarely.

Predictions from psychics work this way as well. If you can land just one, you can keep pointing to that one, confident that no-one will care about the hundreds of low-publicity failed ones.

Expand full comment

You don't even need fiscal policy. Competent monetary policy is more than enough to avoid demand side recessions. Ie almost any recession that wasn't caused by covid or a war. And even the covid recession was rather short with a sharp recovery, because the Fed was competent this time, in contrast to 2008-ish.

Australia and Israel are good cases to study.

Expand full comment

I think monetary policy is probably sufficient in the large majority of cases, but in financial crashes trying to only use monetary policy ends up with the "pushing on a string" problem; you can flood the zone from the monetary perspective, but the money just sits around on bank balance sheets. I think this because we _saw it happen_ after the '08 crash. The '09 stimulus was undersized relative to the size of the hole in Aggregate Demand, and we got a sluggish recovery. The fiscal response to COVID was actually up to the size of the problem -- even perhaps a _little_ beyond. (And it would've been more effective / less inflationary if it had been better targeted, but some of the poorly-targeted stuff was probably necessary to make it politically viable.)

Expand full comment

Sounds like a good summary of Nassim Taleb's "Black Swan" and David Graeber's "Bullshit Jobs" put together. NNT's take in "Antifragile", though (super oversimplified), is that we should try to organize our systems so that being wrong that 0.1% of the time is not so bad. There's a huge downside to being wrong about a volcano erupting when you live next to a volcano, not so much if you live 100 miles away!

Expand full comment
founding

Taleb basically says what you're describing is emphatically NOT antifragility- that's plain ol' robustness against risk/disaster. Antifragile systems are systems that get stronger with occasional stress / minor disasters, if I recall his metaphors well enough.

Expand full comment

Yeah, that's true. Still, the question of "how much do you weigh the vulcanologists vs the rock" is seemingly intractable, but the question of "how do you not get buried in lava and ash" is actually easy.

Expand full comment

The security guard has a real responsibility - to act as a liability sponge if a break-in does actually occur. Also, to signal to prospective thieves and customers that this is a Secure Establishment.

Expand full comment

I wonder how broadly applicable this line of thinking is. In medicine, it's commonly observed that patients with minor ailments feel better after seeing a doctor and being given something to take, even if it would be cheaper for them to just buy an NSAID at the drugstore. So in a sense the doctor is there to send a message that you've been cared for as much as anything. I guess the extension to experts is that having experts do media appearances or having the state or federal government employ an expert panel makes people feel like the issue is being addressed, even if the experts have very banal suggestions only or just aren't listened to.

Expand full comment

You'd like _The Elephant In The Brain_ by Robin Hanson and Kevin Simler. It has a whole chapter (no. 14) dedicated to this very idea.

Here's an Amazon link:

https://www.amazon.com/Elephant-Brain-Hidden-Motives-Everyday/dp/0190495995

And here's a free PDF:

https://pdf.zlibcdn.com/dtoken/c6fe77f1028f5ac9b5c7e85d0e499ab9/The_Elephant_in_the_Brain_by_Kevin_Simler__Robin__17579654_(z-lib.org).pdf

Expand full comment

Thanks for the PDF!

Expand full comment

Patients likely feel better with time due to regression to the mean https://westhunt.wordpress.com/2016/03/31/medicine-as-a-pseudoscience/

Expand full comment

Regression to the mean is so often overlooked, although I also think the placebo effect plays a role. To whatever degree "having someone talk to you like they care" is a separable factor, that also seems important.

Expand full comment
Feb 9, 2022·edited Feb 9, 2022

But you could still replace them with a rock -- couched in a catapult aimed at the entrance whose door triggers the release mechanism to launch. Which is partly to say that I question whether the demoralizing position is a fair trade-off for whatever pilfered inventory might occur.

Expand full comment

But then who's gonna be the liability sponge :(

Expand full comment

The catapult vendor?

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

One of the large challenges here is not having a culture that solely maximizes rewards for those that choose to follow the Cult Of The Rock (because indeed this is an easy way to farm up prestige in most areas); but also trying hard not to miscalculate and over-correct too hard in the opposite contrarian direction.

This is hard too, because being a contrarian can be very fun, and in some cases much more rewarding, especially for skilled and intelligent contrarians in valuable or niche markets. While rat-adj people are not perfect at this calibration (and surely, no one is perfect, but there is always room for improvement), it does at least seem *more* well-calibrated than most mainstream areas, and I feel like I've cultivated even *more* meta-heuristics that Almost Always Work when deciding which contrarians I should and should not listen to.

Also I very much love the format, flow, and elegance of this post! It's constantly funny and soothing to read, even if I think I know what a given section is going to say before I read it.

Expand full comment

There is also the Heuristic That Almost Never Works, where you take an annoyingly contrarian position on literally everything until purely by chance you hit one out of the park and are feted as a courageous genius. Then you proceed to be wrong about everything else for the rest of your life, but no one will have the courage to contradict you. This is also an attractive strategy, to some people at least.

Expand full comment

If I remember right, one of Alex Jones's original claims to fame was that he predicted 9/11: he said something conspiratorial within a few weeks of the attack and happened to mention details that overlap with the circumstances of 9/11. The hit is supposed to improve his credibility, while all of his disproved claims don't decrease his credibility - at least, that's what it takes for this sort of trap to work.

Expand full comment

This is particularly true if you're dishonest about what you predicted.

Dilbert's creator got a lot of kudos in 2016 for predicting Trump's win. The fact that his exact prediction was that Trump would win "by a landslide" got forgotten somehow.

Expand full comment

> This is supposed to improve his credibility, while all of his disproved claims don't decrease his credibility

I think people interested in this sort of stuff are more lax about factual errors because conspiracies by nature are secretive, so the likelihood that you're getting a distorted picture of what actually happened is much higher than would be typical.

I'm not aware of anyone who's actually kept score on how often he's been right vs. wrong (and how right vs. how wrong), but that's definitely something I'd read out of morbid curiosity.

Expand full comment

I don’t consider myself an expert in something until I find a ladle (something that stops the drawer from opening all the way as expected) or until I believe something that makes me feel emotionally distressed. To do otherwise is to think that everything should work the way I expect without me ever having to get my hands dirty and that my emotional reactions are one hundred percent attuned to the universe.

Expand full comment

I’m looking at the OED and not seeing that meaning for ‘ladle’. Did I miss it in the fine print or a typo?

Expand full comment

Fair! It’s a personal term for “WTF is this I have to deal with now???” A little bit cribbed from Terry Pratchett.

Expand full comment

Is your ladle a useful thing that is for the moment hindering action/understanding, or simply something holding you up? Either way, I like this term a lot. I will be filing it alongside yak shaving.

Expand full comment

I ran into bikeshedding on the way to learn yak shaving. It could come in handy too.

Expand full comment

I think of expertise as a "map of surprises", because otherwise any reasonably smart person could just figure out whatever the field is from first principles - no need to burn time unless being reasonably smart is the only criterion. A ladle is anything worth putting on the map. (This is a horribly mixed metaphor now and probably not useful outside of my own head.) But I do believe I am pretty dumb when I use the yardstick of "how much do I know compared to how much I need to know to get something done?" rather than the yardstick of other people, and I use the ladle thing and the distress thing to check myself.

“Do I genuinely believe I am so smart that nothing should surprise me?” No. Therefore, if I know the truth, there should be a ladle.

“Do I genuinely believe the universe was built to my personal aesthetics?” Also no. Therefore, if I know the truth, something should bother me emotionally.

Expand full comment

Just want to say I really like these concepts.

Expand full comment

This is why you need a red team, or auditors. Security guard not paying attention? Hire someone to fake a break-in. Doctor not doing anything? Hire an actor. Futurist failing? Hoax them. etc. In general, this situation generally only occurs if there are only rewards for one kind of prediction. So, occasionally, make a reward for the other side, to see whether people can be replaced by rocks.

Expand full comment

This is of course a premise for a dystopian or sci-fi novel of one kind or another - in the near future, corporations control the law, but they also control all crime, and they pay the criminals to A/B test different kinds of law enforcement. Real crime has been eliminated because it is too lucrative to commit crime as a white hat, so these law enforcement systems no longer have any purpose, but keep spinning on like a perpetual motion machine, far detached from any actual purpose.

Maybe I'm falling for an absurdity heuristic or something, but I think a) the ideal ratio of white-hat cyber-attacks to real cyber-attacks is probably in the neighborhood of 10:1, and b) for most things in meatspace white hats are not very useful.

Expand full comment

Frank Herbert's very weird Whipping Star has an official Bureau of Sabotage.

Expand full comment

Or Dune's Bene Gesserit.

Expand full comment

Yes, you've got it. Society needs error-checking code that checks for errors (and incentivizes doing so) sufficiently more frequently - and hopefully more cheaply - than the bad event happens.

Expand full comment

Parity or cyclic redundancy checking?

Expand full comment

What do you mean by "hoax them" here? You're right with the guard, idk with the doctor. For the interview problem it's actually really interesting! Like, hire a super-expert to play as a candidate with a mediocre CV and see if they get accepted. Does anyone do this? I would guess that the companies that most have this problem would be the least willing to use this solution.

But I think the problem remains for things that the upper level (who employs the potential rock cultist) doesn't understand well and/or are inherently hard to fake, like a society-wide problem (volcano, pandemic) or a weird solution to it (drug etc.) working.

Expand full comment

I recall that Doris Lessing, who was a famous and frequently published author at the time, submitted a new novel under a pseudonym to a bunch of publishers and got rejected by all of them. She would count as a recognized "super-expert", but it did her no good unless she dropped her own name.

Expand full comment

That's actually not a great example, because in the case of a book author the recognized name is actually extremely important economically.

Expand full comment

Right, to elaborate: The role of a publisher isn't to publish "The best books", but to publish "The best *selling* books", and I'm pretty sure "My shopping list, by J. K. Rowling" would sell better than a genuinely good (but not masterpiece) book by an unknown author.

Expand full comment

I don't think you need an actor to test the doctor. You need someone who's actually sick but doesn't look sick to a casual glance.

Expand full comment

They do hire people/actors to test doctors in training during medical school. The key component would be ensuring that the doctors telling the actors what to say/express as symptoms test a sufficiently wide range.

Expand full comment

But the fact is, society actually does this. The mall is full of relatively transportable and cashable goods, so there is a clear incentive to break in. If nobody does it anyway, it means that the rock is right (still, it might be valuable to hire a human, because a human creates more deterrence than the rock, even when he just follows the rock).

You failed to detect cancer? I really hope you have a good malpractice insurance, because your ass is getting sued (and if you are in the US, you might be sued either way).

You are able to detect outliers that outperforms the Ivy graduate? That juicy referral bonus is all for you!

You have a contrarian position? Just for packaging it well, you can be paid handsomely. If you are even remotely right (in the sense that you predicted something somehow similar to what happened in the last decade), you are a guru!

We provide handsome incentives to defy the rock, so if people with skin in the game still follow it, one has to admit that it is impossible to do much better.

Expand full comment

Feels like Chaos Engineering on a societal scale which I think could be very useful but it would be a hard initial sell to the population. "We are going to constantly break things until they get better"

Expand full comment

Incumbent and regulated industries *hate* this, and use their political power to prevent it. (I've seen it shot down when proposed, for reasons no better than "we depend on the appearance of security to give assurance to our customers and regulators, and this will damage that".) Government rats even more so. Remember the pentest of court security case a while ago?

Expand full comment

The expert gives some serious value added over the rock because the expert provides the human interface. We’d all feel absurd looking at a rock directly. But a well dressed calm adult who listens to NPR and behaves identically to the rock? Now THAT we can trust.

Expand full comment

I was sure you were gonna end with "And that's why we need prediction markets".

Expand full comment

"and not reputation systems"

Expand full comment

In recent comments describing healthcare systems different from the US's, some people said the Dutch medical system gatekeeps healthcare like this:

"She just says 'It’s nothing, it’ll get better on its own'."

They mentioned the Dutch system doesn't keep doing this if you persist: if you're importunate enough, your problem is likely to be taken seriously.

But American doctors seem to believe American patients dislike "It'll get better on its own" as an answer. Patients demand antibiotics for colds, after all! And, as mental-health issues become less stigmatized (as they should), referrals to mental-health care, with some stock kindly words that the mind-body connection is mysterious, and one shouldn't feel ashamed if one's poor mental health manifests physically, proliferate. Then mental-health providers, who'd already be overstretched without treating patients with undiagnosed physical problems, get the patients with undiagnosed physical problems, too.

Expand full comment

The Dutch system has a bad habit of waiting too long, and then prescribing benzo IV until the problem goes away. Did I say bad habit? I meant "practice of clinical excellence".

Expand full comment

Now I'm envisioning warehouses of people on benzo drips for life because they got stuck with one of the problems that doesn't go away.

Expand full comment

No need for warehouses, benzo drips are self limiting, so to speak...

Expand full comment

Some examples are more plausible than others. Sure there are security guards who do nothing and journalists who add less value than the rock. But an investor who always says “no” isn’t generating outsized returns, and people who do this consistently definitely exist. From my perspective, prediction markets already DO exist, in venture and angel investing. Maybe instead of making narrativey arguments that people should value rationality, rationalists should all try to build massive fortunes by placing good bets and then use these fortunes to produce even better prediction markets, laws, etc.

Expand full comment

Not completely related, but your post reminds me of how Kaiser Permanente seems to run everything by simple algorithms. You don't need doctors at KP -- while they can't quite be replaced by rocks, perhaps, they can be replaced by the computers that make all the decisions.

If you have one of the 99.9 percent of problems that are solved by the computer algorithm, you think Kaiser is great. If you're a person with some kind of rare or tricky condition, or a person who only goes to the doctor once every several years, having already eliminated the 99.9 percent of things that the algorithm could have suggested, you're going to think Kaiser is crap and their doctors are useless.

Not that the doctors are idle -- they have to churn through patients every 15 minutes -- but fortunately their computer tells them a decent answer much of the time. What would happen if the computers just told the 99.9 percent of patients what to do, and doctors were not computer-tapping drones but rather highly-trained problem-solvers kept in reserve to solve trickier problems through their own brainpower?

Expand full comment

"What would happen if the computers just told the 99.9 percent of patients what to do"

You would have already solved a big part of the problem if the computer/rock could tell the 99.9% from the 0.1%.

Expand full comment

Well, I'd suppose the computer solves it better than the rock does. If the patient is like "no, it's not that; no, it's not that; no, I can google too" then Kaiser needs to figure out what to do with those outliers.

But I think it pretty much doesn't. I think Kaiser is satisfied with handing out antibiotics and doing mammograms and colonoscopies, and letting the outliers die.

So in a sense, their computers are fancy rocks.

Expand full comment

That was pretty much the premise of House, wasn’t it? A team that solved the really weird medical mysteries.

Expand full comment

Do security companies ever employ Secret Robbers? That'd be like a Secret Shopper, but instead of reporting back to the company on how nice the salespeople are, it would report on how burglarable their security is.

(Yes, I realize I just described pen-testers, but I think Secret Robbers is a better name.)

Expand full comment

They do. One of the most visible is called - I shit thee not - Deviant Ollam. That's his name. He does presentations about breaking into buildings as a white hat.

https://www.youtube.com/watch?v=S9BxH8N9dqc

Expand full comment

A friend of mine was going for a meeting at an IT company he did security work for, and decided to see how far he would get if he just shadowed people, went in through open doors, and so on. He ended up making the phone call "I'm currently in your server room without any credentials, maybe we should talk about this?"

Expand full comment

I once walked on to the trading floor of a major investment bank without any ID, and walked up and chewed out a very confused young intern for absurd reasons. All I had to do was:

-wear a suit

-take the elevator to the floor

-walk with a group of traders and let them hold the door for me

If you look like you belong in a place, people will think nothing of just letting you in. One of the easiest ways to do this is to get work coveralls and a toolbelt - nobody ever wants to get in the way of a technician with a job to do. I've heard of people robbing stores and shopping malls in broad daylight by just walking in wearing service uniforms and pushing wheeled dollies, and wheeling furniture, carpets, etc. right out without anyone looking twice.

Expand full comment

One of the best ways to get past the doorman into an overcrowded nightclub is to dress in distressed stage blacks, walk right up to the doorman at the back alley entry, show him a bag of LPs, CDs, a beat-up Mac laptop with stickers, etc., and say "I need to get this to the DJ *right now*."

Expand full comment

My gut feeling: the key to developing better heuristics is to find ways to move beyond the binaries. There is a wide spectrum between "the ache went away after a few days" and "the patient died a horrible, painful death"; there is a wide spectrum between "the technology completely upended the way society works" and "the technology tanked completely, and everyone who believed in it looked like a moron". Earthquakes and storms follow power law distributions IIRC, so there should be plenty of "noticeable, but not Earth-shattering" events to train your algorithm on before the big one hits.
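
A quick check of that last intuition (a sketch using `random.paretovariate` with shape 1.5 as a stand-in for a real magnitude distribution - the thresholds are arbitrary):

```python
import random

random.seed(3)

# Sample 100,000 event "magnitudes" from a Pareto (power-law) distribution.
events = [random.paretovariate(1.5) for _ in range(100_000)]

medium = sum(3 <= e < 30 for e in events)   # noticeable, not Earth-shattering
extreme = sum(e >= 30 for e in events)      # the big one
print(f"medium: {medium:,}  extreme: {extreme:,}")
# Roughly 30 medium events arrive for every extreme one - plenty of
# training data before the tail event hits.
```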

Expand full comment
founding

A lot of this comes down to understanding risk and reward. Our minds generally do not do well with very large and very small numbers. So a small probability event with a catastrophic result is doubly challenging for our wiring.

Expand full comment

Steve Coogan was that security guard in the 90s comedy show The Day Today:

https://youtu.be/zUoT5AxFpRs

Full version:

https://youtu.be/ob1rYlCpOnM

Expand full comment

Not sure what the point is, or whether I buy it if it's what I think it is.

It's a) not true that these cases are literally 99.9% heuristics, and b) not surprising that using the heuristic continuously puts you at a disadvantage.

Not all "is almost always right" heuristics are created equal. Some are more like 99/1, 95/5, etc., which results in entirely different risk profiles & unit economics.

The hiring heuristic: "comes from good college, has lots of experience" is more like 80 / 20 maybe? It also means those candidates are more expensive.

The people with brains add another variable, e.g. "Googliness" and experiment to what degree they change the odds & cost for hiring a good candidate.

Investors choose an area (their thesis) where they can increase the odds from maybe 1% chance of success to 2-10%.

Their "thesis" simply means they add variables to the heuristic that give them an advantage over the market.

You can think of the additional variable (that is not part of the "almost always right" heuristics) that detects the signal as the "thesis".

If you have a good thesis, you can increase your expected rate of return vs. "the market" if the market represents the default heuristic (the 99.9%).

No news that you can't beat the market if you use the same heuristics that the market uses (which is by default the one that is almost always true).

What's surprising about this? (I'm thinking this at least 50% of the time when I read NNT)

Expand full comment

This is why it's important to go beyond a simple right/wrong percentage, and look at precision/recall (or sensitivity/specificity, or however you like to label your confusion matrix).

Also, relevant xkcd: https://xkcd.com/937/
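
Spelling it out with the post's 99.9% numbers (a sketch): the rock that always says "no eruption" scores 99.9% accuracy, yet its recall on eruptions is zero and its precision is undefined, because it never issues a positive prediction at all.

```python
# Confusion matrix for the rock over 100,000 days with a 0.1% eruption rate.
days = 100_000
eruptions = days // 1000                   # 100 actual eruptions

tp, fp = 0, 0                              # the rock never predicts "eruption"
fn, tn = eruptions, days - eruptions

accuracy = (tp + tn) / days
recall = tp / (tp + fn)
precision = tp / (tp + fp) if (tp + fp) else float("nan")
print(f"accuracy={accuracy:.3f} recall={recall:.1f} precision={precision}")
# accuracy=0.999 recall=0.0 precision=nan
```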

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

Reminds me of a joke I heard:

"An American walks into a bar in Scotland and sits next to a old man drowning his sorrows in beer. After a while the old man turns to the American and says “Did you see the iron gates on the way into town?” “Yeah.” says the American. And the old man’s like, “I built those gates with me own two hands. But do they call me Seamus the Smith? Noooo." They both fall silent, but after a little while, the old man pipes up again: “Did you cross the bridges on your way into the town?” “Yeah,” says the American. And the old man’s replies, “I build those bridges with me own two hands. It took me years. But do they call me Seamus the bridge-builder? Noooo.” Not knowing what to say next, the American and the old both turn back to their beers, until finally the old man says, “Did you see that school on your way into town?” “Yeah.” says the American. “I built that school with me own two hands." says the old man "So all the wee children of this town could learn to read and write. But do they call me Seamus the school-builder? Noooo. But you f*ck ONE goat..."

Sometimes one bad thing >>> many good things

Expand full comment

Strangely, I think I took the opposite moral, which is that zero tolerance for bestiality leads to less good public infrastructure. Would you scape a goat to end a global issue? How many goats and how deeply scaped? Don't have a strong answer here, just adding my 2c.

Expand full comment

Also-relevant xkcd: https://xkcd.com/325/ (see titletext)

Expand full comment
User was indefinitely suspended for this comment.
Expand full comment

Your pseudonym is fooling nobody, Dio.

Expand full comment
author

MODERATOR ACTION: Banned indefinitely.

Expand full comment

I liked this post a lot, but was surprised you were the one to write it, because this is exactly why I *don't* put a strong emphasis on prediction markets like you do. The common grammar of all of these examples is: we used the law of the excluded middle to make a formulation "X either happens or doesn't", defined in a way where X almost never happens. Because most phenomena we actually care about are long-tailed, the cases where X does happen have disproportionately large outcomes that people very sensibly care a lot about. So the excluded middle frame (you must justify yourself as a probability that you're on the right side of some line) is a silly way to approach these problems: the impact matters much more than the frequency, and it's devastating to wait for "evidence" on the terms of the excluded middle frame, because that is necessarily retroactive. What you *actually* care about is evidence on the *plausibility of the mechanism that can cause the non-normal scenario*, which more often than not has absolutely nothing to do with probability theory.

So I think the natural conclusion of your very excellent post is that an emphasis on probabilism, prediction markets, and yes, Bayes Theorem is a bad way to deal with fourth-quadrant uncertainty. (Which means I wouldn't include the dark-horse candidate winning, since the excluded middle frame works fine when that is the literal rule everyone is agreeing to.) If your tool to manage uncertainty is "X or not X can happen, I will look each time X or not X happens and adjust my probabilities accordingly", you're exactly the sort of sucker this post is on about! The rock cultists would get absolutely rich on a prediction market! And even when the volcano erupts they'd stay rich, because it's not like they can lose millions of times more than they gained, even when the calamity of their wrong answer was millions of times worse than the convenience of their "right answers". So the rock cultists are obviously immoral and wrong to demand to be evaluated in that way, and you clearly understand why, so please let this part of your brain talk to the part that likes using Bayes Theorem on binary statements about fourth-quadrant phenomena :)

Expand full comment

Rock cultists would not "get rich" because everyone else can also consult the rock. You get rich based on information not already reflected in the market.

Expand full comment

Big doubt on this one.

Just the standard investment problem: if I predict that so-and-so will be elected Governor of Florida, and nobody else is running for Governor of Florida, then I have to wait until the Florida election actually happens - and very possibly the time horizons there will be sufficiently long that turning 99c into $1 after transaction fees is not worth it. Regardless of volume or fees, this is still a potential problem: if a wonderfully perfect prediction market comes to exist, and we're asked to predict whether there will be another recession by 2032, and I (correctly) figure with 100% probability there will be one in 2029, then it's still not worth it to me to buy at greater than 87 cents, since I could just buy 10-year treasury bills at a 1.96% rate of return and get more by 2029.

The horse race problem: bettors have a bias towards unlikely events. In horse races, this means that the favorite (the horse with the shortest odds) is basically always the best bet, since bettors aren't as interested in doubling their money as they are in increasing it twentyfold, even when the chances are 50-50 and 95-5 respectively (technically the latter pays off 19-fold). Thus, even though the Economist and 538 are publicly available information, they outperformed PredictIt's Brier scores in 2020: https://docs.google.com/spreadsheets/d/19NvyPRguCa9QYuuL2ayhxuZVgYqKmfDgix8TYWDdGKY/edit#gid=0 . One could have beaten the market by some (small) margin by just arbitraging the difference.
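On the first (time-horizon) problem, the breakeven is a one-line calculation - a rough sketch using the yield and horizon from the comment, ignoring fees and risk:

```python
# A contract paying $1 when the recession hits in 2029 competes with a
# 10-year Treasury at 1.96%/yr. Above this price, the Treasury wins.
yield_per_year = 0.0196
years = 7  # 2022 -> 2029

breakeven_price = 1 / (1 + yield_per_year) ** years
print(f"max price worth paying: ${breakeven_price:.2f}")  # ~$0.87
```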

Expand full comment

If you think you can "get rich" that easily, have you done so?

Expand full comment

I don't think you can "get rich" that easily. You can beat the other bets, but the RoI on that has to be significantly higher than bond yields and the vig has to be very low. Also, I find betting very stressful, so it's not worth the money to me, in the same way a high stress job would not be worth the added income to me.

Plus, it's a difference between being off on correct bets of, say, 88-12 versus 96-4 or 99.8-0.2 (PredictIt's odds of CO going for Biden versus 538's and The Economist's). That might be worth it, even with transaction fees, but it's not exactly going to make me rich. With zero transaction fees, it would make me 12c on the dollar; with 5% withdrawal and 10% of gains, then that would be about 6c on the dollar. I'd have to bet a hell of a lot of my money to win anything substantial.
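Sketching that fee arithmetic (the 10%-of-gains and 5%-withdrawal figures are the ones quoted above; real fee mechanics may differ in detail):

```python
# Buy at 88c, win $1: what's left after fees on gains and withdrawal?
stake, payout = 0.88, 1.00

gain         = payout - stake            # $0.12 gross profit
after_gains  = payout - 0.10 * gain      # 10% fee on the profit
after_withdr = after_gains * (1 - 0.05)  # 5% fee on withdrawal

print(f"net profit per contract: ${after_withdr - stake:.3f}")  # ~$0.059
```

Which lands roughly at the ~6c-on-the-dollar figure the comment describes.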

Expand full comment

> I liked this post a lot, but was surprised you were the one to write it, because this is exactly why I *don't* put a strong emphasis on prediction markets like you do.

I think this could actually be a case where prediction markets work to defeat the heuristics. Consider some new technology predicted to change the world. It can only change the world if people get excited about it and start using it.

The futurist denying that it will change the world will almost certainly not be one of those people, but the early adopters of that technology would almost certainly get so excited about this innovation that they'll flood the prediction market with bets that it will, thus shifting the marker. Other people who may not use the tech will see others who are using it successfully and think, "there's something there", and bet on success too.

Expand full comment

A technology that can be consciously "adopted" by individual human beings, and whose success is easily measured in the number of adoptions, has a completely different informational mechanic than the examples here, though. That's exactly the critical difference I want to highlight - the difference between momentum-based, gradual changes and situations that are characterized by long periods of normality followed by sudden breaks from that frame with disproportionate impact.

Let's take the Next Big One. The American West Coast is absolutely going to have a magnitude 8 earthquake within the lifetime of my generation or a close contemporary. Suppose it's coming in 2042 - but I don't know that. I just know the mechanism that makes a Big One inevitable *eventually*. I go to a prediction market that asks "Will there be a Big One hitting the American West Coast this year?".

The prediction market has a lot of rock cultists who are looking at "BIG EARTHQUAKES DON'T HAPPEN" and will bet against me. Me and some other people who have taken a geology class flood the market, "moving the marker" like you say. But is this going to cause a "There's something there?" reaction? Probably not - because for the next two decades, the prediction market is going to be a direct wealth transfer *from* the people who are right towards the rock cultists who are lethally wrong!

And what about people who use the prediction market to make decisions? Maybe you can argue that the initial flood of correct people does its job, but how about the intervening two decades? Anyone silly enough to take a Bayesian perspective about earthquakes will index on each wealth transfer to the rock cultists, adjusting down their probability of a big one because the excluded middle probabilist frame essentially takes each lack of big earthquakes as evidence against the existence of big earthquakes - which! they! are! not!! - and so when it actually comes in 2042, our responses will only be whatever we managed to muster after two decades of wealth was transferred from correct people to rock cultists and two decades of Bayesian thinkers mistook silence for safety. And sure, if the odds were strong enough maybe a lot of wealth goes back to the geology class attenders in 2042, but that's small consolation when the thing the market was supposed to do - strengthen our responses - hollowed them out instead.

The market approach can work for something high-touch, where there's lots of information on *both sides* of the excluded middle frame coming in, and you're trying to nail down the exact location of the line. Your "new technology" could well qualify, if the measure is adoption related. The point of this post, though, is that many critically important phenomena do not follow this sort of dynamic. Instead, they have a state of implied "normality" everything settles around, and then suddenly normality breaks. And because probability is retroactive, and because an excluded middle frame can't help but take each tick of silence as evidence against noise, prediction markets and Bayesian frames are actively misleading ways to interpret these phenomena. Does that make sense?

Expand full comment

I've seen it observed several times on ACX that prediction markets don't work well for questions whose answers are almost certain. The obvious antidote is to ignore those questions and look at questions with more ambiguous probability, like "will there be a giant earthquake in the next 20 years" - except that prediction markets *also* have trouble with long time horizons (but the latter problem might be fixable by arranging for escrowed betting capital to itself be invested in another market.)

Expand full comment

This seems to be a heuristically generated article.

The negative examples are all incredibly simplistic - the skeptics never base their skepticism on reasons or facts, the vulcanologists are honest about a potential event happening as opposed to listening to the secret rock that says The World Is Ending - Find An Excuse To Justify It. The Futurists are never talking their own book. etc etc.

Expand full comment

>the skeptics never base their skepticism on reasons or facts

If they were they wouldn't be following a heuristic, and the article would not apply to them.

Expand full comment

Hah funny - a generalization based on what? Oh, a heuristic. Thanks for playing.

Expand full comment

Yeah, much of this seems like a horde of strawmen.

Expand full comment

The examples in math books are foolishly simplistic, too. This doesn't mean they're a bad choice. It's part of what makes them a good choice.

This article is giving you a bunch of examples that you are supposed to deduce a common theme from. For this purpose it's important that they be uncomplicated.

Expand full comment

I'm not clear on how rationality checks the black swan event other than to say "a black swan event is possible". E.g., you are acting as if the 0.1% event is worth *all* possible expenditures of energy to investigate, as if it were cost-less to consider, evaluate, or investigate every highly unusual possibility. But that's often not the case at all; it's why there's a significant class of natural processes that achieve "good enough" outcomes but not perfect ones, because the energetic costs of perfection are far too high even given the potential catastrophe of a black swan event (which is inevitable in a large enough possibility space).

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

I think the basic distinction here is between people who attempt to model the underlying phenomenon (the conspiracy or volcano or thieves or whatever) vs. people who use the default answers, which are usually correct but might fail catastrophically in some situations.

Trying to model the actual phenomenon might make your average guess less accurate, and there is no guarantee you will model it well, but if the importance of deviations from the default is large relative to their frequency, it might still be worth it.

Also, it's cool to see you commenting here, I read your blog when I was in college.

Expand full comment
Feb 9, 2022·edited Feb 9, 2022

Yeah, I liked Taleb's book, but that is the problem I have always had with Black Swan thinking when it comes to living your life. I can think of an infinite number of Black Swans, and it would be costly or impossible to evaluate their actual likelihood in order to know if they were worth taking seriously given my risk-adjusted utility function. It seems that it devolves into a type of Pascal's Mugging. For a lot of possible events it seems to me that the right strategy is to say: I don't know the probability of this event, but I am pretty sure it is really small, so I am going to round it off to zero, and if it happens in my lifetime I guess I am screwed.

Expand full comment

Let us consider the security guard. What happens is that he gets bored with staring at shadows. If a robbery occurs, which he considers unlikely, he'll probably lose his job unless he detects it and sounds the alarm. The company is in a position to lose a lot more, so they have a stronger incentive to invest. (OTOH, if he confronts the intruders, he might get hurt or killed. If they sneak past him, that's a much less likely event.)

Notice that even this toy example can expand considerably once you start figuring various costs and benefits. This defeats the purpose of it being a toy example, so those are ignored.

So, yes, in actual application various entities need to consider a raft of costs and benefits, not merely those listed in the examples offered. But delving into those costs and benefits (many of which depend on details that would also need to be explicated) would defeat the purpose of the example.

If you want to really consider how to handle rare events you need to consider the costs and benefits of each suggested method of dealing with them. And this will still only cover the known unknowns.

FWIW, one of my reasons for supporting long term fully independent space colonies is that sometimes dangerous things happen, and the only way to deal with them is to be somewhere else. So by independent I mean that eventually they'll take off across interstellar space at the staggering velocity of (very approximately) 0.001c. Perhaps less. Probably driven by religion or politics. But when the solar system is irradiated by a gamma ray burster, they'll be somewhere else. This is a very rare event. I can assume there are others (wandering neutron stars?) without needing to know what they are.

So to me *this* is the way rationality deals with "low probability events". One supports things that will eventually deal with them. One doesn't excessively worry if it's going to take awhile to achieve that goal, but one does things that forward that goal, or that one believes will do so, provided that they aren't too expensive. (Lots of subjective valuations going on there.)

It's fairly certain, however, that eventually there will be a disaster in the solar system. There's a reasonable probability that one will occur that wipes out all life on earth. So we need to have independent existence elsewhere.

Expand full comment

The security guard is more useful than a rock. Even if he is just sitting there not paying attention, he is intimidating wannabe burglars who might rob the building if they thought it was completely unattended, homeless people who might move in, and kids who might get in and have a party.

Other examples are worse than rocks, because someone trusts them to be providing value. By way of more examples, I get the feeling that just about all factcheckers turned into such rocks a while ago, or perhaps were rocks to start with.

Expand full comment

To make the point work better, imagine the security guard monitoring security cameras in a windowless room.

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

It's true, he is those things, and also has value as a "liability sponge" for insurance purposes, as someone else mentioned.

I mean, now CVSs in NYC employ completely passive guards who they instruct to do nothing as the store is being robbed blind. By evidently unarmed shoplifters, that is.

Expand full comment

Arguably, the security guard was never providing value. Except, perhaps, as a box that needs to be ticked to reduce insurance rates. Whether or not he checks those noises, he’s still providing whatever value he was.

Expand full comment

Then the "Efficient Market Hypothesis"-rock must be wrong, given that insurance reducing rates because of a value-less guard is not efficient market behaviour.

Perhaps we also need a "Inefficient Market Hypothesis"-rock, to get this right somehow.

The model is getting too contradictory and involved. Consulting my "this is not worth thinking about further"-rock compels me to stop writing.

Expand full comment

Efficient Market Hypothesis is super interesting in this context - it may be that it's a heuristic that very often works, but some people take that as evidence that there's no point in trying to gather information and try to do better, because The Efficient Market Hypothesis. But the whole *point* is that you get a (reasonably) efficient market *because* people all over the place do their best to gather and analyze data to make a profit, and you might become one of them.

(Also, the corollary of the Efficient Market Hypothesis would be that it's impossible to systematically *underperform* the market, and this is blatantly false.)

Expand full comment

I don't think insurance of that sort is available as a speculative commodity.

Expand full comment

Exactly as the "Inefficient Market Hypothesis" rock would have predicted :)

Expand full comment

The market being efficient does not mean that the market is omniscient. Assume that the insurance company cannot check the effectiveness of guards (which is pretty realistic: what are they gonna do, pay robbers?). Then, as long as the AVERAGE guard lowers the probability of a store getting robbed, an efficient market will offer a "with guard" and "without guard" rate.

Expand full comment

It's never lupus

Expand full comment

Note that the one time it actually was lupus, Dr. House figured it out.

Expand full comment
Feb 17, 2022·edited Feb 17, 2022

Wait what? It was *actually* lupus once? Damn I missed that!

Dr. House is supposed to be smart, but he never notices that his first three theories are always wrong so that the episode can run to 40 minutes. If he noticed that, maybe he'd eventually figure out he's a character on TV. Or maybe he'd just go insane. Again.

Expand full comment

This made me think of Bryan Caplan's perfect betting record that is based to a large degree on just predicting that the status quo won't change much. Here's one candidate for a bet he could lose nevertheless because he dismissed something based on the absurdity heuristic: https://www.econlib.org/archives/2017/01/my_end-of-the-w.html

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

How could he possibly lose it? If the world ends, it doesn't matter that he would have had to pay the bet were he alive. It's even CPI-adjusted.

I guess the potential opportunity cost of investment, but 100% until 2030 isn't something you're likely to get out of an average investment, and certainly not at essentially zero risk.

Expand full comment

"– Bryan Caplan pays Eliezer $100 now, in exchange for $200

CPI-adjusted from Eliezer if the world has not been ended by nonaligned AI

before 12:00am GMT on January 1st, 2030."

He's already paid. The way he loses is if he doesn't get his money back before the world ends.

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

That doesn't work, does it? He has *already* accepted that he will have $100 less in his account until 2030, regardless of outcome, and decided that this is fine because he doesn't have any other better investment opportunity (he could potentially be mistaken here, but a zero-risk investment paying 5.5% inflation-adjusted is not something anyone acting legally is in a position to beat).

So either the world ends and it doesn't matter, or it doesn't end and he gets his money.

Expand full comment

I feel like the trick here is "already accepted"--yes if we ignore the cost of $100, there is no cost.

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

We shouldn't ignore that cost - for instance, Caplan very likely shouldn't take the 110-for-100 bet (as 10% return CPI-adjusted until 2030 is worse than plenty of investment opportunities), but he already *has* decided that he likes the zero-risk "investment" of 200-for-100 because it's the best use of his money until 2030. Yes, it's a virtual certainty that some possible investment will have paid off better until then, but if he's not in a position to know about it, it doesn't matter.

Basically, the question is "is getting a 5.5% yearly return CPI-adjusted at zero risk while locking up the money until 2030 a good investment?" And the answer for most people is "yes". The _actual_ risk here is desperately needing that $100 before it pays out, but he's probably safe there. I would invest in that in a heartbeat (but not everything I own).
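For what it's worth, the 5.5% figure checks out if the horizon is measured from the bet's 2017 start date (an assumption on my part; a quick verification):

```python
# Implied risk-free, CPI-adjusted annual return on $100 -> $200.
years = 13  # assuming the bet ran 2017 -> 2030
cagr = (200 / 100) ** (1 / years) - 1
print(f"annual return: {cagr:.1%}")  # ~5.5%
```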

Expand full comment

The other risk is not being able to enjoy the $100 before the world ends.

Expand full comment

For those who think Caplan is wrong, and there is a plausible chance that the world ends before January 1st, 2030: How does this affect your life? Do you choose to have children? Are you preparing in some way?

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

Will the world end in a painful or otherwise unpleasant way? If it doesn't, I should only refrain from investments - for instance, if everyone involved is happy with 8 years of having kids and then the world just blinks out of existence, that's perfectly fine (although the way I know that this will happen might affect my psychological state, which should be taken into consideration - a depressed parent will likely reduce the kids' quality of life).

Also, "plausible" is a big word. At 100% risk and a known end-date, I obviously blow through my savings and make sure to use drugs at the end. At 10% risk and an unknown date even if it happens, I probably just trim my savings for consumption a little, and I don't start doing heroin. At 1%, I don't do anything - many of us will already have more than 1% risk of dying from something unexpected before 2030.

Unless the end of the world promises to be really painful, in which case I should prep for euthanasia.

Expand full comment

The big attractors for misaligned AI are in the "kill all humans relatively quickly" camp. I don't think it's pointless to live merely because you will die, or else I'd be VHEMT.

The reasons to refrain from children would be either in the "dystopia worse than death likely" camp for the sake of those children - which can't be ruled out, but AI isn't an amazingly-obvious source of them - or the "costs of raising children outweigh the benefits given that the benefits are deferred and are therefore dubious" camp for either the sake of yourself (though given the erosion of filial responsibility a selfish player who doesn't like child-raising won't be having kids even if the world doesn't end) or the sake of society (and in this case the chance of world-end would have to be extremely high for a plausible benefit as the West needs babies).

Expand full comment

Yeah but in Scott's examples the odds in favor of the status quo changing are like 1-100 or 1-1000. Caplan's skill seems to be in convincing folks to bet on the status quo changing at almost even odds or at least at odds way worse than say 1-10.

Expand full comment

I enjoyed the one about AI safety.

Expand full comment

There's something to be said from an expert analysis that looks at the 0.1% of edge cases and tries to understand them more concretely. Experts don't need to be able to guess the exact 0.1% of cases to still be useful signal over the noise. If they can rule out catastrophe 98% of the time and confine the uncertainty to the other 2%, the heuristic no longer holds. Now, 49 times out of 50 you don't check because you know it's not hurricane season. The other 1 time you watch closely and 5% of those times there will be a hurricane. Still not frequent, but frequent enough not to ignore, or prefer a blind heuristic.

There's a difference between eliminating uncertainty altogether (an impossible problem) and reducing uncertainty to a manageable level.
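As a sanity check, the numbers in that scenario are self-consistent - the conditional pieces recompose into the original 0.1% base rate:

```python
# Watching closely 1 time in 50, with hurricanes in 5% of watched cases,
# reproduces the 0.1% base rate the blind heuristic exploits.
p_watch = 1 / 50
p_hurricane_given_watch = 0.05

print(f"overall event rate: {p_watch * p_hurricane_given_watch:.3%}")  # 0.100%
```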

Expand full comment

I get paid to do this, so I think I can explain what's happening here.

The "rock-based experts" are using a 0-intelligence model that predicts with 99.9% precision, but 0% recall. That's a bad model, but you might not know it if you never directly measure recall.

But let's say that you start applying some intelligence to the problem. The rationalist has a slightly smarter model that can optimize for either 99% precision and 50% recall OR for 99.9% precision and 20% recall depending on where he sets his threshold. So if you have enough events to start measuring recall, then the rationalist should be able to eventually beat the rock-based experts by matching their precision, but with higher recall. For super rare events (think extinction level), it's impossible to measure recall. But for slightly more common events, it might be possible, albeit difficult.
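A minimal numeric illustration of those two operating points (the confusion-matrix counts are invented to match the stated percentages):

```python
# 10,000 true events in the population; two threshold settings.
points = {
    "high-recall":    {"tp": 5_000, "fp": 51, "fn": 5_000},  # ~99% / 50%
    "high-precision": {"tp": 2_000, "fp": 2,  "fn": 8_000},  # ~99.9% / 20%
}

for name, c in points.items():
    precision = c["tp"] / (c["tp"] + c["fp"])
    recall    = c["tp"] / (c["tp"] + c["fn"])
    print(f"{name}: precision {precision:.1%}, recall {recall:.1%}")
```

Either setting matches or beats the rock's accuracy-equivalent precision while catching events the rock never will - provided events are common enough that recall is measurable at all.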

Expand full comment

This essay also feels like it's hinting at the distinction between maximising hit rate (which experts are regularly graded on) vs maximising EV (which requires a view of amplitude, not just frequency). Our discourse on expertise generally overindexes on hit rate, whereas those more focused on EV (financial investors, say) look at the world differently. The trouble might be that if you do end up predicting doom (as Dr Doom Roubini did), it carries a reputational hit, even if it's EV-maximising.

Expand full comment

Exactly

Expand full comment

An anecdote, secondhand so details murky, apropos of your 2nd example. A doctor friend in his ninth decade, still working a day or two a week, was ill, thought perhaps he had Covid, and went to get tested. It seemed instead it was some sort of myocardial infection - I forget the name, I think it started with "t". They did some work and the results were shared with a specialist, a cardiologist, I believe.

The cardiologist called our doctor friend to discuss the case with him - as a professional matter, due to his breadth of experience, not knowing they were his results.

He described the case to our friend, winding up with "What should I tell this guy?"

Our friend said, well, the guy is me, and I'd tell him to take a couple aspirin.

Expand full comment

OTOH, it's worth noting in this particular case that aspirin is a blood thinner. So that's not exactly a do-nothing treatment.

Expand full comment

This is a classical example of training data that do not capture the variance of the process generating the data. For the parents here I suggest a fun experiment:

Take a child ca. age 5 and hand them a jar of sweets. Tell them that 3 of those sweets do not taste good and that they should spit them out once they find them (of course, all of them are delicious). Measure the inter-sweet time and plot it as a function of the running number of sweets eaten.

What tends to happen is that the kids start out slow and speed up. This is paradoxical: the probability of the next sweet being terrible is always increasing, but that is in disagreement with the previously observed data (which are used for training).
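A quick sketch of the hazard the kids are ignoring (jar size assumed for illustration):

```python
# With 3 "bad" sweets among 23, the chance the NEXT sweet is bad rises
# with every good one eaten -- exactly when the kids are speeding up.
n_total, n_bad = 23, 3

for eaten in (0, 5, 10, 15, 19):
    p_next_bad = n_bad / (n_total - eaten)  # all eaten so far were good
    print(f"after {eaten:2d} good sweets: P(next bad) = {p_next_bad:.0%}")
```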

What to make of this? Choose a guard who has experienced a robbery before (or an ex-soldier), pick an old physician who saw people needlessly die due to insufficient vigilance, search for a futurist who has seen great technological revolutions and failures. Above all, try to estimate mean AND variance (in the broader sense; I know there are distributions that are weird).

Expand full comment

It's not paradoxical if the kid is realizing they've been lied to about any of the sweets not being good.

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

Yeah. An adult would probably eat one, then one a little more carefully, then hesitate before the third to decide if they think the experiment is lying to them (it's a psychology experiment, so it probably is in *some* way; that's just common sense). Then, after the third is fine, conclude that it really was a lie - it would have to be a pretty weird experiment for one or two sweets to taste bad AND for those to just happen to be the ones remaining - and quickly go for the rest.

Expand full comment

That argument holds if the number of sweets is small / the fraction of bad ones is large.

In this case one would conclude after a few samples that there are no bad sweets.

But given the estimation that there are 20 good ones and 2 bad, you cannot make that conclusion because your results are actually in line with that.
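To put rough numbers on that (assuming a 50/50 prior between "the 2-bad claim is true" and "there are no bad sweets", purely for illustration):

```python
from math import comb

n, bad = 22, 2  # 20 good, 2 bad, as in the comment

def p_claim_true(k):
    """Posterior that 2 bad sweets exist, after k all-good draws."""
    like_claim = comb(n - bad, k) / comb(n, k)  # P(k good in a row | 2 bad)
    like_lie   = 1.0                            # P(k good in a row | 0 bad)
    return like_claim / (like_claim + like_lie)

for k in (5, 10, 15, 20):
    print(f"{k:2d} good draws: P(2 bad sweets) = {p_claim_true(k):.0%}")
```

Even after 10 all-good draws, the claim keeps over a fifth of the probability mass, so concluding "no bad sweets" from a few samples really is premature.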

Expand full comment

Most machine learning algorithms are known to get their accuracy largely by locking on to these types of heuristics, which is why self-driving cars don't work... in this context, I suppose I have to add, until they do.

Expand full comment

And the people who do this professionally have noticed that. So now better ones work by 1) fighting against another model that's trying to beat or trick them, and 2) being fed in simulation bad events much more often than they happen in the real world.

Teaching a self-driving car or drone to operate itself without external bad events is a mostly solved problem. All the fun now is in thinking up or discovering new kinds of bad events to throw at it while it learns.

Expand full comment

Well two cheap shots here:

1. To favor a given expert over a given rock, you will need to establish some relation between the expert disagreeing with the rock and reality disagreeing with the rock. This is rather salient if you want to use the argument to listen to "rationalists".

2. Q: What is the proper name for a rock-reading tool? A: prediction market.

Expand full comment

The Protestant rock god metaphor was one of the greatest things I've ever read.

Expand full comment

Agreed. :-)

Expand full comment

More often than I would like, you manage to actually describe something that I am feeling, but can't describe on my own. Troubling, since I want to be a writer when I retire from my day job.

In any case, yes, this is why I put up with following contrarian people on social media. I accept that the conventional wisdom usually, but not always, holds up, and the only people who are going to actually see it coming are the hard-core contrarians.

The biggest problem that I have is that, in many cases, I find people who are 0.1% contrarians (for example, big on 'cryptocurrency will utterly change economics') are also 0.1% contrarians on at least one other thing (for example, impending farming yield collapse, pending Yellowstone eruption, climate change catastrophism, hyper-fatal bioweapon plagues are coming, plastic pollution will kill the entire marine ecosystem, governments cause disasters to control us, corporations have perfected advertising to the point of mind control, birds aren't real, mRNA vaccines are timebombs). And my internal heuristic is: "people who are super-contrarian on multiple dimensions are kooks". Which is, I would imagine, a 99.9% effective heuristic.

Expand full comment

That's very nicely put. People who think everything will be pretty much the same as yesterday are probably wrong 1% of the time; people who think everything will be pretty much the same except for this one big-deal thing are probably going to be wrong 10% of the time, but might be right in an interesting way. People who think 2-100 things are going to be big-deal things are probably wrong 80% of the time, and are just the sort of people who think giant changes are just around the corner all the time.

Are there really people who think birds aren't real? I am kind of scared to search for that...

Expand full comment

It's an anti-conspiracy meme amongst the youth. Basically a pretend conspiracy theory, created primarily as a way to mock "legitimate" conspiracy theories.

Expand full comment

Ahh, thanks. I was trying to imagine if people thought birds just didn't exist because, hey, have you ever touched one, or if they were supposed to be spy robots, or what.

Expand full comment

At least one group of supporters of (believers in?) the theory claims that they used to be real, but were all replaced by government operated spy drones. Whether this is serious or not I haven't a clue. It *sounds* like humor, but so do many of the "conspiracy" theories that are taken seriously.

OTOH, I've never met a true believer in the Flying Spaghetti Monster.

Expand full comment

It appears that there really are such people, though I'm not certain. It was started as a spoof of various "interesting" conspiracies, but there's some evidence that it has been taken seriously by some. How valid is the evidence? How many people? I haven't a clue.

Expand full comment

I had a bunch of thoughts while reading this, since it's pretty closely related to my research. Here are a few:

- As you point out, how you should aggregate expert predictions depends a ton on the extent to which the evidence that the experts have access to overlaps. If the experts are all looking at the same rock, then beyond the first expert, each additional expert adds nothing of value and you can just ignore them. If they all get *independent* 999:1 evidence against the event, now you have super strong evidence. I'd say that in the real world, experts' evidence tends to overlap quite a lot (they're all looking at basically the same core evidence and then maybe each have some small additional bits of evidence). For example, in election modeling every (reasonable) model considers polls and historical election results; this gets you the bulk of the way toward a good prediction. Then various models consider various other factors which update their probabilities but not very much. So if you have two different forecasters giving Biden 4:1 odds, the aggregate should look a lot more like 4:1 than 16:1.

- Let's talk about the volcano example. What exactly happened here: who made the mistake that led to doom? (I'm going to think of the Cult of the Rock people as non-agents who can't be assigned blame.) I think this basically depends on what the vulcanologists you labeled "honest" are doing. One thing they could be doing is "being overconfident". In particular, how frequently can a vulcanologist assign a >10% chance to an eruption without being overconfident? The answer is: only 1% of the time. Because if they assign a >10% chance >1% of the time, that's >0.1% in total. If they're in fact being overconfident, and the Queen gets enough data to be convinced of this, then the Queen is right to trust those experts a lot less.

On the other hand, suppose that the honest experts are calibrated. Then the issue is with the Queen. If the Queen hears "There's a 10% chance of an eruption" once per century for five centuries and -- over the course of those five times -- decides to get rid of these experts, then the Queen is updating *way* too aggressively. If these experts are in fact correct, there's only a ~40% chance of there having been an eruption one of these five years, so throwing them out because that 40% didn't happen is unreasonable. Instead, every time this happens the Queen should trust these experts just *slightly* less. After a thousand years, the Queen should still trust them enough that, when they say 10%, the Queen thinks there's a substantial chance of an eruption, and should plan accordingly.

This is actually all a metaphor for a branch of computer science called "learning from expert advice", where you're the Queen and are trying to learn which experts to trust each year by looking at their track records. Speaking of learning from expert advice, I'm writing a paper on this topic and the deadline is this Thursday, so -- back to work :)
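Two of the calculations above, spelled out (the 1:1 prior and the independence assumption are the comment's idealizations, not properties of real forecasters):

```python
# 1. Pooling two 4:1 forecasts: independent evidence multiplies odds,
#    fully shared evidence doesn't.
prior_odds = 1.0
print(f"independent: {prior_odds * 4 * 4:.0f}:1")  # 16:1
print(f"overlapping: {prior_odds * 4:.0f}:1")      # 4:1

# 2. If "10% chance of eruption" is calibrated, five such warnings give:
print(f"P(>=1 eruption in 5 warnings) = {1 - 0.9 ** 5:.0%}")  # ~41%
```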

Expand full comment

There's two classes of things being conflated here.

1. Basically random events that are genuinely super low probability, such that they have ~never happened before, like the volcano erupting and killing everyone. Or, a super-deadly yet also super-infectious global pandemic, Don't Look Up style meteorite disasters etc.

2. Events that are high probability, mundane and would happen all the time if not for people mitigating it, like the security guard, the cynical futurist etc.

These two are fundamentally different and it's wrong to treat them as if they're all homogenous examples of the same probability distribution. When people start working against a common problem it will (hopefully) reduce or even eliminate that problem, and make it look as if they're being useless, as if the heuristic "there is no problem here" is almost always right. But there actually is a real problem and if you took away the security guard, you'd very quickly get San Francisco circa 2022.

But many events aren't like this. AGI takeover is in this category. These are events that have never happened before. They might be theoretically natural/uncontrollable, or they might be hypothesized outcomes of human behaviour, but regardless they cannot have a probability calculated for them because any truly objective calculation would yield a division by zero. In this case the correct heuristic is not a simple extrapolation of past trends but a very complex and case-by-case deep analysis that can't be reduced to a simple analogy or set of stories. There's no way to generalize from the creation of Bitcoin to lessons for life. Any such lessons would be so specific and nuanced they'd require a book to explain. So ... I guess in the end I don't feel like this essay has left me with any deep insights. Nonetheless it's exploring an important area.

Expand full comment

The interviewing example strikes me as a third class that falls more into "no one got fired for buying from IBM", which isn't about a heuristic at all; it's a misalignment between what is being evaluated (will I be judged for my mistake?) and what would be good (do I make good decisions?).

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

The security guard seems to have a fine heuristic, assuming no-one finds out?

Perhaps in that case, the cost of checking isn't so high, though. But compare to "you should believe the scientific consensus" - the cost to achieve sufficient expertise to be able to tell when the consensus has screwed up is extremely high FOR EVERY INDIVIDUAL CASE, and utterly impossible for every consensus. At this point, it's not that the scientific consensus is universally perfect, it's that there's no way for you to perform better unless you're putting in a crap-ton of work (and even then, you probably screw up - how often does the statement "I did my own research and..." end well?)

With regards to the skeptic, an easy and high-quality position to take is "that is very likely nonsense, and if it actually isn't and someone does the work properly, I will find out and change my mind". You emphatically DON'T personally have to put in the research to reject Bigfoot, and even keeping an open mind about Bigfoot will result in a worse epistemological state for you.

Expand full comment

I slightly disagree. Bigfoot doesn't impinge on anything I do, have done, or plan to do, so keeping an open mind seems to me the reasonable position. When I need to do something that might be affected by the existence of Bigfoot, then I'll make a bet...but I still won't take a firm position. Until then I'm not going to even bother to look at the evidence, because I don't need a position.

Expand full comment

I don't think I get it.

I mean, if the Rock really has better (lower) Brier scores than everyone else, then "What about that time the Rock was wrong?" should be squarely defeated by "What about those multiple times the humans were wrong?"

Unless somehow when the Rock was wrong it had significant costs, but those other times that brought down the humans' Brier scores didn't have significant costs?

I feel like the main important point is the information cascade, which is not a problem solely of heuristics. Imagine I believe something is 90% likely, and I find that experts also say things like "it's pretty likely." Even if they do actually have new information, if they're saying "it's pretty likely" because they think it has an 80% chance, and I update to 95% (because experts agree it's likely), I think I'm going the wrong direction.
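On the Brier point, a sketch with invented counts shows why the rock is so hard to beat on score alone:

```python
# 10,000 days, 10 real events. Rock says "0%" daily; expert says "0.1%".
days, events = 10_000, 10

def brier(p):
    quiet = (days - events) * (p - 0) ** 2  # non-event days
    hits  = events * (p - 1) ** 2           # event days
    return (quiet + hits) / days

print(f"rock:   {brier(0.0):.6f}")    # 0.001000
print(f"expert: {brier(0.001):.6f}")  # 0.000999 -- barely better, and only
                                      # once the rare events actually occur
```

Until the first event, the rock's record is literally perfect; even afterwards, a calibrated expert's edge is in the sixth decimal place.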

Expand full comment

I think the idea is that the rock will stop having a higher score as soon as the rare thing happens one time, but for the time interval from whenever you start using the rock until the next time the rare thing happens, the rock has a literally perfect record.

Expand full comment

I think you've got the right idea: the Brier score doesn't capture the impact of the outcome itself the way the expected value could.

Expand full comment

I thought black swans were a metaphor for something with literally NO precedent, rather than something that is rare but known to have occurred at least once?

Expand full comment

This seems similar to Tetlock on Inside View vs. Outside View. Good Bayesians start with an outside view prior and update it with inside view detail. "Experts" can fail in two ways on the inside <-> outside view axis:

1. All inside view: over-index on noisy detail and forget about base rates ("these 15 lava variables I track changed in a novel fashion, so I _know_ an eruption is coming")

2. All outside view: the cult of the rock

The novel take-away for me is that there are self-reinforcing biases that can push you from the Good Bayesian position all the way to (2). Of course, there are also biases that can push you toward (1) instead.

Expand full comment

One thing that I am having a hard time wrapping my head around is the enormous salaries we pay to these rocks.

Expand full comment

Sometimes it may indeed be well worth having a nice rock that is pretty and shiny and just sits there and doesn't do much, rather than a non-rock that meddles and plunges everything into chaos.

Think of modern constitutional monarchies where whoever is wearing the crown (for state occasions) doesn't really *do* anything. But you have to have some sort of head of state, and maybe a figurehead that presides over spectacle that you can monetise as tourist attractions is better than a president or premier or leader who has some sort of power and makes decisions that divide the country.

Expand full comment
Feb 9, 2022·edited Feb 9, 2022

It's not actually true that old monarchies make a net profit from tourism, in any country I know of. Most people don't even tour to see the monarchs; they tour to see their castles, which the monarchs really don't need to be in for that to happen. It's also not like they don't do anything: most non-functional monarchies still hold immense power relative to the average person, oftentimes even relative to the actual politicians. In the analogy of this post, old monarchies are more like rocks that weigh the whole country down, rather than cute pebbles.

Expand full comment

Ceremony and pageantry are elements in the whole mystique of monarchy, and things like this depend on the monarch as their excuse for existing. No, you don't actually need the queen to be living in Windsor Castle to get a tour there, but part of the selling point that distinguishes it from other tourist traps is "still occupied by the monarch":

https://www.rct.uk/visit/windsor-castle

Same with the Changing of the Guard:

https://changing-guard.com/dates-buckingham-palace.html

If there's no monarch in residence, eventually someone will ask "so why do we have this big expensive parade for no reason?" and answering "to get the tourist money" is a bit too realistic. It's like magic tricks, the pretence of something actually being real is what sells it (these are the monarch's guards) rather than the reality (this is a display for tourism more than anything else).

Expand full comment

As someone pointed out earlier (above?), using a person instead of a rock does help with blame and liability and buck-passing. A patsy, a fall guy, insulation. Someone you can fire, and if it's worse than that, you can also announce that HR will flog themselves and do penance.

Expand full comment

Many have pointed out the similarity of these ideas to Taleb's Black Swan. Instead I'm going to highlight a book that provides the "opposite" perspective: Gerd Gigerenzer's Rationality for Mortals. Just as extremising a heuristic that is 99% accurate into one treated as 100% accurate is unwise, underestimating such a heuristic in favour of careful consideration all the time is also unwise.

Expand full comment

I agree with you about the importance of rationality over mere rocks, but I think your cost free analysis misses a few things. You are being too hard on rocks!

1: Most people don't have a copy of the rocks that say things like "No, the world won't end tomorrow," but they totally should. The Bayesian priors should all be really low for the sort of hyper-tail disasters described, but most people have much higher priors. Anyone trying to be a Rationalist, or just more rational, could go a really long way towards that goal by first getting a bunch of those rocks (preferably by study and actual analysis of how often the relevant extremely rare events occur, but for most humans just a rock to read in times of worry would be an improvement.)

2: Your examples tend to touch on the costs of doing nothing, either paying someone seemingly useless or getting covered in lava, but you ignore the costs of doing things with a mind towards the 0.1% probability things. How expensive is it to evacuate the island every time a volcanologist gets worried? Considering you are going to be doing that incredibly often, far more than otherwise, that's an important question. If you don't know the relative costs, you can't make a good argument for one pattern of behavior or another. As an example, Paul Graham makes the point that with his Y Combinator startup business he puts so little money into each company that the one in 100 or whatever that actually does well covers things, never mind the 1 in 10,000 that pays off in billions. So treating every case like a 0.1% is smart. You don't want to try that at a casino betting on roulette, however. Some examples of experts vs rocks are more like roulette, and some are more like tech startups.

3: Experts consulting the rock vs experts who actually know things vs experts who have a different rock that says "OH SHIT! EVERYTHING IS GOING TO END IN FIRE TONIGHT! KILL YOUR LOVED ONES FOR THE LIVING WILL ENVY THE DEAD!" vs experts who really don't understand things well, albeit a little better than other people. How do you know which experts you have? We like to think that our experts are all just honest truth seekers who have managed to actually accumulate a little bit of truth, but they all have their own problems, and possibly their own rocks and reasons for using them that you might not appreciate. How can you tell, and what is the cost of thinking your experts are honest and expertly giving good advice when they are one of the scary rock worshippers, or just incompetent? Hence, point 1.

Maybe we really are limited in what we can know and foresee, and we call experts people who happen to be right sometimes even if it is for very wrong reasons. Maybe we could call this being between a rock and Scott Alexander. (I'm sorry, I really want that joke about Scott being too hard on rocks to work, but I just can't right now. I am going to pass on the tattered shreds here in the hopes that someone can repair it.)

Expand full comment

Don't know if this is particularly useful or relevant, but as long as no one knows what heuristic your security guard is using, places with a security guard will probably get robbed less frequently than places without one, provided the guarded places have big signs that say "we have a security guard."

Expand full comment

There's a thriving demand for, and supply of, fake security cameras for low-risk places with low budgets. If you can scare off most of the already small number of thieves/no-goodniks, you'll come out ahead, even accounting for the one-in-two-decades burglar who finds out or just wears ski goggles and a hoodie.

Expand full comment

This is also related to why Wal-Mart has greeters at the front door to say "Welcome to K-Mart" when you go inside. It actually does something, as people are less likely to shoplift after they have been "noticed" and acknowledged. It apparently trips a part of your brain that says "Ok, you are not anonymous anymore, at least one person paid attention to you. YOU ARE BEING WATCHED" and pushes you to act accordingly. Not foolproof certainly, but it helps and is a small price to pay for the benefit. Plus you get to employ people who might not be able to get a job because of the whole can't reliably remember which store they work at thing. (The Wal-Mart near my folks' place had a greeter that always said "Welcome to K-Mart" with great enthusiasm. She was quite friendly and popular with the locals, and it was always amusing to see the out-of towners stop and take a step back to look for a sign to see if they had gone into the wrong store. There wasn't much else going on.)

Expand full comment

I feel like I have slipped into some alternate dimension where things are incoherent. Why on Earth are your Walmart greeters saying 'welcome to Kmart'?! It's so weird, because you said it twice, not just once, so it feels like it can't possibly be a typo. And yet you didn't mention how weird it was, even though your whole point was talking about the effects on the people who hear the welcome message

Expand full comment

"Plus you get to employ people who might not be able to get a job because of the whole can't reliably remember which store they work at thing."

I'm not sure how many Walmart greeters are like that.

Expand full comment

Yes, she was more special than average I suspect. Still provided a useful function, so good on her.

Expand full comment

Sorry for the spiraling feeling :) that was just it, the lady wasn’t all there and apparently was only partially aware that she worked at a different store. That was good enough, however, because apparently the brain’s “I’ve been spotted “ trigger is easy to hit. Just having someone respond to your presence is enough.

Expand full comment

It's the age-old question: in scenarios where the same thing can happen literally a hundred times in a row, how do we tell genuine expertise from placebo?

It's like the old joke about elephant repellant. "See any elephants around? No? Then it's working!"

Expand full comment

The solution is to have a better model of the underlying situation.

If you go study elephants, you'll find out that they are wild only in Africa and parts of Asia, that they cannot teleport, that they can swim only short distances, that all the ones that live on my continent are confined to zoos, and that it's almost impossible for them to escape from zoos. From that you can conclude that your magical elephant repelling talisman is probably not what's keeping the elephants away.

For burglars and volcanoes, you can study crime statistics and vulcanology to be able to come up with a better probability estimate than zero.

It's okay to use heuristics, but people should be able to explain their heuristics in terms of the underlying mechanisms, rather than appealing to pure induction.

Expand full comment

I think you're identifying the wrong effect here. The problem is negatively tailed distributions - that being wrong in one way (in these scenarios) is much, much worse than being wrong the other way. Assuming you're not going to get more than 99.9% accuracy (I'd be pretty surprised if your model was giving you more than 99.9% accuracy! Modelling is hard!), you're not gonna be right more often than the heuristic, but you can still get better outcomes than the heuristic by intentionally being more cautious than you strictly need to be. The heuristic is good! The heuristic is really, really good - if all we care about is being right (and that's often all we care about here online!) then we should love the heuristic! Rationality is systemized winning; the heuristic isn't systemized winning if we care about outcomes, but it is if we care about accuracy.
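A toy expected-cost calculation makes the asymmetry explicit (all costs invented for illustration):

```python
# A policy can lose on accuracy and still win on expected cost.
p_event          = 0.001
cost_false_alarm = 1        # checking when nothing is there
cost_miss        = 10_000   # ignoring the real thing

ignore_cost = p_event * cost_miss               # heuristic: right 99.9% of nights
check_cost  = (1 - p_event) * cost_false_alarm  # cautious: "wrong" almost nightly

print(f"always ignore: expected cost {ignore_cost:.2f} per night")  # 10.00
print(f"always check:  expected cost {check_cost:.2f} per night")   # ~1.00
```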

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

There are really two categories in the text. In one, it really *doesn't* matter to you too badly if the rare event comes up. What are they going to do to the Security Guard, beyond firing him? And the Interviewer is only going to get in trouble if *both* the 0.1% bad candidate gets hired *and* it's something someone else can demonstrate he should have noticed. For these people, it's a lazy, mostly safe, pretty reasonable heuristic.

This is also why there may be major consequences for The Doctor, so that he *doesn't* benefit from just doing this. Also when The Doctor screws up, it's much more likely that someone can point to him and demonstrate the error.

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

Without this heuristic you risk being vulnerable to a Pascal's mugging. Sure, 99.999999...% of the time when someone told me that they were in control of the simulation and would torture me for eternity if I didn't give them $5, ignoring them worked out fine, but...

Expand full comment

Whenever I see a probability trailing off like that, I'm reminded of the classic Stop Adding Zeroes. I think it addresses your point nicely?

https://slatestarcodex.com/2015/08/12/stop-adding-zeroes/

Expand full comment

The opposite is probably a lot more relevant, really. Paul Samuelson said "Economists have predicted nine of the last five recessions." There's a certain set of economists and stock-market experts that are "permabears", constantly predicting a recession or market crash. They then get celebrated for correctly predicting the last three downturns.

Expand full comment

How many of those four "misses" came from permabears, and how many of them didn't happen because they were predicted?

It's like sneering at a collision avoidance system for all the impending collisions that it noticed and fed the warning back up to steering. "You keep alarming, but I never hit anything."

Expand full comment

Very likely 0 - recessions aren't the kinds of things that get avoided because an individual analyst predicts them.

Expand full comment

The guard is more than a rock, as long as he keeps his mouth shut.

He's a scarecrow, there to scare robbers away.

Expand full comment

You have a hidden assumption - that false positives are less costly than false negatives. A comment on HN points out that in the doctor example a false positive can be quite costly (damaging a patient's health by treating something they don't have). The effect you're describing might be the desired outcome in cases where false positives are more costly, even if it occurs regardless of the cost balance between false positives and false negatives.

Expand full comment

That's why medical tests are rated both for the number of false positives and for the number of false negatives. The palpation is merely an early level of screening. After that comes the additional testing (which tests depends on which disease). Even then there can be questions, of course. A decade or so ago I read that the blood tests for prostate cancer were so inaccurate that they could not usefully be relied on. So they did a biopsy instead. (Ugh! but it did the job.)

But before they decided to do the biopsy I think three different doctors palpated. They still weren't certain, but they decided that more definitive tests were needed.

In the toy example the pretense was that there was only one level of examination. This was unrealistic, but appropriate for the example being discussed. I think you were supposed to understand that all of the examples were oversimplified for clarity in presentation.

Expand full comment

Yes, I understand, but my point isn't that the example is bad - my point is that the effect might be the desired outcome if false positives are more costly than false negatives.

I.e., in situations where false positives do more cumulative damage than false negatives, relying on heuristics might be a good thing, saving more time/energy than it wastes.

For a counter-example, fraud in financial systems (I've just been reading Lying for Money): looking at macroeconomic systems, 0% fraud is not the ideal level, because the high cost of checking everything damages the overall system, so a heuristic of trust that lets some fraud slip through is actually a good thing.

Expand full comment

Heuristics are inevitable. One rarely has or can get complete information. Even alpha-beta pruning is a heuristic that can fail, and game trees tend to generate so many options to consider that you drown in them.

I think the point of (many of?) these examples is to notice who bears the cost of the heuristic, and who puts it into practice. And to notice that the two are often misaligned.

Expand full comment

This seems to be a good argument for skepticism.

Expand full comment
Feb 8, 2022·edited Feb 8, 2022

Some people are bad at heuristics (as in, they are relatively worse than others at identifying the real-life indicators that differentiate the 0.1% from the 99.9%, and so lump the two cases together), whether in general or in specific situations, fields, etc. - should the conclusion not be that the individuals with the most "developed" heuristics are the only true experts? Given that all rationalist principles are based upon heuristic observations.

Or at least, that a well-rounded expert places similar importance on heuristic and rational thought. A purely rationalist "expert" is the personified equivalent of secondary research. Perhaps a critical role in society, but not useful in situations where decisiveness, speed, or novelty is concerned.

The problem, then, is that we overemploy rationalists as experts because we overvalue empiricism.

Expand full comment

The security guard at least provides value even if he never investigates, as long as it is not common knowledge that he never investigates. Casual burglars looking for a building to rob will see that Pillow Mart has a security guard and go look for an easier target - they don't know the security guard never investigates. The owners enjoy peace of mind because their building is being guarded. In the event of a robbery, they can tell the insurance company - look, we did our due diligence, we even had a security guard. None of these benefits are offered by the rock.

Expand full comment

Alright. How much weight should we give the rock in the "It's never Vitamin D" heuristic?

Expand full comment

As stated, not much. There are lots of cases where Vitamin D is the cause, either because it's absent or because too much is present. But if you were talking about its value as a hair creme, I might think it had little likelihood to be applicable in that instance (vitamin E might have noticeable effects, though). It all depends on what you're proposing.

Expand full comment

Of the whole list,

> If you are often tempted to believe ridiculous-sounding contrarian ideas, the rock is your god. But it is a Protestant god. It does not need priests.

does not ring as true. There is value in *actively* just repeating the reasonable claims (i.e. signal boosting) when there is a contrarian faction that is *actively* fighting for the public's attention. Not that it's necessarily good, just that a passive rock wouldn't have the same effect.

(Unrelated note, I think this is the kind of stuff that feels a bit more SSC-like)

Expand full comment

Seems like most people are making their decisions in direct contradiction of the heuristics that almost always work.

Expand full comment

What I’m saying is “I would like to buy your rock.”

Expand full comment

Sure, but rocks are usually better than paranoia when it comes to the statistically wise course of action:

- I am better off buying theft insurance for a low crime risk building than hiring a guard

- I am better off taking two aspirins for an occasional ache and going to gym rather than sacrificing opportunity cost of gym for a doctor visit

- I am better off not making bets I can't afford to lose on new things that are mostly probably fads

- I am better off taking the vaccine

- I am better off consulting a lawyer with a degree from a good college

- I am better off protecting myself against other disasters I can predict and mitigate better than spending resources on evacuating in case of unlikely volcano eruptions

- I am better off keeping emergency rations, weatherproofing my home and buying hurricane insurance than fretting about hurricanes which are unlikely in my area

The value of an expert is simply telling me not to expend my limited ability to panic where it's better applied elsewhere. Invest in market index funds to make sure you are likely to have a comfortable retirement, not gold bars for the unlikely case that markets collapse and don't come back for decades.

Expand full comment

I use the rock heuristic to determine the quality of your posts, it just says "Scott wrote a great post". This heuristic still seems to be working.

Expand full comment
Feb 8, 2022·edited Feb 9, 2022

(First paragraph edited for clarity)

I agree with other commenters who pointed this out - no alternative is costless. You can bet everything on the volcano never erupting, and die when it does. Or you can keep evacuating the whole island every time a volcanologist says that maybe something might be happening (give the island competitive media and watch this become THIS IS HAPPENING WE ARE GOING TO DIE WATCH THIS FAMILY KISS THEIR KITTENS FOR ONE LAST TIME BUNGLED RESPONSE BY THE QUEEN every time), and you'll be so busy evacuating that you'll have no resources left to live on the island while the volcano is dormant.

This paints a rather depressing picture of just how bad we are at forecasting. I don't disagree, but I think being in denial about it doesn't help either.

Also, when this concerns people making decisions for other people, it is effectively a principal-agent problem. It's also a hard one! I, for one, don't know of any simple incentive-structure hacks that help here (I'm not convinced prediction markets would, especially if investors in traders require results by the financial quarter).

Expand full comment
Feb 8, 2022·edited Feb 9, 2022

If experts can generally get away with this, isn't that a sign that we as a society would likely be fine with the cost benefit of lacking any experts at all, assuming we got past the subconscious bias of wanting experts? It seems like the heuristics of 'this war is a bad idea' and similar are much cheaper than the equivalent think tanks.

It seems like your conclusion is that we should demand better experts that actually bring us from a 99% heuristic to a 99.99% heuristic, but my conclusion is that we just fire the weatherman, buy insurance, and eat the 1/10000 hurricane.

Expand full comment

I got sucked in by the title, but after reading this I don't think these are really good examples of heuristics. It could just be the definition, but for me heuristics are techniques that provide a simple shortcut to what would have been a longer, more involved process arriving at a more optimal solution. A heuristic is effective because it is "good enough", but in the context of solving a problem. These examples are not really solving any problems - just going with the most likely answer to a question or set of inputs. For me heuristics are more like this: What is a good price target for a public stock? You can build an elaborate model to try and figure it out, or "add up all the analyst price targets, take the average, divide by 2."

Expand full comment

"This is a great rock. You should cherish this rock. If you are often tempted to believe ridiculous-sounding contrarian ideas, the rock is your god. But it is a Protestant god. It does not need priests. If someone sets themselves up as a priest of the rock, you should politely tell them that they are not adding any value, and you prefer your rocks un-intermediated. If they make a bid to be some sort of thought leader, tell them you want your thought led by the rock directly."

Okay, this one made me laugh, because um. Catholic rock literally. "And I tell you, you are Peter, and on this rock I will build my church, and the gates of hell shall not prevail against it." We are the Cult of the Rock! 😁

Tu es Petrus

https://www.youtube.com/watch?v=EsusZr2QnfU

Expand full comment

Yeah, I like a good 2,000 year old Latin pun too.

Brings me back to my altar boy days phonetically reading my responses to the priest. Didn’t stick with it long enough to memorize the entire mass though.

Expand full comment

The security guard's value doesn't come from halting in-progress robberies, it comes from deterring robberies. So even with the guard's faulty heuristic, they still provide value.

Expand full comment

One way to deal with this in a machine learning context is to use mathematical techniques to create a bunch of fake examples of the rare positive cases. Then we create a new dataset with these artificially-produced positive cases as half the total cases, so that the classifier can't get any predictive advantage by blindly guessing "no."

Finally, we turn around and apply this classifier, trained on cases where positive cases are abundant, to a real dataset in which positive cases are genuinely rare. If it still gets a usefully high sensitivity and specificity, then hooray!
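
A minimal sketch of that pipeline in Python, with invented data (the 1% positive rate and the model choice are illustrative; real pipelines often generate synthetic positives, e.g. with SMOTE, rather than plainly duplicating them):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# An imbalanced toy problem: roughly 1% positives (all numbers invented).
X, y = make_classification(n_samples=20_000, weights=[0.99, 0.01],
                           n_informative=5, random_state=0)
X_train, y_train = X[:10_000], y[:10_000]
X_test, y_test = X[10_000:], y[10_000:]

# Duplicate the rare positives until they make up half the training set.
rng = np.random.default_rng(0)
pos = np.flatnonzero(y_train == 1)
neg = np.flatnonzero(y_train == 0)
idx = np.concatenate([neg, rng.choice(pos, size=len(neg), replace=True)])

clf = LogisticRegression(max_iter=1000).fit(X_train[idx], y_train[idx])
pred = clf.predict(X_test)

# Evaluate on held-out data that keeps the true, rare base rate.
print("sensitivity:", (pred[y_test == 1] == 1).mean())
print("specificity:", (pred[y_test == 0] == 0).mean())
```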

Expand full comment

Reads like a piece by Morgan Housel

Expand full comment

As someone else mentioned, the standard solution to checking sensitivity to rare faults is to inject the faults deliberately for testing.

Expand full comment

Great post! I wrote an article on my Substack that touches on this somewhat.

https://questioner.substack.com/p/trust-the-experts

Basically I think our entire leadership caste is worshipping the Cult of the Rock at this point. That's a problem, so I decided that our ignorant leaders needed to be overthrown and spread conspiracy theories to see if I could make it happen. People this dumb don't deserve power.

I wish more rationalists would follow my example and be more assertive about toppling worthless leaders and elites and taking power away from them. Leaders only deserve to lead because they make good choices for society. If they're making decisions using these kinds of worthless heuristics - basically "tomorrow is always going to be the same as today" - then they're worthless leaders, and they should be demoted and replaced with more capable ones. Elites generally don't like to give up power, particularly when you point out how unfit they are to have it, which is why a bit of conflict theory may need to be applied here.

Say what you like about Yang, but at least he tried to practice this philosophy by running for office. If we don't at least TRY to take power away from our ignorant leaders, then truly we deserve all the disasters that befall us as a result.

Expand full comment

the scary thing is that a whole AI industry is built on exactly this, and more and more it will make decisions for us.

Expand full comment
Feb 9, 2022·edited Feb 9, 2022

Wow, the doctor story really hits close to home for me.

My mother was fat. She was feeling especially tired for several months. She went to her doctor. The doctor was historically kind of embarrassed that my mother was fat, told her to lose weight, and didn't palpate her swollen belly.

My mother went to the dentist. The dentist had known my mother for years, and palpated her belly. She sent her immediately to the emergency room.

Happily, my mother survived metastatic lymphoma and has been in remission for over a decade, after the removal of the 9" tumor in her belly, a heavy dose of chemo, and an autologous stem cell transplant. Modern cancer treatment is really impressive!

But I'm still really mad at her general practitioner a decade later.

Expand full comment

Don’t blame you for the anger. Glad her dentist was on the ball.

Expand full comment

> But actually the experts were just using the same heuristic you were, and you should have stayed at 99.9%. False consensus via information cascade!

This seems like the wrong update heuristic. If you ask the same expert 10 times (in the same hour, with no randomness in their process) and they (not surprisingly) gave the same answer, would you update more than if you only asked them once? Probably not. What if you ask them and 9 of their current assistants? A bit more but still not that much more. The problem is the independence of your results. If they are not that independent then you should update less.

Similarly, seeing many years "no volcanic eruption" shouldn't change your view by that much if your initial prior for an eruption each year was already a very low 0.1%.

And so someone who is using the correct prior and updating correctly should have only, say, one false positive in their entire career compared to a rock user. And so there wouldn't be strong pressure to select them out.

If they have a high false positive rate, then they (likely) had a much higher prior and then they should be selected out since rock is closer to the actual probability.
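
A toy version of the independence point, with assumed numbers (0.1% prior; each "eruption!" report has sensitivity 0.9 and a 10% false-positive rate, i.e. a likelihood ratio of 9 — none of this is from the post):

```python
prior_odds = 0.001 / 0.999
LR = 0.9 / 0.1  # likelihood ratio of a single "eruption!" report

posterior = lambda odds: odds / (1 + odds)

one_expert      = prior_odds * LR       # posterior ~0.9%
ten_independent = prior_odds * LR**10   # only valid if reports are independent
ten_copies      = prior_odds * LR       # same expert echoed ten times: no extra update

print(posterior(one_expert), posterior(ten_independent), posterior(ten_copies))
```

Ten genuinely independent reports should move you enormously; ten echoes of one report shouldn't move you past the first.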

> First, because it means everyone is wasting their time and money having experts at all.

This sounds like a micromanager's anxiety. I think other commenters correctly ask what the end goal is, rather than trying to get the experts to run their tests. For example, buying insurance may work better.

Expand full comment

Ok, let's take the volcano example. Let's say you're the Queen, and your volcanologists tell you that the lava is starting to look a bit off. In practice, what do you do ? You know that the volcano had never erupted before, so you have no direct probability estimate for how likely it is to do so. The volcanologists have many competing models for how the volcano works, but thus far every model besides "read what the rock says" had been consistently wrong. Meanwhile, evacuating the island will cost a million cowrie shells, a price so high that it will essentially plunge your nation into poverty for years. So, what do you do ?

One possible answer is, "I'm the Queen, so when I say we evacuate, we evacuate or else off with your head", but you won't be Queen for long with that attitude; at least, not a Queen of anything worth ruling. Another answer is to maintain a certain level of volcano readiness every year, thus spreading out some of the cost of the evacuation. But this is a tough proposition as well, because if you allocate some surplus yearly cowries toward evacuation caches, and your neighbours on the next island over allocate their surplus to buy spearpoints, then at some point they'll just sail over and relieve you of the burden of leadership. And, unlike volcano eruptions, hostile takeovers definitely happen all the time.

I don't think there are any easy answers here, and for once Bayes is not as big of a help as he usually is.

Expand full comment

There are a lot of things that happen more or less automatically:

- Some people are naturally nervous and will do their best to evacuate as soon as they hear a rumor about the lava changing. This saves time, effort, and money for you, the Queen.

- Some people, like fishermen, have a very low cost of evacuating, and they will!

- Everyone, without exception, will believe that an eruption is imminent as soon as the earth starts shaking. Scott really went beyond the bounds of plausibility there.

- The eruption will generally leave some parts of the island unscathed. People there who failed to evacuate will be fine.

The evacuation doesn't really cost the Queen much. It's not paid out of the royal treasury. The costs of the eruption come mostly from the property destroyed and land made unusable.

Expand full comment

The Cosmological Principle!

Expand full comment

I have a heuristic: when kooks and grifters promote HCQ and ivermectin, I don't believe them. When non-kooks and non-grifters promote fluvoxamine, I tend to believe them. It's a good rock.

Expand full comment

How do you know who's a kook, and who's not ?

Expand full comment

Seeing them on Fox News or Russia Today (oops that's the same network).

Expand full comment

Sure, "Fox News is always wrong" sounds like a heuristic that should almost always work.

Expand full comment
Feb 9, 2022·edited Feb 9, 2022

Until the one time it doesn't:

https://www.youtube.com/watch?v=fT0AjmeJ_sg

https://www.youtube.com/watch?v=Z6Oczyk6nCw

Great selection of rocks saying "Never ever gonna happen".

Expand full comment

Yesterday's New York Times had an article which is, in fact, exactly the Volcano example, in real life, and happening now. There's a major fault off the coast of the US Northwest that is due for a major earthquake. That quake would spawn a very quick tsunami that could easily be over 20 feet. So many people live near the coast, and there are no places high enough to run to that there could be tens of thousands of casualties. In general nobody's doing much about it. In particular not building towers that folks could run to. https://www.nytimes.com/2022/02/07/us/tsunami-northwest-evacuation-towers.html

Expand full comment

That may be in yesterday's New York Times, but it's been known since at least sometime in the 1960's. (I'm not certain it was new news then.) And there is a network of earthquake faults that run through Walnut Creek, Berkeley, Hayward, Livermore, etc. They're interconnected, so if one goes off, several will probably go off. That would close off all land based connection to the SF Bay area. And a major earthquake on several of those faults is overdue. It would probably also close water based connections via the Sacramento River. The only plausible way to get food in is via air...and the quantities needed are prohibitive.

There's no good solution, but something that could ameliorate the problem would be to build a major freeway on the surface from San Jose (well, just North of San Jose) to I5. It would require leveling part of a mountain, and probably destroying a major park. If it's even being planned, I haven't heard of it.

The place that I live is surrounded by several dormant volcanoes. They've been dormant for a long time, but there's no reason to think they're dead. If that fault off the coast of the US Northwest goes off, it may also cause some of the volcanoes to come to life. (Shaking can stimulate volcanoes.) People talk about it, but nothing much that I know of has been done.

The water table in the San Joaquin Valley has been falling steadily, as more water is removed than is replaced. This has led to some subsidence, and more often to wells going dry. I've heard plans to deal with this, but I haven't heard that any have been adopted. Temperature increases have meant that more water is needed to get the same level of crop growth as happened previously.

There are LOTS of examples. Some are "in progress" rather than "someday". And we still don't have a good way of dealing with this kind of problem.

Expand full comment
Feb 9, 2022·edited Feb 9, 2022

"Whenever someone pooh-poohs rationality as unnecessary, or makes fun of rationalists for spending zillions of brain cycles on “obvious” questions, check how they’re making their decisions. 99.9% of the time, it’s Heuristics That Almost Always Works."

One of the central reasons that people need rationality, and what a lot of these examples boil down to, is that most people's Heuristic That Almost Always Works is "trust what my intuition tells me" which worked fine in the ancestral environment but works less and less now.

Expand full comment

2 thoughts:

- This is part of a general pattern of people conflating probabilities with expectation values. I started noticing this a while ago and now can't unsee it: people are doing this all the time, in everything from mundane convos, to planning research projects, to geopolitics.

- I'm generally thinking about how using only the mean of a distribution is too simplistic. Surely I care about the shape of the distribution too in some cases (probably because of the previous point).

Expand full comment

I liked the Scott that wrote Burdens better than the one who "profitably" replaces people with rocks. One more reason on the "Why Do I Suck" pile, I guess.

Expand full comment

I don't think Scott is literally advocating killing experts and replacing them with rocks. Burdens is literal; this is rhetorical.

Expand full comment
Feb 9, 2022·edited Feb 9, 2022

But this is almost what he is advocating. The other day he wrote a post about how sad he is that a prediction market was fined. He is sad because he wanted digital betting markets to replace experts. He literally wants to replace experts with rocks - rocks that can perform logic, aka silicon, aka computers.

To be more charitable, I'm sure he wants both (someone needs to bet in those markets, why not experts and non-experts together?), but I'm not sure you can take this post as anything but an anti-expert post.

Expand full comment

Bobby's presumably seeing a contradiction with "Burdens" because of this passage:

>Still, she takes up lots of oxygen and water and food. You know what doesn’t need oxygen or water or food? A rock with the phrase “YOUR RIDICULOUS-SOUNDING CONTRARIAN IDEA IS WRONG” written on it.

I'm saying that Scott is not literally advocating removing Rock Cultists from air, water and food i.e. killing them; that was almost certainly a rhetorical flourish. He certainly *is* literally advocating removing Rock Cultists from the levers of power, but that doesn't really contradict "Burdens".

Expand full comment

I don't think your comment is very nice.

Expand full comment

Why not just be Bayesian? You have a strong prior against the volcano erupting, etc, and act accordingly.

Expand full comment

Hiring from 'top colleges' is a great way to ensure all your workers think alike and have no diversity in opinion.

Expand full comment

Reminds me of the concept of "overfitting" in statistics and machine learning modeling.

Expand full comment

Unfortunately, there are only heuristics and none of them ever work all of the time.

At first I missed the point completely. After all, one hires a security guard to deter robbers, that is, to change the odds. It isn't so much about detection. It's about the threat of detection.

Then I thought this was a critique of machine learning and artificial intelligence written as a parable. After all, machine learning is all about heuristics and probabilities, and we've all read enough stories about Teslas plowing into the sides of trucks and the like.

The best I can come up with as a point is that heuristics can be very useful, but one has to check one's priors on a regular basis and have a way to update them. This is hard enough to do technically, but often politics makes it even harder.

Expand full comment

The heuristic works for some people, not for society or for the curious and/or innovative.

Expand full comment

Somehow this article seems to make the mistake that "experts" are all just spending their time making predictions, and not spending any time doing research and experimentation to acquire more data and evidence.

Most experts in most fields spend their time doing research and experimentation, in order to acquire knowledge and build a corpus of understanding that turns that 99.9% into an 80%, a 50%, a 20%, a 1%, etc.

The only "experts" making projections tend to be fake experts; they'll actually be policy makers, investors, marketeers, etc. (yes, sometimes they'll hire an expert statistician to waste his time helping them with such foolishness).

And once those people enter the game, they'll pester the real experts ad nauseam for estimates and predictions. At first the expert will say, well, more research/experimentation is needed. But the fake experts will say, OK, but ballpark, just an estimate, what do you think is most likely happening here? So the expert will say, OK, give me some time to really run the numbers and make sure at least I'm giving you accurate statistics. But the fake experts will pester some more: I need it by end of day, just tell me now, why would it take you so long? Eventually the experts will just make something up so that the fake experts leave them alone and they can go back to doing real work like research/experimentation/development, etc.

And this in my opinion invalidates the claims in the article, because those experts cannot be replaced by a rock. The reason they'll be doing the same work as the rock is that non-experts are going to want them to, by asking them the question the rock could answer, and refusing any answer that is probabilistic - they want certainty, not possibility. And to those people, it matters very much that the expert said so, because in the expert they trust, and on the expert they can scapegoat their failures: they did not make the decision, the expert did. A rock does not provide them with plausible deniability.

Expand full comment

I don't believe this is the same thing as a black swan. The idea of a black swan is that it cannot be predicted with current knowledge. The events that overconfident heuristics miss can be predicted more correctly: if the doctor uses all available diagnostics, or the weatherman uses the best available modeling, the false negatives can be reduced.

Expand full comment

This is a common thing when you try to create a binary classifier but your base rate is highly imbalanced. Accuracy as a metric for scoring favours a classifier that always predicts one outcome. You should use a more elaborate metric, like precision and recall separately, and if you care about both, maybe F1 score. F1, as the harmonic mean of precision and recall, will push your total score down a lot if your recall is 0 (as in your stories).
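
A minimal illustration, with made-up numbers (a 1% base rate is assumed):

```python
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, f1_score

y_true = np.array([1] * 10 + [0] * 990)   # 1% base rate
y_rock = np.zeros_like(y_true)            # the rock: always predict "no"

print("accuracy:", accuracy_score(y_true, y_rock))                 # 0.99
print("recall:  ", recall_score(y_true, y_rock, zero_division=0))  # 0.0
print("F1:      ", f1_score(y_true, y_rock, zero_division=0))      # 0.0
```

The rock's 99% accuracy evaporates the moment recall enters the score.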

Expand full comment
Feb 9, 2022·edited Feb 9, 2022

There is another problem with very rare events, apart from the difficulty of predicting them that Scott illustrates well: regardless of whether you predicted them right, or just happen to be in the middle of one with time to react - how do you react? How do you know you took the right course, the one that optimized for whatever you are looking for (minimum number of victims, minimum property loss, maximum QALYs)?

The events are rare and mostly non-repeating, so multiple approaches cannot be tried and assessed. You can look to models (yes, the ones that were so bad at prediction) or create a narrative a posteriori about how great you did / how the others in charge sucked, i.e. a purely political exercise.

That's already done for recurring events, exploiting the differences which are almost always present outside hard-science fields, but at least there are discussions, and the trick does not work all the time.

I think when the event is non-recurring, it's much, much worse: political fights are the only thing happening, regardless of whether any term like evaluation/optimal response/careful balance is mentioned.

I think this is usually a worse issue than the prediction itself, in many cases (the exceptions mostly being very black-and-white events where total destruction looms and evacuation is the only option: a Pompeii-style volcanic eruption, or a dinosaur-killer asteroid impact - although there one can wonder whether there is anything to do at all, and there would likely be few left willing to discuss what had been tried anyway... so maybe a smaller asteroid heading for a big city is a better example).

By political fights I mean that the goal of the discussion is not at all to get the optimal response, or to improve for the next time this happens (which is probably never, at least not without a few differences significant enough to change the optimal reaction - else we would be back to recurring events); the goal is to gain or maintain power.

And finally, I clearly have COVID in mind when writing this. I think it falls exactly into my category of "non-recurring event with no clearly optimal action", so current complaints about reactions being too political are a non-sequitur. It's political from the start, and experts are political pawns (or are playing the political game themselves).

Not all emerging pandemics will necessarily be like that (Ebola starting to spread, for example), but COVID (and HIV before it) clearly are.

BTW, I just thought of it, but comparing the HIV response to the COVID response is, I think, quite interesting, if my premises are correct (reactions are purely political and have little to do with epidemiology). I remember that at the beginning (around 1985, when the virus was broadly recognized as the origin of the symptoms), there were demands for HIV-free passes, segregation, forced use of condoms... But these mostly did not pass; instead reactions mostly focused on treatments and advice on protection, with very few mandatory measures. I think it's a lesson about the relative strength of individual freedom vs. public control in the mid-eighties and now.

Expand full comment

Finally, my domain: I still provide value as a security guard even if I never check (esp. when Scott has a new post), 'cuz the robbers do not know my heuristics. A rock would still do, too - if they don't know it is only a rock (ever heard of fake cams? good value!). - 2. Fun fact: the first tweeter on your link https://forbetterscience.com/2021/03/26/die-with-a-smile-antidepressants-against-covid-19/ who tweeted "Gute Studie" ("good study") has now become Germany's new minister of health. And is often thought to be a rock inscribed: "We must be careful, careful, careful". - 3. Your post is missing those rocks who say: We are doomed. Capitalism is to blame. Money is evil. We must stop consumerism NOW.

Expand full comment

ever since September 14, 1867 (German Ashkenazi K.M.). With precursors (Malthus 1798; 1494: https://en.wikipedia.org/wiki/Girolamo_Savonarola ). I do not claim those 3 guys were rocks. Some followers may refuse updating. - Similarly: Bjorn Lomborg: no rock. Matt Ridley: no rock. Greg Cochran: no rock. Even Caplan. They may not change often (does Eliezer?) - but they do give reasons and are much more open to updating than Greta/Ehrlich/Gore/NYT.

Expand full comment

Isn't that what "calibration graphs" are for?

Expand full comment

The thing is, the average person is not told these heuristics; they get excited over every issue that is presented to them in the correct way. So there is some value in reiterating them.

Expand full comment

I am wondering how this would apply to something like fundamental physics, because the rock heuristic would seem to be "don't build the experiment, you won't find anything". Yet up until the discovery of the Higgs boson this heuristic failed spectacularly every single time - that is, up until the LHC, which did not find any of the "predicted" beyond-Standard-Model physics.

It seems to me that in that case the situation was reversed with respect to the given examples: the rock heuristic was "just build it, you will find something", just for this heuristic to fail badly with the LHC. With the very very very bad effect that now we are overcorrecting (cough, Hossenfelder, cough) and the contrarian viewpoint "nothing is there" is becoming the new fashionable heuristic.

Come to think of it, these heuristics follow a sort of barber-pole model of fashion.

Expand full comment

I think often there is a weaker version of this, where the heuristic only works 80% of the time.

Expand full comment

Eh. I don't think it's a great argument, because it's all scaled arithmetically, as conscious reasoning usually is. But certainly the real world, and as far as I understand it our inborn unconscious and preconscious heuristics, work more commonly in logarithmic scales.

So for example I might need a heuristic that is good 9 times out of 10, or I might need one that is good 99 times out of 100, or 999 times out of 1000, et cetera, and as far as my instinctual judgment of quality goes, I'll judge each to be about the same effort, because they're each good to about +/-1 in the mantissa, once my understanding of the precision needed in this particular case sets the exponent.

So if I'm a stranger wandering by the factory, I probably only need a 99-times-out-of-100 heuristic for whether a noise is a robber, because it's not my day job; but if my actual career and retirement pension depend on my being a good security guard, I'll probably feel I need one with a few more 9s, maybe 999 out of 1000, or 9999 out of 10000. However, the interior experience of "I'm being careful" will have similar emotional magnitudes for both the passerby and the security guard.

You see this in all kinds of human experience. A casual weekend hiker requires a lot fewer 9s in his heuristic for outdoor safety than a professional mountaineer. A random citizen requires fewer 9s for being aware of violent threats than a cop in South Central, or a Ranger in Afghanistan. A weekend carpenter is OK with fewer 9s in his attention to measurement precision than the skilled lathe operator on his regular shift. But nevertheless each person tends to *feel* like he's exercising a roughly similar level of care -- which is perhaps what leads to the outside observer's mistaken impression that they actually are -- only because our instincts and feelings work in logarithmic scales, while we consciously reason in arithmetic scales. (That's probably also one reason our conscious reasoning routinely gives badly wrong answers to puzzles about inherently exponential processes, like pandemics or compound interest.)

Expand full comment

Similar, broader experiences: what constitutes a "good" annual bonus? It depends on your salary. How big an improvement in physical comfort is, say, a clean dry pair of socks? It depends on whether you're watching TV and spilled a glass of water, or are hiking down from a glacier in a sleetstorm. How much are you willing to gamble in Vegas? It depends on what your history of gambling losses is, and also your wealth. How exciting is an increment in intimacy with a new partner? It depends on whether this is the first date or the first anniversary. How outraged are we by a politician's lie? It depends on our recent experience with political honesty, how rare or common it is.

Humans just naturally respond emotionally to dx/x more than dx, so off we go to logarithms and exponentials. The strange thing is we have such a hard time thinking *consciously* this way, although emotionally we do it routinely.

Expand full comment

Health trends could also be included as an example. But maybe that's why first-principles thinking is important: not "volcanoes don't usually erupt" but "this is how rocks must look before erupting".

And maybe we can even forecast unforeseen behaviors.

Expand full comment

This is combining several very different sorts of situation. The guard has a well-defined threat and a straightforward way of checking on it. The only costs are his effort and (not mentioned) the risk of confronting a dangerous burglar. I've read that people who check things need to have some percentage of problem items so they'll stay alert.

Some of the others have weaker models (the volcanologists don't understand volcanic eruptions very well).

No one understands the economy very well.

The doctor needs to be checked against sick people, since recognizing ailments through palpating is part of her job. Admittedly, actors are enough to find out whether she's palpating at all, though not whether she's paying attention when she does it.

Fat people have a substantial chance of running up against Doctor Rock-- they are frequently told to just lose weight (or to also lose weight) regardless of their symptoms. There are also Doctor Rocks (I don't know if they're the same ones) who have problems taking exhaustion or pain seriously.

I've also heard that experienced mountain climbers and such are in more danger than less experienced ones. They've been climbing for years, and all the onerous precautions against relatively rare events don't seem to make any difference.... until something goes wrong. For all I know, *some* of the precautions aren't worth it or even make matters worse, but just noticing a lack of disaster isn't enough to know what not to do.

Expand full comment

You’re drunk, go back to bed. Lol

Expand full comment

This is very good. I feel like I've seen something similar expressed in terms of payoff matrices, where there's a fairly good payoff if you do the obvious safe thing (e.g. hiring the credentialled candidate, investing in an index fund, maybe predicting that the status quo will continue) and get the expected result, an only-moderately-bad payoff if you do the safe thing and it goes badly, and a very bad payoff if you do the unusual thing (hire the weirdo, invest in an unproven startup, maybe predict the black-swan disaster) and it turns out badly.

But the closest thing I can find is Scott's review of Inadequate Equilibria (which is closer to what I think I'm remembering than anything in Inadequate Equilibria itself):

"... central bankers are mostly interested in prestige, and for various reasons low money supply (the wrong policy in this case) is generally considered a virtuous and reasonable thing for a central banker to do, while high money supply (the right policy in this case) is generally considered a sort of irresponsible thing to do that makes all the other central bankers laugh at you. Their payoff matrix (with totally made-up utility points) looked sort of like this:

LOW MONEY, ECONOMY BOOMS: You were virtuous and it paid off, you will be celebrated in song forever (+10)

LOW MONEY, ECONOMY COLLAPSES: Well, you did the virtuous thing and it didn’t work, at least you tried (+0)

HIGH MONEY, ECONOMY BOOMS: You made a bold gamble and it paid off, nice job. (+10)

HIGH MONEY, ECONOMY COLLAPSES: You did a stupid thing everyone always says not to do, you predictably failed and destroyed our economy, fuck you (-10)

So even as evidence accumulated that high money supply was the right strategy, the Japanese central bankers looked at their payoff matrix and decided to keep a low money supply."

...and a commenter (Sniffnoy) adding "The Bank of Japan situation mentioned generalizes to the whole “nobody ever got fired for buying IBM” idea — in cases where you’ll be blamed if you try something new and it goes wrong, and won’t be blamed if you try conventional wisdom if it goes wrong, this disincentivizes going against conventional wisdom even if that’s the right thing to do."

Expand full comment

This is the basis for all of Bryan Caplan’s bets I believe.

Expand full comment

I think the distinction between individuals and organizations/institutions is important. All of your fables posit an individual whose predictions fail at some point. My feeling is that 99.9% predictions are just too hard for pretty much any individual, and the only way to make those kinds of distinctions is to have systematic study and an institution to enforce and implement the disciplinary knowledge gained thereby.

Obviously that leads to lots of people having the same knowledge, as they've learned it in the same institution, so they are not independent experts.

Expand full comment

>If you want you can think of a high school dropout outperforming a top college student as a “black swan”

You can, but I give 99% chances of making a very muscular Mediterranean writer angry for misusing his concept. But he also probably didn't do a lot of cardio, so you can safely run away from him. He'll have to take a taxi to chase you, and then will get derailed by taking investment advice from the driver.

Expand full comment

I Am a Rock (https://youtu.be/O4psVQHsUq8)

This post makes me wonder about how we might design prediction markets and questions to get useful information in cases where something is extremely unlikely in the short term, but relatively likely over extremely long periods of time. For example in the case of the Queen, it seems hard to design incentive systems that will reward the honest volcanologists at the expense of the rock-worshippers. About once in a volcanologist's lifetime the probability of the volcano exploding goes from 1/1000 to 1/10, and it seems like 90% of honest volcanologists will die without ever "winning" in a prediction market of whether the volcano will explode next year. It seems like a sad and expensive career to be an honest volcanologist.

Expand full comment
Feb 9, 2022·edited Feb 9, 2022

Worse than that: prediction markets have a notorious hole in that they can only predict P(X|prediction market still exists AND currency wagered is still valuable AND bettor is still alive), which makes them useless for a lot of the important questions.

First condition is because if you bet correctly that the prediction market will cease to exist before the question resolves, you can't collect your winnings - this one can be almost fully alleviated with clever failsafe arrangements, but you do have to use them.

Second condition is because if you bet correctly that the currency wagered will suffer hyperinflation, you can't collect anything of value - this one can be partially alleviated by use of a betting currency that's independent of the question (e.g. when betting whether the US dollar will crash, take bets in rubles or gold or bitcoin), although currencies' values are rarely totally uncorrelated (the correlation can be negative, but it's rarely zero) so you'll still have some kind of skew to these sorts of questions (though not as bad as betting on USD hyperinflation *in USD*).

Third condition is because if you bet correctly on your own death, you can't collect your winnings. This one is the one that makes prediction markets useless for dealing with X-risk, because all X-risk outcomes get discounted 100% by a selfish player (i.e. they duplicate The Rock; insert "you were the chosen one" meme here), and it's almost impossible to work around. The only way I could see to circumvent this one would be betting years in Purgatory or something, which would require a common metaphysical framework and a common religion that doesn't ban gambling with your immortal soul (good luck with that one).

Expand full comment

Mostly we're just bad at measuring accuracy. We tend to think "percentage of correct responses" tracks accuracy, even though it does a poor job at it. We then reward experts according to this measure of accuracy.

One adjustment we could consider in these yes-or-no situations is to weigh positives and negatives equally. Let's say the volcano is about to erupt 5 times out of 100. The traditional method of estimating accuracy gives the rock a score of 95%. The better method gives the rock a score of 50%, as expected from a rock. Contrast with the expert who can predict the eruption 80% of the time, but is flipping a coin when there's no imminent danger. The old method gives him a score of 51.5%. The new method gives him a score of 65% - better than the rock. (Add some more statistical magic and you quickly get to signal detection theory).
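
The arithmetic above, spelled out (same assumed rates as in the comment; nothing here is from the post):

```python
# Eruptions are imminent 5% of the time; the expert catches 80% of them
# but flips a coin the rest of the time.
base = 0.05
rock_accuracy   = (1 - base) * 1.0 + base * 0.0   # 0.95
rock_balanced   = (0.0 + 1.0) / 2                 # 0.50: no real skill
expert_accuracy = base * 0.8 + (1 - base) * 0.5   # 0.515
expert_balanced = (0.8 + 0.5) / 2                 # 0.65: beats the rock
print(rock_accuracy, rock_balanced, expert_accuracy, expert_balanced)
```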

The situation above also suggests that cost-effective behavior could look like relying on the rock in some situations. Evacuating the city every time the fictional coin-flipping volcanologist's coin hits tails may well be more expensive than the one-time destruction of the town.

It also doesn't quite work if the event is too rare. What if the volcano has never been about to erupt in your lifetime? Then an expert's accuracy is indeterminate according to the better method. Not enough data. By taking more volcanoes and more experts into account, it becomes possible to calculate a collective accuracy score, I suppose. This does very little for our ability to evaluate individual experts, though.

But all this talk of better experts might well be a massive shortcut for a better rock. What we want might be a rock that says something like "using the composition of the lava, recent seismic activity, and 24 other variables, calculate a score; if the score is above 1, probably you've got a volcano about to erupt". The psychologist Paul Meehl has already shown that actuarial tables (complicated rocks) tend to be better at predicting stuff than experts, and that was in the 50s, way before machine learning. But we're back at the starting point : incentives sometimes say "stick to the simple rock".

Lots to think about in this one.

Expand full comment

Very nice essay!

The examples make this behaviour inescapably clear to anyone paying attention.

To this 'laziness' add greedy players encouraging it when it suits them, then add institutional capture, and we get close to an explanation of today's kakistocracy.

Expand full comment

Fine. But what’s the difference between a talking rock and Just Another Fucking Rationalist (JAFR)?

Consider:

JAFR security guy starts his shift believing that the probability of a robbery in the next 8 hours is very low. After 2 hours of working on the crossword, he hears a noise that causes him to update his priors and increase his assessment of the probability of a robbery. He then calculates that if he goes to investigate, there is a chance he will not be available to give CPR should one of the cleaning crew collapse and that the expected costs of investigating are lower than the expected benefits of saving the janitor’s life. After double-checking his math, he goes back to the puzzle.

JAFR primary care physician starts the exam believing there is a low probability that this unremarkable patient has some rare cancer. During her exam the patient complains of indigestion, leading the doctor to update her assessment of the probability of cancer. She considers sending the patient for a CAT scan but then remembers that the facilities are very busy and that sending patients like this one for further tests will make it harder for people with more obvious symptoms to schedule a scan. The delays and hassles increase the likelihood that someone will not receive a timely diagnosis, and so the expected benefits of the test are less than the expected costs. After double-checking her math, she advises her patient to avoid Taco Bell after a hard night of drinking.

JAFR skeptic, JAFR interviewer, JAFR Queen…well, you get the idea.

But now consider this:

A bunch of talking rocks are feeling guilty about taking money for following heuristics and so they sign up for a rationalist seminar on Bayes Rule. During the first break, they begin to talk.

The first rock says, “Math is hard. If I do this sort of thing, I’ll be too tired to work the crossword.”

The second rock says, “No, once you get into the habit of thinking like a Bayesian, the calculations are easy. The problem is figuring out all those conditional probabilities”.

The third rock says, "Wait, what? Isn't that what we're doing already? We can only know those conditional probabilities from our experiences on the job. We're smart rocks and so we base our heuristics on those experiences. Of course some of our fellow rocks are really dumb and they've given up paying enough attention to the true conditional probabilities. But some of these JAFRs are really dumb and don't pay attention to experience either."

The rocks skip the rest of the seminar and spend the afternoon in the hotel bar watching a basketball game and betting on whether players will make their second free throw.

Expand full comment

"Cynicism is a theory of everything" - Rutger Bergman

Expand full comment

meta comment: since this article was posted, has anyone else experienced this substack changing its layout radically? It seems to have happened across devices for me...

Expand full comment

Yes. It's closer to what SSC used to look like, so I think it was a deliberate choice.

Expand full comment

Not quite the same thing, but this calls to mind having someone designated to count the sponges inserted into a patient during surgery. The surgeon would *almost* never get it wrong, but it's a major problem if he does.

Or the nurse verifying DOB each time before administering medication. Yeah same patient as yesterday but this seemingly pointless check can prevent the rare catastrophic error.

Expand full comment

Gawande's _The Checklist Manifesto_ is excellent about that sort of thing.

Please note that a good checklist takes a lot of thought and checking (are there checklists for checklists?). It's not just making up rules.

Expand full comment

That kind of checking did help when my father was in hospital, they wanted to give him antibiotics but checked with us first "is he allergic to anything?" and we went "yes, penicillin". So they changed what antibiotics they were going to give him.

That's information that should already have been on his records, probably was, but without the re-checking he would have ended up having an allergic reaction while already unwell.

Expand full comment

We can, without loss, replace every bioethicist with a rock that says "Don't."

Expand full comment

Are you saying that bioethicists *should* say "Don't" to every proposal (and so we should not do those things), or that they *do* say "Don't" to every proposal (and so we should ignore them)?

In the former case, consider e.g. challenge testing for COVID vaccines. Or the simple, harmless, and potentially useful research proposal that Scott couldn't get past the IRB during his residency. I'd say >>1% of proposals that are run past bioethicists are things we should do.

In the latter case, here's a bioethicist saying that we absolutely should secretly lace vaccines with mind-control drugs and then force everybody to take those vaccines without exception.

https://philarchive.org/archive/CRUCMB

Expand full comment

“NOTHING EVER CHANGES OR IS INTERESTING”, says the rock, in letters chiseled into its surface.

Eccl. 1:9

Expand full comment

I recall a short aside in The Big Short (movie) when they introduced Brownfield Capital. It went something like, people usually underestimate the chance of a rare, bad thing happening. So Brownfield bet that many bad things would happen. They didn't win often, but when they did, they won big.

It seems that the way to defeat this heuristic is to increase the number of times you play. Don't be one security guard, have a security company that guards 10,000 stores. Don't monitor one volcano, monitor all volcanoes. Obviously that's not always possible.

Also, it illustrates that it's important to have odds, bet sizes, etc so the metric is "Am I making $ and/or value over time?" not "Am I right the vast majority of the time?".

Expand full comment

Venture capital solved this by making very many, very unlikely, very potentially important decisions. It seems that these problems can be solved by similar dynamics in prediction markets. While it may take 500 years for a volcano on one island to explode, an international firm can bet against 100 islands saying their volcanoes won't explode, and in 1/(1-(499/500)^100) ≈ 5.5 years, one such volcano will explode.

And even if we limit the market's scope to *just* the island, there are likely very many 1/500 events which everyone just rounds to 0/500, which someone could make a similar killing on if they were calibrated enough.
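
A quick check of that figure (assuming the 100 islands are independent):

```python
# Each island erupts with probability 1/500 per year.
p_any_eruption = 1 - (499 / 500) ** 100   # ~0.18 chance of >=1 eruption per year
print(1 / p_any_eruption)                 # ~5.5 years until the first payout
```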

Expand full comment

Though this doesn't do much to solve the problems associated with the anthropic principle.

Expand full comment

What's the point of expertise if the expertise isn't used to test the drug, analyze the lava, etc.?

Why should an expert have an opinion not motivated by deductive reason, but instead engage in loose pseudo-inductive speculation?

Expand full comment

I'm working on a proposal for "social scoring rules", in which people get rewarded for their contribution's value after accounting for everyone else's contributions.

Expand full comment

The market can stay irrational longer than you can stay solvent

Expand full comment

Reading this on my phone, it cut off at “ The Queen died, her successor succeeded, and the island kept going along the same lines for let’s say five hundred years.” It felt like a more darkly comedic ending, I didn’t realize there was more article at first. Fun post :)

Expand full comment

I think that the Cult of the Rock might have secret ties to the BETA-MEALR Party.

Expand full comment
Feb 9, 2022·edited Feb 9, 2022

But is it totally true that the security guard who never checks to see if the noise is wind or robbers, "provides literally no value"? If the robbers do not know he never checks, they may be less likely to rob buildings with a security guard than buildings that clearly have no security guard. Maybe kinda like putting up a sign that ADT protects your property when in fact you have no contract with ADT may deter some robbers. Maybe not much value . . . but not "literally no value."

Expand full comment

Suppose you are an honest vulcanologist who sees weird rocks. What is the correct rhetorical strategy?

Expand full comment

aka, Nobody gets fired for hiring McKinsey?

A finance guy was telling me the other week about reputational herding problems that seem maybe not identical but related.

You have info that some company will fail, but all your friends are smart investors and are buying. The incentives are shifted: if you follow the other guys, even if you are wrong, you can tell your bosses "hey, everybody missed this." If you strike out on your own... well, you'd better be right.

Expand full comment

EDIT: At a meta level I think you just hit a conversational signal flare.

You wanted to talk about x, one of your sub-examples was y (black swans), and many replies flocked to that, because the conversational wagon ruts are deeper on that adjacent issue.

I would call these conversational wagon ruts but that sounds too pejorative. It's good for people to talk about interesting related issues.

This is more about how hard it is to channel a conversation to smart topics very adjacent to well trod territory.

Expand full comment

Reminds me of link prediction in networks. (https://arxiv.org/pdf/1505.04094.pdf)

Say you are trying to predict whether 2 accounts in a social network will become friends. The baseline of them becoming friends is so low (i.e. most people are not friends) that, using traditional evaluation methods, it is really hard to beat the method that just says 'nobody will ever become new friends'.
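
A tiny illustration of that baseline problem, with invented counts (one million candidate pairs, 100 real new friendships):

```python
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = np.zeros(1_000_000, dtype=int)
y_true[:100] = 1                  # the rare pairs that actually become friends
scores = np.zeros(len(y_true))    # "nobody will ever become new friends"

print("accuracy:", accuracy_score(y_true, scores >= 0.5))  # 0.9999
print("AUC:     ", roc_auc_score(y_true, scores))          # 0.5: no ranking skill
```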

Expand full comment
Feb 10, 2022·edited Feb 10, 2022

I found this utterly unconvincing. The essay concludes:

"Whenever someone pooh-poohs rationality as unnecessary, or makes fun of rationalists for spending zillions of brain cycles on “obvious” questions, check how they’re making their decisions. 99.9% of the time, it’s Heuristics That Almost Always Works."

There is a perplexing imbalance between the weight of the conclusion (almost all non-rationalists / people who pooh-pooh rationalists could be replaced by a rock that says the same thing all the time) and the weight of the evidence presented in the argument. Because, you know, all the examples presented are *invented make-believe stories*.

The story that hits closest to real life is maybe the interview strategy of "look at name of the school and length of resume", which I believe is actually a pretty okay strategy unless you have better ones. (As long as you do the due diligence and check the prospective employee isn't lying about their credentials and you keep your internal list of "good schools" up-to-date.) Because it is a systematic strategy that picks a genuine signal and where you have less opportunities to mess up by giving in to your ad hoc sentiments, which one should not trust unless one knows they have well calibrated skill at judging people.

The other essay about the author's experiences in grant-making is much more persuasive, because it is based on something that looks like an actual experience that happened in real life.

Expand full comment

I was reading your original post and looking at the tax issues, especially your gift tax issue (I'm an attorney, but not a tax/charity one). Did you ever think about forming a charitable org yourself because it would be cheaper to do that than pay the gift taxes? Secondarily your supporters could make deductible donations to that charity to increase your giving power.

Expand full comment

As long as the security guard doesn't use the time gained by his heuristic to draw eyes on paintings: https://www.bbc.com/news/world-europe-60330758

Some people have mentioned Taleb, but I am also wondering what a guy like W. E. Deming would have said about this.

Is the 99.9% heuristic being used in a state of "statistical stability" or not? Is getting up to check "every" sound really just 100% inspection? What can we learn from Chapter 15 of "Out of the Crisis".

What does the loss function really look like if we include the 4,5,6+SD event? And is the 99.9% heuristic really a 99.9% heuristic or is it a 90% heuristic masquerading as a 99.9% heuristic?

As an attorney who has cross-examined experts for 30+ years: we generally should rely on expertise SUBJECT to vigorous cross-examination by experts in the cross-examination of experts.

There is always a risk of being wrong.

Expand full comment

Topeka is in Kansas, not Ohio

Expand full comment

I think this is too hard on rocks. In many of these cases, the rock is genuinely doing a good job and deserves the accolades.

Consider the interviewer: if the rocklike interviewer can pick candidates with a better expected value by hiring whoever has the best credentials than their colleagues can using nuanced methods, the rocklike interviewer is matching a rock while their colleagues are underperforming the rock. Sure, there's no point to having a person, but that might just mean that the company should actually replace their interviewers with a rock--i.e. stop using interviews to evaluate candidates.

The heuristic "the volcano never erupts" will be catastrophically wrong occasionally, but it's not obvious that it underperforms actually trying to predict when the volcano erupts. After all, eruptions are so rare that nobody can use systematic measurements to predict them (in this hypothetical), so it might actually be better to stop worrying about volcano eruptions than to evacuate the island every time someone gets nervous.

In fact, I've productively hired rocks for many of my most important tasks. Instead of trying to beat the stock market, I give my money to a rock with "just buy a total stock market ETF" painted on it. It works great, and is very cheap. Instead of looking for novel nonprofits, I consult a rock with "just give to the givewell maximum impact fund" on it. It's simple, and I save several lives a year.

Expand full comment

This is what hedging is for:

you make the usual bet on the ordinary happening,

and you make a small side bet on the 0.01%.
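A back-of-the-envelope sketch of that idea (the probabilities, returns, and payoff multiple below are all hypothetical):

```python
# Put almost everything on the ordinary outcome, a sliver on the tail.
p_tail = 0.0001              # chance of the extreme event
main_return = 0.05           # 5% gain when the ordinary thing happens
hedge_multiplier = 5_000     # long-shot payoff if the tail event hits

bankroll, hedge_frac = 100.0, 0.005
main, hedge = bankroll * (1 - hedge_frac), bankroll * hedge_frac

ordinary_outcome = main * (1 + main_return)   # hedge expires worthless: 104.475
tail_outcome = hedge * hedge_multiplier       # main bet wiped out: 2500.0

ev = (1 - p_tail) * ordinary_outcome + p_tail * tail_outcome
print(ordinary_outcome, tail_outcome, ev)
```

The hedge gives up about half a percent of the ordinary upside, but it turns the ruin scenario into a windfall.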

Expand full comment

I'm surprised nobody has mentioned this so far. Accuracy is a bad metric to use in a classification problem when the classes are highly imbalanced. In prediction problems like estimating click-through rates on online ads, where clicks occur at rates of around 0.1%, the metric used is often logarithmic loss, which penalizes a prediction more heavily when it is confidently wrong.
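A minimal sketch of the difference, using synthetic data with a hypothetical 0.1% positive rate:

```python
import numpy as np

# 1,000 impressions, exactly one click (0.1% positive rate).
y = np.zeros(1000)
y[0] = 1

def log_loss(y, p):
    p = np.clip(p, 1e-15, 1 - 1e-15)  # guard against log(0)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

p_overconfident = np.full(1000, 1e-6)  # near-certain "no click" everywhere
p_calibrated = np.full(1000, 0.001)    # predicts the true base rate

# Thresholded at 0.5, both models predict "no click" on every impression,
# so both score an identical 99.9% accuracy.
print(log_loss(y, p_overconfident))  # ~0.0138: hammered for one confident miss
print(log_loss(y, p_calibrated))     # ~0.0079: better, despite equal accuracy
```

Accuracy can't tell the two models apart; log loss immediately punishes the one that was confidently wrong about the rare event.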

Expand full comment

The Skeptic does perform at least one valuable service: they know what the conventional wisdom actually is and can explain it. Which is more than a rock can do.

Expand full comment

Super interesting post, and the comments look fascinating. I haven't gotten a chance to read them yet, because I scrolled all the way to the bottom to see if/when someone pointed out that:

Volcanologists study volcanoes.

Vulcanologists are into Star Trek.

Expand full comment

Soooo... checklists?

Heuristics + Process discipline?

Expand full comment

Scott Alexander, this is a gross exaggeration, using terms like 99.9%. How did you even arrive at that number? And frankly, you're literally saying there is no point to their existence. That's callous, unfair, and perhaps even wrong if you are missing something. I think you are probably right about the scientific part of the problem, but you discount something. That would be OK if you were talking about some rock. And I feel you built a straw-man argument by cherry-picking certain things related to some professions.

The security guard might be useless, but you can't know whether he is in fact a deterrent to small-time hooligans or small-time burglars; such actors might be dissuaded by his mere presence. The doctor performs many other duties besides the crap people come up with, and can actually be very helpful if you already have a diagnosis, something your rock can't do. Regarding the interviewer: well, if he thinks like you say he does, he's an idiot, and a badly trained one at that. There are recruiters who are able to filter out people unfit for a role based on simple conversation, by asking questions. Very aggressive candidate? Off the list. They can figure out certain things that are obvious. So they are not entirely useless; you can't claim their existence is pointless.

Expand full comment

But the security guard could use a slightly better heuristic: he will not investigate a single sound, but he will investigate a second sound that happens within 15 minutes of the first. The question then comes down to this: when the Pillow building does get robbed, how often can the robber limit the sounds he makes to only one?

A robber who can make no sound at all is irrelevant to this discussion; the guard cannot catch him, as he is basically invisible. The guard could do random patrols, but then the odds of catching the silent robber depend on how often he patrols.

If 99% of robbers who make one sound also make additional sounds, this heuristic will work well for the guard:

1. He will be able to avoid getting up almost all the time.

2. The rare times there is a robber in a multi-decade career, he will catch him.

3. It would take centuries before he encounters a successful robber who evades him.

4. No rock could employ his revised heuristic; therefore, to the degree that stopping the theft of low-margin goods (something that happens less than once a decade) is of value, he provides it.

A slight tweak to an almost-always-works heuristic can shrink the difference between "always" and "almost always" to microscopic levels for minimal effort.
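A quick simulation of the revised heuristic (the shift length, noise rates, and robbery probability below are all invented for illustration):

```python
import random

WINDOW = 15  # minutes: investigate a second sound this close to the first

def second_sound_within_window(times, window=WINDOW):
    times = sorted(times)
    return any(b - a <= window for a, b in zip(times, times[1:]))

def one_night(robber_prob=1e-4, multi_sound_prob=0.99):
    """Return (robber_present, guard_investigates) for a 480-minute shift."""
    noises = [random.uniform(0, 480) for _ in range(random.randint(0, 2))]
    robber = random.random() < robber_prob
    if robber:
        t = random.uniform(0, 465)
        noises.append(t)
        if random.random() < multi_sound_prob:  # 99% of robbers make a 2nd sound
            noises.append(t + random.uniform(1, WINDOW))
    return robber, second_sound_within_window(noises)

nights, getups, robbers, caught = 200_000, 0, 0, 0
for _ in range(nights):
    robber, investigate = one_night()
    getups += investigate
    robbers += robber
    caught += robber and investigate
print(f"got up {getups} times in {nights} nights; caught {caught}/{robbers} robbers")
```

Under these assumptions the guard gets up on only a few percent of nights (when stray noises happen to cluster), yet catches essentially every robber who makes a second sound, which is the commenter's point: the revised heuristic is nearly as cheap as the rock but strictly better where it matters.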

Expand full comment

Wrote some 16-months-late thoughts on ideas adjacent to this: https://scpantera.substack.com/p/why-shouldnt-some-heuristics-always

tl;dr: what do you do IRL if these kinds of situations are a regular part of your job, but the extremely rare failure mode is also really severe, and it's also just not practical to behave as though this is a black swan situation?

Expand full comment