
buy all the toilet paper


We got through fine without TP. Selling stocks was more profitable though.


I actually stocked up plenty on toilet paper in March of 2020, with the reasoning "apart from taking up space in a way that doesn't matter to me and the minuscule opportunity cost of paying now rather than later, this costs me nothing as the toilet paper will be used up _anyway_ eventually, but ending up without toilet paper is a reasonably big deal".

Some still remains, but it's not like that was ever a problem.


That's nice for you. Now look at the bigger picture.

When people started buying toilet paper in bulk, the supermarkets struggled to keep their shelves stocked. This caused other people to be unable to buy toilet paper when they went shopping. The negative consequences for them ranged from the mild inconvenience of having to go again at a later time, through the mild fear that they might run out before getting some, to the very "reasonably big deal" that you wanted to avoid for yourself.


That actually didn't happen here - a benefit of living in a major paper-producing country, I guess. Supply thinned out a bit briefly (purely because of logistics), but shelves never went empty.

Further, securing my supply made for a minuscule, negligible part of the entire picture. Martyrdom - especially over something like toilet paper - doesn't seem very satisfying.


Okay, let's look at the even bigger picture. Part of the TP shortage came from people spending more time at home and less time at restaurants. So there was more demand for the toilet paper that people used at home and less demand for the type that restaurants used. But if you weren't picky about the type of toilet paper that you got, you could order TP from a restaurant supply store and get your rolls. This situation was, weirdly, not as hard to solve as people made it out to be.

Yes, stockpiling contributed to the problem. But it wasn't the primary cause.

Also, TP manufacturers could have ramped up production. But that would have meant they'd have had to ramp down production below normal later. Which would have cost extra money overall that anti-gouging laws made it illegal to recoup. So anti-gouging laws also contributed to the shortage. And a general willingness to use the law to control costs, even if such control resulted in shortages, contributed to the shortages.

So part of the 'big picture' is, arguably, that the public favors controlling costs over preventing shortages.


This is worth generalizing to *all* non-perishable goods.

When downstream business owners complain about the fragility of JIT economics in their upstream supply chain, I ask them "well, why don't YOU hold stock in YOUR warehouse to smooth over the glitches?" Usually the answer is "I can't afford that!" Me: "Neither can your suppliers."

At the end-customer level, always stay fully stocked up on nonperishables. It's cheap. Stop treating the local grocery store as your personal goods-storage warehouse. If you say "But I don't have room...", move someplace cheaper.


I'd like to make amendments to anti-gouging laws that acknowledge that prices will understandably increase during a shortage. Room in a warehouse is cheaper than room in a residence, so it makes perfect sense for people to want businesses to do their stockpiling for them, but we have a system that often makes it illegal for anyone but the end customer to stockpile.


>It’s better to miss the chance at making an early fortune on Bitcoin than to lose what you do have chasing other hot new investments that fail.

Remember that investment is asymmetric. You can have an arbitrarily high profit (someone who invested in Bitcoin in 2011 could have realized a 50,000x return!), while at worst you'll lose only 100% of what you put in.

This is what angel investors do: they seek out opportunities for huge multiples, and have deep enough pockets to ride out the failures.

Failure for an angel investor doesn't mean 70% of your portfolio falling flat. Failure means missing out on the next Amazon.
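As a toy illustration of that asymmetry (the figures below are invented, not taken from any real portfolio), a single large winner can carry a whole book of failed angel checks:

```python
# Toy numbers (assumptions, not data): 100 angel checks where almost all fail.
checks = 100
check_size = 25_000
total_invested = checks * check_size             # $2.5M at risk

# Suppose 99 checks go to zero and a single one returns 300x.
portfolio_value = 99 * 0 + 1 * 300 * check_size  # $7.5M

print(f"invested ${total_invested:,}, returned ${portfolio_value:,} "
      f"({portfolio_value / total_invested:.1f}x)")
# Missing the one 300x deal, not eating the 99 zeros, is what sinks the portfolio.
```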


Not always true, actually. There are investments (well, speculation) where you're on the hook for unlimited losses but only a capped gain - see the GameStop short squeeze.


If you knew about COVID a month early, you could stock up and maybe get some home repairs done-- not a lot, but something.


N. N. Taleb wrote a book about this.


I read the book but I don't think it was about the thing where the existence of experts correlated with a heuristic causes people to update away from the heuristic. Maybe I missed it.


I think the book itself was an update away from the expert-endorsed "assume stability" heuristic.


Taleb's point was that lots of experts set up rocks all the time by assuming away tail probabilities. "Almost everything happens within 2 standard deviations, so we'll set our risk tolerance at 3 standard deviations and assume nothing will ever happen outside that window." Except that guarantees that when something DOES happen outside the window, which it will eventually, you won't be prepared.


I’m starting to miss the young woman who likes to be photographed nude.


I'd say it's less about ignoring tail probabilities than using the wrong probability distribution. Making a Rumsfeld analogy, the tails of a distribution are known unknowns, whereas Black Swans are unknown unknowns.


I think Scott's exercise is one of converting known unknowns into unknown unknowns. How do you take something you already know and 'unknow' it? You ignore information you have about a frequency distribution and convert it into a simple heuristic whose approximation has less information than you started with.


All heuristics are approximations that reduce information. That's the goal of generalizing. When I want to drive somewhere, I look at a map, not a 1:1 scale model.

The trouble comes from ignoring important information, the value from ignoring irrelevant information.


The normal distribution has exceptionally-thin tails, so assuming 6-sigma events won't happen in a normal distribution is actually quite safe (1 in 500 million). As such, this is a severe mischaracterisation.

Taleb's *actual* point comes earlier in the reasoning chain than that. You can't observe the standard deviation of a whole class of entities, after all; you *derive* it from observations of a *sample* and from *assumptions*. One of the more common assumptions is "this class follows a normal distribution". The problem is that a lot of things in the real world do not follow a normal distribution, and your predictions both about "the standard deviation of the class" and the more relevant "likelihood of an event past a certain point in the tail" are going to be wrong if you put in that assumption and it's untrue. To take a particularly-obvious example, the real distribution might be Cauchy. The Cauchy distribution has infinite standard deviation, so using the standard deviation of your sample (which is finite) to approximate the standard deviation of the whole class won't work (https://upload.wikimedia.org/wikipedia/commons/a/aa/Mean_estimator_consistency.gif). Six-sigma events don't happen in Cauchy distributions, either - but 6*(what you would derive as sigma if you falsely thought it was normal and took a small-ish sample) happens orders of magnitude more often than "1 in 500 million" and thus you've set yourself up to fail.
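A quick simulation can make this concrete (a sketch of mine, with arbitrary sample sizes): estimate sigma from a modest sample, then see how often later draws land beyond six times that estimate under a normal versus a Cauchy.

```python
import numpy as np

rng = np.random.default_rng(0)
n_fit, n_test = 1_000, 10_000_000   # arbitrary sizes, for illustration only

for name, draw in [("normal", rng.standard_normal), ("cauchy", rng.standard_cauchy)]:
    sigma_hat = draw(n_fit).std()   # the sample always yields some finite "sigma"
    tail_rate = np.mean(np.abs(draw(n_test)) > 6 * sigma_hat)
    print(f"{name}: fraction of draws beyond 6 * sigma_hat = {tail_rate:.2e}")
```

For the normal the true rate is about 2e-9, so the count is usually zero; for the Cauchy the observed rate typically comes out orders of magnitude higher, which is exactly the trap described above.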


I didn't say anything about 6-sigma, and Scott's original example wasn't even to 4-sigma (except in the recursive example, where experts used their own reasoning to artificially inflate their own confidence), so I'm not sure where you're getting the "severe mischaracterization".

I think you have a good point about people assuming a random sampling is normally distributed, and that their sample exactly fits that normal distribution. I think you can make that point without attacking what I was saying, which was also true.

Taleb makes a lot of points about statistical distributions, especially about the way people make errors when considering how to treat the tails of the distribution. It's true that he makes the point you called out. But it's inaccurate and limiting to say that Taleb has one point in this area. One of his points is what I was talking about, where experts assume tail distributions don't matter because they're infrequent, but as Taleb points out that matters a lot if the infrequent - but very real - event wipes you out.

Another point he makes is that you can't assume the game will act independently from your own actions. In the case of the security guard, you're much more likely to be robbed if people see a rock sitting in the security guard's chair - especially if that rock is painted with the words, "THERE ARE NO ROBBERS". In that case, your probability goes from 99.9% (assuming Scott's hypothetically-omniscient-for-the-sake-of-the-example numbers) that it's just the wind to a much higher robbery likelihood. In other words, your actions told the robbers to come to you.

Another problem is that even 6-sigma can fail, depending on how many times you go back to the probability well. A 1-in-500 million chance of a civilization-destroying meteor this YEAR might as well be zero. Take the same probability every SECOND instead of every year, and you have something to worry a lot about because there are enough seconds to make the event probability likely within your lifetime.

It's a nuanced subject, with a lot of points that could be made. The point I was making is that when someone assumes that a nominally low probability=no probability that guarantees they're unable to consider any of those nuances about the tail end of the distribution. It effectively assumes away much of the distribution. That includes your observation that the distribution may not be normally distributed. Once you simplify away the statistics, you become impervious to all statistical arguments.
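The "probability well" arithmetic in the meteor example, spelled out (illustrative numbers only):

```python
p = 1 / 500_000_000                        # per-draw chance of the catastrophe
years = 80
seconds = int(years * 365.25 * 24 * 3600)  # ~2.5 billion draws in a lifetime

print(f"drawn once per year over a lifetime:   {1 - (1 - p) ** years:.1e}")    # ~1.6e-07
print(f"drawn once per second over a lifetime: {1 - (1 - p) ** seconds:.0%}")  # ~99%
```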


I don't think you made a very good argument on that part, tbh. There's already a lot of noise around, so a genuine consensus of experts is unlikely. It's much more likely when it's just a bubble and just their public opinions - like the set of famous virologists converging early on "labs cannot possibly be blamed", or public policy converging on... anything. Yes, then it applies.


I think you should be thinking of these scenarios as occurring in Extremistan, where most of the "weight" is concentrated in a single event, e.g., all "no eruptions" have relatively small positive value while "eruption" has a huge negative value. Taleb has argued that this asymmetry is what makes the "99.9% accuracy" label meaningless, since the whole value in prediction is getting the 0.1% correct!


I guess you are making sort of an opposite point. Taleb was saying that experts mostly consult the rocks while prudent, down-to-earth regular people get the risk analysis right.



I think there's some overlap but Scott did it better.


No. Joseph Henrich wrote a book about this.


Indeed. The underlying concept in every example stated in this article is ergodicity.


A problem is that in some of these examples, there is a cost being incurred for a putative benefit (detect the ultra-rare event) and in other cases no cost is being incurred. For example, the security guard is paid a salary and could be employed in productive work. But the only cost the skeptic incurs is the time it takes them to post mean things on Twitter (and the hurt feelings of the people insulted).

I don't think your litany of examples establishes that the "worse" outcome of these heuristics is the "false confidence" (what if it's not false?) rather than "expending resources on something useless".


My impression of the stories is that I thought something different about replacing the expert with the rock in each of these cases. There are some where we absolutely should just use the rock rather than the expert, but some where the costs of missing the real event are high enough that we should definitely consult the expert who tells us the volcano is erupting 10% of the time (provided that they almost always catch the real eruptions).


The mayor of New Orleans has a rock like this. The state of Florida, in charge of deciding whether to evacuate the Keys, does not.

Part of the difference is not the likelihood of the event happening, but the cost of reactions - for instance, if a nursing home evacuates (and that's the population that you absolutely don't want trapped in a drowning city), some of the residents will almost surely die. (And they would not have died if they had stayed in place.) Doesn't take too many false alarms to encourage people to break out the rock. Same with the risk of looting, or lost business, or kids' schooling.

'We should do this just in case it really is a crisis this time' is far easier said when one doesn't appreciate the cost of 'this'.


I've been quite addicted to following fusion power news for months and I have noticed there are people who say "we will have fusion in 10 years" and there are the skeptics who say "fusion has been 30 years away for the last 50 years" (and they've been saying roughly that for probably a good 20 years)

The skeptics have a real effect on social dynamics. Government investment in fusion is hard because even the optimistic scenario says it won't pan out until your political career is likely over. In this case the only tangible incentives you have relate to the reactions of the booster voters and the skeptical voters and the effect their opinions have on the general opinion of voters in favour of fusion investment or against it

At this point in the US, despite a "climate change is a big problem" administration being in power, it appears that private investment in fusion now rivals or exceeds government investment, despite 10 years being the likely minimum before knowing whether you have a potential success. It could be argued that the skeptic viewpoint has reduced the governmental time horizon to below the private investor time horizon, in a way that it shouldn't be.


Why should government have a longer time horizon than private parties?

Planning over long time horizons is hard. Government is hard. Trying to do both is even harder.

I appreciate that sewer socialism (https://en.wikipedia.org/wiki/Sewer_socialism) can work for some well defined and understood problems, like sewers. But trying to pick winners in technology or business for the long run is not something government does well.

Btw, climate change concerns aren't really any reason to look into fusion power. From that point of view, fission power already does everything we could want.

(Fusion is interesting for other reasons. And mostly over much longer time horizons than our climate change concerns.)

The only thing I'd want the government to do about climate change is perhaps institute a CO2 tax (or a cap-and-trade regime), and then get out of the way.


I agree government would do best by putting in a CO2 tax and getting out of the way. I would add a bridge policy on the way to higher mid-century CO2 taxes: a market-like payout for carbon reductions. So regarding fusion, if you innovate a design that comes onto the market, this government quasi-market system pays you a benefit, pegged to the mid-century carbon tax level, for tonnes of CO2 avoided as the technology rolls out. Focusing incentives on the maximum reduction in carbon at a fixed and rational cost, rather than immediate reductions at variable and higher costs, would be great.

My argument isn't necessarily that government should be expected to have a longer time horizon, but that at minimum its horizon could be longer than it is, depending on the heuristic dynamic at work among voters. Does it make more sense to spend another billion a year on fusion, for muted voter interest, or should we increase peanut subsidies (which outstrip fusion spending) with our limited budget, which will have quite a strong interest for those voters whom it concerns? Government certainly does things with very unclear time horizons, like particle accelerators, that would be either billionaire pet projects or non-existent otherwise, so it has a larger maximum (if not average) time horizon than private entities, and this could be applied to advancing technology to where it enters private horizons.

I'd like to believe that fission could be applied as much as it should, but it seems too uncertain to rely on. Germany, with all their reputation for maximum carbon reduction, replaced their nuclear largely with coal. The deployment potential for fusion is vastly higher than fission based on proliferation risk alone - richer countries might choose to fund fusion plants in developing countries where they never would with regular nuclear. So I find fusion interesting on the 'climate change and air quality in this century' horizon, aside from being incredibly interesting in itself.


"Why should government have a longer time horizon than private parties?"

Because the maximum time horizons of most private parties are too short for some socially valuable projects.

The private parties that are capable of having longer time horizons tend also to be very large concentrations of wealth, even monopolies (think AT&T's Bell Labs), and these have their own socially undesirable effects.


> Because the maximum time horizons of most private parties are too short for some socially valuable projects.

I am not sure that's true? Many people manage to save for retirement just fine.

And the stock market was perfectly happy with eg Amazon and Tesla taking their time to return money to shareholders, and still have big valuations. (You can find more examples, especially in the tech sector.)

See also how stock market valuations recovered very early in the pandemic, when the coronavirus was still raging; because markets could look beyond the current predicaments into the future.

(To be clear: I am not saying that all private parties have long time horizons. Just that many private parties are quite capable of having long time horizons.)

To add some speculation: what is often seen as a short term bias in eg the stock market is just an expression of investors not trusting managers:

For an investor it can be hard to judge how good a manager or her plans are. It's easy for a manager to defend vanity projects with some vague talk of longer term benefits. One way for investors to keep managers honest is to ask for steady quarterly results. Those are hard to fake and easier to judge than longer running plans.

Even easier to judge and harder to fake are steady quarterly dividends kept up over a long time. And if a company misses their expected dividend payment, that can be a sign that something is going wrong; even if you don't care about the short term at all, and invest purely for the long term.

As an analogy: if you are working at a company (especially a tech company), and they always used to provide free snacks to employees, but are now cutting back; that might be a strong signal that you should be jumping ship. Even if you don't care about eating snacks at all. It means things are going badly enough that they have to cut back on what used to be seen as trivial expenses.


Someone saving for retirement is not anything like investment in long-term research projects. (Not to mention that many people didn't successfully do so because it was in the hands of companies that could not meet their long-term pension obligations.)

I mentioned Bell Labs. Look at what they did from around the beginning of the last century through the middle of it, and see if you can find examples of other companies that have done similar things over several-decade terms.

Your example of it being easier for investors to insist on continuously good quarterly results seems to support my arguments.


When a private party spends or invests, they pretty much have to consider whether they're going to have enough money to put their kids through college in ten years, retire in twenty, etc. When the government spends or invests, the people actually making the decision can't afford to consider much of anything beyond how this will impact the next election cycle.


That's a reason to want government to have a long time horizon, not why it will. You are using "should" for "would be desirable to be true" not "can be expected to be true." I should remain in good health for at least another century, but I don't make plans on the assumption that I will.

If you give government tasks that require a long time horizon when it has a short one, it will use the authority to achieve its short-run goals. For a real example, the long-run task of slowing climate change was used as the justification for a biofuels program that converted something over ten percent of the world output of maize into alcohol, our contribution to world hunger — because that raised the market price of maize, benefitting farmers. Al Gore, to his credit, admitted that he supported the program because he wanted to be nominated for president and there was an Iowa primary.

We continue to do it long after environmentalists figured out that it doesn't actually reduce climate change — because it does still get farm votes.


Governments should be expected to have shorter time horizons than private actors. Long time horizons require secure property rights — you have to be reasonably certain that the slow growing hardwood trees you plant today will still belong to you, or someone you sold them to when they are ready to be harvested.

Politicians have insecure property rights in their political power. If you take a political hit today for a policy that will pay off in twenty years, you know that when it pays off the political benefit will go to whomever is in office then.


That seems plausible in general.

Though luckily the real world isn't necessarily quite as bleak. Eg if a politician does something now that only pays off in twenty years, but real estate prices or stock prices reflect that today, the politician can potentially get their political payout straight away.

In this case, the politician needed only a short time horizon, but was piggybacking on the longer horizon of markets.


I interned at a fusion lab in the late '90s; can confirm, the skeptics were saying exactly the same thing back then.


Economists call this the Peso Problem. https://en.wikipedia.org/wiki/Peso_problem_(finance)

The key here is that the price of an asset looks too low (say) because there is a tiny probability of a complete collapse. So what looks like a good trade (buy the asset) isn't really right because the catastrophe (that no one really predicts because there is no data to reliably predict it, so everyone relies on the heuristic of no collapse) every once in a while happens.


Indeed, a really good example of this is that the risk models for mortgage-backed securities in 2007 all contained a rock that said REAL ESTATE PRICES NEVER FALL.


That was a really really stupid rock though, because they had fallen plenty of times before. Wasn't like we came out of a 100 year period with only rising or flat real estate prices.

It was the establishment being arrogant that NOW they had figured it out, and the trend would actually be broken.


I think it was technically "US average residential real estate prices have not fallen since the Great Depression, ie for 70+ years" (and we have changed our statistical measurements since then so we can say "ever", when we actually mean "for as long as our stats have been collected").

The problem with this is that both Japanese and UK real estate fell in recent times - Japan in the early 1990s, Britain in the 1990s - and there was no principled reason for believing that the US market was different enough not to expect it to happen there eventually.

The separate UK market crash was almost entirely down to the very risky practices of one major UK lender (Northern Rock), which was run by someone who was high on crystal meth at the time. He managed to persuade lots of other lenders to take on some of Northern Rock's debt, and they also adopted less risky, but still too risky, practices to compete with NR.

So Northern Rock was lending 120% mortgages; competitors were lending 105% mortgages to compete.


>Northern Rock

Name checks out.


I've heard a lot of bad things about Matt Ridley, but nothing about drugs. Are you sure you didn't mean Paul Flowers of the co-op?


Oh, damn, yes I got them mixed up.


Not really. Or rather, the real estate slowdown happened long before the recession, and wasn't such a big deal.

Later on, the Fed (and in Europe the ECB) let nominal GDP collapse. And a minor slowdown turned into the Great Recession.

It's a no-brainer that if total nominal spending collapses, loans will default.

So the rock said something like 'the Fed is half-way competent'.

Compare the Fed's much more pro-active policy in the corona-recession. Instead of a big long recession, we got a steep recovery and a bout of inflation. Compare also how Israel and Australia largely escaped the Great Recession thanks to competent central banks.


My favorite version of this effect/question is "Why are there market orders?" That is, when I want to sell a share of stock, I have the choice of telling my broker "sell it ASAP at the current market price" (market order) or "sell it as soon as the market price rises to X" (limit order). If I place a limit order with some value of X that's, say, 0.1% above the current market price, I can be pretty confident that that order will execute quickly, just because of normal price volatility. That might make the extra 0.1% seem like "free money", and it might seem like market orders are just leaving that free money on the table. However, the problem with always using limit orders is that you're exposed to a large, rare downside: If you happen to place your order right before the price crashes, it might never get filled, and you'd end up riding the crash. In expectation, these rare losses should balance out the free money that you leave on the table by using just market orders. (Within some error bars related to transaction costs? I don't know how this actually works in practice.)
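Here's a rough simulation of that trade-off (every parameter below is invented for illustration): sell immediately at the market price versus quoting a limit 0.1% above it, with a small per-step chance of a crash while you wait.

```python
import numpy as np

rng = np.random.default_rng(1)

n_sims, horizon = 20_000, 100
start, premium = 100.0, 1.001          # limit price 0.1% above the current market
step_vol = 0.0005                      # ordinary per-step volatility
crash_prob, crash_size = 0.001, 0.20   # the rare, large gap down

limit_proceeds = np.empty(n_sims)
for i in range(n_sims):
    price, filled = start, False
    for _ in range(horizon):
        if rng.random() < crash_prob:
            price *= 1 - crash_size
        price *= 1 + rng.normal(0, step_vol)
        if price >= start * premium:            # the limit order fills
            limit_proceeds[i] = start * premium
            filled = True
            break
    if not filled:
        limit_proceeds[i] = price               # gave up and sold at whatever was left

print(f"market order average proceeds: {start:.2f}")
print(f"limit order average proceeds:  {limit_proceeds.mean():.2f}")
```

With these made-up numbers the occasional crash more than eats the extra 0.1%; whether the two really balance depends entirely on how often, and how hard, prices gap away from you.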


This is also the explanation for why the Roulette strategy of "bet on red, then bet x 2 on red if you lose, and so on" doesn't work. Most of the time, you will make a small win, but a tiny percent of the time, you will take an enormous loss (as there actually isn't an arbitrarily large amount of money in the world to bet, let alone in your checkbook).
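A short martingale simulation (European wheel, arbitrary bankroll and bet size of my choosing) shows the shape of the problem: nearly every session ends with a small win, yet the rare busted session drags the average below zero.

```python
import random

random.seed(0)

def martingale_session(bankroll=1_000, base_bet=1):
    """Double the bet on red after every loss; stop on a win or when broke."""
    wealth, bet = bankroll, base_bet
    while True:
        if bet > wealth:                     # can't cover the next doubled bet
            return wealth - bankroll         # the rare, large loss
        if random.random() < 18 / 37:        # red on a European wheel
            return wealth + bet - bankroll   # net result: +base_bet
        wealth -= bet
        bet *= 2

results = [martingale_session() for _ in range(100_000)]
print(f"winning sessions: {sum(r > 0 for r in results) / len(results):.1%}")
print(f"average result:   {sum(results) / len(results):+.2f} per session")
```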


So in Ancient Persia or India or some other ancient place the king asked the inventor of chess what reward he would like.

“All I want is one grain of wheat on the first square of my invention, two on the second, four on the third…”
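For anyone who wants the punchline as a number (the per-grain weight is a rough assumption of mine):

```python
# 1 + 2 + 4 + ... doubled across all 64 squares
total_grains = sum(2 ** square for square in range(64))
assert total_grains == 2 ** 64 - 1

grain_kg = 0.00004   # ~40 mg per wheat grain, a rough figure
print(f"{total_grains:,} grains")                                 # 18,446,744,073,709,551,615
print(f"~{total_grains * grain_kg / 1e12:,.0f} billion tonnes")   # far more than a year's world harvest
```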


I don't see why the expected gains and losses from limit versus market orders should exactly balance. There doesn't seem to be any "efficient market" reason for this. Why couldn't it be that market orders are best in some situations and limit orders are best in others?

A downside of market orders is that a temporary lack of liquidity in the market might lead to you getting a bad price by insisting of trading immediately.


I wonder if there's such a thing as "secretary problem orders"? Something like:

1. Set a limit order to sell at the highest price that's been seen in the last X minutes.

2. If X more minutes pass without filling that order, replace it with a market order.
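A minimal sketch of that rule (the function and parameter names are mine, and ticks stand in for minutes; a real broker would express this with order types plus a cancel/replace):

```python
from collections import deque

def secretary_style_sell(prices, window):
    """Quote a sell limit at the best price seen over the last `window` ticks;
    if it hasn't filled after another `window` ticks, sell at the market."""
    recent = deque(maxlen=window)
    limit_price = placed_at = last = None
    for t, price in enumerate(prices):
        last = price
        recent.append(price)
        if limit_price is None:
            if len(recent) == window:
                limit_price, placed_at = max(recent), t   # place the limit order
        elif price >= limit_price:
            return "limit fill", limit_price
        elif t - placed_at >= window:
            return "market fallback", price               # give up, hit the market
    return "market fallback", last                        # data ran out
```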


GTC or Good 'Til Cancelled. Notionally, they could run forever, and you'll never get filled, but practice is to cancel (or the broker flags it up to you) within the same session. Or confirm it's good for the next. Or whatever.


Strictly speaking "trade when market price reaches X" is a stop-limit order, i.e. the market price reaching the limit is just the trigger for a trade action, and you further specify what price you want to buy/sell at, be it market or some other value.

>"In expectation, these rare losses should balance out the free money that you leave on the table by using just market orders."

Most people are harmed more by a large loss than they are helped by a large gain. "These rare losses" are exactly what Taleb is referring to with his ergodicity/black swan model - the problem is asymmetric and 'balance' is not a terribly straightforward concept.


This is also a little like the "economists have predicted nine of the last five recessions" joke. Being able to confidently advise governments and businesses that we're falling into a recession, so that businesses can try to make choices that protect them (though those tend to actually encourage the recession to accelerate), or so governments can loosen fiscal and monetary policy and head off the recession, would be really good! And this is actually a case where sometimes being wrong is probably OK. If you have good coordination of fiscal and monetary policy, then you ought to be able to run at full employment with a smooth path of NGDP growth, and no recessions. You get the odd burst of above-target inflation when you over-react to a signal that falsely makes you think you need to protect against recession. You balance it out by running slightly tighter policy for a few months to get the long-term average back down. It's not _perfect_, but it's much less costly in the long run than the ruined lives from allowing a recession to take hold.

Australia probably has the world record for doing this right -- before the Covid crisis, they hadn't had a recession in _decades_.


I also referred to that one. :-) That's the opposite, though, constantly predicting the rare event because no-one cares when you get it wrong, rather than predicting the common outcome because the rare one comes up so rarely.

Predictions from psychics work this way as well. If you can land just one, you can keep pointing to that one, confident that no-one will care about the hundreds of low-publicity failed ones.


You don't even need fiscal policy. Competent monetary policy is more than enough to avoid demand side recessions. Ie almost any recession that wasn't caused by covid or a war. And even the covid recession was rather short with a sharp recovery, because the Fed was competent this time, in contrast to 2008-ish.

Australia and Israel are good cases to study.


I think monetary policy is probably sufficient in the large majority of cases, but in financial crashes trying to only use monetary policy ends up with the "pushing on a string" problem; you can flood the zone from the monetary perspective, but the money just sits around on bank balance sheets. I think this because we _saw it happen_ after the '08 crash. The '09 stimulus was undersized relative to the size of the hole in Aggregate Demand, and we got a sluggish recovery. The fiscal response to COVID was actually up to the size of the problem -- even perhaps a _little_ beyond. (And it would've been more effective / less inflationary if it had been better targeted, but some of the poorly-targeted stuff was probably necessary to make it politically viable.)


Sounds like a good summary of Nassim Taleb's "Black Swan" and David Graeber's "Bullshit Jobs" put together. NNT's take in "Antifragile" though (super oversimplified) is that we should try to organize our systems so that being wrong that 0.1% of the time is not so bad. There's a huge downside to being wrong about a volcano erupting when you live next to a volcano, not so much if you live 100 miles away!


Taleb basically says what you're describing is emphatically NOT antifragility- that's plain ol' robustness against risk/disaster. Antifragile systems are systems that get stronger with occasional stress / minor disasters, if I recall his metaphors well enough.


Yeah, that's true. Still, the question of "how much do you weigh the vulcanologists vs the rock" is seemingly intractable, but the question of "how do you not get buried in lava and ash" is actually easy.


The security guard has a real responsibility— to act as a liability sponge if a break-in does actually occur. Also, to signal to prospective thieves and customers that this is a Secure Establishment.


I wonder how broadly applicable this line of thinking is. In medicine, it's commonly observed that patients with minor ailments feel better after seeing a doctor and being given something to take, even if it would be cheaper for them to just buy an NSAID at the drugstore. So in a sense the doctor is there to send a message that you've been cared for as much as anything. I guess the extension to experts is that having experts do media appearances or having the state or federal government employ an expert panel makes people feel like the issue is being addressed, even if the experts have very banal suggestions only or just aren't listened to.


You'd like _The Elephant In The Brain_ by Robin Hanson and Kevin Simler. It has a whole chapter (no. 14) dedicated to this very idea.

Here's an Amazon link:

https://www.amazon.com/Elephant-Brain-Hidden-Motives-Everyday/dp/0190495995

And here's a free PDF:

https://pdf.zlibcdn.com/dtoken/c6fe77f1028f5ac9b5c7e85d0e499ab9/The_Elephant_in_the_Brain_by_Kevin_Simler__Robin__17579654_(z-lib.org).pdf


Thanks for the PDF!


Patients likely feel better with time due to regression to the mean https://westhunt.wordpress.com/2016/03/31/medicine-as-a-pseudoscience/
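A small simulation of the regression-to-the-mean point (all numbers invented): if people tend to see the doctor on unusually bad days, the next measurement improves on average even when the treatment does nothing at all.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 100_000
baseline = rng.normal(5, 1, n)               # each patient's usual symptom level
visit_day = baseline + rng.normal(0, 2, n)   # the day they felt bad enough to go in
later_day = baseline + rng.normal(0, 2, n)   # a later day, zero treatment effect

saw_doctor = visit_day > 8                   # only unusually bad days trigger a visit
improvement = visit_day[saw_doctor] - later_day[saw_doctor]
print(f"fraction who went in: {saw_doctor.mean():.1%}")
print(f"average 'improvement' from a do-nothing treatment: {improvement.mean():.2f} points")
```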


Regression to the mean is so often overlooked, although I also think the placebo effect plays a role. To whatever degree "having someone talk to you like they care" is a separable factor, that also seems important.


But you could still replace them with a rock -- couched in a catapult aimed at the entrance whose door triggers the release mechanism to launch. Which is partly to say that I question whether the demoralizing position is a fair trade-off for whatever pilfered inventory might occur.


But then who's gonna be the liability sponge :(


The catapult vendor?


One of the large challenges here is not having a culture that solely maximizes rewards for those that choose to follow the Cult Of The Rock (because indeed this is an easy way to farm up prestige in most areas); but also trying hard not to miscalculate and over-correct too hard in the opposite contrarian direction.

This is hard too, because being a contrarian can be very fun, and in some cases much more rewarding, especially for skilled and intelligent contrarians in valuable or niche markets. While rat-adj people are not perfect at this calibration (and surely, no one is perfect, but there is always room for improvement), it does at least seem *more* well-calibrated than most mainstream areas, and I feel like I've cultivated even *more* meta-heuristics that Almost Always Work when deciding which contrarians I should and should not listen to.

Also I very much love the format, flow, and elegance of this post! It's constantly funny and soothing to read, even if I think I know what a given section is going to say before I read it.


There is also the Heuristic That Almost Never Works, where you take an annoyingly contrarian position on literally everything until purely by chance you hit one out of the park and are feted as a courageous genius. Then you proceed to be wrong about everything else for the rest of your life, but no one will have the courage to contradict you. This is also an attractive strategy, to some people at least.


If I remember right, one of Alex Jones's original claims to fame was that he predicted 9/11, because he said something conspiratorial within a few weeks of the attack and happened to mention details that overlap with the circumstances of 9/11. This is supposed to improve his credibility, while all of his disproved claims don't decrease his credibility; at least, that's what's needed for this sort of trap to work.


This is particularly true if you're dishonest about what you predicted.

Dilbert's creator got a lot of kudos in 2016 for predicting Trump's win. The fact that his exact prediction was that Trump would win "by a landslide" got forgotten somehow.


> This is supposed to improve his credibility, while all of his disproved claims don't decrease his credibility

I think people interested in this sort of stuff are more lax about factual errors because conspiracies by nature are secretive, so the likelihood that you're getting a distorted picture of what actually happened is much higher than would be typical.

I'm not aware of anyone who's actually kept score on how often he's been right vs. wrong (and how right vs. how wrong), but that's definitely something I'd read out of morbid curiosity.


I don’t consider myself an expert in something until I find a ladle (something that stops the drawer from opening all the way as expected) or until I believe something that makes me feel emotionally distressed. To do otherwise is to think that everything should work the way I expect without me ever having to get my hands dirty and that my emotional reactions are one hundred percent attuned to the universe.


I’m looking at the OED and not seeing that meaning for ‘ladle’. Did I miss it in the fine print or a typo?


Fair! It’s a personal term for “WTF is this I have to deal with now???” A little bit cribbed from Terry Pratchett.


TIL :)


Is your ladle a useful thing that is for the moment hindering action/understanding or simply something holding you up? Either way I like this term a lot. I will be filing alongside yak shaving.


I ran into bikeshedding on the way to learn yak shaving. It could come in handy too.


I think of expertise as a "map of surprises" because otherwise any reasonably smart person could just figure out whatever the field is from first principles. No need to burn time unless being reasonably smart is the only criterion. A ladle is anything worth putting on the map. (This is a horribly mixed metaphor now and probably not useful outside of my own head.) But I do believe I am pretty dumb when I use the yardstick of "how much do I know compared to how much I need to know to get something done?" rather than the yardstick of other people, and I use the ladle thing and the distress thing to check myself.

“Do I genuinely believe I am so smart that nothing should surprise me?” No. Therefore, if I know the truth, there should be a ladle.

“Do I genuinely believe the universe was built to my personal aesthetics?” Also no. Therefore, if I know the truth, something should bother me emotionally.


Just want to say I really like these concepts.


This is why you need a red team, or auditors. Security guard not paying attention? Hire someone to fake a break-in. Doctor not doing anything? Hire an actor. Futurist failing? Hoax them. etc. In general, this situation generally only occurs if there are only rewards for one kind of prediction. So, occasionally, make a reward for the other side, to see whether people can be replaced by rocks.


This is of course a premise for a dystopian or sci-fi novel of one kind or another-- in the near future, corporations control the law, but they also control all crime, and they pay the criminals to A/B test different kinds of law enforcement. Real crime has been eliminated because it is too lucrative to commit crime as a white hat, so these law enforcement systems no longer have any purpose, but keep spinning on like a perpetual motion machine, far detached from any actual purpose.

Maybe I'm falling for an absurdity heuristic or something, but I think a) the ideal ratio of white hat cyber-attacks to real cyber-attacks is probably in the neighborhood of 10:1, and b) for most things in meatspace white hats are not very useful.


Frank Herbert's very weird Whipping Star has an official Bureau of Sabotage.


Or Dune's Bene Gesserit.


Yes, you've got it. Society needs error-checking code that checks for errors (and incentivizes doing so) sufficiently more frequently (and hopefully more cheaply) than the chance of the bad event happening.


Parity or cyclic redundancy checking?


What do you mean by "hoax them" here? You're right with the guard, idk with the doctor. For the interview problem it's actually really interesting! Like, hire a super-expert to play as a candidate with a mediocre CV and see if they get accepted. Does anyone do this? I would guess that the companies that most have this problem would be the least willing to use this solution.

But I think the problem remains for things that the upper level (who employs the potential rock cultist) doesn't understand well and/or are inherently hard to fake, like a society-wide problem (volcano, pandemic) or a weird solution to it (drug etc.) working.


I recall that Doris Lessing, who was a famous and frequently published author at the time, submitted a new novel under a pseudonym to a bunch of publishers and got rejected by all of them. She would count as a recognized "super-expert", but it did her no good unless she dropped her own name.


That's actually not a great example because in the case of a book author the recognized name is actually extremely important economically


Right, to elaborate: The role of a publisher isn't to publish "The best books", but to publish "The best *selling* books", and I'm pretty sure "My shopping list, by J. K. Rowling" would sell better than a genuinely good (but not masterpiece) book by an unknown author.


I don't think you need an actor to test the doctor. You need someone who's actually sick but doesn't look sick to a casual glance.


They do hire people/actors to test doctors in training during medical school. The key component would be ensuring that the doctors telling the actors what to say/express as symptoms test a sufficiently wide range.


But the fact is, society actually does this. The mall is full of relatively transportable and cashable goods, so there is a clear incentive to break in. If nobody does it anyway, it means that the rock is right (still, it might be valuable to hire a human because a human creates more deterrence than the rock, even when they just follow the rock).

You failed to detect cancer? I really hope you have good malpractice insurance, because your ass is getting sued (and if you are in the US, you might be sued either way).

You are able to detect outliers that outperform the Ivy graduate? That juicy referral bonus is all for you!

You have a contrarian position? Just for packaging it well, you can be paid handsomely. If you are even remotely right (in the sense that you predicted something somehow similar to what happened in the last decade), you are a guru!

We provide handsome incentives to defy the rock, so if people with skin in the game still follow it, one has to admit that it is impossible to do much better.


Feels like Chaos Engineering on a societal scale which I think could be very useful but it would be a hard initial sell to the population. "We are going to constantly break things until they get better"


Incumbent and regulated industries *hate* this, and use their political power to prevent it. (I've seen it shot down when proposed, for reasons no better than "we depend on the appearance of security to give assurance to our customers and regulators, and this will damage that".) Government rats even more so. Remember the pentest of court security case a while ago?


The expert gives some serious value added over the rock because the expert provides the human interface. We’d all feel absurd looking at a rock directly. But a well dressed calm adult who listens to NPR and behaves identically to the rock? Now THAT we can trust.


I was sure you were gonna end with "And that's why we need prediction markets".


"and not reputation systems"


In recent comments describing healthcare systems different from the US's, some people said the Dutch medical system gatekeeps healthcare like this:

"She just says 'It’s nothing, it’ll get better on its own'."

They mentioned the Dutch system doesn't keep doing this if you persist: if you're importunate enough, your problem is likely to be taken seriously.

But American doctors seem to believe American patients dislike "It'll get better on its own" as an answer. Patients demand antibiotics for colds, after all! And, as mental-health issues become less stigmatized (as they should), referrals to mental-health care, with some stock kindly words that the mind-body connection is mysterious, and one shouldn't feel ashamed if one's poor mental health manifests physically, proliferate. Then mental-health providers, who'd already be overstretched without treating patients with undiagnosed physical problems, get the patients with undiagnosed physical problems, too.


The Dutch system has a bad habit of waiting too long, and then prescribing benzo IV until the problem goes away. Did I say bad habit? I meant "practice of clinical excellence".


Now I'm envisioning warehouses of people on benzo drips for life because they got stuck with one of the problems that doesn't go away.


No need for warehouses, benzo drips are self limiting, so to speak...


Some examples are more plausible than others. Sure there are security guards who do nothing and journalists who add less value than the rock. But an investor who always says “no” isn’t generating outsized returns, and people who do this consistently definitely exist. From my perspective, prediction markets already DO exist, in venture and angel investing. Maybe instead of making narrativey arguments that people should value rationality, rationalists should all try to build massive fortunes by placing good bets and then use these fortunes to produce even better prediction markets, laws, etc.


Not completely related, but your post reminds me of how Kaiser Permanente seems to run everything by simple algorithms. You don't need doctors at KP -- while they can't quite be replaced by rocks, perhaps, they can be replaced by the computers that make all the decisions.

If you have one of the 99.9 percent of problems that are solved by the computer algorithm, you think Kaiser is great. If you're a person with some kind of rare or tricky condition, or a person who only goes to the doctor once every several years, after you've already eliminated 99.9 percent of the things the algorithm could have suggested, you're going to think Kaiser is crap and their doctors are useless.

Not that they are idle -- they have to churn out patients every 15 minutes, but fortunately their computer tells them a decent answer much of the time. What would happen if the computers just told the 99.9 percent of patients what to do, and doctors were not computer tapping drones but rather highly-trained problem-solvers kept in reserve to solve trickier problems through their own brainpower?


"What would happen if the computers just told the 99.9 percent of patients what to do"

You would have already solved a big part of the problem if the computer/rock could tell the 99.9% from the 0.1%.


Well, I'd suppose the computer solves it better than the rock does. If the patient is like "no, it's not that; no, it's not that; no, I can google too" then Kaiser needs to figure out what to do with those outliers.

But I think it pretty much doesn't. I think Kaiser is satisfied with handing out antibiotics and doing mammograms and colonoscopies, and letting the outliers die.

So in a sense, their computers are fancy rocks.


That was pretty much the premise of House, wasn’t it? A team that solved the really weird medical mysteries.


Do security companies ever employ Secret Robbers? That'd be like a Secret Shopper but instead of reporting back to the company how nice the sales people are it would be on how burglerable their security is

(Yes I realized I just described pen-testers but I think Secret Robbers is a better name)


They do. One of the most visible is called - I shit thee not - Deviant Ollam. That's his name. He does presentations about breaking into buildings as a white hat.

https://www.youtube.com/watch?v=S9BxH8N9dqc


A friend of mine was going for a meeting at an IT company he did security work for, and decided to see how far he would get if he just shadowed people, went in through open doors, and so on. He ended up making the phone call "I'm currently in your server room without any credentials, maybe we should talk about this?"


I once walked on to the trading floor of a major investment bank without any ID, and walked up and chewed out a very confused young intern for absurd reasons. All I had to do was:

-wear a suit

-take the elevator to the floor

-walk with a group of traders and let them hold the door for me

If you look like you belong in a place, people will think nothing of just letting you in. One of the easiest ways to do this is to get work coveralls and a toolbelt - nobody ever wants to get in the way of a technician with a job to do. I've heard of people robbing stores and shopping malls in broad daylight by just walking in wearing service uniforms and pushing wheeled dollies, and wheeling furniture, carpets, etc. right out without anyone looking twice.


One of the best ways to get past the doorman into an overcrowded nightclub is to dress in distressed stage blacks, walk right up to the doorman at the back alley entry, show him a bag of LPs, CDs, a beat-up Mac laptop with stickers, etc., and say "I need to get this to the DJ *right now*."


My gut feeling: the key to developing better heuristics is to find ways to move beyond the binaries. There is a wide spectrum between "the ache went away after a few days" and "the patient died a horrible, painful death"; there is a wide spectrum between "the technology completely upended the way society works" and "the technology tanked completely, and everyone who believed in it looked like a moron". Earthquakes and storms follow power law distributions IIRC, so there should be plenty of "noticeable, but not Earth-shattering" events to train your algorithm on before the big one hits.
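Rough Gutenberg-Richter-style arithmetic for the earthquake case (I'm taking b = 1, which is in the usual ballpark but not fitted to any real catalog):

```python
b = 1.0            # each whole-magnitude step down is roughly 10x more common
big = 8.0
for magnitude in (7.0, 6.0, 5.0):
    relative_rate = 10 ** (b * (big - magnitude))
    print(f"roughly {relative_rate:,.0f} magnitude-{magnitude:.0f} events per magnitude-{big:.0f} event")
```

So there really are plenty of mid-sized rehearsals to calibrate on before the big one, which is the comment's point.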


A lot of this comes down to understanding risk and reward. Our minds generally do not do well with very large and very small numbers. So a small probability event with a catastrophic result is doubly challenging for our wiring.


Steve Coogan was that security guard in the 90s comedy show The Day Today:

https://youtu.be/zUoT5AxFpRs

Full version:

https://youtu.be/ob1rYlCpOnM


Not sure what the point is, or whether I buy it if it's what I think it is.

It's a) not that these cases are literally 99.9% heuristics, and b) not surprising that using the heuristic continuously puts you at a disadvantage.

Not all "is almost always right" heuristics are created equal. Some are more 99/1, 95/5 etc. ... which results in an entirely different risk profiles & unit economics.

The hiring heuristic: "comes from good college, has lots of experience" is more like 80 / 20 maybe? It also means those candidates are more expensive.

The people with brains add another variable, e.g. "Googliness", and experiment with how much it changes the odds & cost of hiring a good candidate.

Investors choose an area (their thesis) where they can increase the odds from maybe 1% chance of success to 2-10%.

Their "thesis" simply means they add variables to the heuristic that give them an advantage over the market.

You can think of the additional variable (that is not part of the "almost always right" heuristics) that detects the signal as the "thesis".

If you have a good thesis, you can increase your expected rate of return vs. "the market" if the market represents the default heuristic (the 99.9%).

No news that you can't beat the market if you use the same heuristics that the market uses (which is by default the one that is almost always true).

What's surprising about this? (I'm thinking this at least 50% of the time when I read NNT)
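A back-of-the-envelope version of that "thesis" framing (all numbers made up): the rock's accuracy is irrelevant, because what matters is hit rate times payoff.

```python
def expected_multiple(hit_rate, winner_multiple, loser_multiple=0.0):
    return hit_rate * winner_multiple + (1 - hit_rate) * loser_multiple

print("rock ('always pass'):     0.00x per dollar, yet ~99% 'accurate'")
print(f"market base rate (1%):    {expected_multiple(0.01, 30):.2f}x per dollar")
print(f"with an edge/thesis (5%): {expected_multiple(0.05, 30):.2f}x per dollar")
```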
