Nassim Taleb summarizes the thesis of Antifragile as:
Everything gains or loses from volatility. Fragility is what loses from volatility and uncertainty [and antifragility is what gains from it]. The glass on the table is short volatility.
The glass is fragile: the less you disrupt it, the better it does. A rock is “robust” - neither fragile nor antifragile - it will do about equally well whether you disrupt it or not. What about antifragile? Taleb's first (and cutest) example is the Hydra, which grows more and more heads the more a hero tries to harm it. What else is like this?
Buying options is antifragile. Suppose oil is currently worth $10, and you pay $1 for an option to buy it at $10 next year. If there's a small amount of variance (oil can go up or down 20%), it's kind of a wash. Worst-case scenario, oil goes down 20% to $8, you don't buy it, and you've lost $1 buying the option. Best-case scenario, oil goes up 20% to $12, you exercise your option to buy for $10, you sell it for $12, and you've made a $1 profit - $2 from selling the oil, minus $1 from buying the option. Overall you expect to break even. But if there's large uncertainty - the price of oil can go up or down 1000% - then it's a great deal. Worst-case scenario, oil goes down to negative $90 and you don't buy it, so you still just lost $1. Best case scenario, oil goes up to $110, you exercise your option to buy for $10, and you make $99 ($100 profit minus $1 for the option). So the oil option is antifragile - the more the price varies, the better it will do. The more chaotic things get, the more uncertain and unpredictable the world is, the more oil options start looking like a good deal.
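To make the asymmetry concrete, here is the same arithmetic as a few lines of Python - a minimal sketch using the made-up oil numbers from above:

```python
def option_payoff(final_price, strike=10, premium=1):
    """Payoff of the oil option: exercise only if it ends up in the money."""
    return max(final_price - strike, 0) - premium

# Low volatility: oil ends at $8 or $12 with equal odds.
low_vol = [option_payoff(8), option_payoff(12)]        # [-1, 1]

# High volatility: oil ends at -$90 or $110 with equal odds.
high_vol = [option_payoff(-90), option_payoff(110)]    # [-1, 99]

print(sum(low_vol) / 2)    # 0.0  - break even
print(sum(high_vol) / 2)   # 49.0 - the wilder the swings, the better the option does
```

The downside is capped at the $1 premium no matter what happens; only the upside scales with volatility - which is the kind of payoff Taleb calls antifragile.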
Evolution is antifragile. In a stable system, animals won't evolve. In a volatile system, they will. At times I became concerned Taleb was getting this wrong - animals will evolve to more perfectly fit whatever niche they find themselves in. In an environment where the temperature is exactly 70 degrees all the time, animals will become very good at living in exactly 70 degrees. In an environment with constant temperature swings, animals will become very good at dealing with constant temperature swings. Neither population of animals is "better" than the other, and each will outcompete the other in its own environment (the 70-degrees animals would outcompete the wild-swings animals in their own perma-70-degrees home). But I'm not sure Taleb is disagreeing with this; I think he’s saying that volatility in the genome (mutations) is required to move evolution along. Obviously you don't want arbitrary amounts of volatility in the genome, but nobody ever said things had to gain from arbitrary amounts of disorder - just that a little bit of disorder is sometimes good for them.
Exercise is antifragile. If you sit comfortably in bed, treating yourself the same way you would treat a glass or fine china, you'll probably end up unhealthy and miserable. It's only when you expose yourself to disorder - a level of exertion that ranges from "sleeping at night" to "running as fast as you can" - that your body stays healthy. Stressing your muscles to their absolute limits helps build muscle; stressing your heart to its absolute limit once in a while helps lower your resting heart rate.
Options. Evolution. Exercise. Do these really have anything in common? Or is each "antifragile" for its own reasons, such that a category including all three of them wouldn't be very helpful?
Nassim Taleb doesn't have time to answer your dumb question, because he’s too busy coming up with newer, more exciting examples of antifragility! Taxi drivers are antifragile! The Mafia is antifragile! Lifting weights is antifragile! Seneca was antifragile! Small restaurants are antifragile! Religion is antifragile! Ancient Phoenicia was antifragile!
...yes, this is definitely a Taleb book, with all that implies. Expect love-it-or-hate-it digressions on how Taleb's intellectual opponents have poorly defined jaws and would lose to him in street fights, long rants against modernity, and a lot of pithy sayings from Lebanon - a country with so much folk wisdom relevant to managing risk and avoiding fragility that it's really quite surprising its economy is in freefall.
I feel bad trying to summarize Antifragile, not just because the medium is the message, but because it's hard to figure out what the message is without it. Taleb launches an ambitious project to tie all of his personal interests and hobbyhorses into the idea of antifragility, and it sort of works and sort of doesn't. At times, he seems to use it as a proxy for morality, especially a sort of Nietzschean morality where strong things are good and weak things are sickly and ugly and worthy of being despised. Sometimes you can kind of see it.
Other times it's harder. To choose an example close to my own heart, is it really true - as asserted without argument on page 422 - that Spartan hoplites are antifragile but bloggers are fragile? Spartan hoplites are good at war, which is a sort of disorder (though it's not clear they exactly benefit from it). But in other ways they seem quite fragile. Even slight deviations from their ideal conditions (flat open ground, with a slow enemy lumbering toward them from the front) would knock them off balance. A single break in the ranks would doom them. If a flood or avalanche hit, being stuck in unwieldy armor would assure them a swift death. As for bloggers, during all the greatest crises of the past few years - Trump's election, the BLM protests, coronavirus - my hit count skyrocketed, as people looked for writing that would help them make sense of the situation. What could be a purer example of gaining from volatility?
So think of this less as a sober attempt to quantify antifragility, and more as an adventure through Taleb's intellectual milieu. I can't possibly think of a linear way to summarize it, but here are some highlights.
II.
Book Two is called Modernity And The Denial Of Antifragility.
Taleb gives the parable of John and George. John is a banker. He works for a big bank. Every month, the bank pays him a salary of $3000. In a good month, he gets $3000. In a bad month, he gets $3000.
His brother George is a self-employed cab driver. He makes an average of $3000 a month, but it varies a lot. If there's a big convention in town, he may be very busy and make much more than $3000. If there's an economic downturn and people try to save on cab fare, he might make much less than $3000.
John fancies himself protected from volatility. But he is only protected from small volatilities. Add a big enough shock, and his bank goes under, and he makes nothing. George is exposed to small volatilities, but relatively protected from large ones. He can never have a day as bad as the day John gets fired.
And George can adapt. As his business starts going down, he can take appropriate steps, whether that's branching out into new businesses (courier? working as a cabdriver half-time and digging ditches the other half?) or lowering his expenses. If demand declines in one neighborhood, he can figure out where the passengers are and shift to another. John can do none of these things. He just sits and banks until the axe falls.
John treats his bank as a system that buffers him from volatility, but it actually makes him more fragile at the tails by creating a big discontinuity - his salary is $3000 until it's $0. Think of firefighters who put out every tiny little forest fire, unaware that by preventing controlled burns they're building up tinder for a conflagration they won't be able to prevent. Taleb interprets many of our institutions in the same context - by preventing "creative destruction", they briefly buffer us from shocks but mean that the shock that overwhelms the system will be a disaster. For example, when governments bail out failing businesses, they replace ordinary volatility (sometimes bad companies go bust and people have to shift to better ones) with extreme volatility (no company will ever go bust, until the economy becomes such a basket case that the government runs out of bailout money and everything collapses at once because all the companies are incompetent dinosaurs).
This is one reason (among many) Taleb disagrees so strongly with Steven Pinker's contention that war is declining. Pinker's data shows far fewer small wars, but does show that World Wars I and II were very large; he interprets the World Wars as outliers, and notes that since WWII the trend has been excellent. Taleb interprets the constant small wars that used to happen as "controlled burns", and the various institutions set up to prevent those wars - the Concert of Europe, multilateral alliances, the UN - as the same sort of dangerous volatility-buffering you get from a corporate job or a government bailout. It ensures fewer small wars - until the system gets overwhelmed, and you get a giant one. As long as NATO is intact, there's no risk of some dumb war between France and Britain over fishing rights; and as long as the Warsaw Pact is in place, there's no risk of Hungary and Romania scuffling over Transylvania. The cost is the risk of World War III between NATO and the Warsaw Pact.
Evolution is the ultimate example of a system that allows volatility rather than unwisely trying to buffer against it. Being exposed to evolution sucks - animals very often die. Perhaps it would be much kinder if somebody gave unfit animals some Animal Chow to prevent them from starving. But such kindness would prevent natural selection, and gradually weaken the species (or, more technically, the species' suitability to its niche) until eventual cataclysm. The dodos had a good run free from predators for a few thousand years - which just meant they had a really bad time as soon as predators arrived. If they'd had predators the whole time, those few thousand years would have been less peaceful and pleasant, but they would have been overall better prepared. Too much government intervention - Taleb claims - is about protecting dodos from predators.
Not just the US government - Taleb focuses on the neighboring states of Lebanon and Syria. In the early 20th century, nobody had drawn a border between them and they were nearly identical. In the 1960s, the Baath Party took over Syria, and began a centralized "modernization" campaign, which for Taleb is symbolized by their dissolving the old bazaars and replacing them with modern office buildings. The result:
Lebanon and Syria had very similar wealth per individual (what economists call Gross Domestic Product) about a century ago - and had identical cultures, languages, ethnicities, foods, and even jokes. Everything was the same except for the role of [the] "modernizing" Baath Party in Syria compared to the totally benign state in Lebanon. In spite of a civil war that decimated the population, causing an acute brain drain and setting wealth back by several decades, in addition to every possible form of chaos that rocked the place, today [ie when Antifragile was published in 2012, before the worst of the Syrian Civil War] Lebanon has a considerably higher standard of living - between three and six times the wealth of Syria.
Book Three is called "A Nonpredictive View Of The World", and annoyed me because it pre-empted a post of mine by almost ten years.
The post was A Failure, But Not Of Prediction, and it argued that getting your predictions right was less important than calculating payoffs right. For example, if some very smart scientists tell you that there's an 80% chance the coronavirus won't be a big deal, you thank them for their contribution and then prepare for the coronavirus anyway. In the world where they were right, you've lost some small amount of preparation money; in the world where they were wrong, you've saved hundreds of thousands of lives.
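A toy version of that calculation - all the numbers here are hypothetical stand-ins, not figures from the post:

```python
# Expected-value comparison: prepare vs. don't prepare.
# Numbers are made up for illustration; only the asymmetry matters.
p_disaster = 0.20            # the scientists' 20% chance it IS a big deal
cost_of_prep = 1             # small, paid in every world
cost_if_unprepared = 1000    # huge, paid only if disaster hits and you did nothing

ev_prepare = -cost_of_prep                     # -1
ev_skip = -p_disaster * cost_if_unprepared     # -200

print(ev_prepare, ev_skip)   # preparing wins, even though "it fizzles" is the likelier outcome
```

The prediction (80% fine) can be entirely correct and preparation still dominates, because the payoffs are wildly asymmetric.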
Taleb is even more into this than I am, because he thinks you generally cannot predict things - or, in his own words:
You can't predict in general, but you can predict that those who rely on predictions are taking more risks, will have some trouble, perhaps even go bust. Why? Someone who predicts will be fragile to prediction error. An overconfident pilot will eventually crash the plane. And numerical prediction leads people to take more risks.
This is in the context of investing - his sympathetic character, Fat Tony, buys some financial instruments that "predict" everyone else will go bust, then cleans up during the 2008 crisis. I'm not really sure how far Taleb wants to take this. Does he believe that I - someone who doesn't know very much about investing - could beat the market by buying literal oil options, or whatever other kind of option I pleased? What about betting on literal VIX - an index that goes up during higher-than-expected volatility and goes down during lower-than-expected volatility? Taleb never makes this claim, and I think it would be hard to argue that an entire category of instrument has been consistently mispriced since forever. But then what is he trying to say here?
Maybe this doesn't work in investing, but does work in real life? But I'm still not seeing it. Sure, banking is fragile and taxi driving is antifragile, but this has already been priced in - investment bankers make big bucks partly to compensate them against the fact that they might get fired after a few years. Exercise is antifragile, but the amount of exercise people do right now already prices in the fact that it will make them healthier and more muscular. Preparing against coronavirus is antifragile, and, um - maybe you could argue that there's an optimal amount of preparation to do before it becomes excessive, and suggesting you do more is implicitly assuming you're not at that point yet?
This chapter (and honestly the rest of the book) only makes sense given the assumption that antifragility is systematically mispriced, both in literal financial markets and in the metaphorical realm of what job to have, how much to prepare for things, etc. I'm not sure it addresses this assumption. I think part of its response would draw on Taleb's previous arguments that people underestimate the risk of black swans, so the world will be more volatile than they think. I wish this had been discussed in more detail. For example, suppose I am long VIX. If something unpredictable changed to make the world much more volatile, that would be a positive black swan. If something unpredictable changed to make the world much less volatile, that would be a negative black swan. Should I expect equal chances of both of these? Maybe changes are inherently towards more volatility, and the only reason being long VIX isn't a guaranteed-market-beater is because it's one of the rare cases where people take this seriously and quantify it, because taking it seriously and quantifying it is their job? And everywhere else, people really do underestimate volatility, and antifragility is systematically underpriced?
The other interesting thing in this section is the analysis of Seneca and the Stoics. The Stoics, remember, cultivate a mental indifference towards catastrophe. Usually this gets interpreted as "be equally indifferent to success or failure", but Taleb isn't having it. Seneca was one of the richest men of his time; if he was indifferent to wealth, why did he keep making it? Taleb argues he wanted to keep the upside while eliminating the downside. If you can psychologically steel yourself against losing your wealth, you're in the same antifragile position as someone who owns an option on oil; volatility can only make your (psychological) position better, not worse. If you lose everything, you shrug and move on. If you keep everything, or get much more, you can be as delighted as you want!
Book Four is called "Optionality, Technology, And The Intelligence Of Antifragility".
I'm not comfortable applying the full connotations of "anti-intellectual" to Taleb, so let's just say he...hates intellectuals a lot. He thinks schooling dulls the mind, college makes people stupid and conformist, and theory is fragile - it's an attempt to shoehorn the complexity of the world into a single formal system, then deny any possibility of black swans/outside-the-system events, then collapse when they inevitably happen. Far better to learn in the school of hard knocks, encountering the real world in all its complexity.
(this is also why lifting weights is antifragile. Using a machine at the gym trains your body the same way going to school trains your mind - it strengthens your ability to do a very specific, almost ritualized action. Lifting free weights trains your body the same way attacking difficult real-world questions trains your mind; you have to do a lot of poorly-understood things in combination, and so you never get too reliant on a formalized proxy for the thing you want)
But haven't theories given us all sorts of useful things, like science, which leads to technology? Taleb is willing to compromise a tiny bit on this, but he suggests theory is much less important for technology than we give it credit for. He makes the same point I made in Is Pharma Research Worse Than Chance - a whole lot of drug design seems to happen more by accident (or more politely, through tinkering and investigating) than by smart people using theory to discover drugs. Fleming discovered penicillin by wondering why there were blank spots in his petri dishes; meanwhile:
Over a twenty-year period of screening more than 144,000 plant extracts, representing about 15,000 species, not a single plant-based anticancer drug reached approved status. This failure stands in stark contrast to the discovery in the late 1950s of a major group of plant-derived cancer drugs, the vinca alkaloids - a discovery that came about by chance, not by directed research.
And:
John LaMatina, an insider who described what he saw after leaving the pharmaceutical business, shows statistics illustrating the gap between public perception of academic contributions and truth: private industry discovers nine drugs out of ten. Even the tax-funded National Institutes of Health found that out of forty-six drugs on the market with significant sales, about three had anything to do with federal funding.
He adds:
We have not yet digested the fact that cures for cancer had been coming from other branches of research. You search for noncancer drugs (or noncancer nondrugs) and find something you were not looking for (and vice versa). But the interesting constant is that when a result is initially discovered by an academic researcher, he is likely to disregard the consequences because it is not what he wanted to find - an academic has a script to follow. So, to put it in option terms, he does not exercise his option in spite of its value, a strict violation of rationality (no matter how you define rationality), like someone who both is greedy and does not pick up a large sum of money found in his garden.
I've grown used to this in medicine, but I was surprised to see Taleb point out the same effect in fields like physics and engineering. For example, he argues that jet engines just sort of happened when engineers played around with airplane engines enough; physicists didn't explain how they worked until later. Engineers were already designing systems along cybernetic principles long before Wiener invented theoretical cybernetics. Medieval European architecture was done essentially without mathematics - Roman numerals (the only numerals anyone had at the time) were too unwieldy to add or subtract, and "according to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division."
For Taleb, too much of academia is "teaching birds how to fly" - finding some field that works, formalizing it into an academic/theoretical/narrative mold, then declaring they have discovered it, and demanding anyone who wants to practice it must pay them for a credential. And so "governments should spend money on nonteleological tinkering, not research."
This section is also interesting for its "Fat Tony Debates Socrates" chapter, where Fat Tony argues that putting Socrates to death was reasonable, because Socrates was claiming everything must be comprehensible (ie if Euthyphro couldn't come up with an academic-style formal verbal account of what "good" was, then it was somehow embarrassing that he was trying to be good) whereas formal systems are fragile and people should be okay relying on intuition and heuristics. Cf. Plato's attempt to define man as a featherless biped, vs. Diogenes' more reasonable outlook that, f*@k you, we know what men are, and trying to reduce it to words is dumb and counterproductive.
Book Five is about "The Difference Between A Large Stone And A Thousand Pebbles". The theme is nonlinearities - if you drop a large stone on someone, you could kill them, but if you break the stone into a thousand pebbles and drop them one by one, probably the person will be fine. The effect of the large stone isn't just the effect of the 1/1000th-as-large stone times one thousand; something has been gained.
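The underlying point is convexity: when harm grows faster than linearly with dose, one big dose does far more damage than many small doses adding up to the same total. A toy sketch - the quadratic harm function is my own stand-in, not anything Taleb specifies:

```python
# Hypothetical convex harm function: damage grows with the square of the mass.
def harm(mass_kg):
    return mass_kg ** 2

one_big_stone = harm(100)             # 10,000 damage units
thousand_pebbles = 1000 * harm(0.1)   # 1000 * 0.01 = 10 damage units

print(one_big_stone / thousand_pebbles)   # 1000.0 - same total mass, vastly more harm
```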
Taleb relates this to Big Business:
Some economists have been wondering why mergers of corporations do not appear to play out. The combined unit is now much larger, hence more powerful, and according to the theories of economies of scale, it should be more "efficient". But the numbers show, at best, no gain from such increases in size - that was already true in 1978, when Richard Roll voiced the "hubris hypothesis", finding it irrational for companies to engage in mergers given their poor historical record. Recent data, more than three decades later, still confirm both the poor records of mergers and the same hubris as managers seem to ignore the bad economic aspect of the transaction. There seems to be something about size which is harmful for corporations.
It's not in this section, but Taleb feels the same way about countries. He praises Switzerland, which is so federal that it's barely a single country at all, and argues that its small size (or rather, the small size of each canton) has helped it stay one of the world's most stable and prosperous areas (also, Venice!). He argues that a large country isn't just a small country times X. Small countries operate partly on informal bonds of personal relationships; everybody has "skin in the game" regarding decisions. Larger countries don't just multiply everything by a constant, they switch from personal/Near Mode to bureaucratic/Far Mode and get gradually worse as they expand.
(this is why ancient Phoenicia was antifragile; it was a collection of city states.)
The other good thing about city states is that sometimes they die. Taleb likes evolutionary systems - real evolution, the startup economy, and small groups of city-states. He says each individual is fragile (it's operating on a specific theory/strategy/vision, and the worse that does, the more likely it is to die) but the overall system is antifragile (as individuals die, it evolves and strengthens).
Book Six is about via negativa, which is technically a theological tradition that talks about God in terms of what He is not, but which Taleb repurposes as any strategy that succeeds by taking things away instead of adding them.
"Interventionism is a sucker's game" - intervening in a system requires some kind of theory, some kind of model where the positive effects will definitely be better than the side effects - and given how little people know and how bad we are at prediction, this will probably be wrong. But removing things is kind of like a negative intervention, and so probably good. So for example, you're unlikely to find a medicine as helpful as smoking is harmful, so focus on stopping smoking. You're unlikely to find a superfood as helpful as junk food is harmful, so focus on cutting out junk food. You're unlikely to find a new law as helpful as the old laws are harmful, so focus on getting rid of old laws. A lot of the time this sort of thing reduces to metaphorical or literal paleo diets - try to return to the environment of evolutionary adaptedness, before civilization "intervened".
This isn’t always true. Healthy people are fragile (increased variance can mostly make them worse), very sick people are antifragile (increased variance can mostly make them better). So it is reasonable to give a terminal cancer patient an experimental drug - the worst that happens is they die (which would happen anyway) and the best that happens is they recover - it's all upside and no downside. For the same reason, he is very skeptical of preventative medicine, where you give drugs (eg statins) to healthy people. To justify statins, you have to be both very sure the studies showing they work are right, and very sure the studies showing they don't have serious side effects are right.
(I might have missed it, but I don't think Taleb specifically describes healthy people as fragile and sick people as antifragile, even though it seems like less of a stretch than most of his uses of these terms, because he's got an antifragile = good thing going on, and this would mess with it)
This was also the section with the famous Lindy Effect: if something doesn't have a specific lifespan like humans do, then we should expect older ones to last longer than new ones. For example, people have been reading the Iliad for 2500 years, but Antifragile for only eight years; probably Antifragile will sink out of the popular imagination before the Iliad does. Or: San Marino has been independent for 1500 years, and South Sudan has been independent for nine years; probably San Marino will outlast South Sudan. Judaism has been around for 3000 years and Scientology for 50; probably Judaism will remain when Scientology is relegated to the history books.
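You can see the mechanism with a quick simulation: give things heavy-tailed (Pareto) lifespans - my modeling assumption here, not Taleb's - and expected remaining lifespan grows with age, whereas for human-style lifespans it shrinks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Lindy-style things (books, religions): heavy-tailed Pareto lifespans, minimum 10 years.
lindy = (rng.pareto(a=1.5, size=1_000_000) + 1) * 10
# Human-style things: lifespans clustered around a fixed limit.
human = rng.normal(loc=80, scale=10, size=1_000_000)

def expected_remaining(lifespans, age):
    """Mean remaining lifespan among the things that survived to `age`."""
    survivors = lifespans[lifespans > age]
    return survivors.mean() - age

for age in (10, 50, 250):
    print(age, round(expected_remaining(lindy, age)))   # roughly 20, 100, 500 - grows with age

for age in (20, 60):
    print(age, round(expected_remaining(human, age)))   # roughly 60, 21 - shrinks with age
```

For a Pareto tail like this one, expected remaining life is proportional to current age (here about twice it) - the Lindy Effect in miniature.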
Taleb uses this as a jumping-off point for various forms of contrarianism - instead of studying new technologies, read the classics! Instead of reading the latest studies, read older studies! As practical advice, this suffers from some fairly obvious flaws, but the Lindy Effect is fun to think about anyway. Also, it's the same principle as the anthropic assumptions behind the Carter Doomsday argument, and I'd be fascinated to know what Taleb thinks of this.
III.
I've previously written about a cluster/strain/school of books including those by G.K. Chesterton, Joseph Henrich, and especially James Scott's Seeing Like A State. Taleb's Antifragile belongs in the same space.
Some sections seemed straight out of Scott. There's the mandatory section on Le Corbusier, Robert Moses, and Jane Jacobs. Taleb's picture of the fragilista prediction-loving intellectual-yet-idiot with his formal system and no willingness to think outside of it matches Scott's complaints about High Modernism. Taleb's ode to tinkering and skin-in-the-game matches Scott's conception of metis. Taleb is much more ambitious (some would say less careful and scholarly) than Scott, so instead of focusing on a few studies of historical farmers, he relates this to almost every part of today's society. But it's the same argument.
Other parts of Antifragile reminded me more of David Chapman (confession: I have not yet read all of Chapman's many and confusingly-organized works). There's the same discussion of the limits of rational systems, also by a person who obviously knows a lot about formal rationality and isn't just some fundamentalist or postmodernist or something.
But Taleb seems to want to go faster than any of these people. He seems to have more of a sense of mission to deliberately weave a lot of disparate threads into a full intellectual counterculture. Obviously no one person can control such an emerging counterculture, but he seems to be ushering it in faster and more deliberately than most of the other people who are working on the project.
His project could fairly be described as “anti-rationalism”, in the sense that theory-building and rational thought can’t go as far as people expect. Lots of anti-rationalists think the rationalist community are their enemy, probably because of the name. I’m more optimistic; I think self-described rationalists are more about poking around rationality, exploring its limits, and trying to find ways to expand those limits or route around them - which means there’s really a lot of common ground. "Formal systems can't capture everything and we should be really careful with them" is absolutely true, but the exact task of figuring out how much to use them vs. not use them remains difficult and metis-intensive in a way that, of the figures above, only Chapman seems to grapple with seriously. At some point you have to do a thing, which usually means using some system but also being aware of its limitations.
I continue to enjoy Taleb's books and recommend them to anyone who wants an introduction to these subjects. If you've already read the various authors I compare him to, you might still enjoy it as an augmentation, or to hear it in a very different style.