Summary and commentary on Nassim Taleb's "Antifragile"
"animals will evolve to perfectly fit whatever niche they find themselves in"
can somebody who's better than me at evolution address this
Re: "Evolution is antifragile"
I figured the idea was this: if the environment is stable for many generations, there is no environmental driver for evolution. If the environment is volatile, not over the lifespan of individual creatures but over the timespan of many generations, then the environment drives the creatures to change, to evolve to fit it.
"Perhaps it would be much kinder if somebody gave unfit animals some Animal Chow to prevent them from starving. But such kindness would prevent natural selection, and gradually weaken the species (or, more technically, the species' suitability to its niche) until eventual cataclysm."
Hmmm ... wouldn't this be exposing them to variation in their environment (sometimes there is Animal Chow, sometimes not) which surely should make them stronger!?
Never read any Taleb, but my impression of him from listening to him on EconTalk is that he is not a clear thinker when it comes to biology.
Like you seem to think, Taleb is best as a corrective (and effective Twitter partisan against Intellectuals-yet-idiots) rather than as a starting point.
Of the first few examples, I'd have to say stock options is the most egregious. Yes, the option gains value when the volatility *of the underlying* increases. Notice anything peculiar there? It benefits when chaos is applied *to something else*. On the other hand, when the value *of the option* is fluctuating wildly, that's really no fun for the option holder.
It's like saying "my company is antifragile, because when our supplier company is experiencing chaos, they get desperate and give us better deals".
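To see the first half of that point in numbers, here's a toy Monte Carlo sketch (my own, with made-up parameters: driftless lognormal terminal prices, zero rates) showing that a call option gets more valuable as the *underlying* gets more volatile:

```python
import math
import random
import statistics

def call_value_mc(spot, strike, sigma, n=200_000, seed=0):
    """Toy Monte Carlo value of a European call at expiry
    (driftless lognormal terminal price, zero rates)."""
    rng = random.Random(seed)
    payoffs = []
    for _ in range(n):
        # terminal price of the underlying under the assumed model
        s_t = spot * math.exp(sigma * rng.gauss(0, 1) - 0.5 * sigma ** 2)
        payoffs.append(max(s_t - strike, 0.0))
    return statistics.fmean(payoffs)

calm = call_value_mc(100, 100, sigma=0.10)
chaotic = call_value_mc(100, 100, sigma=0.40)
assert chaotic > calm  # more chaos in the underlying -> pricier option
```

The convexity of the payoff, max(S - K, 0), does all the work: upside is unbounded while downside is capped at zero, so widening the distribution of the underlying helps the option holder.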
> Maybe changes are inherently towards more volatility, and the only reason being long VIX isn't a guaranteed-market-beater is because it's one of the rare cases where people take this seriously and quantify it, because taking it seriously and quantifying it is their job?
There is a literature on "volatility investment." The big risk is totally losing your shirt - see this paper, which decorates its margins with Death wielding a scythe:
There was a big blowup that burned a lot of people who invested in volatility in February 2018. See Matt Levine for some details:
> Maybe this doesn't work in investing, but does work in real life?
Public markets make a poor parallel to real life. Public markets come extremely close to the efficient market hypothesis, and it's very hard to consistently beat them. But very few other markets are efficient.
The area I know -- early stage tech startups -- has huge amounts of value sitting around waiting to be taken. People no smarter than you make billions with simple strategies. Renting an office in SF is inefficient -- you can pay less than half price if you really shop around and negotiate.
It's frustrating that so many books about strategy and forecasting use public markets as an example, because it's the exception where simple strategies don't work.
On the topic of the Lindy Effect, why coin a new, worse term to describe eustress?
Taleb always bothered me with his grand, sweeping claims and just-so stories. They never seem to have much relation to the real world.
Like, the taxicab (or Uber) drivers I've known are generally one bad week/month away from bankruptcy, because they rely so much on their bodies to do their work and their equipment is on a vicious depreciation cycle. This is not true of the bankers I've known, some of whom take a year off from work and are fine.
Taleb would probably argue that he's speaking of some hypothetical banker and some hypothetical taxicab driver, but then what's the point? Why not just argue that dragons are fragile, which is why they never ruled Westeros? Either his arguments are grounded in the real world or they aren't.
> Maybe changes are inherently towards more volatility, and the only reason being long VIX isn't a guaranteed-market-beater is because it's one of the rare cases where people take this seriously and quantify it, because taking it seriously and quantifying it is their job?
I'm not aware of a way to go long on VIX which doesn't (naturally) decay over time to 0. There are expenses and weirdness naturally built into the products. I'm not a super derivatives guy, though, so... maybe there is a way and someone will be kind enough to mention/explain it?
(Also, perhaps that property is less relevant than I think.)
This is only tangentially related but Acoup recently wrote a series of blog posts about how Spartans actually sucked, and I found it pretty fun to read. Extremely fragile, rather than antifragile. https://acoup.blog/category/collections/this-isnt-sparta/
Since you mentioned Sparta, I can't help but promote Bret Devereaux's series on the mythology that grew up around Sparta, even in its own time, and on Sparta's reality: https://acoup.blog/2019/08/16/collections-this-isnt-sparta-part-i-spartan-school/
Regarding the part about sick v healthy people, I'm guessing he'd say that antifragility > fragility with the ceteris paribus qualifier?
So, you'd probably rather live in a rich, fragile country than a dirt-poor, antifragile one, but if GDP were equal, then the situation is different.
I'm having trouble squaring what seems to be this view that the attempt at theory is worthless with the fact that Taleb has an MBA and a PhD and has been a professor at multiple universities and the editor of an academic journal. He's trying to produce a theory that theory is stupid. This seems like the same basic impossibility of true moral relativism or non-Pyrrhic skepticism. Someone can make a convincing argument for them, but the very act of making an argument at all is inconsistent with what is being argued for.
> So think of this less as a sober attempt to quantify antifragility, and more as an adventure through Taleb's intellectual milieu.
Am I incorrect in interpreting this as 'Taleb takes refuge in unfalsifiability'? So many of these examples seem to hinge on their specific framing and level of focus; you point out a few and contradict a few more with the Fact Checks. Antifragility is a powerful concept to keep around, but I'm *extremely* skeptical of the prescriptions that are coming out of how it's being used.
Would the Ten Commandments be another example of "teaching birds how to fly"?
If you buy an option and you’re wrong you lose all your money (they expire worthless). This seems similar to lottery tickets and insurance. There are worse risks than that, but it seems like the risk reduction comes more from the ability to hedge, and this hedging happens when you don’t spend much money on such things.
"At some point you have to do a thing, which usually means using some system but also being aware of its limitations." Or some heuristic. That sometimes seems to make Taleb's distinction indeterminate.
My takeaway has always been, let different people try different things with volunteer participants, whenever that is possible. Then the hard cases are just those where it is difficult for pluralism to work, because the circumstances absolutely demand a unified response. Of course, as Covid has demonstrated, we do not currently have an alternative better approach to such situations, although sometimes people try to use compulsion to approximate it. Compulsion is fragile?
I have to question Taleb's statement on jet engines. The first patent on a gas turbine was issued in 1791, and the thermodynamics behind them were worked out by 1900, AIUI. I'm sure there is some aspect of their operation which was solved empirically before the theory was worked out, but it absolutely was not a matter of "people just tinkered with it before they understood how it worked".
On the discovery of things, I would argue that we need to be very cautious. Accidental discoveries make for fun stories, and are thus remembered. But we don't remember the thousands of small steps needed to make cars evolve from what they were into what they are.
We don't know who made the discoveries that came from following theory, or when, or even how many times, because following a theory to find something makes that something "not a real discovery/invention": if you follow a map that tells you there is a river here, and there is indeed a river here, nobody cares. We remember how and by whom the first vaccine was made, but most of us are incapable of naming those who used Pasteur's ideas to eradicate other diseases.
As with every bit of writing I have seen on this blog, I'm thrilled by the original perspective, the beautiful language, the humor... I am also excited to find that in a previous collection of your essays, you have addressed the question of fats (the different types, healthy/unhealthy etc).
I mean, he kind of completely misses the point of why we science (TM) which is not at all to build stuff and create new technology.
At least, not for all of us.
Does anyone have info on how black swan-ish funds have performed generally? I understand Spitznagel publicized his amazing performance at the start of COVID, but Taleb writes like black swan investing is a billion dollar bill lying on the sidewalk. Yet my sense is black swan followers have not by and large made a killing.
I was happy to read a review of this book, because there is no chance I’ll ever pick it up myself. I tried to read The Black Swan a few years ago and quit halfway through. I’m used to reading pompous academics, but Taleb was just over the top. Plus there were weird contradictions, like how he would go on and on about how useless and stupid philosophers are, and then praise Karl Popper and Bertrand Russell. Some people have laser intellects. Taleb is more like an old blunderbuss stuffed full of nails, rocks, and too much gunpowder.
"Medieval European architecture was done essentially without mathematics - Roman numerals (the only numerals anyone had at the time) were too unwieldy to add or subtract"
Not an expert in medieval architecture, but I am pretty sure this is total nonsense, as long as geometry is included as part of mathematics. Getting two ends of an arch to meet requires decent geometry. And making two lengths of wall match without adding is probably impossible.
Doing basic arithmetic with Roman numerals isn't hard (in fact, adding in particular is super easy!) - you aren't any good at it, but that's because you've never practiced it. How many times have you added Arabic numerals? Do it that many times with Roman numerals, then tell me it's "too unwieldy".
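For what it's worth, there's a symbol-shuffling procedure for Roman addition that never touches place value at all: expand the subtractive forms, merge the symbols, combine runs into bigger symbols, then restore subtractive notation. A toy sketch of it (my own implementation, not a historical source):

```python
# Pairs of (subtractive form, expanded form), largest first so that
# e.g. "DCCCC" is restored to "CM" before "CCCC" could match.
SUBS = [("CM", "DCCCC"), ("CD", "CCCC"), ("XC", "LXXXX"),
        ("XL", "XXXX"), ("IX", "VIIII"), ("IV", "IIII")]
ORDER = "MDCLXVI"  # symbols from largest to smallest
COMBINE = [("IIIII", "V"), ("VV", "X"), ("XXXXX", "L"),
           ("LL", "C"), ("CCCCC", "D"), ("DD", "M")]

def roman_add(a, b):
    # 1. expand subtractive notation (XIV -> XIIII)
    for sub, full in SUBS:
        a = a.replace(sub, full)
        b = b.replace(sub, full)
    # 2. concatenate and sort the symbols from largest to smallest
    s = "".join(sorted(a + b, key=ORDER.index))
    # 3. combine runs into bigger symbols (IIIII -> V, VV -> X, ...)
    for run, sym in COMBINE:
        s = s.replace(run, sym)
    # 4. restore subtractive notation (XXXX -> XL)
    for full, expanded in SUBS:
        s = s.replace(expanded, full)
    return s

assert roman_add("XIV", "XXVII") == "XLI"  # 14 + 27 = 41
```

No carrying, no place value - just sorting and pattern replacement, which is roughly why addition in particular was easy for people who used these numerals daily.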
It's true that they were built with rules-of-thumb and principles-of-practice rather than a defined theory of weight, mass, gravity, and structural engineering (in fact, a lot of stuff in the 19th and early 20th century was built with pretty ad hoc theory to back it up - it was extensions of stuff that had previously worked and been well measured). But to say it was built without mathematics seems ludicrous, and I'd want to read something with a LOT of evidence to back that up.
Re: strategies that succeed by taking things away instead of adding them.
I agree that this isn't always the right approach, but I like the idea so much that I've been trying to collect where it applies. So far I have:
* Probabilistic conjunctions (Occam's razor)
* Mindfulness meditation (to reduce thoughts that cause suffering, intrusive thoughts, etc)
* Conciseness in writing
* Software written with suckless / unix philosophy in mind
* Simplicity in mechanical systems
* Exercising the 5th amendment to avoid self-incrimination
* Exercising restraint in art to increase impact (examples: powerful film scenes lacking a score; also "Trio" by King Crimson, where the drummer was praised for not playing anything on the track)
* Tidying up your room
* "Too many cooks" -- in arguments, in artistic endeavors, etc
* Traveling light, allowing for traveling faster and freer (applies anywhere from taking a plane trip, to photons which literally "travel light" and move faster than anything else).
* Operational Security -- Reduce the number of components in your identity to avoid associations that compromise you
* InfoSec -- Reduce the number of components in your system to reduce your attack surface.
* Martial Arts -- Sometimes the best strategy is to wait for your opponent's actions and use their momentum against them.
* "One bad apple spoils the bunch" -- So reduce your number of apples.
* "Nothing to Lose" -- Freedom resulting from having little.
* Large concentrations of population as ripe for epidemics.
Some of the examples seem a little weird to me. It seems like the term "anti-fragile" should be applied to a system of some sort. For example, I'm not really sure how exercise is anti-fragile; surely the claim should be about the human body? In which case it is anti-fragile in some ways, as lying in bed all day isn't great, but not in others, as raising the internal temperature a mere 5 degrees can be deadly.
The case of the banker/taxi driver also doesn't seem right. He frames John as doing fine until something bad happens and he gets laid off, while George can adapt his business if it's slow in one neighborhood or something. But those seem to be very different scales of hardship. Something that causes a bank to fail very likely will affect taxi drivers pretty badly too (think of the current pandemic! Taxi drivers are much worse off than bankers). And while he allows George to change his business to a courier, he neglects the possibility that John can also get a different job.
I guess he's using this as a parable to show the benefits of allowing volatility, so there may be 50 Georges and some will go bankrupt while the others flourish. But it seems a little odd, especially in light of his real-world examples. It's also not clear when you can turn the term "anti-fragile" back on itself. In the case of the forest fires, the periodic small burns prevent a large all-encompassing fire, so he presumably calls the forest anti-fragile. Alternatively, without these regular, periodic burns, everything will burn down at once. Does this make it fragile with respect to the small burns? A few measly humans with water hoses came in and ruined everything. Others have pointed out that this is also apparently the case for Sparta.
I'm not sure how much weight it's given in the book, but it's also worth remembering that anti-fragile systems work well in volatility, while fragile systems work better in stability. It's not said outright here, but he seems to imply that we should be making our systems more anti-fragile. And given black swans and all that, it's probably not bad to keep an eye on it. In the end though, they do have a cost, just like drinking out of a cup of silly putty would be a pretty terrible experience.
As a side note on the Lindy effect: he seems to conflate "lasting longer" with "better". For classical texts, people in antiquity were probably about as good at stories as people now, so you'd expect the stories that last to be preselected to be good. For a physical object, it probably undergoes about the same amount of stress regardless of how old it is, so older ones are probably sturdier. But I'd rather use a phone from today than one from 1960, even if that one is sturdier. Same thing with studies. It feels like the classic economics joke of not picking up money on the ground because if it was real, someone else would already have done so. If this older thing weren't as good, someone would already have discarded it. Maybe, but sometimes that someone is you!
Antifragility is a cool concept and it makes me feel like going out and exposing myself to disorder to get stronger. But aside from coolness I don't think antifragility is necessarily better than plain old robustness. For example, my bones get stronger under stress (eventually) and titanium gets weaker. But slightly fractured titanium is still probably stronger than a weightlifter's bones. And taking the idea to the extreme could lead to hoarding canned water and VIX instead of profiting from a calm period.
One thing bugging me here is Taleb's insistence that "antifragile" is different from "robust" -- I mean, certainly, antifragile is different from Taleb-robust, because he's defined them that way. But I don't think Taleb-robust is the same thing as robust-in-the-ordinary-sense, which seems to have quite a bit of overlap with what Taleb calls "antifragile" (e.g. the options example -- benefitting from upside but being protected against downside would ordinarily be called "robust"). This wouldn't be a problem, except that as best I can tell, Taleb doesn't seem to notice that his use of the word differs from common use, and so just says "antifragile is not the same as robust", leading to a lot of confusion.
> according to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division.
This struck me as very wrong. A post on Skeptics Stack Exchange agrees: (https://skeptics.stackexchange.com/questions/15130/did-only-a-handful-of-people-in-europe-know-how-to-do-division-before-the-13th-c).
The Byzantines, and the Muslims in Spain, both certainly knew arithmetic, and higher math as well. But even if Taleb means "Latin Christendom" and not Europe per se, basic arithmetic (as part of the quadrivium) was part of a 'standard' higher education. There wasn't any progress made, and architecture regressed, but people didn't forget how to divide integers!
I really don't get how Taleb could have claimed this with a straight face. I tried to look up Guy Beaujouan, but he wrote in French (which I don't speak) and before the age of the e-book, so I can't easily find a reference.
> Instead of reading the latest studies, read older studies!
Contra Scott's "As practical advice, this suffers from a certain having-obvious-transparent-flaws," I think this is generally very good advice, at least as far as it goes.
If you're looking to understand a field, you absolutely should read the older, foundational, most-cited papers before diving into the newest ones. If you want to learn about something on the cutting edge, taking the paper you think is interesting and going through its bibliography and first reading the oldest paper you see is probably a better play than reading the paper you want to learn about.
Books are the same way. In most circumstances, you're better off reading an older writer who everyone agrees is a classic than hoping the new hotshot will live up to their impressive debut novel.
News media and cultural commentary is the same. There's a reason the subreddit doesn't allow discussion of current events in real-time. I'd much prefer a world where the stories about "news" were all written with the benefit of a week's hindsight instead of a mad rush to be 'first'.
This is actually one of the things I liked about the "old internet". 10-15 years ago, the results at the top of your Google search were, nearly without fail, the best things about the topic you searched for. The most comprehensive. The best-written. These days, the internet (Google, Reddit, YouTube, etc.) is biased towards the new, the ongoing, and the "engaging" in the social-media-analytics sense of the word. It's much harder to find the thing that was clearly the best article/essay/review of the topic you want to learn about, because instead you're directed to the scads of newer things, most of which are far worse.
Perhaps I've missed Taleb's point here. I certainly agree that reading the most recent research can be important for some academics, but unless you're trying to publish in the specific subfield of the stuff you're reading, you're probably safe ignoring it for at least a few years.
Fun fact: In China the Lindy effect is state-sponsored through the official designation of "Time-honoured brand" - https://en.wikipedia.org/wiki/China_Time-honored_Brand
Singapore vs Malaysia is a matched counterexample to Lebanon vs Syria. All four countries started out as Islamic kingdoms, albeit at opposite ends of the crescent. To the extent that either Malaysia or Singapore was a country in 1920, they were the same one. It was in 1965 that Singapore won its independence, or, if you bought your newspaper at the other end of the causeway, a certain cancer was excised from the Malay body politic.
Lee Kuan Yew is a paragon of authoritarian high modernism; Malaysia is where James C. Scott spent his 18 months as a padi farmer. But, on any material measure, Singapore is winning.
The idea of anti-fragility is very important, but this is really an example of someone having a Big Idea.
Ironically, by application of his own argument, this Big Idea is itself fragile. It is exactly the kind of theory he complains about.
The problem is that he is just flat-out wrong about it in many ways, and we already have a much more useful model that is more generally applicable - natural selection.
Natural selection is the process by which environmental pressures act on a population, producing "survival of the fittest". The result is higher efficiency.
But if you look at what actually results in the best results, it's actually *artificial* selection. Artificial selection works many orders of magnitude faster than natural selection does. We have made crops that are vastly better than wild plants, and genetic engineering has allowed us to make even better ones in just a few decades.
Many good systems are irreducibly complex and will never arise naturally as a result. Likewise, natural selection doesn't always select for positive traits - again, take the example of the dodo: it evolved the way it did because it would have been wasteful for it to evolve otherwise. The fact that so many island species evolved this same way shows exactly this. Natural selection is no defense against going down a blind turn and smashing into a wall.
Indeed, natural selection works at its best with a moderate level of pressure - too high and the animals tend to die out before selection can even really affect them. When a gigantic meteorite struck the Earth 65 million years ago, most things didn't adapt - they just died.
By way of analogy, if you have an event that destroys most businesses, you might not be promoting only the best businesses - you might be promoting businesses which happened to have a characteristic that protected them from that event. That doesn't mean those businesses were "better" in a macro sense. For example, the COVID-19 pandemic has killed a lot of in-person things and promoted online things - but that doesn't actually mean in-person stuff is *bad*; it is just that the selective pressure forced people in a certain direction. If we spent a year under severe cyberwarfare conditions that almost shut down the Internet, then in-person businesses might thrive.
Blind selective pressure is not "good" or "bad". Evolution lacks foresight. An island population might be very fragile to outside invasion, but it is also less likely to get external pathogens in the first place. If a pathogen gets introduced to Maine, it will likely spread to Florida; if a pathogen gets introduced to Hawaii, it is less likely to be introduced to Midway.
Indeed, there's little evidence that being on a large landmass even makes you antifragile in the first place; the "fragility" of island ecosystems is really because humans got there recently enough to see the effects. Humans already killed almost all the North American and Eurasian megafauna in prehistoric times.
Really, the fact that more advanced, sophisticated, interconnected societies tend to dominate their neighbors is a strong point against the idea that they are inherently fragile; indeed, the supposedly "anti-fragile" city states have almost entirely died out or become much bigger countries.
His whole thesis is really just scattered and full of motivated reasoning.
Competition *is* desirable, but he is trying to connect a lot of disconnected ideas because he has this Big Idea, and so he is awkwardly cramming everything into it, whether or not it makes sense.
Edward Luttwak wrote "Give War a Chance" along similar lines, but I think that was less about controlled burns than "war making as state making".
Erik Falkenstein also said that Taleb's theories imply that selling insurance should be a terrible business that frequently results in bankruptcy, which doesn't actually fit our reality of relatively long-lived insurance sellers.
"Roman numerals (the only numerals anyone had at the time) were too unwieldy to add or subtract"
I don't think that's actually true for people used to using them. It's really large numbers where they get too long compared to a base-10 numeral system.
Willmoore Kendall, the "wild Yale Don" involved in National Review's early days, argued that Socrates' death was justified... based on Socrates' own beliefs (and that he willingly drank the hemlock rather than escape with his supporters because it was his only philosophically permissible action).
Robin Hanson has also noted that mergers tend to be value-destroying, and thinks that they are undertaken anyway for reasons of internal corporate politics (similar to his reasoning for management bringing in "consultants" to recommend the thing they wanted to do anyway).
First, evolution and exercise are processes, not systems; the systems are the ecosystem and the muscles. When an environment is stable, life does not lose the ability to evolve; it has simply evolved to fit that stable environment. When things change, the process of evolution will still occur. If the environment is volatile, species will adapt to the specific nature of that volatility, and may need to evolve differently should the volatility patterns change.
Likewise with exercise: if muscles were truly antifragile, why would trainers, physical therapists, and orthopedists be so busy? Muscles grow in response to the proper stresses; if the type of "volatility" is wrong, injury occurs.
The common ground between rationality and Taleb's project is an area well worth exploring - I'm glad you raised it in the last couple of paragraphs. Taleb's natural tendency to aggressively dismiss attempts to understand systems probably obscures how mutually beneficial the two philosophies can be to each other.
I actually wrote a blog post on the relationship between the two almost exactly a year ago!
On mergers, some of the diseconomy of scale that results seem to be due to big companies turning into mazes: https://thezvi.wordpress.com/2020/05/23/mazes-sequence-summary/ I think there's a real institutional design / corporate governance problem to be solved here -- how can you scale up without this happening?
What is the principle that connects the Lindy effect and the anthropic assumptions of the Carter Doomsday argument? I can kinda glimpse something, but I don't really see the connection.
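I think the shared principle is "treat your observation point as a typical moment in the thing's lifetime." Under heavy-tailed lifetime distributions, conditioning on survival to age a pushes expected remaining life up in proportion to a, which is the Lindy effect; the Doomsday argument applies the same "I'm at a random point" logic to humanity's timeline. A toy check, with lifetimes I've arbitrarily assumed to be Pareto-distributed:

```python
import random
import statistics

rng = random.Random(42)
# Assumed heavy-tailed lifetimes: Pareto with shape alpha=3, scale 1.
lifetimes = [rng.paretovariate(3) for _ in range(1_000_000)]

def mean_remaining(age):
    """Average remaining lifetime among things that survived to `age`."""
    return statistics.fmean(t - age for t in lifetimes if t > age)

# Lindy: the longer something has lasted, the longer it should keep lasting.
# (For this distribution, expected remaining life is about age / 2.)
assert mean_remaining(10) > mean_remaining(5) > mean_remaining(2)
```

With a thin-tailed (e.g. exponential or normal) lifetime distribution the assertion would fail or reverse, which is why Lindy only applies to things like books and ideas, not to organisms with bounded lifespans.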
I'm not sure that I see how religion is antifragile. Organized religion appears, in particular, to exist for the purpose of shielding morals and ethics from the memes du jour, for the sake of protecting them as they contain deeper wisdom that may not be apparent at the surface. Every virtue in an organized religion is a Chesterton Fence, but wouldn't the theory of antifragility say something like "you can get rid of all the Chesterton Fences and this thing should get better"?
Re part 1: Did you really not get that antifragility is all about Jensen's inequality?
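For anyone who hasn't seen that connection spelled out: Jensen's inequality says that for a convex payoff f, E[f(X)] ≥ f(E[X]), so adding variance to X raises the expected payoff. A quick numerical illustration with a toy convex payoff of my own choosing:

```python
import random
import statistics

rng = random.Random(1)
xs = [rng.gauss(0, 1) for _ in range(100_000)]

# A convex payoff: unlimited upside, downside capped at zero.
def f(x):
    return max(x, 0.0) ** 2

lhs = statistics.fmean(f(x) for x in xs)  # E[f(X)]
rhs = f(statistics.fmean(xs))             # f(E[X]), roughly f(0) = 0
assert lhs > rhs  # Jensen: the convex payoff gains from volatility
```

On this reading, that gap between E[f(X)] and f(E[X]) is the "antifragility": a convex exposure is paid by volatility itself, and a concave one is punished by it.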
So... by the same token, moving from a personal wordpress site SSC to a guaranteed-income model of ACX makes Scott more fragile, the opposite of his stated goal.
“I think part of its response would draw on Taleb's previous arguments that people underestimate the risk of black swans, so the world will be more volatile than they think.”
On my reading of Taleb, the point is that there are two relevant distributions: first, there’s the probability distribution for events, including the tail events he focuses on in Black Swan (e.g. the probability of a big stock market crash); and second, there’s the distribution of outcomes, which is the focus of Antifragile (e.g. the price of your investment).
The first is taken as a given—or, more precisely, it’s taken to be never ever understood properly no matter how hard you try; tail events include things that have never happened yet and no model will capture the probability of things you’ve never seen or thought of. Our failure to model this usually leads us to underestimate its likelihood (hence, the whole Black Swan book).
The second is the focus of this book. The distribution of outcomes has two tails (good or bad, right or left), and the point of antifragility is to open oneself up to the right tail while not being subject to the left. The banker is subject only to left-tail events and is therefore fragile; the taxi driver is antifragile because he is open to right-tail events (the worst he can do in a week is make no money, but the best he can do is “infinite”). Ideally you set yourself up so that the distribution is right-skewed like this; even if your mean outcome is worse (or, looks worse because your model doesn’t properly account for tail events), an increased access to right-tail events is worth it. Hence, Taleb’s “barbell” investment strategy, etc.
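To make the two tail shapes concrete, here's a deliberately crude simulation (all numbers invented) of the two income streams; the sign of mean-minus-median shows which tail each is exposed to:

```python
import random
import statistics

rng = random.Random(0)
N = 10_000  # simulated weeks

# Banker: steady pay, but rare severe left-tail weeks (layoff, clawback).
banker = [-5000 if rng.random() < 0.02 else 2000 for _ in range(N)]

# Driver: noisy pay floored at zero, rare right-tail windfall weeks.
driver = [max(rng.gauss(800, 400), 0) + (5000 if rng.random() < 0.01 else 0)
          for _ in range(N)]

def skew_sign(xs):
    # positive -> right-skewed (good surprises), negative -> left-skewed
    return statistics.fmean(xs) - statistics.median(xs)

assert skew_sign(banker) < 0 < skew_sign(driver)
```

The banker's typical week looks better than the driver's, which is exactly Taleb's point: comparing typical outcomes hides the asymmetry in the tails.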
If the space of outcomes is non-negative, like [0,\infty), it’s even more important to guard against the left side, because if you go to $0 then you don’t get to keep playing the game any more. (I don’t remember which book this point is from (I don’t think it’s Antifragile, maybe Skin in the Game?).)
Is Taleb really suggesting that you can invent something as complex as, say, the modern MRI machine just by tinkering around with wires and things in your garage? What?
I mean, yes, all engineering requires a certain amount of experimentation; but it's guided experimentation, not just random guessing. The theory is the guide.
"But the interesting constant is that when a result is initially discovered by an academic researcher, he is likely to disregard the consequences because it is not what he wanted to find - an academic has a script to follow."
I'm a researcher in experimental biology, so this got me thinking.
My first reaction was to strongly disagree. Scientists love accidental discovery stories. "I noticed unexpected thing X, and I had enough breadth of mind to realize that meant Y might be true, and that led me to make a major discovery I hadn't been looking for" makes you a real hit at conferences. To the extent there is a script (hypothesis-driven research in your discipline, I suppose), the ability to improvise when things go off-script is widely admired. You might imagine granting agencies would be upset if you take your research in unplanned directions, but usually if you get a high-profile paper out of it they are perfectly happy.
Then it occurred to me that it is true that sometimes my students have made unusual observations, and as Taleb predicts I've discouraged them from following up on them. The first reason is that an accidental discovery and an experimental artifact can be hard to distinguish. The second is that when a project drifts too far from your own area of scientific expertise, you have to learn a lot of new literature and you are prone to making stupid beginner errors. When you supervise a bunch of people and have to keep a bunch of projects on track, it's a big time expense to pursue a new field. You don't see many labs with one virology project, one chromatin project, one metabolism project, etc. Most professors can't keep up with all those fields of literature well enough to direct them. So there's a natural tendency for projects to be scuttled when they drift too far from the lab's core expertise.
How do you solve this? Collaborations can help: show your weird finding to someone with more specialized expertise and go from there. A few months ago a colleague got a strange result and didn't know what it meant, but he realized it involved a gene that I studied and had his student talk to me. Now it's the most exciting project my lab is working on. Even if you don't have a lot of different expertises in one lab, you will have them in one department or university.
I wonder how one could study this question rigorously "do scientists stick to planned paths too tightly". Unfortunately I lack specific expertise in this area and will not pursue it further.
when you got nothing, you got nothing to lose
It's not possible to be directly long or short the VIX. The VIX has mean-reverting behavior: when it's low, it's expected to rise over time, and when it's high it's expected to fall over time. Since this is common knowledge, a security whose price tracked the VIX wouldn't clear, because there would be more buyers than sellers whenever the VIX was below the long-term historic average and more sellers than buyers whenever it was above it. What you *can* do is trade cash-settled VIX futures. If the VIX is at 10 today (representing a very placid market), futures settling several months from now might be trading at 15, so you could buy those futures and be long volatility, but if the VIX only rose to 14 in that period, you'd be losing money even though the VIX went up just like you predicted. This is what prevents betting on volatility during seemingly-placid times from being an easy market-beater.
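The arithmetic in the futures scenario above is worth making explicit. A minimal sketch, using the comment's hypothetical numbers (not market data):

```python
# Hypothetical numbers from the scenario above: VIX at 10 today, a future
# settling months from now trading at 15 because mean reversion is already
# priced into the curve.
futures_entry = 15.0   # price paid to go long volatility via the future
vix_at_settle = 14.0   # the VIX did rise from 10, as predicted...

pnl_per_point = vix_at_settle - futures_entry
# ...but the long position still loses 1 point: being right about the
# direction of the VIX isn't enough; you have to beat what the futures
# curve already priced in.
```

This is the sense in which seemingly-placid times are not free money for volatility buyers: the expected rise is already in the price you pay.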
> And everywhere else, people really do underestimate volatility, and antifragility systematically is underpriced?
Haven't read the book, but I suspect Taleb's point is that Lindy / folk wisdom tends to ~correctly price antifragility while "legible" / intellectual wisdom tends to underprice it. And that the latter kind of thought controls an increasing amount of modern society and is making a play for more (see also: rationalism). At least, that's the argument I'd make if I were him.
My favorite example of antifragility in traditional societies, because it's such a big one, is how premodern farming practices optimize for "least risk of starvation" rather than "highest average production". See e.g. https://acoup.blog/2020/07/24/collections-bread-how-did-they-make-it-part-i-farmers/
I'm confused about the Syria/Lebanon plot. Was that a version of what was in Taleb's book, or a snide rejoinder to Taleb by Scott? The plot clearly shows that Lebanon was way ahead of Syria for the duration of the measurements, going all the way back to 1820. And the additional divergence around 1950 was not something happening in Syria to suddenly depress growth - Syrian growth continued as before. Instead, there was a massive increase in economic activity in Lebanon in the 1950s, which a little Googling shows was due to Beirut being the financial center connecting the post-WWII Middle East to Europe.
The examples of evolution and collections of city-states vaguely reminded me of a metaphor from the video game Obduction. I can't find the actual text, but it went something like:
Once there was a gardener who carefully separated their seeds into separate plots, and tended and pruned their plants dutifully to keep the whole garden neat and organized. But despite the gardener's dedication, the plants grew sickly, and their garden never flourished. Eventually, they gave up and stopped tending the garden. Then one day, much later, they came back and found the garden lush and filled with thriving plants, growing wild in every nook and cranny.
They argued that allowing seeds to be scattered to the wind is often bad for the individual seeds, but it's good for the species. Lots of independent big risks taken by individuals leads to a lot of individual suffering but allows the collective to capitalize on opportunities they couldn't have found otherwise and thereby expand the total resource base of the species.
(Warning: Generalization from fictional evidence. Pretty sure competent agriculture has higher food yields per acre than gathering-from-wilderness does.)
“This chapter (and honestly the rest of the book) only makes sense with an assumption that antifragility is systematically mispriced”
There is no “proper” price.
In financial markets, prices constantly change, sometimes drastically! Prices are not static, they’re dynamic. One could say prices are always wrong, thus always changing, trying to be less wrong.
Investors seek to own what another investor will purchase more for in the future. Wise investors are long term investors.
In the long term the winning investments are antifragile.
Economics classes teach the Efficient-Market Hypothesis, the idea that prices reflect all available information and you can't "beat the market".
Haha, that is false. Humans misprice *all the time*
“Healthy people are fragile” ... “very sick people are antifragile”
Can you elaborate?
If very sick people benefit from increased variance, why do they rest all day?
I’m a healthy person and increased variance makes me strong, resilient, happy, wise. But you say “increased variance can mostly make them worse”
Can you explain?
“Reasonable to give a terminal cancer patient an experimental drug - the worst that can happen is they die”
Do you mean - they suffer immensely from unforeseen effects and die sooner than otherwise?
Antifragile does not mean nothing left to lose.
Antifragile means what doesn’t kill me makes me stronger. A healthy person is antifragile to illness-19, a sick person fragile.
Humans appear to be tremendously successful in dominating the biosphere despite being extremely fragile in evolutionary terms:
1. We reproduce slowly and in small batches of offspring compared to lots of other mammalian species, or species in general. That means selection in general happens much slower for us than with, say, rats.
2. Our survival is highly dependent on a socially transmitted set of knowledge that takes years to learn. Take that away, and we're creatures that can freeze to death outside of tropical areas because we don't have any fur.
The taxi driver vs. banker example seems to have been disproven by the current pandemic. The taxi driver is hosed, because the massive reduction in personal travel has outlasted his ability to survive a smaller income. The banker is still collecting his salary while working from his home office. Individual bank branches might be fragile, but banking as an industry seems pretty anti-fragile - it will survive at least as long as capitalism does.
Beyond that, of course, taxi drivers never made as much as bankers. So the bankers could buy themselves some volatility protection via savings and investments that the taxi driver could not afford. The banker might have an income of 100% or 0%, but if he can live for a couple years on 0% while the taxi driver will go broke on 6 months at 50% pay, the banker is less fragile.
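The runway comparison above is simple arithmetic. A sketch with made-up savings and expense figures chosen to match the comment's "couple years" vs "six months" magnitudes (the specific numbers are mine):

```python
def months_of_runway(savings, monthly_expenses, monthly_income):
    # How long until savings run out at the current burn rate.
    burn = monthly_expenses - monthly_income
    return float("inf") if burn <= 0 else savings / burn

# Hypothetical figures, illustrative only:
banker_runway = months_of_runway(savings=240_000,
                                 monthly_expenses=10_000,
                                 monthly_income=0)       # 0% pay
driver_runway = months_of_runway(savings=6_000,
                                 monthly_expenses=3_000,
                                 monthly_income=1_500)   # 50% pay
```

With these numbers the banker survives 24 months on zero income while the driver goes broke in 4 months on half income: savings are, in effect, purchased volatility protection.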
Taleb is right about a lot of stuff and also needs a good dick punch. He talks about skin in the game and grit and such, but his books and tweets are all a good example of Matt Levine's definition of a great hedge fund manager: one who collects more in fees than the investors' initial capital. Which Taleb does - his fund loses ten years in a row, collecting fees all along, and then in year eleven profits enough to make up for all the losses. He also makes a lot then too. A bit like a bodega owner (antifragile?) who makes money selling lottery tickets and then gets a payout when one of its patrons hits the Mega Millions pot. Which is all fine! But like "news" is really advertising with some news attached, Taleb's books are really hedge fund marketing with some book attached. Doesn't make them bad books, but probably explains their heft.
Shameless plug: https://thepdv.wordpress.com/2019/06/03/a-general-theory-of-bigness-and-badness/ is my attempt to specify explicitly why the pattern seen in Book Five w.r.t. organizations and countries happens. I think it has more gears than Taleb's take and therefore is more likely to be useful. (Which does not imply it's more likely to be _correct_, TBC.)
> He praises Switzerland, which is so federal that it's barely a single country at all, and argues that its small size (or rather, the small size of each canton) has helped it stay one of the world's most stable and prosperous areas (also, Venice!).
> So, a glib take you’ve probably heard is that the problem with Big Government, Big Business, Big Etc. is not the government or the business or the etc. but the “Big”. This is extremely superficial and is essentially elevating a trivial idiosyncrasy of the English language to an important structural principle of the universe, which makes about as much sense as nominative determinism. I think it’s true anyway. Here is my theory of why:
I’m a bit surprised you don’t mention Karl Popper here. If I recall, Popper’s thoughts on induction are behind a lot of Taleb’s thinking. I’m no expert in Popper, but I am curious about how to reconcile Popper’s thinking with the rationalist way of thinking. Anyone thought about this?
I don't think it's quite true that Lindy = Doomsday. The Doomsday Argument uses one specific generating process: sampling a point on a finite interval, and gets Lindy as a result. But you can get Lindy from lots of generating processes:
- A geometric series with unknown rate and uniform prior.
- A Poisson process with unknown rate and exponential prior. (This also explains hyperbolic discounting: see https://scholar.google.com/scholar?cluster=13790279530154362968&hl=en&as_sdt=0,5.)
- Nick Bostrom's x-risk model of drawing balls from an urn.
- Time until you beat your current highest sample for any given distribution.
- Time to return from a random walk. (Probably. I haven't worked out the details of this one yet.)
Some of these are different representations of the same process, but I'm not sure all of them are. So I suspect Lindy's Law is deeper than the Doomsday Argument.
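One of these generating processes is easy to check numerically. Below is a sketch (parameters are my own, purely illustrative) of the Poisson-process-with-unknown-rate version: with a Gamma(3, 1) prior on the hazard rate, the expected remaining lifetime of something that has survived to age t works out analytically to (1 + t) / 2, i.e. roughly proportional to its age, which is Lindy's Law:

```python
import random

random.seed(0)

def sample_lifetime():
    # Unknown hazard rate drawn from a Gamma(shape=3, scale=1) prior;
    # lifetime is exponential given that rate.
    lam = random.gammavariate(3, 1)
    return random.expovariate(lam)

def mean_remaining(age, n=300_000):
    # Monte Carlo estimate of E[lifetime - age | lifetime > age].
    total, count = 0.0, 0
    for _ in range(n):
        t = sample_lifetime()
        if t > age:
            total += t - age
            count += 1
    return total / count
```

Here `mean_remaining(0)` should land near 0.5 and `mean_remaining(4)` near 2.5: the longer something has already lasted, the longer you should expect it to last, with no finite-interval Doomsday-style sampling anywhere in sight.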
I'm torn between Specialization and Antifragility.
On the one hand, I should specialize in narrow fields to increase efficiency. I don't know anything about agriculture, can't start a fire by myself, and would be incredibly fragile if left alone in the wilderness, and our civilization gives me every incentive to ignore these things and focus solely on performing well at my workplace.
On the other hand, Antifragile requires me to diversify my skills and expose myself to environmental volatility to maintain my capacity for survival - to be a jack-of-all-trades - at the risk of losing my job to young enthusiasts who go all-in, given the competitiveness of my industry.
I'm not an expert, but it seems to me that COVID is a pretty clear refutation of the "theory isn't any help in medicine" theory, at least in its wider sense. Even if the story about Moderna developing its vaccine in literally two days (https://www.businessinsider.com/how-moderna-developed-coronavirus-vaccine-record-time-2020-11) wasn't quite true, we still saw the development of multiple vaccines which turned out to be effective within weeks or months of the emergence of a new virus. I don't know whether there's a reason to think vaccines are very different in this respect from other drugs (or inventions as a whole), but it does seem to be a striking success story.
The danger of black swans isn't just that they're rare, unpredictable, and large. It's that we don't know how large they can get, even after studying past black swan events in the space.
We tend to talk about the Carrington Event as though it's a worst case event that might be repeated. It's not the worst case. According to the math and physics we know today, we have no idea how big the worst case might be. (Source: A keynote talk at the 2020 New England Complex Systems Institute conference.)
We look at deadly wildfires and think that the fire in Paradise, CA was shockingly horrible (it was) and so it must be a worst case. It's not close to a worst case. A fire tornado followed the 1923 Great Kantō Earthquake. That fire tornado killed 38,000 people. What if the Paradise fire had started upwind of a major city in similar conditions? Could we see 100,000 dead? A million?
My takeaway is: We are not prepared, and perhaps we can't be prepared, for some of the actual plausible worst case events.
The nice thing about Taleb is it's easy to install his brain-module. He helps make sense of one's experience a little, perhaps.
I feel like my own field of academia -- astrophysics -- runs counter to a lot of Taleb's "theory vs practice" argument.
A lot of the 20th century's major discoveries were not accidental bolts from the blue, but resulted from people being guided by theory in order to design their experiments just right.
Einstein came up with General Relativity - the most important theory in modern cosmology - by immersing himself in the theory, incorporating work by lots of other scientists (people like Maxwell and Lorentz), and trying to attack a particular problem. And he famously succeeded.
Accidental discoveries did happen, of course. Like Hubble discovering that the Universe is expanding. But still, Taleb's version of the process - someone makes a practical discovery, and theorists come in along later and hastily try to explain what's going on -- just doesn't really fit here. Even before Hubble, scientists were aware that Einstein's equations really seemed to imply an expanding Universe (and people came up with all kinds of kludges to 'fix' the problem). Hubble showing that the Universe is really expanding caused a feeling of 'oh thank goodness, the theory was right all along'.
Or take the discovery of the Higgs Boson. Or gravitational waves. Or the first exoplanet. In all of these cases *theory came first*, and experimenters, guided by the theory, knew where to look.
You know, I'm always bemused by the number of people (even within education) who assert that the purpose of formal education is to imbue the student with a beautiful theoretical framework that will allow him to easily predict and calculate all he needs to know about the Real World that he is about to enter.
This is deeply and even obviously silly (although plenty of people doing the educating think this way, perhaps to pamper their own egos). The only rational purpose of education is to summarize and distill the past, so that the student learns all that has been done before (in some relevant area) far more efficiently and quickly than if he had to stumble upon it himself by chance in the Real World.
It is (or ought to be) a *past-focussed* process to greatly shorten the time of complete n00b apprenticeship, so that the student can become a journeyman in the Real World much sooner and achieve mastery at a younger age. That is, it's logical foundation *is* skepticism about theory versus Real World experience. It says "learn all the ways people have tried X and Y and theory Z and T and why they didn't work as fast as possible, in a planned firehose of information dump, so that you can go out in the Real World sooner and NOT repeat any of the umpty-six dumb mistakes people have made since AD 800 or so."
That doesn't, of course, mean that the real purpose of education has remained uncorrupted, or that nitwits, both within and without education, haven't enthusiastically debauched it. They have -- the Church of Education cultists are almost as obnoxious as the Church of Science cultists. But in principle education should be a big buffer against volatility -- an "antifragile" enterprise -- because by allowing students to learn from the experience of far more people, in far more situations, than would easily be possible in any Real World situation of equivalent duration, it makes far fewer of the curveballs life and Nature throw at us come as an utter surprise.
Personally I view the modern fashionable disdain for the institutions that helped us tame and ride chaos (as well as the descent of those same institutions into fossilized rococo courtly competitions) as a kind of broad late-Empire intellectual decadence, the kind of hazy sentimentality that might've led a late-Empire artisan chafing under Imperial taxation and corruption to fantasize that the life of a medieval village smith would turn out to be a tremendous improvement for his grandsons. Ah! The fresh air! The simple joys of the peasant life in the harmonious shtetl nestled in the bucolic countryside, free of any distant scheming Senators. From which mistake follows 1000 years of muddy plague-eaten misery, but maybe that's what happens when (a) some of us mistake a rational system for a religion, and (b) the rest of us are too impatient to scrape away the barnacles and decide to just go all Canticle for Leibowitz on the whole thing. If rationality has been so thickly coated in ritual that it is hard to recognize any more, why not treat *all* rationality as ritual and just give yourself over to impulse? That'll work out well.
The comments on small versus large nations made me imagine the US as 50 sovereign countries. Imagine the diversity of culture, social systems, and economic systems in such a world. Of course, who knows how many "intra-US" wars would have been fought over a couple centuries. If you had a choice between one bigass US (as today), or 50 sovereign nation states, which would you choose?
"Only make sense with an assumption that antifragility is systematically mispriced": it is. Antifragility benefits systems over individual cases and the collective over individuals: again, evolution. *Individuals* don't want big shocks, and it is certainly anti-humanitarian to say that the weak and the unlucky should die for the benefit of the strong and lucky, as both you and he point out. So we tend to seek the stable, the predictable, the smooth, meaning that such things are overpriced due to (misguided) demand.
Nobody (for the most part) *likes* the idea, I think Taleb is just arguing that it's a better model of the world than the ones currently being used, and that it *matters* because the current at-odds-with-reality models are disaster-prone. I also agree that I don't think he'd take umbrage to the Rationalist movement: the whole thing (and I realize I'm grossly oversimplifying and unlike you I wasn't there at the beginning so correct me if I'm wrong) seems to me to have started when Yudkowsky looked around and said "hey, why do all of these intelligent educated people believe in and do all these patently absurd things? There must be some important thing here besides intelligence and education that we're failing to reify".
"Taleb never makes this claim, and I think it would be hard to argue that an entire category of instrument has been consistently mispriced since forever. But then what is he trying to say here?"
I think this is exactly the claim he makes. Just in a reverse "picking up pennies in front of the steam roller" sense that it will take a long time. I don't think Taleb believes in the EMH.
Re mergers: "The combined unit is now much larger, hence more powerful, and according to the theories of economies of scale, it should be more "efficient". But the numbers show, at best, no gain from such increases in size [...] There seems to be something about size which is harmful for corporations."
Ronald Coase did work on this in The Nature of the Firm (1937). In a nutshell, the size of a firm is a function of economies of scale (favoring expansion) and transaction costs (favoring contraction). Firm sizes equilibrate at the intersection of these lines. In other words any given firm is probably roughly as big as it ought to be, and if you merge two firms you're likely to introduce higher transaction costs, which your gains in economies of scale are not large enough to offset.
Transaction cost in this case is basically the friction with which information flows inside the firm. So overhead, the likelihood of managers poorly allocating resources etc.
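A toy version of that equilibrium, with my own illustrative numbers (not Coase's): if average cost per unit combines falling fixed costs and rising internal friction, cost is U-shaped in firm size, and merging two optimally-sized firms lands you on the rising side of the U:

```python
import math

def avg_cost(n, fixed=100.0, friction=0.04):
    # Economies of scale (fixed / n) against internal transaction costs
    # (friction * n): average cost per unit is U-shaped in firm size n.
    return fixed / n + friction * n

# The minimum of fixed/n + friction*n sits at n* = sqrt(fixed / friction):
n_star = math.sqrt(100.0 / 0.04)   # 50 in this toy

cost_at_optimum = avg_cost(n_star)       # 4.0 per unit
cost_after_merger = avg_cost(2 * n_star) # 5.0 per unit: doubling size
# raised average cost, because the scale gains no longer cover the added
# coordination friction.
```

The numbers are arbitrary, but the shape is the point: if firms already sit near their cost minimum, a merger moves them away from it.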
This distinction between discovery by "accident" and by research seems very arbitrary to me. What do you think those scientists/engineers were doing when those accidents happened? Most scientists (most good ones anyway) are well aware that the direction of research sometimes has a life of its own, but that doesn't mean you can eliminate research while keeping the "accidents" to which it leads!
The Portuguese most likely discovered Brazil by accident in the context of their programme to find a maritime route to India by following a coherent theory that you could just sail around Africa. Does this show that all their work trying to map the African coast and trying to model the Atlantic wind patterns could have been ignored in favour of just sailing aimlessly around the Atlantic? Or does it illustrate that a deliberate programme to discover/explore one thing/aspect/field/whatever will often yield unexpected results with unforeseen benefits - but which wouldn't have happened if they weren't doing the "research" in the first place? Hint: without the wider programme to find the maritime route to India, the Portuguese wouldn't even have developed ships capable of making it to Brazil.
Now, there is an argument to be made that maybe currently there's too much effort dedicated to incremental research compared to looking for breakthroughs (which would arguably make these "accidents" more likely). It's still all research though, and it doesn't invalidate that both types of approaches are important - even if one is obviously sexier.
"John fancies himself protected from volatility. But he is only protected from small volatilities. Add a big enough shock, and his bank goes under, and he makes nothing. George is exposed to small volatilities, but relatively protected from large ones. He can never have a day as bad as the day John gets fired."
Of course he can. George gets into a car accident - pretty likely when you spend all your time driving - and not only has he lost his job, he's lost his cab, which he needs to get further employment as a cab driver. If John gets fired, the only thing he needs to find another job as a banker is his brain.
Everything is antifragile until it encounters a risk that wasn't included in the model.
"according to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division."
That seems like crazy talk. Any time you have N items, and M people who want to share them, you divide N by M. Even if you do it like a Turing Machine would do it (going around the M people and having them take one until you have less than M items left all the while incrementing a counter for the number of rounds), you're still dividing.
Is Guy's claim that you never have N items and M people? How is that even possible?
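The "Turing Machine" procedure described above is just division by repeated subtraction, and it fits in a few lines:

```python
def divide_by_sharing(n, m):
    # Hand one item to each of the m people per round until fewer than m
    # items remain, counting rounds as you go: the round count is the
    # quotient and the leftovers are the remainder, exactly the procedure
    # the comment describes.
    rounds = 0
    while n >= m:
        n -= m
        rounds += 1
    return rounds, n

# divide_by_sharing(17, 5) gives (3, 2): each person gets 3 items, 2 left over.
```

Which is the commenter's point: anyone who can carry out this ritual is, in effect, performing division, whatever the formal algorithm counts as.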
In the up-front trichotomy between fragility, robustness, and anti-fragility, it is not at all obvious to me why one would prefer anti-fragility over robustness. I suppose the argument is that most of modern society has a false confidence about how robust it really is, but it doesn't follow that the solution is "embrace volatility" as opposed to "anticipate under-appreciated possible sources of volatility and take steps to avoid them."
A book full of common sense, in a world that sorely lacks it.
I would say you're right about hoplite and phalanx formations - they're quite powerful, but also fragile, and once they start to crack, it's often all over from there.
If you absolutely wanted to force the fragile/antifragile pattern, then the Roman legion would be the at least less fragile one - while a phalanx has all the tactical flexibility of a thrown brick, legions were designed to be maneuverable, to swap units in and out to combat fatigue, and so on (this is why we get the original Pyrrhic victory - even when they kinda lost, the legions inflicted a ton of punishment, because they could be defeated without things just snowballing from there).
"Evolution is antifragile. In a stable system, animals won't evolve. In a volatile system, they will. At times I became concerned Taleb was getting this wrong - animals will evolve to more perfectly fit whatever niche they find themselves in."
If I recall correctly, Taleb adopts a gene-centric, "selfish gene" perspective on evolution. The antifragility of evolution seems pretty straightforward under that view. For example, a population of animals will have genetic variation related to in which temperature they thrive. If the temperature stays the same for long periods of time, genetic variants associated with fitness at that temperature will become more common. If the temperature then changes, those rare individuals with variants associated with fitness at the new temperature will thrive, while the majority adapted to the previous temperature may go extinct. Or if there's no genetic variation left and no fortuitous mutation occurs, the whole population may go extinct, with other animals taking over the newly vacant habitats. (The high polygenicity of many traits could be thought of as an antifragile mechanism: even under strong selection, not all variation is exhausted, meaning that if the environment changes, organisms can still evolve towards the new optimum.)
I believe Taleb says something to the effect that no individual or population or even species is antifragile in the evolutionary scheme. Rather, it is life itself (or genes embodying life) that is antifragile.
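The standing-variation story lends itself to a toy simulation (all parameters are mine, purely illustrative): a population under stabilizing selection sits at an environmental optimum, and when the optimum jumps, previously-rare variants plus fresh mutation let the population track the new optimum rather than go extinct:

```python
import math
import random

random.seed(1)

def evolve(pop, optimum, generations=200, mut_sd=0.02):
    # Stabilizing selection toward `optimum`: fitness falls off as a
    # Gaussian in distance from it; small mutations each generation
    # replenish the variation that selection consumes.
    for _ in range(generations):
        weights = [math.exp(-(x - optimum) ** 2) for x in pop]
        pop = random.choices(pop, weights=weights, k=len(pop))
        pop = [x + random.gauss(0, mut_sd) for x in pop]
    return pop

pop = [random.gauss(0.0, 0.5) for _ in range(500)]  # standing variation
pop = evolve(pop, optimum=0.0)                  # stable environment
pop = evolve(pop, optimum=2.0, generations=300) # environmental shock
mean_trait = sum(pop) / len(pop)  # population has tracked the new optimum
```

After the shock the population mean ends near the new optimum of 2.0, far outside the original spread, which is the gene-level sense in which the lineage gained from the disorder even though most individuals at the moment of the shock did not.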
I read this around the time it came out and have been thinking about revisiting it, but this may have scratched the itch. I, too, enjoy Taleb, but I realized with Antifragile that part of the reason I'm so engaged is that I love to hate his arrogant tone, and because it's a challenge to accept that he's seemingly correct on so many of his points but also contradicts himself terribly throughout the book. E.g. warning about the halo effect, but acting as if he's an expert in exercise physiology when he brags about his weightlifting routine in the middle of the book.
I like that the critique of this book is exactly what you'd expect from Taleb's intellectual attitude - he doesn't have a grand overarching theory of antifragility, but instead a series of anecdotes and thought experiments with some grounding in the real world that you can chew on in order to improve your thinking about the subject.
Writing from a farm in central Kansas, I wonder what Taleb would say about agriculture. There is immense variance involved in the practice of agriculture. However, rather than fostering antifragility, nearly all agricultural practices I can think of are designed to stamp out variance, in order to permit fragile, but hugely efficient practices.
Grain prices jumping up and down? Why be antifragile when we can kill the variance by building silos and storing our grain.
Weather getting you down? Why be antifragile when we can tame the variance with irrigation, state-of-the-art forecasting systems (the daily forecast is probably the highest-rated show around here), and genetic engineering.
Random calving complications? Why be antifragile when we can flip variance off by hiring a vet to oversee tough cases?
Not to mention the reliance on increasingly-complex machinery (combines and trucks, of course, but also increasingly GPS, and many others) which requires parts, fuel, maintenance, an uplink to space, incomprehensible supply chains, and a million other things without which the whole thing comes crashing down.
A world where we designed agriculture to be antifragile, is almost certainly a world where Taleb goes hungry.
I tried to read this book after hearing much praise for Taleb. I had to give up in frustration pretty quickly as, to me, it was just a lot of arm-waving. In particular, he employs the pseudo-intellectual practice of first creating, and then discussing, his own private terms of art. But the terms are never defined and change at will to fit whatever point is supposedly being made. There is no clear hypothesis that could ever be tested, and no useful rule or insight is ever forthcoming.
Shorn of all the jargon, he seems to be saying nothing more than: "stuff happens, it's hard to predict, act accordingly." I don't get what people think they are getting from his books.
One of my (many) problems with Taleb's writing is that it doesn't lead to any sort of practical model or decision procedure. The fragile/robust/antifragile classification suffers from not being mutually exclusive nor well-defined, and so it's mostly useless when it comes to applications. The comments here have already highlighted many problems with the classifications Taleb gives in his book, suggesting that the classifications are not well-defined enough that people can agree how to classify things. Moreover, something might be antifragile to small changes, and fragile to large changes: an example is muscles, which Taleb points out are antifragile to small stresses (getting stronger with exercise), but they are fragile to large stresses (strains and tears can cause permanent damage).
Without the ability to clearly classify things as fragile/robust/antifragile, the theory lacks any predictive ability and greatly limits its usefulness. There's definitely interesting things to be said about systems that take advantage of natural disorder, but I feel like the framework Taleb sets up falls short of a working theory.
I totally agree with the idea that anti-rationalism isn't opposed to rationalism - it seems like a natural result of using rationality on itself. This is where 'why philosophers should care about computational complexity theory' feels relevant:
If you aren't consciously thinking about how accurate your model might be, what its limits are, and where it's going to be wrong, you're probably assuming a naive model of computational complexity in which models are cheap and compute totally accurate answers really fast.
Likewise, if you ignore the fact that people are computers, you might naively think we should be able to scale societies arbitrarily largely. Once you understand that human beings _are_ computers and that our societies are networks of computers, it becomes reasonable to conclude that governance systems are network topologies, and not all network topologies are going to scale to arbitrarily large degrees.
If he's opposed to anything, i think it's something like a blind faith in experts, and trust in an existing system, rather than a willingness to prioritize evidence-based thinking, and skin-in-the-game predictions, over "what those smart people think."
The thing I find most notable about Taleb is how many clearly intelligent people hate him and his theories, without reading one of his books.
If you hate him, I wonder: compared to whom?
Taleb is generally more right, more insightful, and more actionable than Malcolm Gladwell and the TED crowd.
Taleb is more correct and useful than most academics in social sciences.
Taleb is approximately as correct and insightful as Ted Kaczynski, and likely less dangerous on net.
Taleb is likely less accurate on hard sciences and the patterns of pure invention. His discovery here is a phenomenon where you can win without being right, yet he's seemingly preoccupied with being right.
I suspect the biggest umbrage is not that Taleb is a bully, but that he packages his philosophy with just enough math that those who live and breathe math find they have to deal with the ramblings of a mad philosopher when interacting with others.
I was extremely disappointed to learn that the Carter in the Carter Doomsday Argument was not Jimmy.
I would love for Scott to offer a review of some of Chapman's work, and if I could pick, I'd ask him to write about https://meaningness.com/
I think the basic thesis in the first few paragraph falls apart as soon as you try and pick apart what is meant by "the better it does" and "does well" here. These imply value judgements or objective functions of some kind, which glasses and rocks do not inherently have. What does it mean for a glass to do well? Why assume that glasses have an inherent goal of continuing to be vessel-shaped rather than transform into entropy-maximizing piles of shards?
Intuitively, glasses are "do better" by being vessel-shaped because that makes them more valuable to conscious, value-judgement-having observers. But if you define "better" as being a value judgement on the part of conscious observers, rather than the entity itself, then the hydra example falls apart, because arguably a hydra growing more heads is a *worse* state of affairs for the hero fighting it.
If instead you try and salvage this by replacing "better" with "more stable or resistant to being altered", then both the hydra and the evolution examples fall apart.
Am I missing something here?
Exercise is antifragile until you overdo it. And it has to be appropriate. A lot of people shouldn't run anywhere, except to the orthopedist's office.
The paragraph "For example, if some very smart scientists tell you that there's an 80% chance the coronavirus won't be a big deal, you thank them for their contribution and then prepare for the coronavirus anyway. In the world where they were right, you've lost some small amount of preparation money; in the world where they were wrong, you've saved hundreds of thousands of lives," bothers the crap out of me, because it seems completely at odds with the message of your post "a failure, but not of prediction." The point is that if the scientists gave a 20% chance of a pandemic, you don't "prepare anyways;" you prepare because a 20% chance of hundreds of thousands of lives being saved justifies an 80% chance of wasting a small amount of preparation money on something that wasn't going to be a big deal.
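The expected-value reasoning in that comment can be made concrete with a toy calculation. All of the numbers below are illustrative assumptions, not figures from the post:

```python
# Toy expected-value comparison for pandemic preparation.
# All numbers are illustrative assumptions, not real estimates.
p_pandemic = 0.20        # scientists' stated chance it IS a big deal
cost_prep = 1e8          # assumed cost of preparing, in dollars
cost_unprepared = 1e12   # assumed cost of facing a pandemic unprepared

# Simplified model: preparing costs cost_prep in every world;
# skipping preparation costs cost_unprepared only if the pandemic hits.
ev_prepare = cost_prep
ev_skip = p_pandemic * cost_unprepared

# Under these assumptions, preparing is cheaper in expectation,
# even though the pandemic is the less likely outcome.
assert ev_prepare < ev_skip
```

The point matches the comment: you don't prepare "anyway" in defiance of the probability; you prepare *because* a 20% chance of a catastrophic loss dominates an 80% chance of wasting a comparatively small sum.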
I'm not convinced that "prediction" in the sense of putting a probability on an outcome is so totally divorced from recognizing tail risk. It's not possible to be robust or antifragile to every conceivable event, and it's certainly not cost effective to protect against all of them equally. Some tail risks are more likely than others, or harder to protect against. A "tail risk" of 1 in 100 or 1 in 10,000 is very different. There are steps you can take to be protected against broad categories of events (e.g. stocking your house with nonperishable food could help in case of pandemic, natural disaster, social unrest, military attack, or a variety of other events that make leaving home or acquiring food difficult) but inevitably most agents will have to choose what to prepare for and that necessarily implies a model about how the world works.
> This is one reason (among many) Taleb disagrees so strongly with Steven Pinker's contention that war is declining. Pinker's data shows far fewer small wars, but does show that World Wars I and II were very large; he interprets the World Wars as outliers, and notes that since WWII the trend has been excellent.
FWIW, Pinker has acknowledged that if WW3 were to happen the death count would probably be astronomical and he still allows the possibility despite the persuasive arguing in Better Angels of our Nature that violence is decreasing.
I ran the same Lebanon v. Syria comparison on the same website as you did, and it shows only a roughly 10% difference in GDP per capita in 1913. Same Maddison data, very different picture when I generated the chart.
Taleb has seen this post and he was *extremely* unamused by your joke about Lebanese proverbs, to the point that he is blocking people for sharing you: https://twitter.com/RichardHanania/status/1374798853493288961/photo/1
Antifragility, the idea that some systems benefit from disorder, seems like an insightful idea. But every example I work through in my head tells me that robustness is the ultimate goal, and antifragility is useful only as a reminder that not all systems need to depend on stability.
Exercise is a great example. The goal is fitness, to be able to physically overcome a wide variety of situations. That's robustness. Exercise is an antifragile system that helps you get fit. But eating, hydration, and sleep are also important, and those are fragile systems. (Maybe eating is antifragile? Intermittent fasting says yes.)
Evolution? Robust is another way of saying fit. Natural selection is definitely an antifragile system, but there are plenty of fragile systems like symbiosis or food chains too.
What about computers, or even smart phones? Fragile in the volatility sense, but not in the practical sense! Are there any antifragile systems in a smart phone? Seems to me it's just fragility surrounded by a robust case. Sure, sometimes there's a catastrophic failure like dropping on a sidewalk or going through the wash, but the cost of those is small compared to the value of the working phone.
So I like the idea of antifragility, but I don't believe it's the best idea ever. It's just one more tool in the toolbox.
I have a notion that anti-fragility exists over ranges, and in particular, that enough stress makes an anti-fragile system stronger, but too much stress will break it.
I *think* this is important because Taleb is so much in love with anti-fragility that he doesn't want to think about his favorite systems having limits.
Which gets to that I think Taleb's boasting and insults might be part of his charm*, but they have real epistemological risks-- his style makes it hard for him to notice whatever errors he might be making.
*I've pretty much become immune.
"But haven't theories given us all sorts of useful things, like science, which leads to technology?"
I massively recommend the book Shock of the Old, dedicated to this subject. It is very brief and dense and really enjoyable. It persuasively argues that the answer is "no", both by tracing how important inventions rose and fell (in the 20th century) and by arguing that our ideas about which technologies are important to us are wrong. A novel (to me) example he gives of how technology works is his claim that poor mechanics in India understand American cars much better than the people who designed and built them, because in much of India they have to understand how to keep them running for many times their designed lifetime.
I got a bit disillusioned with science in university when it became clear that the epistemology of science was actually to keep fiddling with your model (adding more and more free variables, appropriately justified by this or that idea of reality) until your predictions matched reality. The prediction of novel phenomena from these models (that is, models teaching us things we didn't already know, rather than just letting us make accurate predictions in line with statistical or machine-learning methods) is really the exception rather than the rule. You can really see this when you note that Newtonian mechanics as a whole has no scientific justification given quantum mechanics: you learn them entirely separately, and just wave your hands, or hold them over your heart, and take it as an article of faith that if we were omniscient we would understand how quantum mechanics gives rise to Newtonian dynamics. Honestly, there is no way to describe Newtonian mechanics today other than as statistical curve fitting, where concepts like "force" are just meaningless free variables (pleasing to our intuition) that we use to fit reality. Dark matter is another example of this (a thing that is the vast majority of the universe, but only detectable as a magic free variable that helps our equations better fit reality). My current understanding of physics is that it's statistical curve fitting, but where everyone involved is constantly lying to themselves about what they're doing, even though nobody apart from those on the current bottom level (I guess string theory? About which I know nothing, sorry.) has any reason to believe otherwise.
Anyway, sorry -- the point is that once planes exist, we can curve-fit the behaviour of planes, and use that to guide our development of better planes. But before planes existed, we had nothing to curve-fit to -- so engineers just had to try stuff out until planes were invented. And then the same thing happened once we got to supersonic planes -- scientists weren't much help -- except when they were being engineers. Shock of the Old makes this point about the Manhattan Project: it was an engineering project employing well-known engineers (who are generally known as scientists because that is a more fashionable title; compare Galileo, who went by the title Philosopher because "Mathematician" didn't command respect).
"Medieval European architecture was done essentially without mathematics - Roman numerals (the only numerals anyone had at the time) were too unwieldy to add or subtract, and "according to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division.""
One thing that confuses me in statements like this: is it implicit when people talk about "Europe" or the "whole of Europe" in those days that they are talking about Christian Europe? Or do people making statements like this have a blind spot about Islamic Europe in those days?
In any case I think this statement is unfair, because there were plenty of Muslims who were into maths and technology, and there were quiet imports of technology into Christian Europe.
For example, officially, the Catholic Church believes the Pope invented mechanical clocks in 963 AD (an accurate pendulum clock that rang bells for specific hours)... and that it was a pure coincidence that this was after an extended trip spent conversing with some Muslim experts and various things.
The internet is presumably anti-fragile-- it considers censorship to be damage and routes around it. Pretty good censorship (as in China) is still possible, though.
Unrelated question: Has Taleb influenced enough people that he's affected what investments get made?
Speaking of the final note on 'anti-rationalism', I think Taleb rather thinks he belongs/actually belongs to the tradition of critical rationalism alongside Hume, Popper and Hayek. Many of his remarks on antifragile systems seem to me to relate to Hayek's on 'spontaneous orders', just as his praise of risk and adaptation to volatility is slightly reminiscent of Popper's point about making conjectures as bold, and therefore as unlikely and specific, as you can. I think it's in Objective Knowledge where Popper deals with how people seek regularities in daily life, stability, balance, then don't find them and become unhappy because of that.
History person chiming in here - Spartan Warriors were the definition of fragile. The problem is we see them as being lone soldiers, or in a phalanx with other Spartans, and don't look at the society as a whole. Spartan soldiers were essentially idle - they fought, but they didn't 'work', and the society as a whole was structured with a vast, vast underclass of helots and semi-independent Greeks supplying a tiny elite at the top, maintained by terror.
Any serious disruption to this system did far, far more damage than it would to an equivalent state like Athens or Thebes, for almost no benefit to the society at all: there is no Spartan art, poetry, music, drama or even architecture.
I kind of wish substack would implement something like what webnovel has for comments, where you can comment on specific paragraphs, and to keep them from getting in the way, it just shows a number at the end of the paragraphs with comments on them that represents how many comments there are for it, and clicking the number opens a pop-up with all the comments, in branched format.
Mostly so I could have left a quick comment on the large vs small stone paragraph: the method of gain is from the square-cube law. For (my favorite) example: going from a cube with edges of length 4 units to length 5 units almost DOUBLES the volume.
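A quick check of the arithmetic in that cube example (a minimal sketch; the 4-unit and 5-unit edge lengths are from the comment above):

```python
# Square-cube law: volume grows as the cube of linear size,
# so small increases in edge length produce large increases in volume.
edge_small, edge_large = 4, 5

vol_small = edge_small ** 3   # 4^3 = 64 cubic units
vol_large = edge_large ** 3   # 5^3 = 125 cubic units

# Going from edge 4 to edge 5 (a 25% increase in length)
# multiplies the volume by 125/64, roughly 1.95 -- almost double.
ratio = vol_large / vol_small
```

So a 25% increase in linear size really does nearly double the volume, which is the commenter's point about where the gain comes from.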
"Taleb is much more ambitious (some would say less careful and scholarly) "
Isn't part of his argument that it's not that good to be "careful and scholarly"? In which case, he practices what he preaches.
Evolution works because the sun is continuously throwing lots of energy on this planet. Doesn't "anti-fragile" just mean "things that eat energy from the entropy of others"?
>>> "he suggests theory is much less important for technology than we give it credit for. He makes the same point I made in Is Pharma Research Worse Than Chance - a whole lot of drug design seems to happen more by accident (or more politely, through tinkering and investigating) than by smart people using theory to discover drugs" and
"I was surprised to see Taleb point out the same effect in fields like physics and engineering. For example, he argues that jet engines just sort of happened when engineers played around with airplane engines enough"
Matt Ridley's recent book How Innovation Works goes into this in depth; it's almost the thesis of it.
I took part in a Model UN representing Libya in late 2011, and quite clearly remember the Syrian delegation refusing to take part in any negotiations because they'd been sanctioned by the entire world as a result of the uprisings there. So as far as I'm aware and can remember (Wikipedia backs me up), the Syrian civil war began in 2011, meaning Scott's comment that "Antifragile was published in 2012, before the Syrian Civil War" isn't quite true.
> For example, suppose I am long (or short) VIX. If something unpredictable changed to make the world much more volatile, that would be a positive black swan. If something unpredictable changed to make the world much less volatile, that would be a negative black swan.
I assume Taleb would say there is a tail of volatility/fuckery so insane that it rocks the entire financial system to the point where your position on VIX is just paper. VIX operates, like the banker, within a region and just trails off to zero at the point of counterparty risk.
Hi, there is no such thing as "anti-fragile." All things in the universe are fragile, material and immaterial. Some things may gain from disorder, but they are not "anti-fragile": too much disorder will make them fragile. Put options may gain if volatility increases, but beyond a certain point exchanges may go bust due to counterparty risk. Taleb is inventing artificial images to attack. He is excellent at marketing and at convincing crowds he's got something. He's got nothing. He is confused, and his books are repetitive and can be summarized in 500 words.
I think my new unifying theory is this. In the face of uncertainty, it pays to be Talebian. Otherwise, you should be Alexanderian (Scott).
I have sometimes toyed with the idea of reading Taleb.
This post finally cured me of that. He seems a lot like a stupid person's idea of a smart person?
"Roman numerals (the only numerals anyone had at the time) were too unwieldy to add or subtract, and "according to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division.""
OK about the Middle Ages, but the Romans themselves had very advanced abacuses that could do math in their weird system of decimal for integers and dozenal for fractions. (Also, contrary to legend, they obviously knew about zero.)
As a reminder, the "rationalist" community is very anti-rationalist too (which you kind of suggest); the proper term for it would instead be empiricist?
For some reason I missed this post when it came out; I am only commenting here on how you presented evolution. If Nassim Taleb in fact presented his material as you indicate, it is inaccurate. For one thing, there is no such thing as a stable environment. Secondly, there is no such thing as a stable niche. One of the oversimplifications of Darwin's approach (though he was actually far more complex than he is made out to be, and did not say most of what people think he did) is adaptation to static niches. In actual fact, environment and organism continually alter each other. The environment is not a background with holes in it into which organisms fit themselves; it is more akin to a living field that organisms adapt to. The environment then changes, causing changes in the organism, and so on. Oversimply: the entire ecological scenario is a self-organized, nonlinear, emergent dynamic which operates best when close to the moment of self-organization. If it moves too far from that orientation, it becomes static and begins to fail; if it moves too close to the line across which self-organization occurs, it falls apart. The healthiest situation maintains a balance point where constant change occurs, neither too far from nor too close to the line across which self-organization occurs. Western science has too long oversimplified evolutionary thinking for the masses, which has resulted in a great many misconceptions, in part because most scientists never really understood it themselves.
Isn't your concern about Taleb's understanding of evolution misplaced, since evolution tends towards punctuated equilibrium rather than incremental improvement?
If antifragility is so great, why not just invest your life savings in lottery tickets?