I'm sorry to be the one to break it to you but slaves and slaveowners exist and the developed world's material excess is 100% built off the backs and blood of slave-labour at a certain level.
100% seems a little excessive, the most biased estimates put out by anti-trafficking charities claim that around 40 million people are currently enslaved, which is a lot in absolute terms but low in relative terms.
I mean, if you assume that wage labour is slavery then I guess we're all either slaves or slaveholders, but I think it's reasonable to say that we've made progress away from slavery, which is pretty impressive considering how common it's been in the past.
That "40 million" is the "acceptable by the mainstream" version, which only accepts the traditional mechanisms of slavery, and not new and advanced ways to force people to work for you - which cover billions in the developing world.
I hardly think forcing people to work for you requires new and advanced techniques - serfdom and slavery are the oldest tricks in the book (both predate writing).
The main innovation in the modern economy is paying people money - I think that's a good thing on balance; for most of history, threats of violence were the primary means of exchange between the upper and lower classes. Now the ruling class only has to resort to violent coercion when they can't find anyone willing to do the job for less money! (Obviously multinational corporations still resort to violence more often than most consumers think.)
Honestly, the main reason I object to arguments like yours is not because they're factually untrue, but because I feel like they promote a fatalistic apathy that discourages us from actually trying to make the world a better place.
I guess we chronically underestimate the tail risks but overestimate the magnitude of damage they'll do once they happen? In this case, it's clear the very few people who care enough to estimate pandemic risk at all thought this was fairly likely to happen, but it didn't seem to be on most people's or governments' radars; once it happened, the early drama from Italy and New York City made it seem a lot more dire than it ended up being in most places.
Sometimes you're on the brink, though. Next time, it could be a deadlier, even more transmissible virus. Sometimes these overestimates of how bad it will get are dependent on human factors, too. Putting aside whatever job security was permanently lost to the lower end of the wage scale with the appearance of the gig economy, we recovered from that fairly well, but in some counterfactual universe where a few personalities are different and we don't bail out AIG, then what? I guess, according to Nassim Taleb, we become even stronger?
Another issue is just that the world at large is always going to recover on a long enough timescale, until finally it doesn't. I feel like this personally is my own big bias and blind spot. It keeps me from ever really worrying about anything or taking any claim seriously when people think the world is bad or getting bad. I look at the fact that tremendous world wars and genocides and nuclear arms races dominated much of the 20th century, yet things still got better overall almost everywhere, and it makes me way too confident that will always be the case. We can reach back from the brink and recover infinite times, but extinction only needs to finally happen once. It's why the true enemy always wins.
Same. I stocked up in February (on canned food etc. as well) -- not because I took the coronavirus seriously, but because my mom did and I didn't want her to worry. Maybe there's a lesson there!
As noted, millions did die from Covid-19, and millions more died, at least according to some reports, from side effects of lockdowns, deferred treatment (e.g. for cancer), hunger and associated physical decline (e.g. in Bangladesh), etc.
This pandemic is probably not "once in multiple lifetimes." The Spanish flu was worse, the Hong Kong flu was at least as bad, and there are probably more coming our way.
It could easily have been MUCH worse in impact on supply chains, e.g. the supply chains for vital medicines imported from China might have been gutted. The Chinese lockdown was short. It could have been much longer, or it could have been long enough to cut supplies for vital medicines to a level sufficient only for China, or only for China and high bidders.
HIV also was worse, and isn't anywhere near over yet (though we're at the stage with anti-retrovirals that is comparable to a really expensive vaccine for covid).
Only in the same way that social distancing is as effective as a great vaccine in preventing covid. It's a continuing series of decisions you have to make, and those decisions do have costs to your actual experiential pleasure in life.
I ask because in undergrad I was taught that evolution does not produce perfectly fit organisms, it only eliminates those so unfit that they cannot survive to pass down offspring.
Nothing succeeds like success. Whatever life currently exists is thus by definition the most "perfect" that was possible under all the actual real world circumstances.
https://www.lesswrong.com/posts/XC7Kry5q6CD9TyG4K/no-evolutions-for-corporations-or-nanodevices touched on the basics. High fidelity of replication requires prohibitively long timescales for adaptations to accumulate, while low fidelity limits the total complexity of adaptations. Evolution can only meaningfully be said to occur in a specific range of conditions (which happens to include known biology).
Wait - isn't it possible that ALL organisms in an environment die, rather than a few always adapting to survive it? In other words, my understanding is that the theory of evolution does not imply that there will be a "fittest" that is guaranteed survival. Trying to refine my own understanding here. I'm no expert in this.
That's correct. The vast majority of species that have ever lived have gone extinct.
If you're asking this for the purpose of understanding Taleb's example, the thing I took away from this post (and I think this is what Scott implied as well), is that "antifragility" does not have a real, consistent definition, and the examples are therefore "things to think about," and nothing more.
Kind of. I mean, we can be pretty sure that Earthly life isn't going to survive in the corona (though bacteria can last a surprising time in space, enough so that panspermia isn't entirely implausible).
Within reasonable limits, though, life does usually find a way. The bit that often trips people up is that evolution isn't just selecting *within* species, but *between* them as well.
The granularity at which competition happens is at the gene level, not species level or organism level, according to The Selfish Gene, by Dawkins. Great book. Maybe that idea has been debunked though. I read it many years ago.
As Scott once said, your cells will be selected for cancer as you age and they divide - but you can expect all the genes you inherited from your parents to be selected *against* cancer - or, at least, against cancer that kills you before you finish reproducing.
Similarly, evolution selects for selfish genes within an interbreeding population, but a separated population will speciate and evolution will select for whoever has less selfish genes when an interchange event happens. This is why most organisms aren't 99.9% retrotransposons.
It depends how you define "environment," I guess. Something is always lurking around somewhere to take the place of anything that leaves a space by dying. Since life got a decent foothold a couple billion years ago, "all organisms" have never died out. It's just a continuous process of "out with the old, in with the new." Forever. (For example, if there was ever life on Mars when it had liquid water that life has probably continuously evolved to the current conditions, and we'll eventually find those microorganisms.)
You're correct-ish. Over a long timespan, if every organism is allowed to compete, the most fit species for that environment will end up being the only species left. But that's an ideal scenario, which is why silly species like the dodo lasted for a long time (nothing competed with them for a long time).
But the dodo wasn't a silly species for the niche for which it evolved, which is why it evolved in the first place. Once the niche changed, of course it didn't do well, but that had more to do with specific circumstances (like the introduction of predators) than anything else.
More generally, their extinction didn't really have to do with the fact that they had a stable environment for a long time. There are plenty of examples of invasive species coming in and wiping out the natives, and not just because the native species had become decadent or something. Especially in the case of the dodo, the history of the species doesn't really matter. If a sufficient change occurs in the environment, it won't be able to evolve its way out of it, and it only took about 10 generations to go from first contact to extinction.
Yes, and that reason is that those species are robust to everything (they can live in most climates and can eat almost anything, or at least something very common), not that they are antifragile.
The argument is that they're more robust *because* of antifragility. They're more robust as a result of a higher-risk environment, and so they did better when the environment changed than the dodos (a species from an exceptionally low-risk environment).
In fairness, there are other reasons; for example, Eurasia is a much *bigger* environment than the tiny island of Mauritius, so rats/cats/humans are the three *most* invasive species drawn from a vastly wider range of species than were available on Mauritius.
The reason is simple: ships brought them to Mauritius, ships never took Dodos away. They might have thrived somewhere else but they were never given a chance.
But the very fact that rats and cats did an exceptional job of getting on boats and then escaping and breeding in the wild is a big part of why they survived and drove the dodo to extinction. Or in the case of humans, the fact that they were capable of making boats.
They arose on a larger continent, with larger populations, and thus more competing mutations and more natural selection (as opposed to drift, which is more powerful in small populations).
Yeah, I noted that below. It is true that they are in some general sense "fitter"/more robust/able to fit in a wider variety of environments than Dodos though, and it is at least *partly* because of the more hostile and varying environment they developed in.
I'm not so sure about this. It could be that the optimum involves some kind of symbiosis between multiple species. Exploiting the environment in the most successful way might include behaving in a way that allows other species in that environment to flourish.
I would recommend taking a look at chapter 3 of Dawkins's The Extended Phenotype, which addresses practical limits on the degree of perfection reachable via natural selection; it contains a good list of factors and a fairly comprehensive explanation of each, along with some practical examples.
-"it only eliminates those so unfit that they cannot survive to pass down offspring."
This sounds like a wrong way of putting it to me; two organisms which both survive to produce offspring may produce different numbers of offspring, and that goes into the calculation of what evolution will favor. You want to say "so unfit that they cannot produce as many offspring as the best organisms do" -- but this is just another way of saying they evolve to produce a maximal number of offspring. (Modulo the concerns mentioned by other commentators about how this is an asymptotic limit never actually reached; but over evolutionary timescales it can get pretty close.)
Natural selection eliminates the most unfit -> average fitness goes up a little. Do that 10,000 times -> average fitness goes up a lot. Squint a little -> high fitness animals look perfectly fit.
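That "cull, refill, repeat" ratchet is easy to simulate. This is a toy sketch, not a biology model - the cull rate, mutation size, and population size are all made up:

```python
import random

def generation(pop, rng, cull=0.2, noise=0.05):
    """Drop the least-fit fraction, then refill from survivors with
    small random mutations (all parameters are illustrative)."""
    survivors = sorted(pop)[int(len(pop) * cull):]  # eliminate the most unfit
    while len(survivors) < len(pop):
        survivors.append(rng.choice(survivors) + rng.gauss(0, noise))
    return survivors

rng = random.Random(42)
pop = [rng.random() for _ in range(100)]  # fitness = a number, higher is better
start = sum(pop) / len(pop)
for _ in range(200):
    pop = generation(pop, rng)
# mean fitness has crept upward, one cull at a time
```

Even though the mutations themselves are unbiased (mean zero), repeatedly trimming the bottom of the distribution is enough to drag the average up.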
"Fisherian runaway" is an example where evolution can reduce the fitness of a species. One example is a male peacock's extremely long tail. If women like males with such tails men who do not have this kind of tail will be at a disadvantage even if peacocks overall do worse because of the existence of this preference.
I think Taleb would argue that it's reduced their long-term fitness as a species. Note: I do not endorse this idea. I think Taleb's idea of "fitness" is a bit superficial.
I'm also not necessarily endorsing Taleb, but it's clear he means fitness of the species itself, which is, of course, evolutionarily meaningless. As long as life continues in some form at all, the genes persist even if it's in another species.
You mean to some external threat introduced later? If the predators are currently there, then even if females prefer longer tails, if the longer-tailed birds aren't as effective at reproducing they'll never reach fixation.
No, even if the threat has always been there. The peacocks could evolve into a bad equilibrium where they are all worse off because of the female preference for long-tails.
Things can certainly evolve their way into a corner, but what would that look like in this case? How would they be worse off in this case? If there was some trait that lowered reproduction, I'd expect it to pretty quickly be eliminated.
The mere existence of longer-tailed rivals can make short-tailed peacocks less reproductively successful than they would otherwise be by drawing away potential reproductive partners.
Imagine there are two types of peacock, long-tail and short-tail; long-tailed peacocks die twice as often, but a peahen will always choose a long-tail over a short-tail. A group of only short-tailed peacocks will do better than a group of only long-tailed peacocks, but if both exist then the long-tailed peacocks will rapidly outcompete the short-tails by attracting all the females, resulting in an entirely long-tail group (which, as noted, does 50% worse than an entirely short-tail group).
Admittedly this doesn't address the question of *why* the females prefer the long-tails; wouldn't it make more sense to prefer the short-tails?
Empirically, mate preferences are indeed most often for traits that enhance fitness (setting aside the mate-preference effect itself), but pretty often for traits that reduce it. What differentiates the two situations? Possibly the sexy son/daughter hypothesis is to blame here; mate preferences arise because they're adaptive (or at least non-harmful), but once present they become locked in and potentially exaggerated by the self-fulfilling prophecy of "X is reproductively successful because it attracts mates because it's reproductively successful because it attracts mates because..." But this doesn't actually make clear predictions for when species will or won't go into such spirals, at least not to a layman like me; possibly more formal versions do?
Note: I'm pretty sure you can specify the parameters such that the peacocks, after multiple iterations of increasingly-long tails, would actually be outcompeted by their better-at-surviving-but-less-sexy extinct ancestors if they were reintroduced. But it isn't necessarily the case; obviously it isn't in this toy binary long-or-short model.
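The binary toy model above can be sketched numerically. I've softened the "always choose a long-tail" rule into a finite attractiveness multiplier so the dynamics aren't instant; the multiplier and survival rates are made-up parameters:

```python
def step(q, a=5.0, surv_long=0.6, surv_short=0.8):
    """One generation of the toy model. q is the frequency of long-tailed
    males. Long-tails survive less often, but surviving long-tails are
    a-times as attractive to peahens; sons inherit the father's tail type.
    (All parameter values are illustrative.)"""
    long_s = q * surv_long             # surviving long-tails
    short_s = (1 - q) * surv_short     # surviving short-tails
    mean_survival = long_s + short_s   # the population-level cost
    q_next = a * long_s / (a * long_s + short_s)  # mating share of long-tails
    return q_next, mean_survival

q, surv = 0.01, None
for _ in range(40):
    q, surv = step(q)
# long-tails sweep to fixation even as mean survival falls to surv_long
```

Starting from 1% long-tails, the trait goes to fixation while average survival drops from near 0.8 to 0.6 - individually rational, collectively worse, which is the whole point of the example.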
The thing that surprises me is that the species hasn't forked. Yes, one peahen with a preference for short-tailed males probably doesn't get a reproductive win, but I'm imagining a different scenario.
Suppose she just wants (or is willing to settle for) a shorter-tailed male, a male who has a slight advantage in avoiding predation.
It's at least plausible that mutations frequently don't just happen once. There are peahens who eventually don't want maximally flashy tails.
Obviously, this hasn't happened. Maybe the big tails are closely linked to other traits that aren't handicaps.
The main issue I see with this is that a huge portion of the environmental pressures are from other organisms, so there's no real guarantee that either the individual or the ecosystem will converge to a steady state (i.e. it's chaotic in the sense of dynamical systems)
I figured the idea was this: If the environment is stable for many generations, there is no environmental driver for evolution. If the environment is volatile - not over the lifespan of individual creatures but over the timespan of many generations - then the environment is driving the creatures to change, to evolve to fit it.
I think it's weird to say that Evolution is either antifragile or fragile at all. Evolution is a process, not an entity. I don't think the property of antifragileness applies to it either way.
It's the biological organisms/ecosystems that are antifragile. And evolution is the process that makes them antifragile. I guess it works if you parse "Evolution is antifragile" to mean "Evolution promotes antifragility", but I think it's a confusing use of the terms.
I don't have an opinion of whether it's correct or not, but I think in this case the idea really is that evolution itself is antifragile. You just have to have a weird concept of what counts as "good" or "strong". If you say that the process of evolution is a "thing", and you say that evolution is "doing good" when it is making things change, then indeed environmental change kind of feeds evolution to do what it does.
I agree. If something goes from Status Quo A to Status Quo B, is it evidence of "fragility" or "anti-fragility?" It solely depends on your frame of reference. If your portfolio adapts to the market by shrinking, it's "fragile" only because you like money.
But if natural selection gets rid of Dodo bird genes because they are no longer adaptive, Taleb says this means natural selection is "antifragile." If you are a Dodo, however, you would think this was "fragility" (or that's what you would think, if you weren't extinct.). Personally, I think Talebism is nothing but semantic word-game nonsense.
Evolution can make species fragile. There exists for example a species of moth that's exclusively adapted to live in sloth fur and reproduce in sloth feces, which is exactly as absurd as it sounds. Given a stable environment, evolution will often hyperspecialize species and they can get royally fucked over by environmental change.
Evolution tends to take short steps in the fitness landscape; they are the most probable. So it gets stuck at a local optimum. If there is a long enough period of stability, then after some absurd number of generations, eventually a longer step may be taken towards a higher optimum.
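That local-optimum dynamic is easy to see in a toy hill-climber (the landscape and step sizes here are made up for illustration):

```python
import math
import random

def fitness(x):
    """Two-peaked landscape: a local optimum near x=1 and a higher
    peak near x=4 (both invented for this sketch)."""
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def climb(step_size, trials=20000, seed=1):
    """Start at x=0 and accept only fitness-improving mutations,
    mimicking selection that fixes beneficial short steps."""
    rng = random.Random(seed)
    x = 0.0
    for _ in range(trials):
        candidate = x + rng.gauss(0, step_size)
        if fitness(candidate) > fitness(x):
            x = candidate
    return x

local = climb(0.1)  # short steps only: stuck at the nearer, lower peak
free = climb(1.5)   # occasional long steps: can cross the valley
```

With short steps, crossing the fitness valley would require a wildly improbable jump, so the climber stays at the lower peak indefinitely; with a wider step distribution, the rare long step eventually lands on the far slope and the population moves to the higher optimum.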
Also, the fact that mutations are constantly happening means there is always the potential to evolve if the environment changes suddenly. That speaks to one of the few things I do credit _Antifragile_ for saying: if some function is vitally important, there should be more than one way to do it. (Yes, it's a stretch.)
"Perhaps it would be much kinder if somebody gave unfit animals some Animal Chow to prevent them from starving. But such kindness would prevent natural selection, and gradually weaken the species (or, more technically, the species' suitability to its niche) until eventual cataclysm."
Hmmm ... wouldn't this be exposing them to variation in their environment (sometimes there is Animal Chow, sometimes not) which surely should make them stronger!?
Never read any Taleb, but my impression of him from listening to him on EconTalk is that he is not a clear thinker when it comes to biology.
Yeah, that'd give the animals a new niche to exploit - finding ways to get humans to give them food. That process of accommodation and urbanization is why some animals (deer, raccoons, coyotes) have succeeded in the Anthropocene era and so many others have failed.
Have the geologists decided that it's the Anthropocene? Personally I would consider the Anthropic Extinction Event a much better name. Every mass extinction is called an event, and with the exception of one they all took a while, up to 20 million years for the late Devonian. Still an event. Humans have existed for what, 2 million years?
Over hundreds of generations it would eventually make them stronger. In the near term it would cause them to breed beyond the niche's non-animal-chow carrying capacity, which would cause starvation (and, likely, violence) every time the chow is withdrawn.
"People" are a subset of "animals", and this is essentially the conservative objection to the welfare state - that it creates dependency.
On the other hand, if we cut off things like the Supplemental Nutrition Assistance Program (which could reasonably, if impolitely, be referred to as "animal chow"), people would starve now. That's not good either.
Like you seem to think, Taleb is best as a corrective (and effective Twitter partisan against Intellectuals-yet-idiots) rather than as a starting point.
Of the first few examples, I'd have to say stock options is the most egregious. Yes, the option gains value when the volatility *of the underlying* increases. Notice anything peculiar there? It benefits when chaos is applied *to something else*. On the other hand, when the value *of the option* is fluctuating wildly, that's really no fun for the option holder.
It's like saying "my company is antifragile, because when our supplier company is experiencing chaos, they get desperate and give us better deals".
There's a similar problem with the evolution example.
The "volatility" is that organisms have random mutations, which cause some of them to do better or worse. Mutations are not on average beneficial. So where's the benefit? At the population level, the population evolves in a direction that is a better fit for the environment. So volatility in the mutations of an individual benefits the population. But shouldn't the definition of "antifragile" mean that volatility in a *population* benefits the population? What does volatility in a population mean, anyway? Random changes in social interaction or pecking order? Change in number of individuals? Relocation to a new geographic area?
I think what Scott was saying at the start of the post is essentially "antifragility is not actually defined, and the author is just using a fancy-sounding word to talk about a bunch of things he finds interesting."
Why would the value of the option fluctuating be no fun to the option holder? The volatility is the point of options - you want it to be the wildest ride possible, as your losses are limited to 100% and gains scale with volatility.
I mean, an option wildly seesawing between ITM and OTM isn't the most fun, but that's a really specific form of volatility. I agree with you - as a rule, option buyers are going to have more fun in high-volatility environments.
And more sophisticated option holders can "delta hedge" their option by selling a fraction of the underlying if they are long a call. Then volatility is unambiguously good: you make money if the underlying goes up big and also if it goes down big. And if you adjust your hedge every day or hour, you make money even if it settles right at the strike, as long as the ride to get there is very wild.
As in: would you rather have an option gain gradual value every day, up to 3x on day n, or take on completely random values every day, and then suddenly be at 3x on day n? You make 3x either way, but your cortisol levels will show the difference.
Right; but I think we're claiming that both 3x over n days and random noise over n days are examples of stock volatility. A non-volatile stock would remain at x.
Interestingly, gaining 3x at a constant rate over n days is an example of extremely low volatility. An option seller who hedged dynamically according to the standard option pricing formula would make money due to low volatility. An option buyer would definitely prefer a wild path.
Isn't it true for any stock that the losses are limited to 100%, and gains scale with volatility? In this sense, any financial product that has a defined lower bound but no defined upper bound is anti-fragile. And perhaps not just financial products. A species can never have less than zero members, but can always grow further by any factor - does that make it anti-fragile?
Yes, it's a bit weird for Taleb to highlight options as anti-fragile, because stocks are already a form of option. You gain from the upside of the company's assets, and are shielded from the downside (generally the debtholders absorb the downside). See also "Merton Model".
How do debtholders absorb the downside? They get paid before the stockholders. The whole idea of a corporation is NO ONE gets a downside of more than 100%.
Options are effectively leveraged, because the payoff is the change in the underlying's price divided by the option premium. A 5% price move of the underlying can put you at +300% profit, while a move in the other direction will put you at the maximum loss of -100%.
This asymmetric payoff is a huge reason why people bother with options in the first place, and is to an extent priced into the premiums.
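The asymmetry is just arithmetic. A minimal sketch with hypothetical numbers (a $1.25 premium on a $100-strike call - not real quotes):

```python
def option_return(premium, strike, spot_at_expiry):
    """Return on a long call held to expiry, as a fraction of the premium.
    Loss is capped at -100%; the upside is uncapped."""
    intrinsic = max(spot_at_expiry - strike, 0.0)
    return intrinsic / premium - 1.0

# hypothetical numbers: $1.25 premium, $100 strike
up = option_return(1.25, 100, 105)   # 5% up-move in the underlying
down = option_return(1.25, 100, 95)  # 5% down-move in the underlying
```

Here `up` is 3.0 (+300%) while `down` is -1.0 (-100%): a symmetric 5% move in the stock produces a wildly asymmetric result for the option holder, which is exactly the convexity Taleb is pointing at.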
My impression is that options speculators take that as "Part of the Game," which is in contrast with say your local gas company, who buys natural gas futures in the summer, when prices are generally lower, for delivery in winter when they're higher, though they sometimes lose spectacularly, e.g. this year in Texas.
Taleb's use of options for anti-fragility is usually focused on buying deeply OTM puts, such that they're fairly neutral $1/mo -> expiry and repeat, until a chaotic event happens and those puts skyrocket in value. The approach is more specific than you're thinking, it sounds like
> Maybe changes are inherently towards more volatility, and the only reason being long VIX isn't a guaranteed-market-beater is because it's one of the rare cases where people take this seriously and quantify it, because taking it seriously and quantifying it is their job?
There is a literature on "volatility investment." The big risk is totally losing your shirt - see this paper, which decorates its margins with Death wielding a scythe:
According to the second one, they didn't invest in volatility but rather shorted it (and lost all their money when volatility went up).
(Also, by definition anyone investing in volatility through the stock market is betting on only a little bit of volatility. Truly-extreme volatility potentially makes the stock market, currency, and/or property rights irrelevant.)
> Maybe this doesn't work in investing, but does work in real life?
Public markets make a poor parallel to real life. Public markets come extremely close to the efficient market hypothesis, and it's very hard to consistently beat them. But very few other markets are efficient.
The area I know -- early stage tech startups -- has huge amounts of value sitting around waiting to be taken. People no smarter than you make billions with simple strategies. Renting an office in SF is inefficient -- you can pay less than half price if you really shop around and negotiate.
It's frustrating that so many books about strategy and forecasting use public markets as an example, because it's the exception where simple strategies don't work.
> Renting an office in SF is inefficient -- you can pay less than half price if you really shop around and negotiate.
This tells me that it's inefficient if you grade it through the lens of how much money you spend for the office space that you get, but the fact that startups continue to do so suggests that there are other aspects that make it efficient in other ways, counterbalancing the inefficiency in office costs.
In reality, the reason they do this is because a significant percentage of the top tech talent already lives in SF (or at least is much more willing to move there), and either the "talent" doesn't want to work remotely or else the startup believes that the benefits of not having people work remotely make the rental of expensive office space worthwhile. (Obviously COVID has changed the dynamics here a lot, though)
Like, I say this as someone who works in programming in the (non-Chicago) midwest. On the one hand, it's great - programmer salary in a place with *much* cheaper cost of living. But on the other hand, you ever try convincing someone who lives in California to come move to Indiana?
You're talking about a different kind of inefficiency. I meant failing to conform to the https://en.wikipedia.org/wiki/Efficient-market_hypothesis, where the price of everything reflects all available information. If doing extra research means you can get a better price, the market isn't efficient.
No, that's what I'm talking about too. The price of office space in SF includes available information like "top tech talent lives in or is willing to move to SF".
Whereas your "better price" is only from a strictly monetary perspective and something like "making it harder to hire talent" is a non-monetary cost which is not factored into your better price.
I'm not saying it's not possible that tech companies are leaving money on the table by their heavy concentration in the Bay Area - (in fact many will argue that COVID has proved that they were, though you can argue that their judgement was sound in the pre-COVID world).
But your argument seems to be "they're spending a lot of money on office space, when they could get it for cheaper instead, therefore not efficient market", which I think is just misunderstanding what the efficient market hypothesis means.
Really, this just sounds like something like Hotelling's Law, the same sort of logic that drives 4 gas stations to all build on the same corner. Sure they could probably build cheaper elsewhere, but that doesn't mean that's the efficient thing to do.
I think you've still misunderstood Trevor. He means something more like "the prices of different office rental agreements within SF are inefficient, as evidenced by the difference in price of nearby rental agreements of comparable spaces." You seem to think he meant "it is an inefficient choice for startups to rent office space in SF."
Antifragile is framed more as a property of the target, eustress is framed more as a property of the environment.
I was going to say "... which may be why the term Antifragile has become more popular, contra the Lindy effect", but when I checked it seems eustress (a term I'd never heard before) is actually the considerably more common term for this phenomenon.
Frankly, I think eustress does a better job of capturing the fact that nothing is *completely* antifragile and few things are completely fragile. Conversely, in fairness, antifragile does a slightly better job of capturing the fact that things have radically different ranges of what counts as eustress vs distress and it can be a positive thing to increase those ranges (especially if reducing the environmental stress is impractical.)
I may switch to using eustress on the rare occasions I have to refer to this concept.
Taleb always bothered me with his grand, sweeping claims and just-so stories. They never seem to have much relation to the real world.
Like, the taxicab (or Uber) drivers I've known are generally one bad week/month away from bankruptcy, because they rely so much on their bodies to do their work and their equipment is on a vicious depreciation cycle. This is not true of the bankers I've known, some of whom take a year off from work and are fine.
Taleb would probably argue that he's speaking of some hypothetical banker and some hypothetical taxicab driver, but then what's the point? Why not just argue that dragons are fragile, which is why they never ruled Westeros? Either his arguments are grounded in the real world or they aren't.
Yeah, between the combined risk of the taxi driver killing someone by accident, or getting ill at a bad moment, or having too many bad weeks in a row, etc... and the risk of the banker losing his job and being unemployable for some reason, I don't think the banker has the riskier situation.
I feel the same way. I really liked his books when I first read them 12(?) years ago. Was a big fan of The Black Swan and everything. But years have passed, I've read more of his books, followed his Twitter, actually got into trading myself and ..... the glow has worn off. I still like his books I guess, most of them are fun to read and think about, but I kinda stopped taking them seriously, as I've realized that things are different in the real world.
As someone who hasn't read his books, it seems that you can get most of the value out of Taleb by reading reviews of his books, because the grand ideas are more interesting and thought provoking than the details.
Doing some research on it, it looks like they're not quite as dead as I stated, but still pretty diminished. There are places where the government requires a taxi medallion to carry passengers (some city centers, some airports); in these areas taxis can outcompete Uber, so they're still present.
There are also some larger institutions that have standing contracts with taxis to carry passengers regularly (some hospitals, some schools, a few companies) so some still exist primarily in that space. Also, I guess taxis can take cash while Uber can't, so presumably that might carve out some market space for them.
This is a common myth. Uber's taxi business is wildly profitable and has been at least ramen-profitable for nearly a decade. The VC money didn't subsidize fares (except presumably at the very beginning), it subsidized all the other ridiculous BS like their godawful self-driving car arm.
Uber fares are low because Uber's routing algorithms are <b>fantastic</b>. Pre-pandemic, it cost literally 10% as much to get a shared ride from point to point within San Francisco on Uber, as it cost to get an entire Lyft. (I know because I was still using Lyft, and I split a Lyft ride with a friend, and he told me that paying half was 5x the cost he'd pay for a solo ride. I checked - he was correct.) Lyft's routing is not great, so the shared ride is a small discount. (Taxis, presumably, are even worse.)
Uber drivers pre-pandemic spent roughly 20% as much time idling at the curb waiting for the next fare as Lyft drivers. <i>That's</i> what "subsidized" Uber fares.
Note that requiring a smartphone and a credit card to get an Uber almost certainly filters out the lowest-tier of potential customers, who are probably also the most likely to be various kinds of trouble--more likely to rob you, more likely to run off and stiff you, etc.
I still think he's wrong, but if I'm reading Scott's writeup correctly he didn't pick the industry but rather the drivers, who are presumably still driving for Uber. So that's seemingly an argument that they "survived" the death of their former industry just fine, presumably for a value of survival that nets them less profit than whatever they earned when the profession was more moated.
> This is not true of the bankers I've known, some of whom take a year off from work and are fine.
Do the bankers you know make only $36,000 a year like the one in the example, though?
I think the bank employee vs taxi driver example is a bad example in terms of actually being true to life, but a good one in terms of illustrating a much broader point that being robust to small shocks can make you more vulnerable to large shocks.
Maybe there's some truth in the sense that the crash of 1929 caused some bankers to throw themselves off buildings, but if you were already a hobo, you probably didn't even notice.
But it's flatly ridiculous to suggest in general that economic shocks are worse to the upper classes than lower. Even in a true post-apocalypse, I would doubt we'd actually see the fantasy literature type stuff where the earth is inherited by particularly brutal sheriffs and used car salesmen. The aristocracy has a hell of an ability to reproduce itself even across societal collapse, external conquest, and internal revolution. Most street tough people just die in the streets.
> But it's flatly ridiculous to suggest in general that economic shocks are worse to the upper classes than lower
Yes, and the point that Melvin is making is that comparing an upper-class profession to a lower-class profession is an unfortunate byproduct of the point being made. I think the analogy is essentially asking you to pretend that banking isn't a high-class profession whose members are naturally going to have a lot more money, since in practice they earn *much* more than $3000.
To avoid the apples to oranges comparison, it'd have been better to either pick an anti-fragile, but higher earning job (maybe a real-estate broker?) instead of taxi-driver or else pick a fragile but low-earning job (maybe fast food industry).
> Maybe changes are inherently towards more volatility, and the only reason being long VIX isn't a guaranteed-market-beater is because it's one of the rare cases where people take this seriously and quantify it, because taking it seriously and quantifying it is their job?
I'm not aware of a way to go long on VIX which doesn't (naturally) decay over time to 0. There are expenses and weirdness naturally built in to the products. I'm not a super derivatives guy, though, so... maybe there is a way and someone will be kind enough to mention/explain it?
(Also, perhaps that property is less relevant than I think.)
There's that (you can't buy and sell VIX directly and can only use derivatives that don't represent permanent long positions). But there's also that the index itself has no long run trend at all. Volatility has certainly not always increased, and though there is nothing mathematically preventing the existence of unbounded volatility, high enough values seem to imply a level of societal collapse such that you can't just ride the wave forever. In reality, the index always returns to the historical "normal" level. This isn't like value indices that can actually go up forever as long as the economy keeps growing.
It's not without expenses, but you can roll option contracts while hedging them with the underlying to get pretty close. Essentially you buy, say, a three-month-out option, hold it for one month, then sell it and buy the next three-month-out option. If you do this while also shorting the underlying, you have pretty much a pure vol position.
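A minimal sketch of why that works, assuming Black-Scholes pricing and made-up parameters (not anyone's actual trade): a long call hedged with a short position of `delta` shares has P&L that is nearly flat for small spot moves but fully exposed to changes in implied volatility.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes call price and delta."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    price = S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    return price, norm_cdf(d1)

# Toy parameters: at-the-money call, 3 months out, 20% vol, zero rates
S, K, T, r, sigma = 100.0, 100.0, 0.25, 0.0, 0.20
price, delta = bs_call(S, K, T, r, sigma)

# Long one call, short `delta` shares: a $1 spot move roughly cancels out...
up, _ = bs_call(S + 1.0, K, T, r, sigma)
spot_pnl = (up - price) - delta * 1.0

# ...but a 5-point implied-vol move hits the hedged position in full.
hi_vol, _ = bs_call(S, K, T, r, sigma + 0.05)
vol_pnl = hi_vol - price

print(f"hedged P&L from a $1 spot move:  {spot_pnl:+.3f}")
print(f"hedged P&L from a 5-pt vol move: {vol_pnl:+.3f}")
```

In practice you'd re-hedge as delta drifts, and the roll itself costs bid/ask spread and theta, which is the "not without expenses" part.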
This is only tangentially related but Acoup recently wrote a series of blog posts about how Spartans actually sucked, and I found it pretty fun to read. Extremely fragile, rather than antifragile. https://acoup.blog/category/collections/this-isnt-sparta/
It's funny that right at the beginning he gives the movie 300 so much shade, yet it's based on a comic by Frank Miller, who shouldn't be let off the hook so easily.
Thanks for the share, this looks like a great read!
It's pretty amazing. His other stuff about the way iron was made, grain was grown, the silliness of a universal warrior going back to Roman times is also good.
I find acoup a little frustrating. Clearly he knows his stuff and does a good job of dismantling the simplistic views of ancient and medieval history that we normies tend to have, but on the other hand he spends far too much time beating up on straw men instead of getting to the good stuff.
I mean, yes, 300 is not an accurate rendition of history. But did anyone ever _really_ think it was? Really really? I'm interested in hearing about how real Sparta differs from the mythologised version, but do I really need to read another twenty paragraphs explaining the very obvious point that "300" is inaccurate and that actually Sparta _wasn't_ perfect and that anyone who enjoys "300" too much just _might_ be suspected of political wrongthink, before we get to the interesting stuff?
And he keeps telling me that Sparta was "not the ideal society that some have made it out to be"... but again, did anyone ever really think it was? People throughout history have admired Sparta's dedication to purpose, but as far as I know nobody has ever made any kind of effort to actually emulate them.
I likewise find the focus on “debunking” those bad positions a bit tiresome, but I think it’s understandable how he ended up in a position where he feels that emphasis is deserved. Namely, his day job consists, in large part, of making undergrads unlearn whatever wacky historical ideas they’ve absorbed over their childhoods. I am inclined to believe him when he says that many of those misconceptions are held by a substantial portion of his students.
In the series "The Universal Warrior" he gives examples of a bunch of people who really idolize the idea of Sparta. And his whole point in the Sparta series is that this "dedication to purpose" is just a myth, and that nothing about the actual Sparta was desirable in a society at all.
Saying the dedication to purpose is a myth and Spartan society isn't desirable are two different things, though, which is the point the parent comment was making. I don't think he shows that Sparta wasn't dedicated to its ideals, does he? Just that it wasn't a very nice place to live?
Along with undergrads (who might be more representative of the general population) and the examples he explicitly mentions, the other important "strawman" that he is dealing with is the US military. Even if the ideology of the US military is not the steelman you are hoping for, it is important to engage with it.
The way he totally ignores the criticism of his Spartan battle record thesis is also annoying.
Pretty much any expert who gets too deeply into 'debunking' the misconceptions of the general public turns into an arrogant twat, eventually. They become assholes about their area of expertise, and they also develop a blind spot for the possibility that they can be wrong.
The most fascinating part for me was how seemingly all parties completely bought into the propaganda surrounding Sparta's military prowess. Sparta legitimately believed they were supermen and their enemies also believed it. Evidence to the contrary was ignored or discarded.
Yeah, I don't buy his argument that the Spartans weren't militarily excellent. He says their battlefield record is about .500, but that's what it should be if opponents can choose whether or not to fight, which he makes clear they can and do in that period. No one should fight a battle they expect to lose if they have other options, so battles should only occur when both sides think they have a good chance to win. Assuming that the two sides are equally good at judging their relative power, battlefield records should always be .500 for everybody, regardless of how powerful the army.
In fact, if his hypothesis that the Spartans are overrated is true, then the Spartans should have a well-below-.500 winning percentage, because their enemies should only be willing to face them on the field when they have overwhelming force that gives them [the enemies] a really good chance to win, and the Spartans should foolishly choose to give battle under those unfavorable circumstances.
I think his data are meaningless, and we probably need to default to the judgements of their contemporaries if we want to assess their military prowess.
(I found all of his other analysis of Sparta pretty compelling).
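The selection effect can be illustrated with a toy Monte Carlo (my own construction, not from the blog): give one side a genuine average per-battle edge, let both sides form noisy estimates of the matchup, and only count battles that both sides expect to win.

```python
import math
import random

def norm_cdf(x):
    # Standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(42)
EDGE = 1.0   # strong side's average per-battle advantage (arbitrary units)
NOISE = 0.5  # how badly each side misjudges a given matchup

offered = fought = 0
wins_all = wins_fought = 0.0
for _ in range(200_000):
    d = random.gauss(EDGE, 1.0)               # true edge in this matchup
    strong_est = d + random.gauss(0, NOISE)   # strong side's read of its edge
    weak_est = -d + random.gauss(0, NOISE)    # weak side's read of *its* edge
    p_win = norm_cdf(d)                       # true chance the strong side wins
    offered += 1
    wins_all += p_win
    if strong_est > 0 and weak_est > 0:       # both sides expect to win -> battle
        fought += 1
        wins_fought += p_win                  # expected wins (smooths the Monte Carlo)

uncond = wins_all / offered
realized = wins_fought / fought
print(f"win rate if every matchup were fought: {uncond:.2f}")
print(f"realized record in battles both sides chose: {realized:.2f}")
```

The better both sides are at judging the matchup (smaller NOISE), the closer the realized record hugs .500 even though the underlying edge is large; crank NOISE up and the record drifts back toward the true strength gap.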
There certainly are examples of classical societies that did bat well above 0.5: the Macedonians and the Romans. Alexander the Great had a near perfect if not perfect score and the Romans had 0.84 against Fremen societies (although closer to 0.5 against the Persians). They also both conquered much of the known world.
Devereaux's argument is that they were slightly better than other Greek city-states but not legendarily good. Since the Spartans today have at least as big a reputation as the Macedonians or Romans, they are clearly overrated.
My point is not that departures from .500 are impossible (I point out that if Bret were right, we would expect the Spartans to be well under .500 after all), especially not when you are looking at single campaigns (cough Alexander cough). Just that it tells you very little about the underlying military potency of the group in question, except in certain very narrow situations that Bret goes out of his way to make clear don't apply to the Spartans.
Did the Romans bat above 0.5? They are not around anymore, which suggests they had a losing streak eventually.
But even if they did very well, that shows they were better at picking their battles than others, which is not the same thing as being better at fighting them.
Roughly speaking, to bat above 0.500 you need to be unusually lucky, you need to be better than people expect you to be, or you need to be able to force battle on people who can't refuse. Between the invention of walled cities and the invention of cannon, it was rather difficult to force battle on civilized people who didn't want battle; at most you can force them to endure a siege and then surrender. Devereaux IIRC was not counting bloodless sieges in his list, and certainly didn't count "we'd rather not even go through the trouble of a siege, let's negotiate terms up front".
The Romans, as you note, batted ~0.5 against civilized societies and did much better conquering "barbarians" who didn't have walled cities. Alexander, was better than anyone expected because he was a singularly talented general and because his dad had built an army better than Macedonia had ever had before (but didn't use it conspicuously enough for the world to have taken notice).
The legendary Spartans, were the legendary Spartans for a couple of centuries, and they were surrounded by people who knew how to build walled cities. Is there anyone who consistently batted >>0.500 against walled-city-builders for more than a generation or two in the pre-gunpowder era?
The Mongols and the Crusaders? And if the Spartans were so much better than their neighbors and everyone knew this, they should have been able to simply conquer them without battle and create their own huge empire.
Acoup's essay on Spartan grand strategy explains why this isn't the case, but basically, they specialized on battlefield power at the expense of being able to prosecute sieges.
The Mongols and Crusaders each got a generation or so of stunning success, before they fell back to trying to hold on to their gains and slowly losing ground. Maybe three generations for the Mongols, but only if you count conquests on the far edges of the known world where the Europeans don't understand how effective the Mongols were against the Chinese.
As boylermaker notes, the Spartans weren't very good at sieges (or naval warfare), which limited their potential for expansion in that environment. But even Devereaux, I think, admits they built a far larger mini-empire than five two-bit towns would normally have been able to manage. They had the best heavy infantry on the planet at the time, and they conquered about all there was to be conquered without ships, cavalry, or siege engines. And then other people learned the trick of making heavy infantry even better than the Spartans.
Yes, reading this really kills the idea of the Spartans as antifragile. It's a society that survived under ideal conditions and was destroyed as soon as those conditions changed.
What I immediately thought is that peaks are always fragile, and valleys antifragile. So being fragile correlates with being high, which is what we want. Being antifragile is like the quote "there's no way to go but up!"
I'm having trouble squaring what seems to be this view that the attempt at theory is worthless with the fact that Taleb has an MBA and a PhD and has been a professor at multiple universities and the editor of an academic journal. He's trying to produce a theory that theory is stupid. This seems like the same basic impossibility of true moral relativism or non-Pyrrhic skepticism. Someone can make a convincing argument for them, but the very act of making an argument at all is inconsistent with what is being argued for.
I think it makes sense through a more pragmatic lens. Taleb thinks the theory is worthless, and he wants to promote this idea. In order to reach more people, he produces both popular and academic takes, catering to different groups. I guess in some sense, it is the philosophy of antifragility applied to the propagation of itself.
Glad someone called this out. Antifragile was my second attempt at reading Taleb. Made it through The Black Swan, but stalled in the middle of this one. One of the most annoying aspects of his work for me is the tendency to deprecate / distance himself from attributes that he himself seems to possess in abundance.
As Scott points out, Taleb is one of the most intellectual anti-intellectuals out there. For someone allergic to the ossification of academia and theories, he certainly spends a lot of energy producing the thing he decries. He clearly craves attention for his ideas and cultivates an aura of "unconventional" genius at every opportunity.
I read Fooled by Randomness and then The Black Swan. I couldn't tell if I liked the former better, or if it's just that the second dose of Taleb doesn't add much beyond the first dose. I've had students really try to push Antifragile on me though.
Those that can't do, teach. If Taleb could make a ton of money directly employing theories of antifragility, he would. But second best is making a ton of money writing about how theories of antifragility could make the reader a ton of money.
I haven't done a thorough investigation, but I think he's actually an independently wealthy crusader for his particular hobby horse? Like, even before his books?
> So think of this less as a sober attempt to quantify antifragility, and more as an adventure through Taleb's intellectual milieu.
Am I incorrect in interpreting this as 'Taleb takes refuge in unfalsifiability'? So many of these examples seem to hinge on their specific framing and level of focus; you point out a few and contradict a few more with the Fact Checks. Antifragility is a powerful concept to keep around, but I'm *extremely* skeptical of the prescriptions that are coming out of how it's being used.
I think it's sometimes fair to have philosophical principles that you can't immediately reduce to falsifiable facts or studies, but I'm not sure Taleb does this responsibly.
Sure. I'm reminded of Continental philosophy - ideas can be unsuitable for testing and still extremely valuable... but at the same time, it's hard to build very far off of a shaky foundation.
I'm more concerned by things like the Syria v. Lebanon GDP comparison. Mistakenly interpreting a signal from noise is one thing, but that looks more like a case of deriving a signal from *error*. Worse (maybe?) when the signal doesn't seem to be much larger in magnitude than the initial mistake. I'm feeling something like a philosophical version of Gell-Mann amnesia: I see that someone's being irresponsible where they can be checked, therefore when they are difficult to check I conclude that... I should check out the book to hedge against selection bias? ¯\_(ツ)_/¯
In many cases a prescription implies a theory, and in the case of the Ten Commandments I think it does. The theory is "following these prescriptions leads to favorable outcomes." If you didn't have that as a theory, then there'd be no point in giving the prescriptions.
Anyway the pattern holds for prescriptions regardless of whether there were a theory attached. The pattern is, "People already had this idea. The now-canonical written form didn't teach them the idea; it was a formalization of the idea they already had."
It's not a theory if you're God and you know for certain you're going to punish whoever doesn't follow your prescriptions.
It's unclear whether whoever actually codified these really believed they were sent from God, or whether they were trying to engineer optimal social outcomes based on factors other than pleasing the almighty.
Although, even if you generalize to laws in general, I think at least some if not most laws have "please the lawmaker" as much as "do whatever is optimal for the whole society" as the intended outcome.
Some of them are fairly obvious, but the Sabbath and the ban on idols aren't, and one could argue that codifying the prescriptions gives them greater power.
The Ten Commandments are immediately followed by a load of less intuitive rules about diet and clothing, I suspect there's a benefit to putting some relatively uncontroversial rules against murder and theft up front before you start on the mildew regulations.
If you buy an option and you’re wrong you lose all your money (they expire worthless). This seems similar to lottery tickets and insurance. There are worse risks than that, but it seems like the risk reduction comes more from the ability to hedge, and this hedging happens when you don’t spend much money on such things.
If you buy/sell combinations of options, you can pretty easily hedge against pretty much any scenario short of “the entire options exchange collapses”, even without reserving any money outside of options.
It doesn't violate no-arbitrage because it exchanges risk for return.
For example, historically there's been a really easy way to beat the S&P 500: invest in the S&P 500 with moderate amounts of leverage. Returns go up, but volatility also goes up with the leverage ratio.
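As a toy illustration (simulated returns, not real S&P data): a daily-rebalanced leveraged position scales the mean return and the volatility together, so the extra return isn't arbitrage, it's paid-for risk.

```python
import random
import statistics

random.seed(7)
# Toy daily "index" returns: roughly 7.5% annualized drift, ~16% annualized vol
rets = [random.gauss(0.0003, 0.01) for _ in range(252 * 40)]

def annualized(leverage):
    # Daily-rebalanced leveraged position; borrowing cost assumed zero for simplicity
    lev = [leverage * r for r in rets]
    return statistics.mean(lev) * 252, statistics.stdev(lev) * 252 ** 0.5

m1, v1 = annualized(1.0)
m2, v2 = annualized(2.0)
print(f"1x: return ~{m1:.1%}/yr, vol ~{v1:.1%}/yr")
print(f"2x: return ~{m2:.1%}/yr, vol ~{v2:.1%}/yr")
```

In real life the 2x line is worse than this sketch suggests: borrowing costs and volatility drag under compounding eat into the doubled arithmetic return.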
A no-arbitrage principle would mean that there is no _risk free_ way to earn a profit in excess of the risk-free interest rate (something like US Treasury Bonds of the appropriate duration).
This means, for example, that a stock that trades on two different exchanges should trade for the same price on both.
This is not a constraint of the market, but rather an outcome in that any arbitrage opportunity gets quickly snapped up by parties who can exploit it.
Yes and more generally, risk arbitrage should also ensure that the same amount of risk earns the same expected return.
I took the original strategy to imply that the small bets would yield the same expected return at lower total risk which in my view shouldn't happen - assuming diversifiable risks are diversified away properly in either case.
"At some point you have to do a thing, which usually means using some system but also being aware of its limitations." Or some heuristic. That sometimes seems to make Taleb's distinction indeterminate.
My takeaway has always been, let different people try different things with volunteer participants, whenever that is possible. Then the hard cases are just those where it is difficult for pluralism to work, because the circumstances absolutely demand a unified response. Of course, as Covid has demonstrated, we do not currently have an alternative better approach to such situations, although sometimes people try to use compulsion to approximate it. Compulsion is fragile?
I have to question Taleb's statement on jet engines. The first patent on a gas turbine was issued in 1791, and the thermodynamics behind them were worked out by 1900, AIUI. I'm sure there is some aspect of their operation which was solved empirically before the theory was worked out, but it absolutely was not a matter of "people just tinkered with it before they understood how it worked".
OK. This bothered me enough to go looking for Taleb's source, and Taleb screwed this one up. The source in question isn't claiming that nobody understood what was going on at all. There was definitely theory for the basic operation of the jet engine. But a lot of the problems of making a jet engine work had to be solved practically, which surprises nobody who knows about this kind of stuff. He cites his son not knowing this as evidence of something, but I'm also an aerospace engineer, and we didn't talk a whole lot about history in propulsion class. What I know about this comes from personal reading.
Yes. Lots of fluid dynamics is fundamentally unsolved, and possibly unsolvable. We use CFD which takes an approximation with a lot of points, and things like wind tunnels and experiments. But this is obvious to anyone with a passing knowledge of fluid dynamics. If that's what Taleb is claiming, then he's saying something banal that doesn't mean what he thinks it does.
As someone who used to do quant finance, he has a pretty bad record for his claims in that area too, despite the fact he used to be a trader. He definitely has a recurring issue of accusing others of being wrong without understanding their field.
I've heard that contrary to Taleb's claims, Value At Risk and other fragile formulae are no longer actually used much to make decisions in the finance industry. They've moved on to better methods. True?
The Navier-Stokes equations may be absolutely insoluble in the strict sense. This isn't a problem, because we *can* use experiments. And simplified models. And, yes, our understanding of fluid mechanics. "Understanding" and "rigorous analytical solution" are two different things.
The people who developed jet engines were not just "playing around with aircraft engines until jet engines sort of happened"; there's no plausible amount of "playing around" with a reciprocating-piston internal combustion engine that gets you a gas turbine optimized for exhaust thrust. The people who built the first jet engines understood what they were doing.
They probably understood jet engines before they machined their first turbine blade, better than Taleb understood antifragility after he finished writing a book about it. Taleb is in the business of Thinking Real Hard with his Mighty Brain until he has come up with something he believes is true and important and that he can sell; I speculate that he thinks this is what other smart people like scientists and engineers ought to be doing (possibly with a side order of Feeding the Numbers into a Computer), and if we're not doing that then we must be just flailing around randomly. In which case, no. You have to do experiments if you want to design useful engines, and you have to understand the problem if you want to design useful experiments. Michelson and Morley weren't just playing around with mirrors until they accidentally vanquished the luminiferous aether.
Yes, the theory is fairly old, and steam turbines were well understood by 1900. The hard part for a jet engine is that the compressor blades have to move at least 3/4 of the speed of sound in order to compress air efficiently, which requires spinning very fast. For the compressor to not fly apart at that speed & temperature it has to be made of exotic high-temperature alloys. These only became available starting around WWII.
Which is why the turbojet engine was separately created twice: once by von Ohain in Germany (hydrogen fueled!) and later (albeit patented prior to von Ohain's work) by Whittle in Britain (hydrocarbon fueled).
On the discovery of things, I would argue that we need to be very cautious. Accidental discoveries make for fun stories, and are thus remembered. But we don't remember the thousands of things needed to make cars evolve from what they were to what they are.
We don't know who made the discoveries that followed from theory, or when, or even how many there were, because following a theory to find something makes that something "not a real discovery/invention": if you follow a map that tells you there is a river here, and there is indeed a river here, nobody cares. We remember how and by whom the first vaccine was made, but most of us can't name those who used Pasteur's ideas to eradicate other diseases.
I can think of a few examples where theory definitely preceded invention:
Maxwell's theory of electromagnetism predicted radio waves, which were later confirmed
Einstein (?) predicted the feasibility of atomic bombs
Actually... I'm gonna stop the list here already, because basically every technology I can think of was preceded by theory: the computer (see eg Turing and von Neumann), camera sensors, ...
(Also, think about how precise your theory of optics has to be in order to produce glasses that correctly correct vision deficiency without chromatic aberration and all the other problems. Is optics fragile?)
Einstein is generally credited with E=mc^2, which shows how much energy is locked up in matter. Though Oliver Heaviside came up with the same equation fifteen years earlier under the less general assumption that the only fundamental force was electromagnetism.
However, the energy released by hydrogen bombs is only a hundredth of this amount, and atom bombs about four times less again. Atom bombs were made because means were found to encourage 'autocatalytic combustion' of unstable elements. In principle, their relation to E=mc^2 is no more than that of a coal fire, though a coal fire releases only about a millionth as much of the fuel mass as an atom bomb. The equation just indicates the maximum conceivable power of any bomb of the same mass, even if it uses technologies unknown to us.
So the theory didn't predict the bomb - it incentivised it and limited the parameter space, perhaps. But the bomb was developed out of observing the properties of uranium and heavier elements.
The theory did predict the bomb. The atomic bomb wasn't discovered by accident or by experimenting randomly. What Thomas is referencing is, I think, the letter that Einstein wrote about the feasibility of the atomic bomb, a letter that shows the atomic bomb was applied theory, not an accidental discovery.
The "properties" of uranium you speak of can't be measured without an advanced theory of how matter works.
The famous letter was written by Leo Szilard in collaboration with Edward Teller and Eugene Wigner, they just pulled Einstein in at the last minute to sign it because he was more famous and better connected.
> Oliver Heaviside studied Maxwell's A Treatise on Electricity and Magnetism and employed vector calculus to synthesize Maxwell's over 20 equations into the four recognizable ones which modern physicists use.
E=mc^2 *did* show that the loss in mass during radioactive decay came out as energy; that was the key insight that got people thinking "oh shit, this is a doomsday device if harnessed", rather than just "huh, this is weird".
That's not what E=mc² is used for. Sure, the E of a nuke is much less than the m (times c²) of the nuke, but you're applying the wrong m.
E=mc² applies to the difference in mass between the reactants and the products. To use a standard example, one of the nuclei used to make nukes is ²³⁵U (uranium-235). The reaction is ²³⁵U + n (neutron) --> ¹⁴¹Ba (barium-141) + ⁹²Kr (krypton-92) + 3n.
m(²³⁵U) + m(n) > m(¹⁴¹Ba) + m(⁹²Kr) + 3m(n)
The *mass deficit* becomes energy, and that is what is released in a nuke.
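Plugging rounded literature mass values into that reaction (approximate figures worth checking against a nuclide table, not exact) shows what the deficit amounts to:

```python
# Approximate atomic masses in unified mass units (u), rounded literature values
m_U235  = 235.043930
m_n     = 1.008665   # free neutron
m_Ba141 = 140.914411
m_Kr92  = 91.926156

U_TO_MEV = 931.494  # energy equivalent of 1 u, in MeV

# 235U + n -> 141Ba + 92Kr + 3n
deficit = (m_U235 + m_n) - (m_Ba141 + m_Kr92 + 3 * m_n)
energy_MeV = deficit * U_TO_MEV
print(f"mass deficit: {deficit:.4f} u  ->  about {energy_MeV:.0f} MeV per fission")
```

That's on the order of 0.1% of the fuel mass converted to energy per fission, which is exactly the "hundredth... and four times less again" ballpark mentioned upthread, and millions of times more per reaction than any chemical bond.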
Of course, this doesn't directly predict nuclear weapons, but it was a necessary first step. It, along with the discovery of nuclear fission, motivated the Manhattan Project. Pretty theoretical if you ask me.
The only thing needed to see the possibility of any bomb is (1) a large energy release, and (2) a way to rapidly auto-catalyse it (or just catalyse it, I suppose, to be maximally general). It's as true of the atom bomb as any other. If nobody knew that fission released a lot of energy, Einstein's mass equivalent of energy would have implied a large energy release (for the fusion bomb, we can say the prediction came before the experiment, but by that stage the issue was certainly well understood). However, it seems obvious that it was already known that fission released a great deal of energy.
"Though of course, this doesn't directly predict nuclear weapons, but this was a necessary first step."
How is this a necessary first step for the atomic bomb, without also being a necessary first step for gunpowder?
If you've got Meitner and Fermi, then you've got atom bombs even if you mistakenly believe the mass remains unchanged during nuclear fission and the phlogiston fairy is just extra-generous when free neutrons get involved in the chemistry. Bomb-makers don't care where the energy comes from, so long as there's experimental proof that some process reliably generates lots of heat quickly. They also don't care that some theory says that a golf ball contains a megaton of city-busting energy, if there *isn't* an experimentally verified process for getting that energy out in a hurry.
²³⁵U + n (neutron) --> Kaboom, is the only equation that matters if what you care about is blowing up cities; the rest only matters if you care *why* there's a crater where the city used to be. And I'm pretty sure Leslie Groves didn't give a damn about that, any more than the Chinese Prometheus who tinkered about and invented gunpowder.
The laser is another very good example, where we knew in theory that it could work before we managed to build the first one, more than forty years after it was theorized.
In real life, theory feeds off experimentation, and vice versa.
Let me just say that my experience working in geointelligence showed me acutely the importance of theory. We could only do what we did because of extremely precise earth models and ephemeris readings from the vehicles, combined with a level of understanding of physics that is awe-inspiring to see. I remember Trump rather callously declassifying an image from a system I spent the better part of three years developing the image formation software for, and supposed "experts" in imaging not believing we could achieve that ground sample distance (GSD), just due to atmospheric effects. They have absolutely no idea what we can really achieve, and it's all possible because some extremely smart people out there have spent the last four decades perfecting the physics. There is no practical way to achieve this kind of thing with tinkering alone when it costs billions just to get a single vehicle into orbit.
Granted, the cost is coming down a lot with smaller form factor vehicles and reusable rockets, but we've been doing this for a long time.
I don't see how relativity (Einstein) has boo to do with atomic bombs. Everyone knew at least by 1917 or so, when Rutherford first demonstrated nuclear transmutation, that the force holding the nucleus together had to be tremendously strong (to be able to hold identically-charged protons within a femtometer of each other -- the Coulomb repulsion is staggering). If it could be released, obviously it would be energy on a scale that dwarfed anything chemical. The fact that this energy release implies a mass defect (what special relativity tells us) is kind of neither here nor there; it would still be true and important even if relativity were nonsense or undiscovered.
But up until Meitner and Frisch worked out (in 1938) that what Otto Hahn had unexpectedly observed was a *natural* process of fission, which could take place *without* the enormous input energy per particle everyone had had to use before, there was no plausible idea on how to unlock that energy.
So that one actually is a good example of serendipity. If Hahn hadn't been bombarding *uranium* with neutrons, and had instead been using one of the 90-odd natural elements that *don't* easily fission, the discovery wouldn't have been made and nobody would've had a clue that deliberate nuclear fission on a military scale was possible.
One could argue that people were messing around with neutrons all over the world anyway, and sooner or later someone was bound to stumble over fission by thermal neutrons. But it could have happened later, perhaps many years later, depending on the unexpected twists and turns of what interested people.
Special relativity was 1905, and his E=mc² paper came out later that same year. 1917 was after general relativity, and E=mc² was definitely published by then.
Yes of course, but neither special nor general relativity say anything at all about the strong nuclear force, and both are classical theories from which it is impossible to deduce that mass could transform to energy *within the same reference frame* (which is what we're talking about when we talk about mass defects and radioactivity). The only way m turns into E in a classical theory like pure relativity is when you change reference frames.
The theoretical applicability of E=mc^2 to nuclear reactions only becomes apparent when (1) you have an idea nuclear reactions are possible, because you recognize the existence of the strong nuclear force -- Eugene Wigner proposed its existence in the 1930s -- and (2) you have a quantum mechanics which you can make relativistic -- Dirac did this in 1928 -- and discover that particles can transform into other particles, i.e. mass is not conserved *even in the same reference frame*.
Relativity, or more precisely relativistic quantum mechanics (which was the work of people other than Einstein, since Einstein didn't really like QM), was largely *retrofitted* onto the observations of radioactivity and nuclear transmutation (in the 1890s through early 1910s) and later fission (1930s-40s) to provide a satisfactory theoretical explanation. But from what I understand it played no role in driving the initial recognition that (1) nuclear reactions could release a lot of energy (which one might reasonably attribute to Rutherford's experiments on radioactivity and nuclear transmutation), and (2) that nuclear reactions could be sparked by low-energy particles (which should be attributed to Otto Hahn's lucky choice of an experimental substrate, and Lise Meitner's and Otto Frisch's realization that he had observed fission).
As I said, I think this is one case where experimental noodling around led the way, and at that there is a nontrivial element of random chance involved, since natural fission is a pretty rare form of nuclear decay, and it was just luck Hahn stumbled across it when he did.
That's not to say theory played no role at all, of course, but if anything it would be the early theories of the structure of the nucleus and what was holding it together, which are rooted more in early quantum mechanics than relativity per se.
That's just not true. You don't have to go quantum to convert mass to energy, as mass IS energy in its rest frame. An example would be a box of relativistic classical particles: the mass of the box of particles would be greater than the sum of the masses of the box and the individual particles.
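Concretely (a standard special-relativity identity, my addition): the invariant mass of a composite system is set by its total energy and total momentum,

```latex
M_{\mathrm{sys}}\,c^2 = \sqrt{\Big(\sum_i E_i\Big)^2 - \Big\|\sum_i \vec{p}_i\,\Big\|^2 c^2}
```

so for a box of particles whose momenta cancel, the system mass is the full energy divided by c², which exceeds the sum of the rest masses. Even two massless photons moving in opposite directions form a system with nonzero mass.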
"Einstein (?) predicted the feasibility of atomic bombs"
Good that you included the question mark. As Gerry Quinn notes, Einstein "predicted" the feasibility of the atomic bomb in the same way that he "predicted" the feasibility of the hand grenade, the antimatter bomb, and the hafnium bomb.
The people who made the actually useful predictions were primarily Lise Meitner and Enrico Fermi. Einstein added name recognition and gravitas when it came to convincing mundanes like FDR, who had a big checkbook but probably didn't know who Meitner and Fermi even were.
Another couple of examples of theory preceding engineering.
Shannon's noisy-channel coding theorem. Shannon established that it was mathematically possible, first, to compute the information capacity of a noisy channel and, second, to exploit essentially all of that capacity. The next 40 years were the engineering half of the field slowly marching toward the Shannon limit.
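A quick sketch of that limit in its Shannon-Hartley form (the example numbers are mine, not from the comment):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    """Shannon-Hartley limit: the maximum error-free bit rate of a
    bandlimited channel with additive white Gaussian noise."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# A ~3 kHz phone line at 30 dB SNR (linear SNR = 1000) caps out near
# 30 kbit/s, which is roughly where late-90s dial-up modems ended up
# after decades of engineering toward the limit.
capacity = shannon_capacity(3000, 1000)
```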
The RSA cryptosystem rests on mathematics that was hundreds of years old by the time it was employed (i.e. the Chinese remainder theorem and Fermat's little theorem).
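For illustration, a toy RSA round trip with the standard textbook primes (my sketch; it assumes Python 3.8+ for the modular-inverse form of `pow`, and real RSA adds enormous primes and padding):

```python
# Toy RSA keypair from two small primes. Decryption works because of
# Fermat's/Euler's theorem: m^(e*d) ≡ m (mod n) when
# e*d ≡ 1 (mod (p-1)(q-1)).
p, q = 61, 53
n = p * q                # modulus, 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # private exponent via modular inverse (Python 3.8+)

message = 65
ciphertext = pow(message, e, n)    # encrypt: m^e mod n
recovered = pow(ciphertext, d, n)  # decrypt: c^d mod n
```

The Chinese remainder theorem shows up in real implementations too: it lets you split the `c^d mod n` step into two cheaper exponentiations mod p and mod q.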
As with every bit of writing I have seen on this blog, I'm thrilled by the original perspective, the beautiful language, the humor... I am also excited to find that in a previous collection of your essays, you have addressed the question of fats (the different types, healthy/unhealthy etc).
I think he's perfectly aware that there are a bunch of nerds who do science as a hobby. But if you tell Bobby Taxpayer "hey, I'll take a chunk of your paycheque and use it to give Dr Science a salary and some expensive machines so he can cultivate his hobby", that might not go over so well.
Does anyone have info on how black swan-ish funds have performed generally? I understand Spitznagel publicized his amazing performance at the start of COVID, but Taleb writes like black swan investing is a billion dollar bill lying on the sidewalk. Yet my sense is black swan followers have not by and large made a killing.
I didn't know what you were referencing so I looked it up and boy was that great. Falkenstein really nailed the stuff which bothered me about Taleb's books.
Black Swan funds like Universa are not designed to make a killing. Comparing them to any index misses the point. They are designed as an insurance policy and are supposed to make up around 3% of your portfolio.
A 97/3 split with the S&P 500 has done better than 100% in the S&P 500 since 2008, at least.
I was happy to read a review of this book, because there is no chance I’ll ever pick it up myself. I tried to read The Black Swan a few years ago and quit halfway through. I’m used to reading pompous academics, but Taleb was just over the top. Plus there were weird contradictions, like how he would go on and on about how useless and stupid philosophers are, and then praise Karl Popper and Bertrand Russell. Some people have laser intellects. Taleb is more like an old blunderbuss stuffed full of nails, rocks, and too much gunpowder.
His over the top praise of Popper made me think that he's really trying to get a jab in edgewise at George Soros, who is supposed to be the famous investor whose ideas magically all came from Popper.
"Medieval European architecture was done essentially without mathematics - Roman numerals (the only numerals anyone had at the time) were too unwieldy to add or subtract"
Not an expert in medieval architecture, but I am pretty sure this is total nonsense, as long as geometry is included as part of mathematics. Getting two ends of an arch to meet requires decent geometry. And making two lengths of wall match without adding is probably impossible.
Doing basic arithmetic with Roman numerals isn't hard (in fact, adding in particular is super easy!). You aren't any good at it, but that's because you haven't ever practiced it. How many times have you added Arabic numerals? Do it that many times with Roman numerals, then tell me it's "too unwieldy".
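To illustrate the point, here is my own sketch of the classical "expand, merge, carry" method, which adds Roman numerals without ever converting to Arabic (it assumes canonical numerals with results under 4000):

```python
# "Expand, merge, carry" addition of Roman numerals: no Arabic conversion.
SUBTRACTIVE = [("CM", "DCCCC"), ("CD", "CCCC"), ("XC", "LXXXX"),
               ("XL", "XXXX"), ("IX", "VIIII"), ("IV", "IIII")]
CARRIES = [("IIIII", "V"), ("VV", "X"), ("XXXXX", "L"),
           ("LL", "C"), ("CCCCC", "D"), ("DD", "M")]
ORDER = "MDCLXVI"

def expand(numeral):
    # rewrite subtractive pairs additively (XIV -> XIIII)
    for short, long_form in SUBTRACTIVE:
        numeral = numeral.replace(short, long_form)
    return numeral

def add_roman(a, b):
    # merge: concatenate both expanded numerals, sort symbols largest-first
    s = "".join(sorted(expand(a) + expand(b), key=ORDER.index))
    # carry runs upward, smallest symbol first (IIIII -> V, VV -> X, ...)
    for run, symbol in CARRIES:
        s = s.replace(run, symbol)
    # restore subtractive notation (VIIII -> IX)
    for short, long_form in SUBTRACTIVE:
        s = s.replace(long_form, short)
    return s
```

Every step is mechanical string-shuffling of exactly the kind a practiced clerk (or an abacus) handles easily, which is the point: "unwieldy" is mostly unfamiliarity.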
It's true that they were built with rules-of-thumb and principles-of-practice rather than a defined theory of weight, mass, gravity, and structural engineering (in fact, a lot of stuff in the 19th and early 20th century was built with pretty ad hoc theory to back it up - it was extensions of stuff that had previously worked and been well measured). But to say it was built without mathematics seems ludicrous, and I'd want to read something with a LOT of evidence to back that up.
I'm glad I ctrl-Fed "Roman" before commenting, because I was about to say something very similar! I got curious about this exact topic several years ago and yeah, Taleb is very wrong on this. Arithmetic was difficult and they didn't know algebra, but they were great at geometry.
A military engineer called Vitruvius wrote what is considered the Big Book of Roman Architectural Theory called *De Architectura*. There are all sorts of little mathematical tricks for architecture in it, all based on geometry. (Though I would say that's the minority of the content; if I remember right, there was much more about which materials to use for what purpose and so on.) This book was really influential even after the Romans were dust! In all, a pretty poor example for someone trying to argue that theory is useless and ineffective.
Excellent research. Two points for people reading this who may not be familiar with the ancient world:
1)The word "book", when referring to ancient Greek and Roman texts, is basically equivalent to a modern chapter or section. The Iliad is 24 "books", but it's still only one book in the modern sense.
2) For the most part, the Romans didn't use their numerals for mathematical operations the way we use ours. Simple calculations were likely memorized, and complicated calculations would be done using an abacus.
"[A]ccording to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division."
I don't buy that. You don't have to understand any math to divide a basket of apples in three, and you can adapt the same principles to Roman numerals if you have to.
Most people don't know how to do long division even now. As for extracting square roots, probably 1% at most know the 'official' method. I know there is a formal method for cube roots but I never learned it myself - that doesn't mean I can't calculate a cube root by a series of approximations if I have to. So can millions today. And maybe millions couldn't have done cube roots in medieval Europe, but more than five could have done division.
Division is easy if you don't weirdly insist on decimal notation. What's 17 divided by 69? 17/69. Done. What's 17 divided by 69 multiplied by 3? (17x3)/69 = 17/23. All very easy and known back to the Romans, at least. It's only when you absolutely insist that all your fractions have denominators that are powers of 10 that things get computationally challenging.
Mind you, it's true that living within a realm of rational numbers means you can be bemused by some nasty little mason impertinently asking you to write down the corner-to-corner distance of a 1 cubit square block.
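For what it's worth, this style of exact arithmetic is built into Python's `fractions` module, which performs the same reductions described above automatically (my illustration):

```python
from fractions import Fraction

# 17 divided by 69 is just... the fraction 17/69; no decimal expansion needed.
x = Fraction(17, 69)
# Multiplying by 3 reduces automatically: 51/69 -> 17/23.
y = Fraction(17, 69) * 3
# Comparisons are exact (cross-multiplication under the hood).
bigger = Fraction(17, 23) > Fraction(17, 69)
```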
This works fine in the example you mentioned, but as soon as your numbers are large enough, you bump your head into prime factorization, which is notoriously computationally intensive. Comparing fractions is also not particularly easy. So yeah, there is nothing "weird" in insisting on decimal (or positional) notation.
I can't imagine any practical engineering problem which would result in serious difficulty reducing your fraction to lowest terms. That implies staggering levels of precision needed.
I also disagree that comparing fractions is difficult. It may be so for people who are very used to decimals, but that's just a QWERTY v. Dvorak argument and a priori unpersuasive. The fact that the ancient world *entirely* used fractions in their everyday practical engineering problems is all by itself pretty decent evidence that fractions are very easy to compare -- if you're used to them.
> It's only when you absolutely insist that all your fractions have denominators that are powers of 10 that things get computationally challenging.
This isn't true at all. Traditional Egyptian mathematics found fractions challenging while allowing denominators to have any value. (The only numerator allowed was 1.)
What makes you say the Egyptians found fractions challenging?
My first argument that fractions are easier than decimals is simply the evidence that fractions dominated noninteger math throughout the centuries when most math had to be done in your head. There's a very good reason ancient number systems had duodecimal annexes (like Roman fractions) or even sexagesimal (the Babylonians), and why so many systems of measurement are base 12. People didn't do that because they were dummies who couldn't imagine the obvious benefits of decimal math.
I'd say a decent argument can be made that the architects and engineers of the time did a lot of stuff empirically less because they *couldn't* do the math than because the precision of the math wasn't sufficiently matched by the precision of the materials and instruments available at the time.
I mean, it's not much good calculating the perfect proportions of your stone arch if your model of the properties of granite differs nontrivially from the properties of the granite *you can actually source* and if the instruments the builders must use to cut, dress, and build it won't allow tolerances of 1mm to be specified anyway.
I think anyone who does practical amateur carpentry or stonemasonry himself understands this. Sure, you can calculate exactly and precisely on your computer the dimensions of each piece, to 20 decimal places if you like, but unless you are using some kind of phenomenally expensive precision-cut lumber and/or stone, it's pointless. You might as well do some approximate calculation with paper and pencil, because you're going to have to fudge things a little when it comes time to actually build anyway. You can't guarantee a cut (with your home table saw) is going to be sufficiently accurate, that the wood won't have some tiny warp to it, the stone might be a little off here and there, et cetera.
Re: strategies that succeed by taking things away instead of adding them.
I agree that this isn't always the right approach, but I like the idea so much that I've been trying to collect where it applies. So far I have:
* Probabilistic conjunctions (Occam's razor)
* Mindfulness meditation (to reduce thoughts that cause suffering, intrusive thoughts, etc)
* Conciseness in writing
* Software written with suckless / unix philosophy in mind
* Simplicity in mechanical systems
* Exercising the 5th amendment to avoid self-incrimination
* Exercising restraint in art to increase impact (examples: powerful film scenes lacking score; also I write down the song "Trio" by King Crimson as an example where the drummer was praised for not playing anything on the track)
* Tidying up your room
* "Too many cooks" -- in arguments, in artistic endeavors, etc
* Traveling light, allowing for traveling faster and freer (applies anywhere from taking a plane trip, to photons which literally "travel light" and move faster than anything else).
* Operational Security -- Reduce the number of components in your identity to avoid associations that compromise you
* InfoSec -- Reduce the number of components in your system to reduce your attack surface.
* Martial Arts -- Sometimes the best strategy is to wait for your opponent's actions and use their momentum against them.
* "One bad apple spoils the bunch" -- So reduce your number of apples.
* "Nothing to Lose" -- Freedom resulting from having little.
* Large concentrations of population as ripe for epidemics.
These are good, but they are often the opposite of anti-fragility - many of these are about producing a distinctive and unique work that stands out in one way, even if it's hated by most and ignored by others, rather than a crowd-pleasing thing, which I would think anti-fragility is about.
Ah yeah these are just specific to the idea of "succeeding by reduction", I didn't have antifragility in mind.
Although, does your comment apply only to the artistically-oriented points on the list? Some of them, for example the software and mechanical engineering examples, I think clearly succeed better in their functionality because of their lack of components.
These are all instantiations of the Delphic Oracle's central maxim: "nothing too much (or too little)." It applies for all action -- since "too" is by definition to be avoided.
The implication is that all action entails some sort of balance -- if I flip too much or too little, I do not actually flip, and so on.
Maybe so, but it seems to me the Delphic maxim is a little too broad for this idea, for two reasons:
1) I think "nothing too much" and "nothing too little" deserve to be their own classes of guidelines, to be analyzed separately
2) The situations I've described I think are more generally described as "minimizations" than as "balances"; ie "get as close to 0 as you can". I think you raise a good point though, that even in most of these things you can't go *fully* to 0 or the thing doesn't work (ie "too many cooks"... you need at least one cook). But still, this is a special kind of balancing act, in which you aim for the most minimal balancing point that works, as opposed to the most maximal one that works.
I think these occupy an important sub-category of the Delphic maxim, because if given a choice between maximizing and minimizing to solve a problem, all else being equal, we should *prefer to minimize*, in most circumstances. There are multiple arguments for why that is, but I guess the broad main point is that it simply costs less in resources to acquire and maintain fewer things than more things.
In summary, I think you're right that these fit under the Delphic maxim, but they fit even tighter under a more specific, preferred subset of it.
1) the primary insight of the maxim is grouping too much and too little imo. The greek word used is ἄγαν [agan] which strictly just means "too" and can function as an adjective and an adverb. The practical implication here is that all failure is alike. Drinking too much water and drinking too little water are, in a certain way, wrong for the same reason -- both ignore the deeper reality of human hydration requirements.
2) The most minimal and the most maximal will be the same if you are strict about what "works" means. There is a proper amount of cooks you need in the kitchen to do the job of cooking well. That is both the maximum and the minimum (maybe better, the optimum) for cooking well, and you can calibrate to that optimum by asking whether you are "agan" in number of cooks.
3) If I hear what you're saying, you're saying that in some cases, it's better to err on one side of agan or another. E.g. better to buy bananas when they're too green rather than too brown! I agree that this works, but note that this will imply a parallel subset that should be just as powerful.
* Occam's Razor -- the most explanation from the fewest possible hypotheses. Parsimony in itself is not valuable if it does not explain.
* Mindfulness meditation (to reduce thoughts that cause suffering, intrusive thoughts, etc) -- a balance between peripheral and conscious awareness. Too much mindfulness leads to mind wandering, too much focus leads to getting lost in thought. Cf. Scott's review of "The Mind Illuminated"
* Conciseness in writing -- same as Occam. Conciseness only good when it conveys meaning. Otherwise it's just terse.
* Software written with suckless / unix philosophy in mind -- dk
* Simplicity in mechanical systems -- I imagine they actually have to do the job -- i.e. pure simplicity is worthless unless you eg move the rock. But given that you move the rock, simplicity is a virtue.
* Exercising the 5th amendment to avoid self-incrimination -- Silence is a virtue but only when paired with speech. One can talk too much or too little. It seems like proper speech is a balance aimed at communication again perhaps?
* Exercising restraint in art to increase impact (examples: powerful film scenes lacking score; also I write down the song "Trio" by King Crimson as an example where the drummer was praised for not playing anything on the track)
* Tidying up your room -- a room can both be too tidy and too messy. Funnily both seem to lead to stress (cf. ADHD and OCD).
* "Too many cooks" -- in arguments, in artistic endeavors, etc -- covered, but to be clear, teamwork produces great dishes; there is a proper amount of cooks to create any given dish (and always at least one) :)
* Traveling light, allowing for traveling faster and freer (applies anywhere from taking a plane trip, to photons which literally "travel light" and move faster than anything else). -- out of my depth, but a general and valuable point is that freedom is not good simpliciter and nor is order, and that good government is a balance between freedom and order (among other things). Libertarian philosophy seems to demonstrate the "leaning" principles you want -- i.e. as little order as possible, while still preserving freedom. (there's a point at which not having a police force is actually detrimental to order).
* Operational Security -- Reduce the number of components in your identity to avoid associations that compromise you -- ...while still expressing your identity :)
* InfoSec -- Reduce the number of components in your system to reduce your attack surface. --- dk!
> you're saying that in some cases, it's better to err on one side of agan or another ... I agree that this works, but note that this will imply a parallel subset that should be just as powerful.
I think this is where our disconnect is. I suspect that, not just sometimes, but in most cases we look at, the optimal point will be far closer to minimization than to maximization, and that those "minimization" cases deserve special attention. I feel this way for two reasons:
1) The fact that, in the physical world, acquiring and maintaining more stuff generally entails more cost to the holder. It's less expensive to have less than to have more, in almost any situation you can name. This will cause a natural asymmetry in favor of "prefer to minimize".
2) Empirically, looking at my list above, these things really do comprise a huge proportion of the things that are materially relevant to my life. If there's an equally large and powerful set of things where it's best to get as much as you can (without hitting some threshold), either it doesn't apply as much to me, or I'm just missing it.
I see you've given responses to each of my things above. I'll try to reply to that in a separate comment.
Some of the examples seem a little weird to me. It seems like the term "anti-fragile" should be applied to a system of some sort. For example I'm not really sure how exercise is anti-fragile, surely the claim should be about the human body? In which case it is anti-fragile in some ways, as lying in bed all day isn't great, but not in others, as raising the internal temperature a mere 5 degrees can be deadly.
The case of the banker/taxi driver also doesn't seem right. He frames John as doing fine until something bad happens and he gets laid off, while George can adapt his business if it's slow in one neighborhood or something. But those seem to be very different scales of hardship. Something that causes a bank to fail very likely will affect taxi drivers pretty badly too (think of the current pandemic! Taxi drivers are much worse off than bankers). And while he allows George to change his business to a courier, he neglects the possibility that John can also get a different job.
I guess he's using this as a parable to show the benefits of allowing volatility, so there may be 50 Georges and some will go bankrupt while the others will flourish. But it seems a little odd especially in light of his real world examples. It's also not clear when you can turn the term "anti-fragile" back on itself. In the case of the forest fires, the periodic small burns prevent a large all-encompassing fire, so he presumably calls the forest anti-fragile. Alternatively, without these regular, periodic burns, everything will burn down at once. Does this make it fragile with respect to the small burns? A few measly humans with water hoses came in and ruined everything. Others have pointed out that this is also apparently the case for Sparta.
I'm not sure how much weight it's given in the book, but it's also worth remembering that anti-fragile systems work well in volatility, while fragile systems work better in stability. It's not said outright here, but he seems to imply that we should be making our systems more anti-fragile. And given black swans and all that, it's probably not bad to keep an eye on it. In the end though, they do have a cost, just like drinking out of a cup of silly putty would be a pretty terrible experience.
As a side note on the Lindy effect: he seems to conflate "lasting longer" with "better". For classical texts, people in antiquity were probably about as good at stories as people now, so you'd expect the stories that last to be preselected to be good. For a physical object, it probably undergoes about the same amount of stress regardless of how old it is, so older ones are probably sturdier. But I'd rather use a phone from today than one from 1960, even if that one is sturdier. Same thing with studies. It feels like the classic economics joke of not picking up money on the ground because if it was real, someone else would already have done so. If this older thing weren't as good, someone would already have discarded it. Maybe, but sometimes that someone is you!
The taxi driver is more fragile than the banker. If the bank goes under, the banker can probably get a job at another bank. If the taxi driver loses his car, he's screwed. And sure, he could switch to some other kind of job, but so could the banker.
Many small agents in a system bring with them volatility and incremental or random enhancements, making it more antifragile than a centralized system such as wage-earning. For the individual actors maybe it isn't as clear-cut, but if you compare a Bank Teller vs a Driver (which I think is a better comparison than a Bank*er* vs Driver) it makes more sense.
Banks can go under; if you're a 55 year old bank teller you don't stand much of a chance to get another job at a senior wage level if your employer goes under.
Drivers are (more-or-less) independent and can manage their own fate. Some fail, some lose everything, but the overall system improves as the failures (those who drive drunk, don't take care of their car, etc.) fall out.
Antifragility is a cool concept and it makes me feel like going out and exposing myself to disorder to get stronger. But aside from coolness I don't think antifragility is necessarily better than plain old robustness. For example, my bones get stronger under stress (eventually) and titanium gets weaker. But slightly fractured titanium is still probably stronger than a weightlifter's bones. And taking the idea to the extreme could lead to hoarding canned water and VIX instead of profiting from a calm period.
One thing bugging me here is Taleb's insistence that "antifragile" is different from "robust" -- I mean, certainly, antifragile is different from Taleb-robust, because he's defined them that way. But I don't think Taleb-robust is the same thing as robust-in-the-ordinary-sense, which seems to have quite a bit of overlap with what Taleb calls "antifragile" (e.g. the options example -- benefitting from upside but being protected against downside would ordinarily be called "robust"). This wouldn't be a problem, except that as best I can tell, Taleb doesn't seem to notice that his use of the word differs from common use, and so just says "antifragile is not the same as robust", leading to a lot of confusion.
Presumably he's just making the observation that there are different kinds of "robust." You can build a Maginot line or you can build mobile armor -- they are robust in different ways. You can build fast fighter jets, well-armored fighter jets, or stealthy fighter jets -- they are robust in different ways. You can build your muscle strength and mental endurance, or you could build your knowledge of mechanical advantage and set of handy sharp tools -- also robust in different ways.
After that, you can observe that there are situations in which one type of "robust" is...well, more robust than another, since "robust" at its core has a purely functional definition -- that which survives the challenge better.
> according to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division.
The Byzantines, and the Muslims in Spain, both certainly knew arithmetic, and higher math as well. But even if Taleb means "Latin Christendom" and not Europe per se, basic arithmetic (as part of the quadrivium) was part of a 'standard' higher education. There wasn't any progress made, and architecture regressed, but people didn't forget how to divide integers!
I really don't get how Taleb could have claimed this with a straight face. I tried to look up Guy Beaujouan, but he wrote in French (which I don't speak) and before the age of the e-book, so I can't easily find a reference.
I suspect it may be a "Shakespeare invented half of the English language" kind of thing, where all we have to go on is written accounts so we assume that Shakespeare was constantly inventing new idioms and that only five people know how to divide. That kind of assumption always annoys me.
It may just depend on what we mean by division. I think humans intuitively understand division on some level, but we're generally not great with large numbers, so we do need techniques like long division for 53/7, not so much for 10/3.
> Instead of reading the latest studies, read older studies!
Contra Scott's "As practical advice, this suffers from a certain having-obvious-transparent-flaws," I think this is generally very good advice, at least as far as it goes.
If you're looking to understand a field, you absolutely should read the older, foundational, most-cited papers before diving into the newest ones. If you want to learn about something on the cutting edge, taking the paper you think is interesting and going through its bibliography and first reading the oldest paper you see is probably a better play than reading the paper you want to learn about.
Books are the same way. In most circumstances, you're better off reading an older writer who everyone agrees is a classic than hoping the new hotshot will live up to their impressive debut novel.
News media and cultural commentary are the same. There's a reason the subreddit doesn't allow discussion of current events in real-time. I'd much prefer a world where the stories about "news" were all written with the benefit of a week's hindsight instead of a mad rush to be 'first'.
This is actually one of the things I liked about the "old internet". 10-15 years ago, the results at the top of your Google search were, nearly without fail, the best things about the topic you searched for. The most comprehensive. The best-written. These days, the internet (Google, Reddit, YouTube, etc.) is biased towards the new and ongoing and "engaging" in the social-media-analytics sense of the word. It's much harder to find the thing that was clearly the best article/essay/review of the thing you want to learn about, because instead you're directed to scads of newer things, most of which are far worse.
Perhaps I've missed Taleb's point here. I certainly agree that reading the most recent research can be important for some academics, but unless you're trying to publish in the specific subfield of the stuff you're reading, you're probably safe ignoring it for at least a few years.
In psychiatry and psychology, any study older than a few decades was done with such terrible statistics as to be almost meaningless. There are often smart people and good books from before then, but the further they are away from statistics-reliant formal studies, the better.
Statistics, though I didn't stay in the academy after graduation.
It's actually a pretty interesting literature, since you basically had "no computers", "computers-but-they-kinda-suck", and "everyone has computers and your new method better have an associated R/python package or no one is going to use it" periods.* You get different types of problems that folks are interested in and different approaches/solutions in each period.
My perspective is that the newest, most whiz-bang things might be really cool, but most publications are useless outside of a pretty narrow application. It's much more interesting to look back on foundational works and the articles that demonstrated methods that people would continue to build on, develop, and use. I might view things differently if I had to publish regularly.
*There was also an "everyone's a eugenicist" period, but we don't like to talk about that. And currently there's a "Wait, why is everyone focused on AI/ML instead of us??" period.
It must depend on the field of study or endeavor. If you're trying to learn physics, there is no way you would want to start with Maxwell, Newton, Boltzmann, Heisenberg, etc. It's not that the foundational material is wrong, just that the expression, summary, and notation is vastly improved. Learning Newtonian mechanics by reading Newton is like learning vector arithmetic with Roman numerals - it's possible, but why would you do it?
In the sciences at least, you don't read the original materials, but the classic reviews: the annual review articles, Feynman's lectures, textbooks like Griffiths, Jackson, or Goldstein. I don't know if the distinction between "original" versus "classic" works operates the same in other fields, but it seems like a useful one.
Certainly depends on the field. But if you had a PhD in a specific sub-branch of physics, and you wanted to learn something about a different branch of physics, you probably wouldn't reach for the latest publication in that subfield.
Physics is interesting because its literature goes back centuries. The distinction I was trying to draw was between "published last year" vs. "published 20 years ago", not "published 50 years ago" vs. "published 500 years ago".
I think that misses the point a bit. If I want to learn a different branch of physics or mathematics, I will not start with an (old or new) research article, but with an introductory book or a survey; and within this category the more recent ones will generally be preferable to old ones.
Textbooks are definitely the best place to learn a subject, but good ones don't always exist, especially for niche or newer topics. Surveys often don't go into the details you want--that's kinda what "survey" means, after all--but at least they're very useful as a guide to the literature. Like Matt A, I've often found original papers to be the most readable presentation of an idea, probably because since the idea was new at the time, they really focus on what the new insight is and don't accidentally assume you already know stuff. I'm having trouble thinking of examples on the spot, but there were definitely math topics that never fully clicked for me until I read the original paper.
I was going to disagree with this, but now I think it totally depends on the field.
If you are starting in Statistical Mechanics or E&M, then I'd bet the old Berkeley series (Reif for SM and Purcell for EM) will be better than a random new intro text. On the other hand, when I wanted to learn more Cosmology this year, I picked a modern intro text, 'cause the field has changed so much in the past ~50 years.
There's a pair of schools in the United States (St. John's College in Annapolis and Santa Fe, NM) that tries to do exactly this. They have the students learn proofs by reading Euclid and learn calculus by reading Newton. They don't even have majors. Everyone studies the same thing. Classics only. My ex-girlfriend from back when I was in my early 20s went there. I'm not really sure how it worked out. She ended up becoming the only person I have ever known who became a primatologist, which took forever because so few universities even offer a PhD in primatology.
Fun fact: The Netherlands has exactly such a designation, being that "medium-to-large companies, associations and institutions with a very good reputation, which have existed for at least 100 years" can call themselves 'Koninklijke' (translation: Royal) as in 'Koninklijke Philips NV'. I wouldn't be surprised if there's something similar in other countries too.
Singapore vs Malaysia is a matched counterexample to Lebanon vs Syria. All four countries started out as Islamic kingdoms, albeit at opposite ends of the crescent. To the extent that either Malaysia or Singapore was a country in 1920, they were the same one. It was in 1965 that Singapore won its independence, or, if you bought your newspaper at the other end of the causeway, a certain cancer was excised from the Malay body politic.
Lee Kuan Yew is a paragon of authoritarian high modernism; Malaysia is where James C. Scott spent his 18 months as a padi farmer. But, on any material measure, Singapore is winning.
James Scott is much more concerned with Upland Burma than Malaysia. Also, Malaysia and Singapore weren't the same, Malaysia was a collection of historical small sultanates including cities and countryside and inhabited by mostly Malays.
Singapore was a created cosmopolitan port with very little countryside, no history, and inhabited by Chinese immigrants.
The idea of anti-fragility is very important, but this is really an example of someone having a Big Idea.
Ironically, by application of his own argument, this Big Idea is itself fragile. It is exactly the kind of theory he complains about.
The problem is that he is just flat-out wrong about it in many ways, and we already have a much more useful model that is more generally applicable - natural selection.
Natural selection is what happens when environmental pressures act on a system, resulting in "survival of the fittest". The result is higher efficiency.
But if you look at what actually results in the best results, it's actually *artificial* selection. Artificial selection works many orders of magnitude faster than natural selection does. We have made crops that are vastly better than wild plants, and genetic engineering has allowed us to make even better ones in just a few decades.
Many good systems are irreducibly complex and will never arise naturally as a result. Likewise, natural selection doesn't always select for positive traits - take the dodo: it evolved the way it did because it would have been wasteful for it to evolve otherwise. The fact that so many island species evolved this same way shows exactly this. Natural selection is no defense against going down a blind turn and smashing into a wall.
Indeed, natural selection works at its best with a moderate level of pressure - too high and the animals tend to die out before selection can even really affect them. When a gigantic meteorite struck the Earth 65 million years ago, most things didn't adapt - they just died.
By way of analogy, if you have an event that destroys most businesses, you might not be promoting only the best businesses; you might be promoting businesses which happened to have a characteristic that protected them from that event. That doesn't mean those businesses were "better" in a macro sense. For example, the COVID-19 pandemic has killed a lot of in-person things and promoted online things - but that doesn't actually mean in-person stuff is *bad*; it is just that the selective pressure forced people in a certain direction. If we spent a year under severe cyberwarfare conditions that almost shut down the Internet, then in-person businesses might thrive.
Blind selective pressure is not "good" or "bad". Evolution lacks foresight. An island population might be very fragile to outside invasion, but it is also less likely to get external pathogens in the first place. If a pathogen gets introduced to Maine, it will likely spread to Florida; if a pathogen gets introduced to Hawaii, it is less likely to be introduced to Midway.
Indeed, there's little evidence that being on a large landmass even makes you antifragile in the first place; the "fragility" of island ecosystems is really because humans got there recently enough to see the effects. Humans already killed almost all the North American and Eurasian megafauna in prehistoric times.
Really, the fact that more advanced, sophisticated, interconnected societies tend to dominate their neighbors is a strong point against the idea that they are inherently fragile; indeed, the supposedly "anti-fragile" city-states have almost entirely died out or been absorbed into much bigger countries.
His whole thesis is really just scattered and full of motivated reasoning.
Competition *is* desirable, but he is trying to connect a lot of disconnected ideas because he has this Big Idea, and so he is awkwardly cramming everything into it, no matter whether or not it makes sense.
I don't disagree with your actual point, but I disagree on natural vs. artificial selection. Artificial selection is better at producing plants that are useful to us because natural selection isn't trying to do that.
Also, while artificial selection can work pretty quickly, so can natural selection if the environment changes suddenly. There's a famous example of British moths evolving darker camouflage in response to pollution staining trees black. In the past century or so, African elephants have increasingly become tuskless, making them unattractive to ivory hunters.
Edward Luttwak wrote "Give War a Chance" along similar lines, but I think that was less about controlled burns than "war making as state making".
Eric Falkenstein also said that Taleb's theories imply that selling insurance should be a terrible business that frequently results in bankruptcy, which doesn't actually fit our reality of relatively long-lived insurance sellers.
"Roman numerals (the only numerals anyone had at the time) were too unwieldy to add or subtract"
I don't think that's actually true for people used to using them. It's really large numbers where they get too long compared to a base-10 numeral system.
Willmoore Kendall, the "wild Yale Don" involved in National Review's early days, argued that Socrates' death was justified... based on Socrates' own beliefs (and that he willingly drank the hemlock rather than escape with his supporters because it was his only philosophically permissible action).
Robin Hanson has also noted that mergers tend to be value-destroying, and thinks that they are undertaken anyway for reasons of internal corporate politics (similar to his reasoning for management bringing in "consultants" to recommend the thing they wanted to do anyway).
First, evolution and exercise are processes, not systems; the systems are the ecosystem and the muscles. When an environment is stable, life does not lose the ability to evolve; it just adapts to that stable environment. When things change, the process of evolution will still occur. If the environment is volatile, species will adapt to the specific nature of that volatility, and may need to evolve differently should the volatility patterns change.
Likewise with exercise: if muscles were truly antifragile, why would trainers, physical therapists, and orthopedists be so busy? Muscles grow in response to the proper stresses; if the type of "volatility" is wrong, injury occurs.
The common ground between rationality and Taleb's project is an area well worth exploring - I'm glad you raised it in the last couple of paragraphs. Taleb's natural tendency to aggressively dismiss attempts to understand systems probably obscures how mutually beneficial the two philosophies can be to each other.
I actually wrote a blog post on the relationship between the two almost exactly a year ago!
On mergers, some of the diseconomy of scale that results seem to be due to big companies turning into mazes: https://thezvi.wordpress.com/2020/05/23/mazes-sequence-summary/ I think there's a real institutional design / corporate governance problem to be solved here -- how can you scale up without this happening?
Over 80 years ago, Ronald Coase wrote about exactly why firm sizes equilibrate at certain levels. Economies of scale imply upward pressure on the size of firms, and transaction costs imply downward pressure. The optimal size of a firm is at the intersection of these lines. The transaction costs are overhead, limitations of management, and basically the "maze of middle management".
It makes sense that your average firm is roughly optimally sized, and that a merger would send the firm into disequilibrium over economies of scale and transaction costs - in other words, the maze becomes too comprehensive, and the gains from economies of scale aren't enough to offset the growing transaction costs.
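As a toy illustration of that intersection (the cost curves here are made up for the sketch, not from Coase): per-unit production cost falls with firm size while internal coordination cost rises, and the optimal size is where the sum bottoms out.

```python
# Hypothetical Coasean trade-off: scale economies push costs down,
# coordination ("the maze") pushes them up as the firm grows.
def avg_cost(size):
    production = 100 / size    # per-unit cost falls with scale
    coordination = 0.5 * size  # internal overhead rises with scale
    return production + coordination

# The optimum sits where the two pressures balance (near sqrt(200)):
optimal_size = min(range(1, 101), key=avg_cost)
print(optimal_size)  # 14
```

A merger that doubles `size` past this point raises average cost again, which is the disequilibrium story above in miniature.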
Are you sure "transaction costs" is the correct term for what you're talking about? My understanding was that "transaction cost" usually referred to the cost of *market* transaction -- i.e., transaction costs lead to larger firms, not smaller. I don't know what the word is for these sort of organizational or coordination costs, but I don't think it's transaction costs.
I'm not sure it's 'correct', but 'transaction cost' seems to cover the idea that merging two companies involves real (and often significant) costs that, apparently, often overwhelm the benefits of the formerly-separate companies cooperating as a single company.
What is the principle that connects the Lindy effect and the anthropic assumptions of the Carter Doomsday argument? I can kinda glimpse something, but I don't really see the connection.
The Doomsday argument says that there's a 95% chance we're in the last 95% of humans to exist. So if there have been X humans so far, there's a 95% chance there are no more than X * 20 humans total.
The Lindy effect just extends this to how long things have existed. If something has existed for 10 years, there's a 95% chance it doesn't live beyond 200. However, if something has been around for 100 years, there's a 95% chance it doesn't last past 2000. You can make a similar argument for the minimum bound too, so things that have lasted longer are more likely to continue to last.
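For concreteness, the arithmetic behind that bound is only a division; here's a small Python sketch (the Copernican assumption and the 95% figure are the only inputs):

```python
def doomsday_upper(age, confidence=0.95):
    """Upper bound on total lifespan, given current age.

    Assumes we observe the item at a uniformly random point in its
    life (the Copernican assumption): with probability `confidence`,
    at least a (1 - confidence) fraction of its life is already past,
    so the total lifespan is at most age / (1 - confidence).
    """
    return age / (1 - confidence)

print(doomsday_upper(10))   # ~200: the 10-year-old thing
print(doomsday_upper(100))  # ~2000: the 100-year-old thing
```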
This seems very different to me. The Lindy effect is like saying "if Alice plays Russian Roulette 5,000 times in front of you and survives, but Bob only plays it once and survives, then it is reasonable to bet that Alice's chance of surviving the next N rounds will be higher than Bob's chance." (In other words: the implicit statement is that if someone survived the past 5,000 iterations it is likely that there is something about their situation that produced this result -- e.g., you have more confidence that Alice's bullet is actually a dud, while Bob's might be a live round.)
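The implicit update in that Russian Roulette story can be made explicit with Bayes' rule. The 50/50 prior on the bullet being a dud and the 5/6 per-round survival odds are illustrative assumptions, not anything from the comment:

```python
def p_dud(n_survived, prior_dud=0.5):
    # Survival is certain with a dud; with a live round, each
    # spin-and-pull round is survived with probability 5/6.
    p_survive_if_live = (5 / 6) ** n_survived
    return prior_dud / (prior_dud + (1 - prior_dud) * p_survive_if_live)

print(p_dud(1))     # Bob: ~0.545, barely more than the prior
print(p_dud(5000))  # Alice: ~1.0, almost surely a dud
```

After 5,000 survivals the "live round" hypothesis is effectively ruled out, which is why betting on Alice over Bob is reasonable.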
On the other hand, the Carter Doomsday argument is weird. It uses the Copernican principle to assume the number of Roulette experiments that have happened or will happen. I'm not comfortable enough with it to say that it's wrong, but it feels very different.
I'm not sure that I see how religion is antifragile. Organized religion appears, in particular, to exist for the purpose of shielding morals and ethics from the memes du jour, for the sake of protecting them as they contain deeper wisdom that may not be apparent at the surface. Every virtue in an organized religion is a Chesterton Fence, but wouldn't the theory of antifragility say something like "you can get rid of all the Chesterton Fences and this thing should get better" ?
Ah I suppose you are right that the religion itself can be an antifragile thing, but I think what I was thinking is that the religious adherent is perhaps a fragile entity, as they are steadfast in their morals and traditions and seem to "not thrive" when those traditions are removed from their life, e.g. church closures during the pandemic.
So... by the same token, moving from a personal wordpress site SSC to a guaranteed-income model of ACX makes Scott more fragile, the opposite of his stated goal.
Starting his own clinic also makes him more antifragile. He swapped his guaranteed income as a psychiatrist for the guaranteed income from the blog, and the uncertain income as a blogger for the uncertain income of his own clinic, so arguably there is no net change in fragility.
“I think part of its response would draw on Taleb's previous arguments that people underestimate the risk of black swans, so the world will be more volatile than they think.”
On my reading of Taleb, the point is that there are two relevant distributions: first, there’s the probability distribution for events, including the tail events he focuses on in Black Swan (e.g. the probability of a big stock market crash); and second, there’s the distribution of outcomes, which is the focus of Antifragile (e.g. the price of your investment).
The first is taken as a given—or, more precisely, it’s taken to be never ever understood properly no matter how hard you try; tail events include things that have never happened yet and no model will capture the probability of things you’ve never seen or thought of. Our failure to model this usually leads us to underestimate its likelihood (hence, the whole Black Swan book).
The second is the focus of this book. The distribution of outcomes has two tails (good or bad, right or left), and the point of antifragility is to open oneself up to the right tail while not being subject to the left. The banker is subject only to left-tail events and is therefore fragile; the taxi driver is antifragile because he is open to right-tail events (the worst he can do in a week is make no money, but the best he can do is “infinite”). Ideally you set yourself up so that the distribution is right-skewed like this; even if your mean outcome is worse (or, looks worse because your model doesn’t properly account for tail events), an increased access to right-tail events is worth it. Hence, Taleb’s “barbell” investment strategy, etc.
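A quick Monte Carlo sketch of the two outcome shapes described above; all of the income numbers and probabilities here are invented for illustration, not Taleb's:

```python
import random

random.seed(0)  # reproducible toy simulation

def banker_month():
    # Fixed salary with a rare blow-up: bounded upside, fat left tail.
    return -50_000 if random.random() < 0.01 else 10_000

def driver_month():
    # Variable income floored at zero, with a rare windfall:
    # bounded downside, fat right tail.
    income = max(0.0, random.gauss(8_000, 3_000))
    if random.random() < 0.01:
        income += 100_000
    return income

months = 100_000
banker = [banker_month() for _ in range(months)]
driver = [driver_month() for _ in range(months)]
print("banker worst month:", min(banker))  # the rare blow-up shows up
print("driver worst month:", min(driver))  # never below zero
```

The banker's distribution looks safer month to month, but all of its surprise is on the left; the driver's is noisier but its surprise is on the right, which is the asymmetry the "barbell" is meant to capture.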
If the space of outcomes is non-negative, like [0,\infty), it’s even more important to guard against the left side, because if you go to $0 then you don’t get to keep playing the game any more. (I don’t remember which book this point is from (I don’t think it’s Antifragile, maybe Skin in the Game?).)
The problem I have with this is that Taleb never gives us an indication of when we ought to ignore his advice, and it devolves into a Pascal's Mugging.
Like, if we can't assign subjective probabilities to any outcome, how do we know which imaginable downsides we should treat as low-probability in our day-to-day lives?
I have a model about how likely there is to be a dragon lurking outside my front door. But if there was one there it would be a disastrous outcome for me. It seems Taleb would say "you can't trust your model so focus on the downside" but in this case I would never go outside. Obviously Taleb ignores all kinds of extreme downside risk in his daily life, and he uses some model of the world to do that. We all do.
I think Taleb's cautions about not falling in love with your model of the world and about paying close attention to skewness in the outcome distribution are important, but he never seems to give us a limiting principle.
Yes I agree with you on this. He says "pay close attention to rare but catastrophic downside risk" but does not worry about dragons. Presumably there is some level of low probability that is low enough that we can ignore it? I don't know if Taleb would agree with that statement but it seems clearly true.
That seems exactly backwards though, right? The taxi driver has a pretty hard upper limit on what they can make in a week, since revenue scales linearly with the number of fares, and there is only so much time in a week and limited demand for taxi rides. Also, the taxi driver and the banker have the exact same lower bound on income: $0 per week. The banker, on the other hand, could get promoted into a position where they get a bonus, or get a job at an investment bank or as a trader, where their upside (while still in practice bounded) is much less tied to scaling up effort. It is a bit strange, since he spends a lot of time talking about this in Black Swan.
A better example than the taxi driver (and one Taleb uses iirc) is the stripper. Looking only at the income distribution, a stripper is protected from the downside by having income fluctuations, while also getting exposed to the tail event of a billionaire client providing 10x lifetimes of income due to his infatuation.
The taxi driver example captures the limitation of the downside but fails to capture the upside.
It still doesn't really make sense to me though. Both the stripper and the banker have the same downside risk (making $0 per month). The stripper does maybe have more of an upside but the banker also has an upside. More to the point, if we assume the cab driver is working for himself then he has MORE downside risk because he has to invest in a car and keep it maintained. So he can lose not only the revenue stream of cab fares, but also lose his invested capital.
I think the point he is trying to make is that the person with variable income ends up living a lifestyle that is more robust to income shocks because having unsteady income forces them to not take on certain financial commitments. But doesn't that just mean that they live a lower average quality of life for a given level of average income (or else they have to finance the same quality of life by smoothing out income with debt, making them more prone to negative shocks)? Maybe Taleb thinks that is better but it honestly has the ring of a rich guy waxing poetic about the nobility of poverty.
I worked in restaurants for a decade, much of it as a delivery driver or waiter, receiving minimum wage plus a variable income. A great day of tips was $100, or about 2x the base wage.
I was working at a Denny's on 9/11, and I made $4 in tips. Anyone who saw the news knew that something awful had happened, but I had the benefit of immediate evidence that my income would be affected.
Fast forward to 2007, and I'm working as an intern for a well established public tech company. As the absolute lowest member of the department, my starting salary is more per hour than I made in any job in my life. I'm delighted, of course, and ready to settle into the tech middle class life.
The crash hits, my company's stock goes from $20 to $2 overnight, and the board decides to lay off anyone who is classified as a contractor. As an intern, this includes me.
I get worse than laid off - I'm notified that I will be laid off, three times over two years, but never actually laid off. Instead I receive pay just above unemployment compensation, but I have to continue working to receive it.
One mentor was fragile: he lost his job simply because he was the only PM without a critical deadline coming up. A 20-year professional, he would be unemployed for three years, carrying a mortgage and a family.
One mentor was antifragile: a freelancer making $120/hr, she had a new contract making $150/hr in two weeks, as companies looked to reduce the liabilities of fixed salaries while still keeping the business going.
The lesson I received, which I find Taleb codifies well:
Life contains disorder and downside risks, and often the jobs we think are protecting us from these events are actually only isolating us from the signals of these events. An antifragile job is one where you receive more signal about these events, so you have more room to maneuver and possibly profit.
The best job is robust, but this is rare, and we often can't distinguish between a robust job and a fragile job until it's too late.
Given the option of a robust/fragile job or an antifragile job, the antifragile job is better because at least you know the score.
I agree in some ways that there is a reasonable point to be made. The freelancer has to be resourceful and build contacts so when faced with a negative shock they are able to adapt more quickly (because that is what they are used to doing). So the freelancer maybe already had other contacts and opportunities that they could capitalize on immediately while the FTE thought they had a stable job so never built a network or learned skills outside of their narrow domain.
And for what it's worth I think that is a good idea. I guess I just disagree on three levels with how Taleb in particular represents it:
1. He conflates fragility of a system with fragility of an individual within a system. Which I think leads to some perverse outcomes when it comes to individual decision-making.
2. A lot of what he puts forth as anti-fragile at the individual level reeks of drawing incorrect conclusions from survivorship bias. Take your example: the freelancer is obviously better off, but it doesn't follow that the FTE would have been better off being a freelancer. It may be that people who are talented enough to make it as freelancers are just better at this stuff, so of course when there is a shock they are better able to adapt. I find this particularly frustrating because Taleb talks so much about survivorship bias in Black Swan.
3. After reading the book, I have no idea how to change my decision-making at the relevant margin. Your immune system is anti-fragile, and exposing yourself to pathogens at a certain level is good and protects you from more dangerous pathogens in the future. But that doesn't mean I should go around licking doorknobs to try to infect myself with everything. Clearly there is SOME level of exposure that is good, but just as obviously there is some level of exposure that is bad. So where does that tipping point happen? It's not just that Taleb doesn't answer that question; he seems openly disdainful of even framing the decision that way. To again take your example: I can definitely see how the anti-fragile freelancer gained useful skills by exposing themselves to volatility and so was able to adapt to a shock more easily. But in my 15-year career in the tech sector, the pattern I see most often is that when companies hit bad times, the first people they let go are the freelancers and contractors (because they are easier to fire, and it is also better for the morale of the remaining FTEs).
So I think there are a lot of good lessons buried in the book but it suffers badly from Taleb just trying to fit everything he doesn't personally like into the fragile category and everything he does like into the anti-fragile category.
IIRC he framed business owners as fragile but entrepreneurship as antifragile. I recall him even writing something to the effect of "we should celebrate the sacrifice business owners make which contributes to the whole system." Would that change your perspective on the conflation of the two levels?
Yes, both the cab driver and the banker can make $0 in a given month, but I think part of Taleb's point is that it signals something different in both cases: if the cab driver has a bad month, he gets to keep playing and try to make up the lost money next month; whereas if the banker makes $0 some month, it means he lost his job completely, or worse, maybe the stock market crashed and he's unlikely to find another job in his field.
For the banker character in the book, this is even more devastating than a mere job loss, because now he can't make his mortgage payment; when your life is built around knowing exactly how much you're going to get paid and knowing where you will spend it, any shock to the system breaks it. I think this is why the banker character is fragile in Taleb's story.
I agree that a stripper may be a more salient example, but I can also tell a story where the cab driver gets a windfall fare (a cross-country trip or some such) and makes 10x what he normally makes in one month.
Is Taleb really suggesting that you can invent something as complex as, say, the modern MRI machine just by tinkering around with wires and things in your garage? What?
I mean, yes, all engineering requires a certain amount of experimentation; but it's guided experimentation, not just random guessing. The theory is the guide.
I agree. Even if Taleb is locally correct both about new discoveries being made by tinkering and about theorists coming in later to systematize what the practitioners already know ("teaching flight to birds"), it seems obvious that the systematic understanding is invaluable to the *next* generation of tinkerers.
You can't get from discovering fire to building space shuttles JUST by tinkering. You've got to periodically consolidate your knowledge along the way.
"But the interesting constant is that when a result is initially discovered by an academic researcher, he is likely to disregard the consequences because it is not what he wanted to find - an academic has a script to follow."
I'm a researcher in experimental biology, so this got me thinking.
My first reaction was to strongly disagree. Scientists love accidental discovery stories. "I noticed unexpected thing X and I had enough breadth of mind to realize that meant Y might be true, and that led me to make a major discovery I hadn't been looking for" makes you a real hit at conferences. To the extent there is a script (hypothesis-driven research in your discipline, I suppose), the ability to improvise when things go off-script is widely admired. You might imagine granting agencies would be upset if you take your research in unplanned directions, but usually if you get a high-profile paper out of it they are perfectly happy.
Then it occurred to me that sometimes my students have made unusual observations and, as Taleb predicts, I've discouraged them from following up on them. The first reason is that an accidental discovery and an experimental artifact can be hard to distinguish. The second is that when a project drifts too far from your own area of scientific expertise, you have to learn a lot of new literature and you are prone to making stupid beginner errors. When you supervise a bunch of people and have to keep a bunch of projects on track, it's a big time expense to pursue a new field. You don't see many labs with one virology project, one chromatin project, one metabolism project, etc. Most professors can't keep up with all those fields of literature well enough to direct them. So there's a natural tendency for projects to be scuttled when they drift too far from the lab's core expertise.
How do you solve this? Collaborations can help: show your weird finding to someone with more specialized expertise and go from there. A few months ago a colleague got a strange result and didn't know what it meant, but he realized it involved a gene that I studied and had his student talk to me. Now it's the most exciting project my lab is working on. Even if you don't have a lot of different kinds of expertise in one lab, you will have them in one department or university.
I wonder how one could study this question rigorously "do scientists stick to planned paths too tightly". Unfortunately I lack specific expertise in this area and will not pursue it further.
I'm but a humble code-monkey, not a scientist, but I work with scientists a lot. In my experience, accidental discoveries are indeed somewhat common; but "accidental" does not mean "totally random". What happens often is that the scientist is pursuing some area of research, devises an experiment to distinguish between multiple possible hypotheses, gets an unexpected result, then tries to understand it. But these multiple hypotheses don't just arise out of a vacuum, or a voice heard in a dream, or divine inspiration; instead, they are the result of applying detailed understanding of scientific theory to the subject at hand. And interpreting the results -- no matter how surprising -- requires a lot of hard work in organic chemistry/physics/etc.; plus of course the baseline knowledge of statistics and data science/machine learning. You don't just light a random chemical on fire and go, "wow, it turned blue, I guess I'll build an MRI machine with it!"
Collaborations can help, but venturing outside your own organization, dealing and negotiating with other labs and experts, is costly. People want to keep work inside their own organization for good reason.
It's not possible to be directly long or short the VIX. The VIX has mean-reverting behavior: when it's low, it's expected to rise over time, and when it's high it's expected to fall over time. Since this is common knowledge, a security whose price tracked the VIX wouldn't clear, because there would be more buyers than sellers whenever the VIX was below the long-term historic average and more sellers than buyers whenever it was above it. What you *can* do is trade cash-settled VIX futures. If the VIX is at 10 today (representing a very placid market), futures settling several months from now might be trading at 15, so you could buy those futures and be long volatility, but if the VIX only rose to 14 in that period, you'd be losing money even though the VIX went up just like you predicted. This is what prevents betting on volatility during seemingly-placid times from being an easy market-beater.
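To make the arithmetic concrete, here's the hypothetical trade from the comment above as a few lines of Python. All numbers are the illustrative ones already used there; the $1000-per-point multiplier matches CBOE VIX futures, but treat it as an assumption for this sketch.

```python
# Hypothetical long-volatility trade: spot VIX at 10, a future
# settling in a few months trading at 15 (both numbers from the
# comment above, not real quotes).
future_entry = 15.0        # price paid for the cash-settled future
spot_at_settlement = 14.0  # VIX rose 40% from 10... but not to 15
pnl_per_point = 1000       # assumed contract multiplier ($/point)

pnl = (spot_at_settlement - future_entry) * pnl_per_point
print(pnl)  # -1000.0: the VIX went up as predicted, yet the trade lost
```

The point of the sketch is that the futures curve already prices in the expected mean reversion, so "volatility will rise" is not enough; it has to rise by more than the curve implies.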
> And everywhere else, people really do underestimate volatility, and antifragility systematically is underpriced?
Haven't read the book, but I suspect Taleb's point is that Lindy / folk wisdom tends to ~correctly price antifragility while "legible" / intellectual wisdom tends to underprice it. And that the latter kind of thought controls an increasing amount of modern society and is making a play for more (see also: rationalism). At least, that's the argument I'd make if I were him.
Isn't that robustness rather than antifragility? (On the other hand, aren't half of the examples from the book just robustness rather than antifragility?)
I do like the example though, because it drives home the point that optimising for robustness is done at the expense of something else.
I'm confused about the Syria/Lebanon plot. Was that a version of what was in Taleb's book, or a snide rejoinder to Taleb by Scott? The plot clearly shows that Lebanon was way ahead of Syria for the entire duration of the measurements, going back to 1820. And the additional divergence around 1950 was not that something happened in Syria to suddenly depress growth - Syrian growth continued as before. Instead, there was a massive increase in economic activity in Lebanon in the 1950s, which a little Googling shows was due to Beirut being the financial center of the post-WWII Middle East's connections to Europe.
On the other hand, since the plots just look like a plain old exponential to the 1950 value for everything prior to 1950, I'm unconvinced that anything prior to that date is all that reliable.
I'm not convinced it has any data points before 1950. It looks suspiciously like it has a dubious extrapolation before 1950, which someone has drawn dots on.
It even clearly says under the graph that it is not suitable for comparing income levels between countries. I will count this in Taleb's favour. When fact checking goes wrong the author gains extra points.
The examples of evolution and collections of city-states vaguely reminded me of a metaphor from the video game Obduction. I can't find the actual text, but it went something like:
Once there was a gardener who carefully separated their seeds into separate plots, and tended and pruned their plants dutifully to keep the whole garden neat and organized. But despite the gardener's dedication, the plants grew sickly, and their garden never flourished. Eventually, they gave up and stopped tending the garden. Then one day, much later, they came back and found the garden was lush and filled with thriving plants, growing wild in every nook and cranny.
They argued that allowing seeds to be scattered to the wind is often bad for the individual seeds, but it's good for the species. Lots of independent big risks taken by individuals leads to a lot of individual suffering but allows the collective to capitalize on opportunities they couldn't have found otherwise and thereby expand the total resource base of the species.
(Warning: Generalization from fictional evidence. Pretty sure competent agriculture has higher food yields per acre than gathering-from-wilderness does.)
“This chapter (and honestly the rest of the book) only makes sense with an assumption that antifragility is systematically mispriced”
There is no “proper” price.
In financial markets, prices constantly change, sometimes drastically! Prices are not static, they’re dynamic. One could say prices are always wrong, thus always changing, trying to be less wrong.
Investors seek to own what another investor will purchase more for in the future. Wise investors are long term investors.
In the long term the winning investments are antifragile.
Economics classes teach the Efficient-Market Hypothesis, the idea that prices reflect all available information and you can’t “beat the market”.
Haha, that is false. Humans misprice *all the time*
Humans may misprice things all the time, but that doesn't mean the market isn't approximately efficient. People sing out of key all the time, but a large enough group of untrained people can sing notes correctly. The usual response to someone saying the EMH is wrong is, so why aren't you a billionaire?
> In the long term the winning investments are antifragile.
That's not true, though, and this is a common criticism of Taleb's writing on finance. (There may have been some truth to it when Taleb was a trader, but markets have evolved a lot since then!) Put options tend to be overpriced compared to their expected payout. The money-making strategy is to be _selling_ put options...but that's a fragile strategy because of nasty tail risks. Indeed this is basically the business model of insurance: sell lots of tail options that people are willing to overpay for, and hope that you're diversified enough to endure the risks.
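A toy simulation shows the shape of the put-selling trade: steady gains punctuated by rare, large losses. The premium, crash probability, and crash size below are entirely made up for illustration, not real option prices.

```python
import random

def sell_puts(years, premium=1.0, crash_prob=0.05, crash_loss=15.0, seed=0):
    """Toy model (made-up numbers): collect a small premium every year;
    in a rare crash year, also pay out a large tail loss.
    Returns cumulative wealth after each year."""
    rng = random.Random(seed)
    wealth, path = 0.0, []
    for _ in range(years):
        wealth += premium
        if rng.random() < crash_prob:
            wealth -= crash_loss
        path.append(wealth)
    return path

# Expected value per year is 1.0 - 0.05 * 15.0 = +0.25, so the seller
# grinds upward on average (the puts are "overpriced"), but a single
# crash wipes out 15 years of premium at once: profitable and fragile
# at the same time.
path = sell_puts(60)
print(path[-1])
```

Flip the sign of every cash flow and you get the Taleb-style put *buyer*: small steady losses, occasional large wins, and negative expectancy under these made-up prices.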
I’m not an options trader and can’t comment on put options pricing.
What I refer to as an antifragile business (to invest in) is one that will *benefit* from adversity because they (the humans and tech) will adapt, cope, innovate and grow more robust, intelligent, wise.
It’s easier to identify these companies in hindsight than looking forward.
It also makes more sense to discuss antifragility in relation to the variance.
Is your investment antifragile to a recession? Pandemic? Innovation? War? Climate change?
Ah yes, sorry I misunderstood. Fragility of the business model is definitely an important thing to consider when choosing companies to invest in (see also: cyclical vs countercyclical industries). I don't know anything more about whether this factor tends to be under- or over-priced by the market.
I'm thinking of an experimental drug as increasing volatility - it might cure you, or it might have side effects that kill you. A healthy person has little upside (a drug can't make them any healthier) but high downside (a drug could kill them). A terminally ill person has little downside (doesn't matter if it kills them, that would have happened anyway), but high upside (it might cure them).
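The asymmetry can be put into a toy expected-value calculation. All the numbers below are invented for illustration; the point is only that the same drug, with the same probabilities, flips sign depending on the stakes.

```python
def expected_value(p_good, gain, p_bad, loss):
    # Simple two-outcome expected value (probabilities are made up).
    return p_good * gain - p_bad * loss

# Same drug, same 30% chance of a big effect either way; only the
# stakes differ (arbitrary "health" units).
healthy  = expected_value(0.30, 1, 0.30, 50)   # little upside, big downside
terminal = expected_value(0.30, 50, 0.30, 1)   # big upside, little downside
print(healthy, terminal)  # negative for the healthy, positive for the ill
```

Nothing about the drug changed between the two lines; the convexity of the patient's situation did.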
Drugs can certainly imply volatility. I think humans in general are antifragile to a lot of drugs.
Let’s consider a real experimental drug in testing today - psilocybin.
I tried it when I was healthy, experienced discomfort and volatility, then experienced benefits in the following days. Fresh perspective, hugged a stranger.
A terminally ill person is likely to also experience volatility with psilocybin, then benefits. It could even cure their existential angst!
So regardless of the amount of upside or downside, sick and healthy people can both be antifragile to an experimental drug.
Humans appear to be tremendously successful in dominating the biosphere despite being extremely fragile in evolutionary terms:
1. We reproduce slowly and in small batches of offspring compared to lots of other mammalian species, or species in general. That means selection in general happens much slower for us than with, say, rats.
2. Our survival is highly dependent on a socially transmitted set of knowledge that takes years to learn. Take that away, and we're creatures that can freeze to death outside of tropical areas because we don't have any fur.
On one hand, there are all the arguments about us being robust because of our ability to modify the environment/adapt with our brains. Those are boring, though.
The more interesting point is... well, fragility doesn't mean "unsuccessful". It means "tremendously successful until something goes wrong". I won't bet on humanity going extinct in the next 100,000 years - that's the kind of bet on which you can never collect - but surviving that long wouldn't even be halfway to the average lifespan of a mammal species.
I think a lot of the threads around here are coming down to the same thing: we're discussing "fragility" without specifying fragility with respect to _what_, which is a pointless discussion.
There's no such thing as a generalised fragility or robustness with respect to all possible disturbances. A rat can survive many things that can destroy me, and I can survive many things that can destroy a rat. Likewise if you replace the rat with lion, or a cockroach, or an elephant, or a stone wall, or a delicate Ming vase.
Maybe not the sun. I'm struggling to think of anything that would kill the sun but not me, so perhaps there _is_ some kind of generalised notion of fragility in which I am in general less robust than the sun.
Well...we've only been tremendously successful for a maximum of 40,000 years. Rather an eyeblink in evolutionary biology terms. The sauropods could've made the same argument after their first 10 million years of dominance with much greater evidence in its favor. "Clearly massive size and armor plate are the keys to success...."
The taxi driver vs. banker example seems to have been disproven by the current pandemic. The taxi driver is hosed, because the massive reduction in personal travel has outlasted his ability to survive a smaller income. The banker is still collecting his salary while working from his home office. Individual bank branches might be fragile, but banking as an industry seems pretty anti-fragile - it will survive at least as long as capitalism does.
Beyond that, of course, taxi drivers never made as much as bankers. So the bankers could buy themselves some volatility protection via savings and investments that the taxi driver could not afford. The banker might have an income of 100% or 0%, but if he can live for a couple years on 0% while the taxi driver will go broke on 6 months at 50% pay, the banker is less fragile.
The banker working from home in a pandemic does not mean it's anti-fragile. It's like the rock: robust. The institution is robust insofar as it remains unaffected by a pandemic.
In that case, the taxi driver is both fragile and antifragile - there are some forms of volatility that make the cabbie worse off and some that make him better off.
But if you have to split hairs over the exact *type* of shock, that seems to be giving up most of the value in the idea. There's no generalized factor of "preparedness" or "adaptability" that makes you more antifragile against every type of risk.
Yeah, I thought about that after I posted - probably would have added an edit if the comment system here allowed it. Still, even if not “anti-fragile”, the banker seems less fragile than the cabbie.
It actually seems to be a bit of a stretch to say the cabbie is antifragile. I would say he is also “robust”, but he gains his robustness through flexibility rather than strength. Not a rock but a willow branch, or something. The cabbie would be strictly better off in a world with consistent high demand for taxis where he could have a predictable, high income stream. He just adapts (out of necessity) to a world with volatile demand.
Yea I agree. Cabbies are robust, not anti-fragile. But it might be true that cabbies are generally more robust than bankers, and that pandemics are just the exception to the rule. I think that you can still say that one occupation is more robust than another *on average*. I just think the cabbie example isn't very good.
Taleb is right about a lot of stuff and also needs a good dick punch. He talks about skin in the game and grit and such, but his books and tweets are all a good example of Matt Levine’s definition of a great hedge fund manager: one who collects more in fees than the investors’ initial capital. Which Taleb does - his fund loses money ten years in a row, collecting fees all along, and then in year eleven profits enough to make up for all the losses. He also makes a lot then too. A bit like a bodega owner (antifragile?) who makes money selling lottery tickets and then gets a payout when one of its patrons hits the Mega Millions pot. Which is all fine! But just as “news” is really advertising with some news attached, Taleb’s books are really hedge funds with some book attached. Doesn’t make them bad books, but probably explains their heft.
Shameless plug: https://thepdv.wordpress.com/2019/06/03/a-general-theory-of-bigness-and-badness/ is my attempt to specify explicitly why the pattern seen in Book Five w.r.t organizations and countries happens. I think it has more gears than Taleb's take and therefore is more likely to be useful. (Which does not imply it's more likely to be _correct_, TBC.)
> He praises Switzerland, which is so federal that it's barely a single country at all, and argues that its small size (or rather, the small size of each canton) has helped it stay one of the world's most stable and prosperous areas (also, Venice!).
> So, a glib take you’ve probably heard is that the problem with Big Government, Big Business, Big Etc. is not the government or the business or the etc. but the “Big”. This is extremely superficial and is essentially elevating a trivial idiosyncrasy of the English language to an important structural principle of the universe, which makes about as much sense as nominative determinism. I think it’s true anyway. Here is my theory of why:
I’m a bit surprised you don’t mention Karl Popper here. If I recall, Popper’s thoughts on induction are behind a lot of Taleb’s thinking. I’m no expert in Popper, but I am curious about how to reconcile Popper’s thinking with the rationalist way of thinking. Anyone thought about this?
Popper made bad attacks on Bayesianism for his entire career. His stupidest one was actually published in Nature (he and David Miller argue that the confirmation E gives to H can be factored into the contribution E gives to HvE and the contribution E gives to Hv~E, and the former is all deductive, and the latter is negative, so there can be no such thing as positive Bayesian confirmation).
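For the curious, the Popper-Miller decomposition is easy to check numerically. Here's a small sketch over an arbitrary made-up joint distribution, writing the "support" E gives A as s(A, E) = P(A|E) - P(A): the identity s(H, E) = s(H∨E, E) + s(H∨¬E, E) holds, the first term is purely deductive (E entails H∨E), and the second is never positive.

```python
from fractions import Fraction as F

# Arbitrary joint distribution over the four atoms (made-up numbers).
p = {("H", "E"): F(3, 10), ("H", "~E"): F(1, 10),
     ("~H", "E"): F(2, 10), ("~H", "~E"): F(4, 10)}

atoms = set(p)
H    = {a for a in atoms if a[0] == "H"}
E    = {a for a in atoms if a[1] == "E"}
notE = atoms - E

def prob(event):                 # event is a set of atoms
    return sum(p[a] for a in event)

def cond(A, B):                  # P(A|B)
    return prob(A & B) / prob(B)

def support(A, B):               # s(A, B) = P(A|B) - P(A)
    return cond(A, B) - prob(A)

lhs = support(H, E)              # total confirmation E gives H
rhs = support(H | E, E) + support(H | notE, E)
print(lhs == rhs)                    # the decomposition holds
print(support(H | notE, E) <= 0)     # the "inductive" part is never positive
```

Whether this shows Bayesian confirmation is illusory, or merely that this particular way of factoring it is uninteresting, is exactly what the ensuing debate was about.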
I don't think it's quite true that Lindy = Doomsday. The Doomsday Argument uses one specific generating process: sampling a point on a finite interval, and gets Lindy as a result. But you can get Lindy from lots of generating processes:
- A geometric series with unknown rate and uniform prior.
- Nick Bostrom's x-risk model of drawing balls from an urn.
- Time until you beat your current highest sample for any given distribution.
- Time to return from a random walk. (Probably. I haven't worked out the details of this one yet.)
Some of these are different representations of the same process, but I'm not sure all of them are. So I suspect Lindy's Law is deeper than the Doomsday Argument.
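The record-beating process above is easy to demonstrate: by exchangeability, the probability that the max of the first n i.i.d. draws is still the max after k further draws is exactly n/(n+k), which is the Lindy scaling (a record that has survived n steps survives n more with probability 1/2, whatever n is). A quick Monte Carlo sketch (my own toy code, not from the book):

```python
import random

def survival_prob(n, k, trials=50_000, seed=1):
    """Estimate P(max of the first n i.i.d. draws is still the max
    after k further draws). Exact answer: n / (n + k)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.random() for _ in range(n + k)]
        if max(xs[:n]) == max(xs):
            hits += 1
    return hits / trials

# Lindy scaling, independent of scale:
print(survival_prob(10, 10))   # ~0.50
print(survival_prob(50, 50))   # ~0.50
print(survival_prob(10, 30))   # ~0.25  (= 10/40)
```

Note this drops out of pure exchangeability, with no assumption about the underlying distribution, which supports the "deeper than Doomsday" suspicion.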
I'm torn between specialization and antifragility.
On the one hand, I should specialize in narrow fields to increase efficiency, e.g.: I don't know anything about agriculture, cannot start a fire by myself, and am incredibly fragile if left alone in the wilderness; our civilization gives me every incentive to ignore these things and focus solely on good performance at the workplace.
On the other hand, antifragility requires me to diversify my skills and expose myself to environmental volatility to maintain my ability to survive; but as a jack-of-all-trades I'd lose my job to young enthusiasts who go all-in, given how competitive my industry is.
I'm not an expert, but it seems to me that COVID is a pretty strong refutation of the "theory isn't any help in medicine" theory, at least in its wider sense. Even if the story about Moderna developing its vaccine in literally two days (https://www.businessinsider.com/how-moderna-developed-coronavirus-vaccine-record-time-2020-11) wasn't quite true, we still saw the development of multiple vaccines which turned out to be effective within weeks or months of the emergence of a new virus. I don't know whether there's a reason to think vaccines are very different in this respect from other drugs (or inventions as a whole), but it does seem to be a striking success story.
The danger of black swans isn't just that they're rare, unpredictable, and large. It's that we don't know how large they can get, even after studying past black swan events in the space.
We tend to talk about the Carrington Event as though it's a worst case event that might be repeated. It's not the worst case. According to the math and physics we know today, we have no idea how big the worst case might be. (Source: A keynote talk at the 2020 New England Complex Systems Institute conference.)
We look at deadly wildfires and think that the fire in Paradise, CA was shockingly horrible (it was) and so it must be a worst case. It's not close to a worst case. A fire tornado followed the 1923 Great Kantō Earthquake. That fire tornado killed 38,000 people. What if the Paradise fire had started upwind of a major city in similar conditions? Could we see 100,000 dead? A million?
My takeaway is: We are not prepared, and perhaps we can't be prepared, for some of the actual plausible worst case events.
Well the good news is: there's an upper limit to how bad disasters can get; they can kill everyone.
We already know a few low (but not _that_ low) probability events that can kill everyone, so we don't have to worry that we're neglecting something that's a thousand times less likely but ten thousand times worse.
Also according to my several minutes of research on the subject it looks like the 38,000 people who died in the fire tornado were all in the same building or building complex, having fled there after the earthquake.
Good point on upper limit of disaster magnitude. However, short of extinction, there are a number of underappreciated disasters.
Yes, the people were all in one shelter area. If wind driven wildfire swept a major city, there would be lots of people in shelter and last stand areas. You can't evacuate that fast.
I feel like my own field of academia -- astrophysics -- runs counter to a lot of Taleb's "theory vs practice" argument.
A lot of the 20th century's major discoveries were not accidental bolts from the blue, but resulted from people being guided by theory in order to design their experiments just right.
Einstein came up with General Relativity - the most important theory in modern cosmology - by immersing himself in the theory, incorporating work by lots of other scientists (people like Maxwell and Lorentz), and trying to attack a particular problem. And he famously succeeded.
Accidental discoveries did happen, of course. Like Hubble discovering that the Universe is expanding. But still, Taleb's version of the process -- someone makes a practical discovery, and theorists come along later and hastily try to explain what's going on -- just doesn't really fit here. Even before Hubble, scientists were aware that Einstein's equations really seemed to imply an expanding Universe (and people came up with all kinds of kludges to 'fix' the problem). Hubble showing that the Universe is really expanding caused a feeling of 'oh thank goodness, the theory was right all along'.
Or take the discovery of the Higgs Boson. Or gravitational waves. Or the first exoplanet. In all of these cases *theory came first*, and experimenters, guided by the theory, knew where to look.
I don't think Einstein's equations require an expanding universe any more than Newton's. In either, you have the question of why a static gravitationally bound universe would not be visibly collapsing. Einstein's equations allowed for a constant of expansion, but it is still not really used. And exoplanets are kind of obvious. The other two I will give you.
The truth is that both theory and observation go together, and a field lacking either will be sick.
The point is that, to produce a universe as we see it, it has to be evolving. Which is the obvious answer if you take Einstein's equations with no cosmological constant. The initial conditions implied were computed by Lemaître.
In a Newtonian universe, there was just no solution possible.
>I don't think Einstein's equations require an expanding universe any more than Newton's.
That's not true, I'm afraid. Newton did of course realise that his law of universal gravitation made the Universe prone to collapse (in 1692 he wrote to a friend, saying the whole Universe might "fall down to the middle of the whole space & there compose one great spherical mass"), but it was possible to just say that the Universe was infinite, and therefore didn't have a centre of mass.
In Einstein's Universe, change is inevitable. Alexander Friedmann was the first to realise this, in 1922. The equations of GR directly imply a Universe that is either expanding or contracting -- which is why Einstein invented the 'cosmological constant' kludge to hold the Universe static.
When Hubble discovered the expanding Universe, Einstein threw the cosmological constant away and embraced the dynamic Universe his equations implied.
Well, perhaps Newtonian gravity has more wiggle room for a static universe. But in any case, Hubble's observation that the universe was visibly expanding was really just a synthesis of earlier observations - such as those of Vesto Slipher, who observed around 1912, before Einstein's theory of general relativity was published, that most distant galaxies were receding.
You know, I'm always bemused by the number of people (even within education) who assert that the purpose of formal education is to imbue the student with a beautiful theoretical framework that will allow him to easily predict and calculate all he needs to know about the Real World that he is about to enter.
This is deeply and even obviously silly (although plenty of people doing the educating think this way, perhaps to pamper their own egos). The only rational purpose of education is to summarize and distill the past, so that the student learns all that has been done before (in some relevant area) far more efficiently and quickly than if he had to stumble upon it himself by chance in the Real World.
It is (or ought to be) a *past-focussed* process to greatly shorten the time of complete n00b apprenticeship, so that the student can become a journeyman in the Real World much sooner and achieve mastery at a younger age. That is, it's logical foundation *is* skepticism about theory versus Real World experience. It says "learn all the ways people have tried X and Y and theory Z and T and why they didn't work as fast as possible, in a planned firehose of information dump, so that you can go out in the Real World sooner and NOT repeat any of the umpty-six dumb mistakes people have made since AD 800 or so."
That doesn't, of course, mean that the real purpose of education has remained uncorrupted, or that nitwits, both within and without education, haven't enthusiastically debauched it. That they have -- the Church of Education cultists are almost as obnoxious as the Church of Science cultists. But in principle education should be a big buffer against volatility -- an "antifragile" enterprise -- because by allowing students to learn of the experience of far more people, in far more situations, than would easily be possible in any Real World situation of equivalent duration, it makes far fewer of the curveballs life and Nature throw at us come as an utter surprise.
Personally I view the modern fashionable disdain for the institutions that helped us tame and ride chaos (as well as the descent of those same institutions into fossilized rococo courtly competitions) as a kind of broad late-Empire intellectual decadence, the kind of hazy sentimentality that might've led the late-Empire artisan chafing under Imperial taxation and corruption to fantasize that the life of a medieval village smith would turn out to be a tremendous improvement for his grandsons. Ah! The fresh air! The simple joys of the peasant life in the harmonious shtetl nestled in the bucolic countryside, free of any distant scheming Senators. From which mistake follows 1000 years of muddy plague-eaten misery, but maybe that's what happens when (a) some of us mistake a rational system for a religion, and (b) the rest of us are too impatient to scrape away the barnacles and decide to just go all Canticle for Leibowitz on the whole thing. If rationality has been so thickly coated in ritual that it is hard to recognize any more -- why not treat *all* rationality as ritual and just give yourself over to impulse? That'll work out well.
> to imbue the student with a beautiful theoretical framework that will allow him to easily predict and calculate all he needs to know about the Real World that he is about to enter.
> to summarize and distill the past, so that the student learns all that has been done before far more efficiently and quickly than if he had to stumble upon it himself by chance in the Real World.
I don't think these two things are as mutually exclusive as you seem to think they are. That beautiful theoretical framework (ideally) *is* a distillation of everything we've learned by doing things the hard way.
Yes. But it's not so much a framework that we think is robustly predictive, but rather one that rationalizes a ton of experience. Theories are required to summarize the past very, very well. Do we also construct them to predict the future? Kinda sorta. We usually use them to rule out experiments or ideas that can be shown to be too similar to what has not worked in the past. But we wouldn't *do* research at all unless we hoped and expected the theories to *not* be accurately predictive in some area or another.
That is, if the academy were equal to its caricature, something that professed to believe it could precisely predict anything anywhere on the basis of its theories, then it wouldn't attract intellectually curious people at all. If you *believe* there's a Theory of Everything (or at least Everything Of Interest To Me) and once I learn it I can just run a computer program or something and calculate anything -- why bother? Why study, why think about things, why even hope to have the idea that nobody else has had yet?
The comments on small versus large nations made me imagine the US as 50 sovereign countries. Imagine the diversity of culture, social systems, and economic systems in such a world. Of course, who knows how many "intra-US" wars would have been fought over a couple centuries. If you had a choice between one bigass US (as today), or 50 sovereign nation states, which would you choose?
In a vacuum? The intra-US wars. I already suspect (although I hope I'm wrong) that we're in the early stages of gearing up for another civil war that will be way worse (even more so than the last one) than any interstate bickering. However the US exists in the world, and I'm not sure the "Pax Americana" hasn't been worth it from a purely utilitarian standpoint. Although per the whole subject of debate maybe that too is a false tranquility. It is (as Taleb repeats ad nauseam to an uncaring world) hard to say.
Yes, in a vacuum. Imagine there was never a union and each state developed its own sovereign history. Perhaps Massachusetts would still be Puritan. Louisiana would have an unrecognizable language. Some states might have open borders with others. Others might have border walls.
The question is more epistemic versus utilitarian, since it's quite impossible to model the net utilitarian impact of such a massive divergence. So, everything being equal in the two scenarios - avg. GDP; avg. life expectancy; overall lives lost in the last 2.5 centuries to wars, disease, famine - which sounds more appealing?
Doesn't seem like a possible comparison. You can ask what we might see if the 50 present states decided to all split into separate nations now, but they wouldn't exist if we'd never reached the present of one nation. Which of the 13 original states was going to buy or conquer the rest of North America if they'd never united? Borders and ownership would look a lot different. Presumably some of the original 13 would have merged anyway. New England as X distinct nations doesn't make much sense. The Louisiana Territory is unlikely to be subdivided in such a way as to balance slave with non-slave US Senate seats when there is no US Senate. Former parts of Mexico are most likely still a part of Mexico. Hawaii and Alaska would probably just be part of Japan and Russia.
Net effect is pretty hard to predict. I guess maybe better for former plains tribes that might still exist? Vastly different Europe if we're not spending two and a half centuries clearing the frontier of natives so they can send their huddled masses. Most of the Pacific Rim probably belongs to Japan, which likely doesn't make a huge material difference to the people there. No Pax Americana, but there's probably something like a Pax USSR anyway. Maybe communism even works without a vast capitalist beast to force them into an arms race to bankruptcy? Does the whole swath of world from Egypt to Afghanistan look a lot different with only Russian and European meddling but no American meddling or does it look basically the same?
What the hell does East Asia look like? Seemingly Japan can probably conquer China in the 1930s if they don't have to fight an eastern front, but they can't seriously hold on for 90 years after that, right?
If you just mean me personally without thinking about how the rest of the world gets impacted, I rather like the union. I've been able to live in and freely move between many different states without ever having to go through an immigration process. But man, you can go levels deep with this. I'd be Mexican, not Mexican-American, which is worse in the real world, but is it worse when all the oil riches of California and Texas belonged to Mexico? My family might be oil barons. Does Spain buy Louisiana instead of the US doing so? Does oil-rich breadbasket-of-the-world Mexico avoid becoming a craphole narco state and come to dominate North America, while Virginia and Pennsylvania wage centuries of petty cross-border squabbles, the eastern seaboard balkanized and plagued by never-ending religious wars and rising dictators that Britain, France, and Germany periodically have to send in squads to put down, to help Mexico keep its northern border safe so the beef, grain, and oil keep flowing?
Surely, the rest of us couldn't just stay European colonies forever, right? They lost all the other ones too at some point. I'm kind of just assuming Napoleon still invades Spain and Mexico takes advantage to win its independence at about the same time. Do we ally with Virginia and Maryland in a great Catholic alliance against the protestants like Saudi Arabia and Iran fighting their proxy battles for cultural dominance in the middle east?
"Only make sense with an assumption that antifragility is systematically mispriced": it is. Antifragility benefits systems over cases and the collective over individuals: again, evolution. *Individuals* don't want big shocks, and it is certainly anti-humanitarian to say that the weak and the unlucky should die for the benefit of the strong and lucky, as both you and he point out. So we tend to seek the stable, the predictable, the smooth, meaning that such things are overpriced due to (misguided) demand.
Nobody (for the most part) *likes* the idea, I think Taleb is just arguing that it's a better model of the world than the ones currently being used, and that it *matters* because the current at-odds-with-reality models are disaster-prone. I also agree that I don't think he'd take umbrage to the Rationalist movement: the whole thing (and I realize I'm grossly oversimplifying and unlike you I wasn't there at the beginning so correct me if I'm wrong) seems to me to have started when Yudkowsky looked around and said "hey, why do all of these intelligent educated people believe in and do all these patently absurd things? There must be some important thing here besides intelligence and education that we're failing to reify".
"Taleb never makes this claim, and I think it would be hard to argue that an entire category of instrument has been consistently mispriced since forever. But then what is he trying to say here?"
I think this is exactly the claim he makes. Just in a reverse "picking up pennies in front of the steam roller" sense that it will take a long time. I don't think Taleb believes in the EMH.
Re mergers: "The combined unit is now much larger, hence more powerful, and according to the theories of economies of scale, it should be more "efficient". But the numbers show, at best, no gain from such increases in size [...] There seems to be something about size which is harmful for corporations."
Ronald Coase did work on this in The Nature of the Firm (1937). In a nutshell, the size of a firm is a function of economies of scale (favoring expansion) and transaction costs (favoring contraction). Firm sizes equilibrate at the intersection of these lines. In other words any given firm is probably roughly as big as it ought to be, and if you merge two firms you're likely to introduce higher transaction costs, which your gains in economies of scale are not large enough to offset.
Transaction cost in this case is basically the friction with which information flows inside the firm. So overhead, the likelihood of managers poorly allocating resources etc.
I think in some cases, mergers and acquisitions actually reduce transaction cost via vertical integration (i.e. company A is your largest supplier, and you are by far company A’s biggest customer. It might be a net reduction in transaction costs to merge and turn your external purchases into internal transfers)
I think the economy of scale vs transaction cost model assumes mergers of similar firms and “mature” firms (that is they need to have had time to grow to the size they “ought” to be). Also no major leaps in technology - certain forms of transaction are of course cheaper and occur with less latency than they were in 1937.
Yea, it's true. Integrating a supplier can reduce transaction costs, because there are also costs associated with bargaining, adverse selection, and keeping trade secrets etc. That's actually also part of Coase's work. The reason integration can reduce transaction costs is why we have firms in the first place; technically a perfectly efficient market would have every worker be a freelancer, where they bargained over compensation every time they performed a task, and they hopped between employers all the time to optimize talent. But because we don't live in a zero transaction cost environment, that becomes prohibitively expensive and we create firms. Mergers, where suppliers are successfully integrated, are cases when transaction costs are on net reduced even though the organization grows.
So in short firms form because integration is a way to reduce transaction costs, economies of scale puts more upward pressure on optimal firm size, and then transaction costs ultimately constrain firm growth at the top-end.
Also an interesting observation: leaps in tech, as you mentioned, are what made transaction costs sufficiently low as to allow freelance work in what we today refer to as the gig economy. It's an extension of Coase's framework.
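A toy sketch of that equilibrium (the cost curves and all numbers here are invented for illustration, not from Coase): per-unit production cost falls with firm size as fixed costs are spread out, while per-unit coordination (transaction) cost rises with headcount, and the optimal size sits where their sum is lowest.

```python
def unit_cost(n_employees: int) -> float:
    """Per-unit cost for a firm of a given size (made-up functional forms)."""
    economies_of_scale = 100.0 / n_employees   # fixed costs spread over more output
    transaction_cost = 0.05 * n_employees      # internal coordination overhead
    return economies_of_scale + transaction_cost

# The optimum: big enough to exploit scale, small enough to stay coordinable.
optimal_size = min(range(1, 500), key=unit_cost)
print(optimal_size)  # 45 with these toy parameters
```

Merging two already-optimal firms doubles the headcount and pushes unit cost back up, which is the Coase-style explanation for why the merged numbers often show no gain.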
This distinction between discovery by "accident" and by research seems very arbitrary to me. What do you think those scientists/engineers were doing when those accidents happened? Most scientists (most good ones anyway) are well aware that the direction of research sometimes has a life of its own, but that doesn't mean you can eliminate research while keeping the "accidents" to which it leads!
The Portuguese most likely discovered Brazil by accident in the context of their programme to find a maritime route to India by following a coherent theory that you could just sail around Africa. Does this show that all their work trying to map the African coast and trying to model the Atlantic wind patterns could have been ignored in favour of just sailing aimlessly around the Atlantic? Or does it illustrate that a deliberate programme to discover/explore one thing/aspect/field/whatever will often yield unexpected results with unforeseen benefits - but which wouldn't have happened if they weren't doing the "research" in the first place? Hint: without the wider programme to find the maritime route to India, the Portuguese wouldn't even have developed ships capable of making it to Brazil.
Now, there is an argument to be made that maybe currently there's too much effort dedicated to incremental research compared to looking for breakthroughs (which would arguably make these "accidents" more likely). It's still all research though, and it doesn't invalidate that both types of approaches are important - even if one is obviously sexier.
"John fancies himself protected from volatility. But he is only protected from small volatilities. Add a big enough shock, and his bank goes under, and he makes nothing. George is exposed to small volatilities, but relatively protected from large ones. He can never have a day as bad as the day John gets fired."
Of course he can. George gets into a car accident - pretty likely when you spend all your time driving - and not only has he lost his job, he's lost his cab, which he needs to get further employment as a cab driver. If John gets fired, the only thing he needs to find another job as a banker is his brain.
Everything is antifragile until it encounters a risk that wasn't included in the model.
"according to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division."
That seems like crazy talk. Any time you have N items and M people who want to share them, you divide N by M. Even if you do it like a Turing machine would (going around the M people and having each take one item until you have fewer than M items left, all the while incrementing a counter for the number of rounds), you're still dividing.
Is Guy's claim that you never have N items and M people? How is that even possible?
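The round-robin procedure described above is easy to write down; this sketch just formalizes "hand one item to each person until you can't complete a round":

```python
def share(n_items: int, m_people: int) -> tuple[int, int]:
    """Divide by repeated subtraction: the quotient is the number of full rounds."""
    rounds = 0
    while n_items >= m_people:
        n_items -= m_people  # everyone takes one item this round
        rounds += 1
    return rounds, n_items   # (items per person, leftovers)

print(share(17, 5))  # (3, 2): three each, two left over
```

Presumably Beaujouan's claim is about written long division with numerals, not this kind of physical sharing.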
In the up-front trichotomy between fragility, robustness, and anti-fragility, it is not at all obvious to me why one would prefer anti-fragility over robustness. I suppose the argument is that most of modern society has a false confidence about how robust it really is, but it doesn't follow that the solution is "embrace volatility" as opposed to "anticipate under-appreciated possible sources of volatility and take steps to avoid them."
I would say you're right about hoplite and phalanx formations - they're quite powerful, but also fragile, and once they start to crack, it's often all over from there.
If you absolutely wanted to force the fragile/antifragile pattern, then the Roman legion would be the at least less fragile one - while a phalanx has all the tactical flexibility of a thrown brick, legions were designed to be maneuverable, to swap units in and out to combat fatigue, and so on (this is why we get the original Pyrrhic victory - even when they kinda lost, the legions inflicted a ton of punishment, because they could be defeated without things just snowballing from there).
It's also worth mentioning that Spartan _society_ was incredibly fragile, and when subjected to just the effects of passing time couldn't maintain itself one bit.
The notion of 'antifragility' seems to completely break down when applied to military formations. Battle tactics and traditions arise in response to particular situations, enemies and experiences, and cannot be expected to succeed outside of those parameters. The legions looked great against the centuries-old phalanx, which in turn looked fantastic against the chariots and light infantry fielded by their Persian enemy. Presumably, these same Persian formations were highly effective at defeating whatever enemy they evolved to face. I know next to nothing about Persian history, yet strongly suspect that had Taleb written during the Persian golden age, he would have described the invincible armies of King Darius as 'antifragile' without hesitation.
On the other hand, you might argue that the Roman Republic during the Punic Wars was antifragile. Their institutions absorbed immense pain and uncertainty, seemingly growing mightier with each new curveball thrown at them. But then again, maybe this is 'adaptability', much more than it is antifragility. I'm still iffy on the distinction.
What came to mind for me with the hoplite example was that they were citizen soldiers who equipped themselves and fought for their own benefit, rather than slaves or conscripts as was usually the case for opponents. Contemplating how this increases antifragility was informative.
IOW it wasn't the phalanx tactic that defined hoplites - that was a common approach used long before Sparta - and it wasn't the particular weapons and armor since those varied and evolved with circumstances and opponents.
Would a society fielding self supported citizen soldiers be antifragile by Taleb's definition?
"Evolution is antifragile. In a stable system, animals won't evolve. In a volatile system, they will. At times I became concerned Taleb was getting this wrong - animals will evolve to more perfectly fit whatever niche they find themselves in."
If I recall correctly, Taleb adopts a gene-centric, "selfish gene" perspective on evolution. The antifragility of evolution seems pretty straightforward under that view. For example, a population of animals will have genetic variation related to in which temperature they thrive. If the temperature stays the same for long periods of time, genetic variants associated with fitness at that temperature will become more common. If the temperature then changes, those rare individuals with variants associated with fitness at the new temperature will thrive, while the majority adapted to the previous temperature may go extinct. Or if there's no genetic variation left and no fortuitous mutation occurs, the whole population may go extinct, with other animals taking over the newly vacant habitats. (The high polygenicity of many traits could be thought of as an antifragile mechanism: even under strong selection, not all variation is exhausted, meaning that if the environment changes, organisms can still evolve towards the new optimum.)
I believe Taleb says something to the effect that no individual or population or even species is antifragile in the evolutionary scheme. Rather, it is life itself (or genes embodying life) that is antifragile.
I read this around the time it came out and have been thinking about revisiting, but this may have scratched the itch. I, too, enjoy Taleb, but I realized with Antifragile that part of the reason I'm so engaged is that I love to hate his arrogant tone, and because it's a challenge to accept that he's seemingly correct on so many of his points but also contradicts himself terribly throughout the book. E.g., warning about the halo effect, but acting as if he's an expert in exercise physiology when he brags about his weightlifting routine in the middle of the book.
I've long wanted to see an experiment in which Taleb and Pinker switch exercise routines (Pinker starts doing Taleb's deadlifts and Taleb starts riding Pinker's racing bike) to see if their attitudes and opinions reverse as well.
I like that the critique of this book is exactly what you'd expect from Taleb's intellectual attitude - he doesn't have a grand overarching theory of antifragility, but instead a series of anecdotes and thought experiments with some grounding in the real world that you can chew on in order to improve your thinking about the subject.
Writing from a farm in central Kansas, I wonder what Taleb would say about agriculture. There is immense variance involved in the practice of agriculture. However, rather than fostering antifragility, nearly all agricultural practices I can think of are designed to stamp out variance, in order to permit fragile, but hugely efficient practices.
Grain prices jumping up and down? Why be antifragile when we can kill the variance by building silos and storing our grain.
Weather getting you down? Why be antifragile when we can tame the variance with irrigation, state-of-the-art forecasting systems (the daily forecast is probably the highest-rated show around here), and genetic engineering.
Random calving complications? Why be antifragile when we can flip variance off by hiring a vet to oversee tough cases?
Not to mention the reliance on increasingly-complex machinery (combines and trucks, of course, but also increasingly GPS, and many others) which requires parts, fuel, maintenance, an uplink to space, incomprehensible supply chains, and a million other things without which the whole thing comes crashing down.
A world where we designed agriculture to be antifragile, is almost certainly a world where Taleb goes hungry.
I imagine Taleb would point to the Famine in Ireland, which seems to have been due to an over-reliance on two high-yielding types of potato, both of which turned out to be fragile to the blight.
A system that has 1000 varieties of crop will yield less on average, but the minimum yield will be higher.
So the core question is whether you optimize for the highest average return, or the highest minimum return.
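To put toy numbers on that trade-off (all yields here are invented for illustration): a monoculture beats the diversified mix on average, but its worst year is total failure.

```python
# Hypothetical yields per weather scenario (made-up numbers).
monoculture = {"normal": 120, "drought": 80, "blight": 0}
diversified = {"normal": 70, "drought": 60, "blight": 50}

for name, yields in [("monoculture", monoculture), ("diversified", diversified)]:
    average = sum(yields.values()) / len(yields)
    worst = min(yields.values())
    print(f"{name}: average={average:.0f}, worst={worst}")
# monoculture: average=67, worst=0
# diversified: average=60, worst=50
```

Optimizing for the average picks the monoculture; optimizing for the minimum picks the mix.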
Taleb would also argue that his dying from hunger is not the worst thing that can happen to his bloodline.
I might be getting the nomenclature wrong, but I would presume that planting many diverse crops makes the system more stable and redundant. The word 'resilient' comes to mind, but I'm not sure that 'antifragile' is it.
Limited mechanization, small acreage farming, or foraging seem like clearer examples of an anti-fragile food supply. I'm just not sure they're anywhere close to desirable given the efficiency tradeoff, and I suspect the same is true in most settings.
Agreed! There isn't enough conversation here about what the actual tradeoff is. Fwiw Taleb is introducing the possibility that there is a tradeoff at all (maybe common in your industry but less common in others) and noticing that we consistently overvalue efficiency over resiliency.
The intellectual demands of being a farmer these days are striking. Being a farmer in Kansas increasingly demands a STEM undergrad degree and the equivalent of an MBA.
No doubt! (I should clarify that my ties to the Kansas farming community are through marriage and affection, rather than profession or upbringing).
It takes an impressively broad skillset to run a family farm, let alone do so profitably. You are so right about the often overlooked 'MBA' part of it: successfully marketing a harvest takes serious strategic thinking, and can greatly impact a year's profits.
I have no idea what it takes in general, in spite of actually living next door to a few farms several times in my life but not being a particularly friendly neighbor, but the one farmer I have known, who was a girlfriend's dad, was an electrical engineer who decided he'd rather be a farmer and bought up some land in Amish country. Only reason it worked in such a shitty market for small timers is he had the skill to build his own generators and run the entire operation off of waste vegetable oil he got for free from the restaurants he sold his vegetables to.
I tried to read this book after hearing much praise for Taleb. I had to give up in frustration pretty quickly as, to me, it was just a lot of arm-waving. In particular, he employs the pseudo-intellectual practice of first creating, and then discussing, his own private terms of art. But the terms are never defined and change at will to fit whatever point is supposedly being made. There is no clear hypothesis that could ever be tested, and no useful rule or insight is ever forthcoming.
Shorn of all the jargon, he seems to be saying nothing more than: "stuff happens, it's hard to predict, act accordingly." I don't get what people think they are getting from his books.
Making up your own terms like "lindy" seems to be a way to get an enthusiastic audience. I guess it creates a community of people who know what Taleb means by "lindy."
I looked it up just now and apparently it's a version of "the test of time" heuristic. You know, like if the Pyramids have been around a long time, they will probably last a while longer also. I guess it's named after a "Lindy's" restaurant in NY that is famous for being around forever despite having objectively crappy food.
So invoking the "Lindy" effect sounds like a cute way of saying "past trends tend to continue, until they don't."
One of my (many) problems with Taleb's writing is that it doesn't lead to any sort of practical model or decision procedure. The fragile/robust/antifragile classification suffers from not being mutually exclusive nor well-defined, and so it's mostly useless when it comes to applications. The comments here have already highlighted many problems with the classifications Taleb gives in his book, suggesting that the classifications are not well-defined enough that people can agree how to classify things. Moreover, something might be antifragile to small changes, and fragile to large changes: an example is muscles, which Taleb points out are antifragile to small stresses (getting stronger with exercise), but they are fragile to large stresses (strains and tears can cause permanent damage).
Without the ability to clearly classify things as fragile/robust/antifragile, the theory lacks any predictive ability and greatly limits its usefulness. There's definitely interesting things to be said about systems that take advantage of natural disorder, but I feel like the framework Taleb sets up falls short of a working theory.
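One way to make the muscle example concrete (a toy response curve of my own, not anything from the book): the response to a stressor is positive below some threshold and sharply negative above it, so the same system is antifragile or fragile depending on the dose.

```python
def response(stress: float, threshold: float = 10.0) -> float:
    """Toy hormesis curve: gains below the threshold, damage above it."""
    if stress <= threshold:
        return stress ** 1.5              # moderate exercise builds strength
    return -((stress - threshold) ** 2)   # overload tears things

print(response(5) > 0, response(20) < 0)  # True True
```

Which is exactly why a bare fragile/robust/antifragile label, without a stated range of stresses, underdetermines the classification.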
I totally agree with the idea that anti-rationalism isn't opposed to rationalism - it seems like a natural result of using rationality on itself. This is where 'why philosophers should care about computational complexity theory' feels relevant:
If you aren't consciously thinking about how accurate your model might be, what its limits are, and where it's going to be wrong, you're probably assuming a naive model of computational complexity in which it's cheap and easy to compute totally accurate answers really fast.
Likewise, if you ignore the fact that people are computers, you might naively think we should be able to scale societies arbitrarily large. Once you understand that human beings _are_ computers and that our societies are networks of computers, it becomes reasonable to conclude that governance systems are network topologies, and not all network topologies are going to scale to arbitrarily large degrees.
If he's opposed to anything, i think it's something like a blind faith in experts, and trust in an existing system, rather than a willingness to prioritize evidence-based thinking, and skin-in-the-game predictions, over "what those smart people think."
I'm very curious about this. What insight about human societies requires the explicit description of humans as computers? Does Scott believe sociologists are unable to effectively contemplate issues arising from the size of human settlements, without appealing to big-O notation?
- people can only have close relationships with ~150 other humans (Dunbar's number)
- people will look out for friends of friends, and friends of friends of friends, but consider anyone beyond that a stranger
This implies an upper limit of 150^3 = 3,375,000 human beings can interact with each other, because you eventually have people who are connected so distally that they aren't really able to care about each other.
I have no idea what Scott believes on this issue.
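The arithmetic behind that bound (150 is the usual round figure for Dunbar's number; the three-hop cutoff is this comment's assumption, and since friend circles overlap heavily, it's only an upper bound):

```python
DUNBAR = 150  # approximate limit on stable close relationships
HOPS = 3      # friends, friends of friends, friends of friends of friends

upper_bound = DUNBAR ** HOPS  # each hop multiplies reach by at most DUNBAR
print(f"{upper_bound:,}")     # 3,375,000
```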
I can't tell if you're being snarky about big o here, but there are a bunch more examples.
Thanks for clarifying. The idea that computational limits affect the human experience has been my main blogging focus. Here's another example:
The thing I find most notable about Taleb is how many clearly intelligent people hate him and his theories, without reading one of his books.
If you hate him, I wonder: compared to whom?
Taleb is generally more right, more insightful, and more actionable than Malcolm Gladwell and the TED crowd.
Taleb is more correct and useful than most academics in social sciences.
Taleb is approximately as correct and insightful as Ted Kaczynski, and likely less dangerous on net.
Taleb is likely less accurate on hard sciences and the patterns of pure invention. His discovery here is a phenomenon where you can win without being right, yet he's seemingly preoccupied with being right.
I suspect the biggest umbrage is not that Taleb is a bully, but that he packages his philosophy with just enough math that those who live and breathe math find they have to deal with the ramblings of a mad philosopher when interacting with others.
I'm not sure I understand your last sentence - math-savvy people are annoyed by this mad philosopher? But how is this annoyance specifically triggered when interacting with "others" (other philosophers? other "math people")? Can you rephrase it for me, please?
By the way, like it's the case with many people who have a point but are widely disliked, it's not really a mystery - Taleb just seems to act a bit obnoxious as a person. You'd be surprised how much even highly intelligent people care about that kind of stuff, beyond whether someone's technically right. Also, many of the "intelligent people" are the sort of the people he directly antagonizes, plus, in most current intellectual/academic spheres it's good practice to frame your ideas as adding on to/furthering discourse rather than stating that most of the people before and around you are basically idiots, and you are now going to enlighten them.
I personally don't hate Taleb by the way - I read Black Swan and found it interesting (if too long) and I was a bit taken aback when I discovered his online blogging and tweeting persona, but I'm not invested enough to feel strongly either way. I just don't think it's surprising at all that many people would ("hate" him).
I think the basic thesis in the first few paragraphs falls apart as soon as you try to pick apart what is meant by "the better it does" and "does well" here. These imply value judgements or objective functions of some kind, which glasses and rocks do not inherently have. What does it mean for a glass to do well? Why assume that glasses have an inherent goal of continuing to be vessel-shaped rather than transforming into entropy-maximizing piles of shards?
Intuitively, glasses "do better" by being vessel-shaped because that makes them more valuable to conscious, value-judgement-having observers. But if you define "better" as being a value judgement on the part of conscious observers, rather than the entity itself, then the hydra example falls apart, because arguably a hydra growing more heads is a *worse* state of affairs for the hero fighting it.
If instead you try and salvage this by replacing "better" with "more stable or resistant to being altered", then both the hydra and the evolution examples fall apart.
Maybe I'm overthinking this and it's fine as long as fragility is always assessed for a (object, objective function) pair instead of just for the object itself.
The paragraph "For example, if some very smart scientists tell you that there's an 80% chance the coronavirus won't be a big deal, you thank them for their contribution and then prepare for the coronavirus anyway. In the world where they were right, you've lost some small amount of preparation money; in the world where they were wrong, you've saved hundreds of thousands of lives," bothers the crap out of me, because it seems completely at odds with the message of your post "a failure, but not of prediction." The point is that if the scientists gave a 20% chance of a pandemic, you don't "prepare anyways;" you prepare because a 20% chance of hundreds of thousands of lives being saved justifies an 80% chance of wasting a small amount of preparation money on something that wasn't going to be a big deal.
My point is you're preparing *because* the experts estimated a 20% chance. If they estimated a 0.2% chance, you wouldn't make any special preparations because that wouldn't significantly increase the chance of a pandemic more than usual. You are preparing because of the experts' prediction, not in spite of it.
Yes. But my point -- which I know I didn't spell out, sorry -- is that I read what Scott said as exactly what you are saying. That is, you are preparing because it makes sense given the predictions (but in spite of the fact that 20% sounds low, and a naive System 1 response that doesn't take payoffs into account could lead to not preparing). He is not contradicting his other post. I wonder if many other people interpreted this as you did.
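The expected-value arithmetic behind "prepare anyway" can be made explicit (the cost and loss figures are hypothetical; only the 20% and 0.2% probabilities come from the thread):

```python
def expected_loss(p_pandemic: float, prepare: bool,
                  prep_cost: float = 1.0, unprepared_loss: float = 100.0) -> float:
    """Expected loss of the prepare / don't-prepare decision (toy numbers)."""
    if prepare:
        return -prep_cost                 # small cost, paid in every world
    return -p_pandemic * unprepared_loss  # huge loss, paid only if the pandemic hits

# At 20%, preparing clearly wins; at 0.2%, the same arithmetic says don't bother.
print(expected_loss(0.20, True), expected_loss(0.20, False))    # -1.0 -20.0
print(expected_loss(0.002, True), expected_loss(0.002, False))  # -1.0 -0.2
```

The decision flips with the probability, which is the point: the preparation follows *from* the prediction, not in spite of it.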
I'll finish up last year's pasta store sometime later this year. The opportunity cost of stocking up was minimal, so why not do it in the face of real uncertainty? Same thing with the toilet paper (although it's been thoroughly debunked that the shortage was because of hoarding - rather, it just took a while for supply chains to switch from corporate to consumer).
I'm not convinced that "prediction" in the sense of putting a probability on an outcome is so totally divorced from recognizing tail risk. It's not possible to be robust or antifragile to every conceivable event, and it's certainly not cost effective to protect against all of them equally. Some tail risks are more likely than others, or harder to protect against. A "tail risk" of 1 in 100 or 1 in 10,000 is very different. There are steps you can take to be protected against broad categories of events (e.g. stocking your house with nonperishable food could help in case of pandemic, natural disaster, social unrest, military attack, or a variety of other events that make leaving home or acquiring food difficult) but inevitably most agents will have to choose what to prepare for and that necessarily implies a model about how the world works.
> This is one reason (among many) Taleb disagrees so strongly with Steven Pinker's contention that war is declining. Pinker's data shows far fewer small wars, but does show that World Wars I and II were very large; he interprets the World Wars as outliers, and notes that since WWII the trend has been excellent.
FWIW, Pinker has acknowledged that if WW3 were to happen the death count would probably be astronomical and he still allows the possibility despite the persuasive arguing in Better Angels of our Nature that violence is decreasing.
I ran the same Lebanon v. Syria comparison on the same website as you did, and it shows only a roughly 10% difference in GDP per capita in 1913. Same Maddison data, very different picture when I generated the chart.
Same here. I suspect the review might have been written long ago and that Our World In Data probably changed its methodology since then. One difference between both graphs is that Scott's screenshot says the data were "adjusted based on a single benchmark year (2011) which makes them suitable for comparisons over time but unsuitable for comparison between countries", while the current OWID graph now says "These series are adjusted for price differences between countries using multiple benchmark years, and are therefore suitable for cross-country comparisons of income levels at different points in time" and shows near-identical GDP per capita for Syria and Lebanon until 1913.
Antifragility, the idea that some systems benefit from volatility, seems like an insightful idea. But every example I work through in my head tells me that robustness is the ultimate goal, and antifragility is useful only as a reminder that not all systems need to depend on stability.
Exercise is a great example. The goal is fitness, to be able to physically overcome a wide variety of situations. That's robustness. Exercise is an antifragile system that helps you get fit. But eating, hydration, and sleep are also important, and those are fragile systems. (Maybe eating is antifragile? Intermittent fasting says yes.)
Evolution? Robust is another way of saying fit. Natural selection is definitely an antifragile system, but there are plenty of fragile systems like symbiosis or food chains too.
What about computers, or even smart phones? Fragile in the volatility sense, but not in the practical sense! Are there any antifragile systems in a smart phone? Seems to me it's just fragility surrounded by a robust case. Sure, sometimes there's a catastrophic failure like dropping on a sidewalk or going through the wash, but the cost of those is small compared to the value of the working phone.
So I like the idea of antifragility, but I don't believe it's the best idea ever. It's just one more tool in the toolbox.
I have a notion that anti-fragility exists over ranges, and in particular, that enough stress makes an anti-fragile system stronger, but too much stress will break it.
I *think* this is important because Taleb is so much in love with anti-fragility that he doesn't want to think about his favorite systems having limits.
Which gets to the fact that while I think Taleb's boasting and insults might be part of his charm*, they have real epistemological risks -- his style makes it hard for him to notice whatever errors he might be making.
"But haven't theories given us all sorts of useful things, like science, which leads to technology?"
I massively recommend the book Shock of the Old, dedicated to this subject. It is very brief and dense and really enjoyable. It persuasively argues that the answer is "no" by arguing both the history of how important inventions arise and fall (in the 20th century) but also that our ideas about which technologies are important to us are wrong. A novel (to me) example he gives of how technology works is that he claims that poor mechanics in India understand American cars much better than the people who designed and built them: because in much of India they have to understand how to keep them running for many times their designed lifetime.
I got a bit disillusioned with science in university when it became clear that the epistemology of science was actually to keep fiddling with your model (adding more and more free variables, appropriately justified by this or that idea of reality) until your predictions matched reality. And that the prediction of novel phenomena from these models (that is, the models teaching us things we didn't already know, rather than just allowing us to make accurate predictions in line with statistical or machine-learning methods) is really the exception rather than the rule. You can really see this when you note that, as a whole, Newtonian mechanics has no scientific justification given quantum mechanics: you learn them entirely separately, and just wave your hands, or hold them over your heart, and take it as an article of faith that if we were omniscient we would understand how quantum mechanics gives rise to Newtonian dynamics. Honestly, there is no other way to describe Newtonian mechanics today than as statistical curve fitting, where concepts like "force" and so on are just meaningless free variables (pleasing to our intuition) that we're using to fit reality. Dark matter is another example of this (a thing that is the vast majority of the universe, but only detectable as a magic free variable that helps our equations better fit reality). My current understanding of physics is that it's statistical curve fitting, but where everyone involved is constantly lying to themselves about what they're doing, even though nobody apart from those on the current bottom level (I guess string theory? About which I know nothing, sorry) has any reason to believe otherwise.
Anyway, sorry -- the point is that once planes exist, we can curve-fit the behaviour of planes, and use that to guide our development of better planes. But before planes existed, there was nothing to curve-fit to -- so engineers just had to try stuff out until planes were invented. And then the same thing happened once we got to supersonic planes -- scientists weren't much help, except when they were being engineers. Shock of the Old makes this point about the Manhattan Project: it was an engineering project employing well-known engineers (who are generally known as scientists because that is a more fashionable title -- compare Galileo, who went by the title Philosopher because "Mathematician" didn't command respect).
"Medieval European architecture was done essentially without mathematics - Roman numerals (the only numerals anyone had at the time) were too unwieldy to add or subtract, and "according to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division.""
One thing that confuses me in statements like this. Is it implicit when people talk about "Europe" or the "whole of Europe" in these days that they are talking about Christian Europe? Or do people making statements like this have a blind spot about Islamic Europe in these days?
In any case I think this statement is unfair, because there were plenty of Muslims who were into maths and technology, and there were quiet imports of technology into Christian Europe.
For example, officially, the Catholic Church believes the Pope invented mechanical clocks in 963 AD (an accurate pendulum clock that rang bells for specific hours)... and that it was pure coincidence that this came after an extended trip spent conversing with some Muslim experts, among other things.
The internet is presumably anti-fragile-- it considers censorship to be damage and routes around it. Pretty good censorship (as in China) is still possible, though.
****
Unrelated question: Has Taleb influenced enough people that he's affected what investments get made?
Speaking of the final note on 'anti-rationalism', I think Taleb rather sees himself as belonging (and actually belongs) to the tradition of critical rationalism, alongside Hume, Popper and Hayek. Many of his remarks on antifragile systems seem to me to relate to Hayek's on 'spontaneous orders', just as his praise of risk and adaptation to volatility is slightly reminiscent of Popper's point about making conjectures as bold, and therefore as unlikely and specific, as you can. I think it's in Objective Knowledge where Popper deals with how people seek regularities, stability and balance in daily life, then don't find them and become unhappy because of that.
History person chiming in here - Spartan Warriors were the definition of fragile. The problem is we see them as being lone soldiers, or in a phalanx with other Spartans, and don't look at the society as a whole. Spartan soldiers were essentially idle - they fought, but they didn't 'work', and the society as a whole was structured with a vast, vast underclass of helots and semi-independent Greeks supplying a tiny elite at the top, maintained by terror.
Any serious disruption to this system did far, far more damage than it would have done to an equivalent state like Athens or Thebes, for almost no benefit to the society at all: there is no Spartan art, poetry, music, drama or even architecture.
I kind of wish Substack would implement something like what Webnovel has for comments, where you can comment on specific paragraphs. To keep comments from getting in the way, it just shows a number at the end of each paragraph that has comments, representing how many there are; clicking the number opens a pop-up with all the comments, in branched format.
Mostly so I could have left a quick comment on the large vs small stone paragraph: the gain comes from the square-cube law. For (my favorite) example: going from a cube with edges of length 4 units to edges of length 5 units almost DOUBLES the volume (64 cubic units to 125).
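A quick sanity check of that square-cube arithmetic in Python (the 4-to-5 cube is the example from the comment above):

```python
# Square-cube law: volume scales with the cube of linear size,
# surface area only with the square.
def volume(edge):
    return edge ** 3

def area(edge):
    return 6 * edge ** 2   # surface area of a cube

print(volume(4), volume(5))   # 64 125 -- volume nearly doubles
print(volume(5) / volume(4))  # ~1.95
print(area(5) / area(4))      # ~1.56 -- area grows more slowly than volume
```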
Evolution works because the sun is continuously throwing lots of energy on this planet. Doesn't "anti-fragile" just mean "things that eat energy from the entropy of others"?
>>> "he suggests theory is much less important for technology than we give it credit for. He makes the same point I made in Is Pharma Research Worse Than Chance - a whole lot of drug design seems to happen more by accident (or more politely, through tinkering and investigating) than by smart people using theory to discover drugs" and
"I was surprised to see Taleb point out the same effect in fields like physics and engineering. For example, he argues that jet engines just sort of happened when engineers played around with airplane engines enough"
Matt Ridley's recent book How Innovation Works goes into this in depth; it's almost the thesis of it.
I took part in a Model UN representing Libya in late 2011, and quite clearly remember the Syrian delegation refusing to take part in any negotiations because they'd been sanctioned by the entire world as a result of the uprisings there. So as far as I'm aware and can remember (Wikipedia backs me up), the Syrian civil war began in 2011, meaning Scott's comment that "Antifragile was published in 2012, before the Syrian Civil War" isn't quite true.
> For example, suppose I am long (or short) VIX. If something unpredictable changed to make the world much more volatile, that would be a positive black swan. If something unpredictable changed to make the world much less volatile, that would be a negative black swan.
I assume Taleb would say there is a tail of volatility/fuckery so insane that it rocks the entire financial system to the point where your position on VIX is just paper. VIX operates, like the banker, within a region and just trails off to zero at the point of counterparty risk.
Hi, there is no such thing as "anti-fragile." All things in the universe are fragile, material and immaterial. Some things may gain from disorder, but they are not "anti-fragile": too much disorder will make them fragile. Put options may gain if volatility increases, but beyond a certain point exchanges may go bust due to counterparty risk. Taleb is inventing artificial images to attack. He is excellent at marketing and at convincing crowds he's got something. He's got nothing. He is confused, and his books are repetitions that can be summarized in 500 words.
A populist trying to generalize personal experiences, who thinks the whole universe is options trading. Antifragility is a stupid idea because everything is fragile; even the whole universe can collapse down to a singular point with everything in it. His examples comparing taxi drivers to bankers are beyond stupid. I tried reading Fooled by Randomness many years ago. I never finished the book because it was a painful read. He must have used a novel editor from the UK or Australia or something. You must admit the man has a top-notch marketing department. For him, anyone who doesn't agree is an idiot who gets blocked on Twitter. All his books are full of repetitions and could be compressed to a 5-minute read on a blog. Finally, a fierce proponent of lockdowns while advising a tail-risk fund that benefits from stock market pain. That is pure skin in the game.
"Roman numerals (the only numerals anyone had at the time) were too unwieldy to add or subtract, and "according to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division.""
OK about the Middle Ages, but the Romans themselves had very advanced abacuses that could do math in their weird system of decimal for integers and dozenal for fractions. (Also, contrary to the legend, they obviously knew about the zero.)
Taleb is full of contradictions because his "principles" are weak personal experiences. He's sort of a populist. He advocates anything that can boost his book sales.
As a reminder, the "rationalist" community is very anti-rationalist too (which you kind of suggest); wouldn't the proper term for it instead be "empiricist"?
For some reason I missed this post when it came out; I am only commenting here on how you presented evolution. If Nassim Taleb in fact presented his material as you indicate, it is inaccurate. For one thing, there is no such thing as a stable environment. Secondly, there is no such thing as a stable niche. One of the oversimplifications of Darwin's approach (though he was actually far more complex than he is made out to be, and did not say most of what people think he did) is adaptation to static niches. In actual fact, environment and organism continually alter each other. Environment is not a background with holes in it into which organisms fit themselves; it is more akin to a living field that organisms adapt to. The environment then changes, causing changes in organisms, and so on. Over-simply, the entire ecological scenario is a self-organized, nonlinear, emergent dynamic which operates best when close to the moment of self-organization. If it moves too far from that orientation it becomes static and begins to fail; if it moves too close to the line across which self-organization occurs it falls apart. The healthiest situation maintains a balance point where constant change occurs, neither too far from nor too close to the line across which self-organization occurs. Western science has too long oversimplified evolutionary thinking for the masses, which has resulted in a great many misconceptions -- in part because most scientists never really understood it themselves.
Isn't your concern about Taleb's understanding of evolution misplaced, since evolution tends towards punctuated equilibrium rather than incremental improvement?
Doesn't he repeat some of the anecdotes?
I'm sorry to be the one to break it to you but slaves and slaveowners exist and the developed world's material excess is 100% built off the backs and blood of slave-labour at a certain level.
100% seems a little excessive, the most biased estimates put out by anti-trafficking charities claim that around 40 million people are currently enslaved, which is a lot in absolute terms but low in relative terms.
I mean, if you assume that wage labour is slavery then I guess we're all either slaves or slaveholders, but I think it's reasonable to say that we've made progress away from slavery, which is pretty impressive considering how common it's been in the past.
He/she might've just used "100%" to say that it's "100% the case" that we owe some of our excesses to slavery.
That "40 million" is the "acceptable by the mainstream" version, which only accepts the traditional mechanisms of slavery, and not new and advanced ways to force people to work for you - which cover billions in the developing world.
I hardly think forcing people to work for you requires new and advanced techniques - serfdom and slavery are the oldest tricks in the book (both predate writing).
The main innovation in the modern economy is paying people money -- I think that's a good thing on balance; for most of history, threats of violence were the primary means of exchange between the upper and lower classes. Now the ruling class only has to resort to violent coercion when they can't find anyone willing to do the job for less money! (Obviously multinational corporations still resort to violence more often than most consumers think.)
Honestly, the main reason I object to arguments like yours is not because they're factually untrue, but because I feel like they promote a fatalistic apathy that discourages us from actually trying to make the world a better place.
this
This too is something I hadn't picked up in him, but it does sound plausible.
To be fair, the northern parts of the modern territory of the state of Italy *are* Germanized, and have been for a long time.
"People were saying millions were going to die."
2.73 million people worldwide have died from coronavirus so far, though.
I guess we chronically underestimate tail risks but overestimate the magnitude of damage they'll do once they happen? In this case, it's clear the very few people who cared enough to estimate pandemic risk at all thought this was fairly likely to happen, but it didn't seem to be on most people's or governments' radars; once it happened, the early drama from Italy and New York City made it seem a lot more dire than it ended up being in most places.
Sometimes you're on the brink, though. Next time, it could be a deadlier, even more transmissible virus. Sometimes these overestimates of how bad it will get are dependent on human factors, too. Putting aside whatever job security was permanently lost to the lower end of the wage scale with the appearance of the gig economy, we recovered from that fairly well, but in some counterfactual universe where a few personalities are different and we don't bail out AIG, then what? I guess, according to Nassim Taleb, we become even stronger?
Another issue is just that the world at large is always going to recover on a long enough timescale, until finally it doesn't. I feel like this personally is my own big bias and blind spot. It keeps me from ever really worrying about anything or taking any claim seriously when people think the world is bad or getting bad. I look at the fact that tremendous world wars and genocides and nuclear arms races dominated much of the 20th century, yet things still got better overall almost everywhere, and it makes me way too confident that will always be the case. We can reach back from the brink and recover infinite times, but extinction only needs to finally happen once. It's why the true enemy always wins.
I got in loads of toilet paper weeks before the rush #ANTIFRAGILE
Same. I stocked up in February (on canned food etc. as well) -- not because I took the coronavirus seriously, but because my mom did and I didn't want her to worry. Maybe there's a lesson there!
"Listen to your mom"
"Listen to Act_II's mom"
As noted, millions did die from Covid-19, and according to some reports millions more died from side effects of lockdowns, deferred treatment (e.g. for cancer), hunger and associated physical decline (e.g. in Bangladesh), etc.
This pandemic is probably not "once in multiple lifetimes." The Spanish flu was worse, the Hong Kong flu was at least as bad, and there are probably more coming our way.
It could easily have been MUCH worse in impact on supply chains, e.g. the supply chains for vital medicines imported from China might have been gutted. The Chinese lockdown was short. It could have been much longer, or it could have been long enough to cut supplies for vital medicines to a level sufficient only for China, or only for China and high bidders.
HIV also was worse, and isn't anywhere near over yet (though we're at the stage with anti-retrovirals that is comparable to a really expensive vaccine for covid).
Aren't condoms as effective as a great vaccine in preventing HIV?
Only in the same way that social distancing is as effective as a great vaccine in preventing covid. It's a continuing series of decisions you have to make, and those decisions do have costs to your actual experiential pleasure in life.
As a useful concept to have in the back of your mind, I think some of the examples are clearly real.
When the claims start to get sweeping, there isn't room for all the evidence that would be needed; it has to make way for more sweeping claims.
"animals will evolve to perfectly fit whatever niche they find themselves in"
can somebody who's better than me at evolution address this
I ask because in undergrad I was taught that evolution does not produce perfectly fit organisms, it only eliminates those so unfit that they cannot survive to pass down offspring.
I've edited it to "more perfectly fit" - I think it approaches the limit of perfectly fitted to its niche over time, but doesn't get there.
(other caveats - the niche can change, the adaptive landscape might have unclimbable mountains, etc)
It can achieve the telos ordained for its given form with greater felicity.
Yes, "more perfectly" is a linguistic error. "Perfect" is an absolute, like "unique".
Nothing succeeds like success. Whatever life currently exists is thus by definition the most "perfect" that was possible under all the actual real world circumstances.
That's my story and I'm sticking to it.
"better"
https://www.lesswrong.com/posts/XC7Kry5q6CD9TyG4K/no-evolutions-for-corporations-or-nanodevices touched on the basics. High fidelity of replication requires prohibitively long timescales for adaptations to accumulate, while low fidelity limits the total complexity of adaptations. Evolution can only meaningfully be said to occur in a specific range of conditions (which happens to include known biology).
Wait - isn't it possible that ALL organisms in an environment die, rather than a few always adapting to survive it? In other words, my understanding is that the theory of evolution does not imply that there will be a "fittest" that is guaranteed survival. Trying to refine my own understanding here. I'm no expert in this.
That's correct. The vast majority of species that have ever lived, have gone extinct.
If you're asking this for the purpose of understanding Taleb's example, the thing I took away from this post (and I think this is what Scott implied as well), is that "antifragility" does not have a real, consistent definition, and the examples are therefore "things to think about," and nothing more.
Thanks!
Like individuals, every species dies eventually, but the successful ones have descendants.
Exactly, our humanoid ancestors did pass on their genes to us, which can be seen as success.
After all, many people consider it a success if they procreate and their kids are not an exact clone of themselves.
Kind of. I mean, we can be pretty sure that Earthly life isn't going to survive in the sun's corona (though bacteria can last a surprising time in space, enough so that panspermia isn't entirely implausible).
Within reasonable limits, though, life does usually find a way. The bit that often trips people up is that evolution isn't just selecting *within* species, but *between* them as well.
The granularity at which competition happens is at the gene level, not species level or organism level, according to The Selfish Gene, by Dawkins. Great book. Maybe that idea has been debunked though. I read it many years ago.
It's not one or the other. It's all of them.
As Scott once said, your cells will be selected for cancer as you age and they divide - but you can expect all the genes you inherited from your parents to be selected *against* cancer - or, at least, against cancer that kills you before you finish reproducing.
Similarly, evolution selects for selfish genes within an interbreeding population, but a separated population will speciate and evolution will select for whoever has less selfish genes when an interchange event happens. This is why most organisms aren't 99.9% retrotransposons.
It depends how you define "environment," I guess. Something is always lurking around somewhere to take the place of anything that leaves a space by dying. Since life got a decent foothold a couple billion years ago, "all organisms" have never died out. It's just a continuous process of "out with the old, in with the new." Forever. (For example, if there was ever life on Mars when it had liquid water that life has probably continuously evolved to the current conditions, and we'll eventually find those microorganisms.)
You're correct-ish. Over a long timespan, if every organism is allowed to compete, the species most fit for that environment will end up being the only one left. But that's an idealized scenario, which is how silly species like the dodo lasted for a long time (nothing competed with them for a long time).
But the dodo wasn't a silly species for the niche in which it evolved, which is why it evolved in the first place. Once the niche changed, of course it didn't do well, but that had more to do with specific circumstances (like the introduction of predators) than anything else.
More generally, their extinction didn't really have to do with the fact that they had a stable environment for a long time. There are plenty of examples of invasive species coming in and wiping out the natives, and not just because the native species had become decadent or something. Especially in the case of the dodo, the history of the species doesn't really matter. If a sufficient change occurs in the environment, it won't be able to evolve its way out of it, and it only took about 10 generations to go from first contact to extinction.
But there is a reason why rats and cats (and humans) thrived in the Dodo's territory and not the other way around.
Yes, and that reason is that those species are robust to almost everything (they can live in most climates and eat anything, or at least something very common), not that they are antifragile.
The argument is that they're more robust *because* of antifragility. They're more robust as a result of a higher-risk environment, and so they did better when the environment changed than the dodos (a species from an exceptionally low-risk environment).
In fairness, there are other reasons; for example, Eurasia is a much *bigger* environment than the tiny island of Mauritius, so rats/cats/humans are the three *most* invasive species drawn from a vastly wider range of species than were available on Mauritius.
The reason is simple: ships brought them to Mauritius, ships never took Dodos away. They might have thrived somewhere else but they were never given a chance.
Ships totally took a few captive dodos away.
But the very fact that rats and cats did an exceptional job of getting on boats and then escaping and breeding in the wild is a big part of why they survived and drove the dodo to extinction. Or in the case of humans, the fact that they were capable of making boats.
They arose on a larger continent, with larger populations, and thus more competing mutations and more natural selection (as opposed to drift, which is more powerful in small populations).
Yeah, I noted that below. It is true that they are in some general sense "fitter"/more robust/able to fit in a wider variety of environments than Dodos though, and it is at least *partly* because of the more hostile and varying environment they developed in.
Being a large, tasty flightless bird is "Antifragile." According to Taleb, you should avoid being one.
I'm not so sure about this. It could be that the optimum involves some kind of symbiosis between multiple species. Exploiting the environment in the most successful way might include behaving in a way that allows other species in that environment to flourish.
I would recommend taking a look at chapter 3 of Dawkins's The Extended Phenotype, which addresses practical limits on the degree of perfection reachable via natural selection; it contains a good list of factors and a fairly comprehensive explanation of each, along with some practical examples.
-"it only eliminates those so unfit that they cannot survive to pass down offspring."
This sounds like a wrong way of putting it to me; two organisms which both survive to produce offspring may produce different numbers of offspring, and that goes into the calculation of what evolution will favor. You want to say "so unfit that they cannot produce as many offspring as the best organisms do" -- but this is just another way of saying they evolve to produce a maximal number of offspring. (Modulo the concerns mentioned by other commentators about how this is an asymptotic limit never actually reached; but over evolutionary timescales it can get pretty close.)
Natural selection eliminates the most unfit -> average fitness goes up a little. Do that 10,000 times -> average fitness goes up a lot. Squint a little -> high-fitness animals look perfectly fit.
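That "eliminate the least fit, repeat" dynamic can be sketched as a toy simulation; every number here (population size, survival fraction, mutation size, generation count) is an arbitrary illustrative choice of mine, not anything from the thread:

```python
import random

random.seed(0)  # deterministic illustration

# Toy truncation selection: each generation the least-fit half is
# eliminated, and each survivor leaves two offspring whose "fitness"
# score carries a small random mutation.
pop = [random.random() for _ in range(100)]
initial_mean = sum(pop) / len(pop)

for _ in range(100):
    pop.sort(reverse=True)
    survivors = pop[:50]                       # most unfit half eliminated
    pop = [max(0.0, f + random.gauss(0, 0.01))
           for f in survivors for _ in range(2)]

final_mean = sum(pop) / len(pop)
print(round(initial_mean, 2), round(final_mean, 2))  # mean fitness climbs
```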
"Fisherian runaway" is an example where evolution can reduce the fitness of a species. One example is the peacock's extremely long tail: if peahens prefer males with such tails, males without them will be at a disadvantage, even if peacocks overall do worse because this preference exists.
But surely their fitness as defined by ability to reproduce hasn't been reduced.
I think Taleb would argue that it's reduced their long-term fitness as a species. Note: I do not endorse this idea. I think Taleb's idea of "fitness" is a bit superficial.
I'm also not necessarily endorsing Taleb, but it's clear he means fitness of the species itself, which is, of course, evolutionarily meaningless. As long as life continues in some form at all, the genes persist even if it's in another species.
It has because they will be more vulnerable to predators and so each member of the species will be less likely to reproduce.
You mean to some external threat introduced later? If the predators are currently there, then even if females prefer longer tails, if the longer-tailed birds aren't as effective at reproducing they'll never reach fixation.
No, even if the threat has always been there. The peacocks could evolve into a bad equilibrium where they are all worse off because of the female preference for long-tails.
Things can certainly evolve their way into a corner, but what would that look like in this case? How would they be worse off in this case? If there was some trait that lowered reproduction, I'd expect it to pretty quickly be eliminated.
They can, I think, because it's an arms race.
The mere existence of longer-tailed rivals can make short-tailed peacocks less reproductively successful than they would otherwise be by drawing away potential reproductive partners.
Imagine there are two types of peacock, long-tail and short-tail; long-tailed peacocks die twice as often, but a peahen will always choose a long-tail over a short-tail. A group of only short-tailed peacocks will do better than a group of only long-tailed peacocks, but if both exist then the long-tailed peacocks will rapidly outcompete the short-tails by attracting all the females, resulting in an entirely long-tail group (which, as noted, does 50% worse than an entirely short-tail group).
Admittedly this doesn't address the question of *why* the females prefer the long-tails; wouldn't it make more sense to prefer the short-tails?
Empirically, mate preferences are indeed most often for things that enhance fitness (excluding the mate-preference effect itself), but pretty often for things that reduce it. What differentiates the two situations? Possibly the sexy son/daughter hypothesis is to blame here; mate preferences arise because they're adaptive (or at least non-harmful), but once present they become locked in and potentially exaggerated by the self-fulfilling prophecy of "X is reproductively successful because it attracts mates because it's reproductively successful because it attracts mates because..." But this doesn't make clear predictions about when species will or won't go into such spirals, at least not to a layman like me; possibly more formal versions do?
Note: I'm pretty sure you can specify the parameters such that the peacocks, after multiple iterations of increasingly-long tails, would actually be outcompeted by their better-at-surviving-but-less-sexy extinct ancestors if they were reintroduced. But it isn't necessarily the case; obviously it isn't in this toy binary long-or-short model.
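The toy model sketched a few comments up can be simulated directly. Only the 2:1 death rate is from that comment; the 10x mating weight, offspring counts, survival rates, and starting numbers are my own illustrative assumptions:

```python
# Toy sexual-selection model: long-tailed males die twice as often
# (survival 0.4 vs 0.8), but peahens strongly prefer them (10x mating
# weight -- an assumed strength of preference, not from the comment).
def generation(long_n, short_n):
    long_s = long_n * 0.4     # long-tails die twice as often
    short_s = short_n * 0.8
    attract = 10 * long_s + short_s
    if attract == 0:
        return 0.0, 0.0
    offspring = 2 * (long_s + short_s)        # total offspring this generation
    return (offspring * 10 * long_s / attract,
            offspring * short_s / attract)

# Mixed population: long-tails take over despite dying more often.
long_n, short_n = 50.0, 50.0
for _ in range(20):
    long_n, short_n = generation(long_n, short_n)
print(long_n / (long_n + short_n))   # essentially 1.0: long-tails fixate

# Pure populations: all-short booms, all-long shrinks toward extinction,
# so the mixed population evolves itself into the worse equilibrium.
s = 100.0
l = 100.0
for _ in range(20):
    _, s = generation(0.0, s)
    l, _ = generation(l, 0.0)
print(l, s)   # all-long collapses while all-short grows
```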
The thing that surprises me is that the species hasn't forked. Yes, one peahen with a preference for short-tailed males probably doesn't get a reproductive win, but I'm imagining a difference scenario.
Suppose she just wants (or is willing to settle for) a shorter-tailed male, a male who has a slight advantage in avoiding predation.
It's at least plausible that mutations frequently don't just happen once. There are peahens who eventually don't want maximally flashy tails.
Obviously, this hasn't happened. Maybe the big tails are closely linked to other traits that aren't handicaps.
The main issue I see with this is that a huge portion of the environmental pressures are from other organisms, so there's no real guarantee that either the individual or the ecosystem will converge to a steady state (i.e. it's chaotic in the sense of dynamical systems)
Re: "Evolution is antifragile"
I figured the idea was this: if the environment is stable for many generations, there is no environmental driver for evolution. If the environment is volatile -- not over the lifespan of individual creatures, but over the timespan of many generations -- then the environment is driving the creatures to evolve to fit it.
I think it's weird to say that Evolution is either antifragile or fragile at all. Evolution is a process, not an entity. I don't think the property of antifragileness applies to it either way.
It's the biological organisms/ecosystems that are antifragile. And evolution is the process that makes them antifragile. I guess it works if you parse "Evolution is antifragile" to mean "Evolution promotes antifragility", but I think it's a confusing use of the terms.
/pedantry
I don't have an opinion of whether it's correct or not, but I think in this case the idea really is that evolution itself is antifragile. You just have to have a weird concept of what counts as "good" or "strong". If you say that the process of evolution is a "thing", and you say that evolution is "doing good" when it is making things change, then indeed environmental change kind of feeds evolution to do what it does.
This is how I've always understood Taleb's description of evolution being antifragile, although it's been many years since I've read the book.
I agree. If something goes from Status Quo A to Status Quo B, is it evidence of "fragility" or "anti-fragility?" It solely depends on your frame of reference. If your portfolio adapts to the market by shrinking, it's "fragile" only because you like money.
But if natural selection gets rid of Dodo bird genes because they are no longer adaptive, Taleb says this means natural selection is "antifragile." If you are a Dodo, however, you would call this "fragility" (or you would, if you weren't extinct). Personally, I think Talebism is nothing but semantic word-game nonsense.
Evolution can make species fragile. There exists for example a species of moth that's exclusively adapted to live in sloth fur and reproduce in sloth feces, which is exactly as absurd as it sounds. Given a stable environment, evolution will often hyperspecialize species and they can get royally fucked over by environmental change.
Evolution tends to take short steps in the fitness landscape; they are the most probable. So it gets stuck at a local optimum. If there is a long enough period of stability, then after some absurd number of generations, eventually a longer step may be taken towards a higher optimum.
Also, the fact that mutations are constantly happening means there is always the potential to evolve if the environment changes suddenly. That speaks to one of the few things I do credit _Antifragile_ for saying: if some function is vitally important, there should be more than one way to do it. (Yes, it's a stretch.)
"Perhaps it would be much kinder if somebody gave unfit animals some Animal Chow to prevent them from starving. But such kindness would prevent natural selection, and gradually weaken the species (or, more technically, the species' suitability to its niche) until eventual cataclysm."
Hmmm ... wouldn't this be exposing them to variation in their environment (sometimes there is Animal Chow, sometimes not) which surely should make them stronger!?
Never read any Taleb, but my impression of him from listening to him on EconTalk is that he is not a clear thinker when it comes to biology.
Yeah, that'd give the animals a new niche to exploit - finding ways to get humans to give them food. That process of accommodation and urbanization is why some animals (deer, raccoons, coyotes) have succeeded in the Anthropocene era while so many others have failed.
Have the geologists decided that it's the Anthropocene? Personally I would consider the Anthropic Extinction Event a much better name. Every mass extinction is called an event, and with the exception of one they all took a while, up to 20 million years for the late Devonian. Still an event. Humans have existed for what, 2 million years?
Over hundreds of generations it would eventually make them stronger. In the near term it would cause them to breed beyond the niche's non-animal-chow carrying capacity, which would cause starvation (and, likely, violence) every time the chow is withdrawn.
"People" are a subset of "animals", and this is essentially the conservative objection to the welfare state - that it creates dependency.
On the other hand, if we cut off things like the Supplemental Nutrition Assistance Program (which could reasonably, if impolitely, be referred to as "animal chow"), people would starve now. That's not good either.
<a href=https://www.robkhenderson.com/past-newsletter/who-was-machiavelli>Machiavelli</a> had much to say on this dilemma.
Like you seem to think, Taleb is best as a corrective (and effective Twitter partisan against Intellectuals-yet-idiots) rather than as a starting point.
Of the first few examples, I'd have to say stock options is the most egregious. Yes, the option gains value when the volatility *of the underlying* increases. Notice anything peculiar there? It benefits when chaos is applied *to something else*. On the other hand, when the value *of the option* is fluctuating wildly, that's really no fun for the option holder.
It's like saying "my company is antifragile, because when our supplier company is experiencing chaos, they get desperate and give us better deals".
There's a similar problem with the evolution example.
The "volatility" is that organisms have random mutations, which cause some of them to do better or worse. Mutations are not on average beneficial. So where's the benefit? At the population level, the population evolves in a direction that is a better fit for the environment. So volatility in the mutations of an individual benefits the population. But shouldn't the definition of "antifragile" mean that volatility in a *population* benefits the population? What does volatility in a population mean, anyway? Random changes in social interaction or pecking order? Change in number of individuals? Relocation to a new geographic area?
I think what Scott was saying at the start of the post is essentially "antifragility is not actually defined, and the author is just using a fancy-sounding word to talk about a bunch of things he finds interesting."
Why would the value of the option fluctuating be no fun to the option holder? The volatility is the point of options - you want it to be the wildest ride possible, as your losses are limited to 100% and gains scale with volatility.
I mean, an option wildly seesawing between ITM and OTM isn't the most fun, but that's a really specific form of volatility. I agree with you - as a rule, option buyers are going to have more fun in high-volatility environments.
And more sophisticated option holders can "delta hedge" their option by selling a fraction of the underlying if they are long a call. Then volatility is unambiguously good. In that case you make money if the underlying goes up big and also if it goes down big. And if you adjust your hedge every day or hour, you make money even if it settles right at the strike, as long as the ride to get there is very wild.
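For the curious, here's what that convexity looks like in numbers: a sketch using the standard Black-Scholes formula with invented parameters (a 1-year at-the-money call on a $100 stock, 20% vol, zero rates). The delta-hedged position gains from a move in either direction, and gains more from bigger moves.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def bs_call(S, K, T, sigma, r=0.0):
    # Textbook Black-Scholes price of a European call
    d1 = (math.log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def bs_delta(S, K, T, sigma, r=0.0):
    d1 = (math.log(S / K) + (r + sigma ** 2 / 2) * T) / (sigma * math.sqrt(T))
    return norm_cdf(d1)

S, K, T, sigma = 100.0, 100.0, 1.0, 0.20
c0 = bs_call(S, K, T, sigma)
h = bs_delta(S, K, T, sigma)  # shares sold short against the long call

# Instantaneous P&L of (long call, short delta shares) after a price move:
# positive in either direction, and bigger for bigger moves
for move in (-10, -1, 1, 10):
    pnl = bs_call(S + move, K, T, sigma) - c0 - h * move
    print(move, round(pnl, 2))
```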
As in: would you rather have an option gain gradual value every day, up to 3x on day n, or take on completely random values every day, and then suddenly be at 3x on day n? You make 3x either way, but your cortisol levels will show the difference.
Right; but I think we're claiming that both 3x over n days and random noise over n days are examples of stock volatility. A non-volatile stock would remain at x.
Interestingly, gaining 3x at a constant rate over n days is an example of extremely low volatility. An option seller who hedged dynamically according to the standard option pricing formula would make money due to the low volatility. An option buyer would definitely prefer a wild path.
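To put numbers on that: realized volatility is just the standard deviation of daily returns, so a perfectly steady climb to 3x scores essentially zero, while a wild path ending at the same 3x scores huge. A toy calculation (invented shock sizes, 252 trading days):

```python
import math
import random
import statistics

def realized_vol(prices):
    # Annualized standard deviation of daily log returns (252 trading days)
    rets = [math.log(b / a) for a, b in zip(prices, prices[1:])]
    return statistics.stdev(rets) * math.sqrt(252)

n, growth = 252, 3.0

# Smooth path: the 3x gain spread evenly over the year
smooth = [growth ** (i / n) for i in range(n + 1)]

# Wild path: random daily shocks, drift-adjusted to end at exactly 3x
rng = random.Random(0)
shocks = [rng.gauss(0, 0.05) for _ in range(n)]
drift = (math.log(growth) - sum(shocks)) / n
noisy = [1.0]
for s in shocks:
    noisy.append(noisy[-1] * math.exp(s + drift))

print(round(realized_vol(smooth), 4))  # ~0: every daily return identical
print(round(realized_vol(noisy), 2))   # large: same endpoint, wild ride
```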
Yes, stock price volatility is good for options. Option price volatility is neutral-to-bad for options.
Isn't it true for any stock that the losses are limited to 100%, and gains scale with volatility? In this sense, any financial product that has a defined lower bound but no defined upper bound is anti-fragile. And perhaps not just financial products. A species can never have fewer than zero members, but can always grow further by any factor - does that make it anti-fragile?
Yes, it's a bit weird for Taleb to highlight options as anti-fragile, because stocks are already a form of option. You gain from the upside of the company's assets, and are shielded from the downside (generally the debtholders absorb the downside). See also "Merton Model".
How do debtholders absorb the downside? They get paid before the stockholders. The whole idea of a corporation is NO ONE gets a downside of more than 100%.
Options are effectively leveraged, because gains and losses are measured against the small premium rather than the underlying's price. A 5% price move in the underlying can put you at +300% profit, while a move in the other direction will put you at the maximum loss of -100%.
This asymmetric payoff is a huge reason why people bother with options in the first place, and is to an extent priced into the premiums.
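A toy example of that asymmetry (all numbers invented; $1.25 is not a realistic premium for this option, it's just chosen to make the +300% arithmetic visible):

```python
# Toy numbers: $100 stock, $100-strike call bought for a $1.25 premium.
# Returns at expiry are measured against the small premium, so a modest
# move in the stock becomes a huge (but floored at -100%) option return.
def stock_return(move):
    return move                      # a +5% move -> a +5% return

def call_return(move, premium=1.25, spot=100.0, strike=100.0):
    payoff = max(0.0, spot * (1 + move) - strike)
    return payoff / premium - 1      # return on the premium paid

print(stock_return(0.05))            # 0.05 -> +5%
print(round(call_return(0.05), 2))   # 3.0  -> +300%
print(round(call_return(-0.05), 2))  # -1.0 -> -100%, the floor
```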
My impression is that options speculators take that as "part of the game," in contrast with, say, your local gas company, which buys natural gas futures in the summer, when prices are generally lower, for delivery in winter, when they're higher (though they sometimes lose spectacularly, e.g. this year in Texas).
Taleb's use of options for anti-fragility is usually focused on buying deeply OTM puts that cost a fairly negligible amount (say $1/mo: buy, hold to expiry, repeat), until a chaotic event happens and those puts skyrocket in value. The approach is more specific than you're thinking, it sounds like.
> Maybe changes are inherently towards more volatility, and the only reason being long VIX isn't a guaranteed-market-beater is because it's one of the rare cases where people take this seriously and quantify it, because taking it seriously and quantifying it is their job?
There is a literature on "volatility investment." The big risk is totally losing your shirt - see this paper, which decorates its margins with Death wielding a scythe:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2255327
There was a big blowup that burned a lot of people who invested in volatility in February 2018. See Matt Levine for some details:
https://www.bloomberg.com/opinion/articles/2018-02-09/inverse-volatility-products-almost-worked
According to the second one, they didn't invest in volatility but rather shorted it (and lost all their money when volatility went up).
(Also, by definition anyone investing in volatility through the stock market is betting on only a little bit of volatility. Truly-extreme volatility potentially makes the stock market, currency, and/or property rights irrelevant.)
> Maybe this doesn't work in investing, but does work in real life?
Public markets make a poor parallel to real life. Public markets come extremely close to the efficient market hypothesis, and it's very hard to consistently beat them. But very few other markets are efficient.
The area I know -- early stage tech startups -- has huge amounts of value sitting around waiting to be taken. People no smarter than you make billions with simple strategies. Renting an office in SF is inefficient -- you can pay less than half price if you really shop around and negotiate.
It's frustrating that so many books about strategy and forecasting use public markets as an example, because it's the exception where simple strategies don't work.
> Renting an office in SF is inefficient -- you can pay less than half price if you really shop around and negotiate.
This tells me that it's inefficient if you grade it through the lens of how much money you spend for the office space that you get, but the fact that startups continue to do so suggests that there are other aspects that make it efficient in ways that counterbalance the inefficiency in office costs.
In reality, the reason they do this is that a significant percentage of the top tech talent already lives in SF (or at least is much more willing to move there), and either the "talent" doesn't want to work remotely or else the startup believes that the benefits of not having people work remotely make the rental of expensive office space worthwhile. (Obviously COVID has changed the dynamics here a lot, though.)
Like, I say this as someone who works in programming in the (non-Chicago) midwest. On the one hand, it's great - programmer salary in a place with *much* cheaper cost of living. But on the other hand, you ever try convincing someone who lives in California to come move to Indiana?
You're talking about a different kind of inefficiency. I meant failing to conform to the https://en.wikipedia.org/wiki/Efficient-market_hypothesis, where the price of everything reflects all available information. If doing extra research means you can get a better price, the market isn't efficient.
No, that's what I'm talking about too. The price of office space in SF includes available information like "top tech talent lives in or is willing to move to SF".
Whereas your "better price" is only from a strictly monetary perspective and something like "making it harder to hire talent" is a non-monetary cost which is not factored into your better price.
I'm not saying it's not possible that tech companies are leaving money on the table by their heavy concentration in the Bay Area (in fact, many will argue that COVID has proved they were, though you can argue that their judgment was sound in the pre-COVID world).
But your argument seems to be "they're spending a lot of money on office space, when they could get it for cheaper instead, therefore not efficient market", which I think is just misunderstanding what the efficient market hypothesis means.
Really, this just sounds like something like Hotelling's Law, the same sort of logic that drives 4 gas stations to all build on the same corner. Sure they could probably build cheaper elsewhere, but that doesn't mean that's the efficient thing to do.
I think you've still misunderstood Trevor. He means something more like "the prices of different office rental agreements within SF are inefficient, as evidenced by the difference in price of nearby rental agreements for comparable spaces." You seem to think he meant "it is an inefficient choice for startups to rent office space in SF."
On the topic of the Lindy Effect, why coin a new, worse term to describe eustress?
Antifragile is framed more as a property of the target, eustress is framed more as a property of the environment.
I was going to say "... which may be why the term Antifragile has become more popular, contra the Lindy effect", but when I checked it seems eustress (a term I'd never heard before) is actually the considerably more common term for this phenomenon.
https://trends.google.com/trends/explore?geo=US&q=eustress,antifragile
https://books.google.com/ngrams/graph?content=eustress%2Cantifragile
Frankly, I think eustress does a better job of capturing the fact that nothing is *completely* antifragile and few things are completely fragile. Conversely, in fairness, antifragile does a slightly better job of capturing the fact that things have radically different ranges of what counts as eustress vs distress and it can be a positive thing to increase those ranges (especially if reducing the environmental stress is impractical.)
I may switch to using eustress on the rare occasions I have to refer to this concept.
Taleb always bothered me with his grand, sweeping claims and just-so stories. They never seem to have much relation to the real world.
Like, the taxicab (or Uber) drivers I've known are generally one bad week/month away from bankruptcy, because they rely so much on their bodies to do their work and their equipment is on a vicious depreciation cycle. This is not true of the bankers I've known, some of whom take a year off from work and are fine.
Taleb would probably argue that he's speaking of some hypothetical banker and some hypothetical taxicab driver, but then what's the point? Why not just argue that dragons are fragile, which is why they never ruled Westeros? Either his arguments are grounded in the real world or they aren't.
Yeah, between the combined risk of the taxi driver killing someone by accident, or getting ill at a bad moment, or having too many bad weeks in a row, etc., and the risk of the banker losing his job and being unemployable for some reason, I don't think the banker has the riskier situation.
I feel the same way. I really liked his books the first time I read them, 12(?) years ago. Was a big fan of The Black Swan and everything. But years passed, I've read more of his books, followed his Twitter, actually got into trading myself and... the glow has worn off. I still like his books I guess, most of them are fun to read and think about, but I kinda stopped taking them seriously, as I've realized that things are different in the real world.
As someone who hasn't read his books, it seems that you can get most of the value out of Taleb by reading reviews of his books, because the grand ideas are more interesting and thought provoking than the details.
That’s true for a lot of non-fiction
Hasn't Uber basically killed the taxi industry everywhere taxi companies haven't actively coopted local government?
It seems weird to pick as your titular antifragile industry one that was, in retrospect, pretty darn fragile.
Doing some research on it, it looks like they're not quite as dead as I stated, but still pretty diminished. There are places where the government has deemed it required to possess a taxi medallion to carry passengers (some city centers, some airports); in these areas taxis can outcompete Uber, so they're still present.
There are also some larger institutions that have standing contracts with taxis to carry passengers regularly (some hospitals, some schools, a few companies) so some still exist primarily in that space. Also, I guess taxis can take cash while Uber can't, so presumably that might carve out some market space for them.
This is a common myth. Uber's taxi business is wildly profitable and has been at least ramen-profitable for nearly a decade. The VC money didn't subsidize fares (except presumably at the very beginning), it subsidized all the other ridiculous BS like their godawful self-driving car arm.
Uber fares are low because Uber's routing algorithms are <b>fantastic</b>. Pre-pandemic, it cost literally 10% as much to get a shared ride from point to point within San Francisco on Uber, as it cost to get an entire Lyft. (I know because I was still using Lyft, and I split a Lyft ride with a friend, and he told me that paying half was 5x the cost he'd pay for a solo ride. I checked - he was correct.) Lyft's routing is not great, so the shared ride is a small discount. (Taxis, presumably, are even worse.)
Uber drivers pre-pandemic spent roughly 20% as much time idling at the curb waiting for the next fare as Lyft drivers. <i>That's</i> what "subsidized" Uber fares.
Note that requiring a smartphone and a credit card to get an Uber almost certainly filters out the lowest-tier of potential customers, who are probably also the most likely to be various kinds of trouble--more likely to rob you, more likely to run off and stiff you, etc.
I still think he's wrong, but if I'm reading Scott's writeup correctly, he didn't pick the industry, but rather the drivers, who are presumably still driving for Uber. So that's seemingly an argument that they "survived" the death of their former industry just fine, presumably for a value of survival that nets them less profit than whatever they earned when the profession was more moated.
I really enjoyed this comment, especially "why they never ruled Westeros."
Well, they did for something like hundreds of years in the books, right?
> This is not true of the bankers I've known, some of whom take a year off from work and are fine.
Do the bankers you know make only $36,000 a year like the one in the example, though?
I think the bank employee vs taxi driver example is a bad example in terms of actually being true to life, but a good one in terms of illustrating a much broader point that being robust to small shocks can make you more vulnerable to large shocks.
Maybe some truth in the sense of crash of 1929 caused some bankers to throw themselves off of buildings, but if you were already a hobo, you probably didn't even notice.
But it's flatly ridiculous to suggest in general that economic shocks are worse to the upper classes than lower. Even in a true post-apocalypse, I would doubt we'd actually see the fantasy literature type stuff where the earth is inherited by particularly brutal sheriffs and used car salesmen. The aristocracy has a hell of an ability to reproduce itself even across societal collapse, external conquest, and internal revolution. Most street tough people just die in the streets.
> But it's flatly ridiculous to suggest in general that economic shocks are worse to the upper classes than lower
Yes, and the point that Melvin is making is that "comparing an upper class profession to a lower class profession" is an unfortunate byproduct of the point being made. I think the analogy is essentially asking you to pretend that banker isn't a high-class profession that means they're just going to naturally have a lot more money because they earn *much* more than $3000 in practice.
To avoid the apples-to-oranges comparison, it'd have been better either to pick an anti-fragile but higher-earning job (maybe a real-estate broker?) instead of taxi driver, or else a fragile but low-earning job (maybe fast food).
Preach. This review has only convinced me to never read Taleb.
> Maybe changes are inherently towards more volatility, and the only reason being long VIX isn't a guaranteed-market-beater is because it's one of the rare cases where people take this seriously and quantify it, because taking it seriously and quantifying it is their job?
I'm not aware of a way to go long on VIX which doesn't (naturally) decay over time to 0. There are expenses and weirdness naturally built in to the products. I'm not a super derivatives guy, though, so... maybe there is a way and someone will be kind enough to mention/explain it?
(Also, perhaps that property is less relevant than I think.)
There's that (you can't buy and sell VIX directly and can only use derivatives that don't represent permanent long positions). But there's also that the index itself has no long run trend at all. Volatility has certainly not always increased, and though there is nothing mathematically preventing the existence of unbounded volatility, high enough values seem to imply a level of societal collapse such that you can't just ride the wave forever. In reality, the index always returns to the historical "normal" level. This isn't like value indices that can actually go up forever as long as the economy keeps growing.
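The "no long-run trend" point is the crux. A toy mean-reverting random walk (parameters invented, loosely VIX-shaped) shows why a buy-and-hold long position has nothing to harvest: shocks spike the index, but it keeps getting pulled back toward its long-run level.

```python
import random

def simulate_vol_index(days=2520, mean=18.0, speed=0.05, shock=1.5, seed=0):
    # Each day the index drifts a fraction `speed` back toward `mean`,
    # plus a random shock; floored at 5 since volatility can't hit zero
    rng = random.Random(seed)
    x = mean
    path = [x]
    for _ in range(days):
        x += speed * (mean - x) + rng.gauss(0, shock)
        x = max(x, 5.0)
        path.append(x)
    return path

path = simulate_vol_index()
# A decade later the index averages back near where it started, however
# wild the ride was in between: there's no trend to go long on
print(round(min(path), 1), round(max(path), 1),
      round(sum(path[-250:]) / 250, 1))
```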
It's not without expenses, but you can roll option contracts while hedging them with the underlying to get pretty close. Essentially you buy, say, a 3-month-out option, hold it for one month, then sell it and buy the next 3-month-out option. If you do this while also shorting the underlying, you have pretty much a pure vol position.
This is only tangentially related but Acoup recently wrote a series of blog posts about how Spartans actually sucked, and I found it pretty fun to read. Extremely fragile, rather than antifragile. https://acoup.blog/category/collections/this-isnt-sparta/
It's funny that right at the beginning he gives the movie 300 so much shade. Yet it's based on a comic by Frank Miller who shouldn't be let off the hook so easily.
Thanks for the share, this looks like a great read!
It's pretty amazing. His other stuff about the way iron was made, grain was grown, the silliness of a universal warrior going back to Roman times is also good.
I find acoup a little frustrating. Clearly he knows his stuff and does a good job of dismantling the simplistic views of ancient and medieval history that we normies tend to have, but on the other hand he spends far too much time beating up on straw men instead of getting to the good stuff.
I mean, yes, 300 is not an accurate rendition of history. But did anyone ever _really_ think it was? Really really? I'm interested in hearing about how real Sparta differs from the mythologised version, but do I really need to read another twenty paragraphs explaining the very obvious point that "300" is inaccurate and that actually Sparta _wasn't_ perfect and that anyone who enjoys "300" too much just _might_ be suspected of political wrongthink, before we get to the interesting stuff?
And he keeps telling me that Sparta was "not the ideal society that some have made it out to be"... but again, did anyone ever really think it was? People throughout history have admired Sparta's dedication to purpose, but as far as I know nobody has ever made any kind of effort to actually emulate them.
I likewise find the focus on “debunking” those bad positions a bit tiresome, but I think it’s understandable how he ended up in a position where he feels that emphasis is deserved. Namely, his day job consists, in large part, of making undergrads unlearn whatever wacky historical ideas they’ve absorbed over their childhoods. I am inclined to believe him when he says that many of those misconceptions are held by a substantial portion of his students.
>but again, did anyone ever really think it was?
In the series "The Universal Warrior" he gives examples of a bunch of people who really do idolize the idea of Sparta. And his whole point in the Sparta series is that this "dedication to purpose" is just a myth, and that nothing about the actual Sparta was desirable in a society at all.
Saying the dedication to purpose is a myth and Spartan society isn't desirable are two different things, though, which is the point the parent comment was making. I don't think he shows that Sparta wasn't dedicated to its ideals, does he? Just that it wasn't a very nice place to live?
Along with undergrads (who might be more representative of the general population) and the examples he explicitly mentions, the other important "strawman" that he is dealing with is the US military. Even if the ideology of the US military is not the steelman you are hoping for, it is important to engage with it.
The way he totally ignores the criticism of his Spartan battle record thesis is also annoying.
Pretty much any expert who gets too deeply into 'debunking' the misconceptions of the general public turns into an arrogant twat, eventually. They become assholes about their area of expertise, and they also develop a blind spot for the possibility that they themselves can be wrong.
Interesting, thanks for the link.
The most fascinating part for me was how seemingly all parties completely bought into the propaganda surrounding Sparta's military prowess. Sparta legitimately believed they were supermen and their enemies also believed it. Evidence to the contrary was ignored or discarded.
Yeah, I don't buy his argument that the Spartans weren't militarily excellent. He says their battlefield record is about .500, but that's what it should be if opponents can choose whether or not to fight, which he makes clear they can and do in that period. No one should fight a battle they expect to lose if they have other options, so battles should only occur when both sides think they have a good chance to win. Assuming that the two sides are equally good at judging their relative power, battlefield records should always be .500 for everybody, regardless of how powerful the army.
In fact, if his hypothesis that the Spartans are overrated is true, then the Spartans should have a well-below-.500 winning percentage, because their enemies should only be willing to face them on the field when they have overwhelming force that gives them [the enemies] a really good chance to win, and the Spartans should foolishly choose to give battle under those unfavorable circumstances.
I think his data are meaningless, and we probably need to default to the judgements of their contemporaries if we want to assess their military prowess.
(I found all of his other analysis of Sparta pretty compelling).
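That selection effect is easy to simulate. A sketch (all numbers invented): each prospective battle has a true win probability that depends on army quality plus circumstances; both sides estimate it noisily and only accept battle when their own estimate is favorable. The realized record then hugs .500 even for a much stronger army.

```python
import random

def battle_record(quality, n=100000, spread=0.2, noise=0.1, seed=0):
    """Win rate in battles that both sides agree to fight.

    quality: the focal army's baseline win probability in a forced fight.
    Each prospective battle's true win probability p is quality shifted
    by circumstances (terrain, numbers, allies); each side observes p
    with noise and only fights when its own estimate favors it.
    """
    rng = random.Random(seed)
    wins = fights = 0
    for _ in range(n):
        p = min(1.0, max(0.0, quality + rng.gauss(0, spread)))
        we_accept = p + rng.gauss(0, noise) > 0.5    # we expect to win
        foe_accepts = p + rng.gauss(0, noise) < 0.5  # they expect to win
        if we_accept and foe_accepts:
            fights += 1
            wins += rng.random() < p
    return wins / fights

# Records cluster near .500 regardless of underlying quality
for q in (0.5, 0.65, 0.8):
    print(q, round(battle_record(q), 2))
```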
There certainly are examples of classical societies that did bat well above 0.5: the Macedonians and the Romans. Alexander the Great had a near perfect if not perfect score and the Romans had 0.84 against Fremen societies (although closer to 0.5 against the Persians). They also both conquered much of the known world.
Devereaux's argument is that they were slightly better than other Greek city-states but not legendarily good. Since Spartans today have at least as big of a reputation as the Macedonians or Romans, they are clearly overrated.
My point is not that departures from .500 are impossible (I point out that if Bret were right, we would expect the Spartans to be well under .500 after all), especially not when you are looking at single campaigns (cough Alexander cough). Just that it tells you very little about the underlying military potency of the group in question, except in certain very narrow situations that Bret goes out of his way to make clear don't apply to the Spartans.
Did the Romans bat above 0.5? They are not around anymore, which suggests they had a losing streak eventually.
But even if they did very well, that shows they were better at picking their battles than others, which is not the same thing as being better at fighting them.
Roughly speaking, to bat above 0.500 you need to be unusually lucky, you need to be better than people expect you to be, or you need to be able to force battle on people who can't refuse. Between the invention of walled cities and the invention of cannon, it was rather difficult to force battle on civilized people who didn't want battle; at most you can force them to endure a siege and then surrender. Devereaux IIRC was not counting bloodless sieges in his list, and certainly didn't count "we'd rather not even go through the trouble of a siege, let's negotiate terms up front".
The Romans, as you note, batted ~0.5 against civilized societies and did much better conquering "barbarians" who didn't have walled cities. Alexander was better than anyone expected because he was a singularly talented general and because his dad had built an army better than Macedonia had ever had before (but hadn't used it conspicuously enough for the world to take notice).
The legendary Spartans were the legendary Spartans for a couple of centuries, and they were surrounded by people who knew how to build walled cities. Is there anyone who consistently batted >>0.500 against walled-city builders for more than a generation or two in the pre-gunpowder era?
The Mongols and the Crusaders? And if the Spartans were so much better than their neighbors and everyone knew this, they should have been able to simply conquer them without battle and create their own huge empire.
Acoup's essay on Spartan grand strategy explains why this isn't the case, but basically, they specialized on battlefield power at the expense of being able to prosecute sieges.
The Mongols and Crusaders each got a generation or so of stunning success, before they fell back to trying to hold on to their gains and slowly losing ground. Maybe three generations for the Mongols, but only if you count conquests on the far edges of the known world where the Europeans don't understand how effective the Mongols were against the Chinese.
As boylermaker notes, the Spartans weren't very good at sieges (or naval warfare), which limited their potential for expansion in that environment. But even Devereaux, I think, admits they built a far larger mini-empire than five two-bit towns would normally have been able to manage. They had the best heavy infantry on the planet at the time, and they conquered about all there was to be conquered without ships, cavalry, or siege engines. And then other people learned the trick of making heavy infantry even better than the Spartans.
Since you mentioned Sparta, I can't help but promote Bret Devereaux's series on the mythology that grew up around Sparta, even in its own time, and on Sparta's reality: https://acoup.blog/2019/08/16/collections-this-isnt-sparta-part-i-spartan-school/
Yes, reading this really kills the idea of the Spartans as antifragile. It was a society that survived under ideal conditions and was destroyed as soon as those conditions changed.
I was about to point to this as well, but you beat me to it.
Regarding the part about sick v healthy people, I'm guessing he'd say that antifragility > fragility with the ceteris paribus qualifier?
So, you'd probably rather live in a rich, fragile country than a dirt-poor, antifragile one, but if GDP were equal then the situation is different.
What I immediately thought is that peaks are always fragile, and valleys antifragile. So being fragile correlates with being high, which is what we want. Being antifragile is like the quote "there's no way to go but up!"
I'm having trouble squaring what seems to be this view that the attempt at theory is worthless with the fact that Taleb has an MBA and a PhD and has been a professor at multiple universities and the editor of an academic journal. He's trying to produce a theory that theory is stupid. This seems like the same basic impossibility of true moral relativism or non-Pyrrhic skepticism. Someone can make a convincing argument for them, but the very act of making an argument at all is inconsistent with what is being argued for.
I think it makes sense through a more pragmatic lens. Taleb thinks the theory is worthless, and he wants to promote this idea. In order to reach more people, he produces both popular and academic takes, catering to different groups. I guess in some sense, it is the philosophy of antifragility applied to the propagation of itself.
Glad someone called this out. Antifragile was my second attempt at reading Taleb. Made it through The Black Swan, but stalled in the middle of this one. One of the most annoying aspects of his work for me is the tendency to deprecate / distance himself from attributes that he himself seems to possess in abundance.
As Scott points out, Taleb is one of the most intellectual anti-intellectuals out there. For someone allergic to the ossification of academia and theories, he certainly spends a lot of energy producing the thing he decries. He clearly craves attention for his ideas and cultivates an aura of "unconventional" genius at every opportunity.
The contradictions are puzzling.
I've only read Fooled by Randomness, but from what I've heard that's better than Black Swan.
I read Fooled by Randomness and then The Black Swan. I couldn't tell if I liked the former better, or if it's just that the second dose of Taleb doesn't add much beyond the first dose. I've had students really try to push Antifragile on me though.
Those that can't do, teach. If Taleb could make a ton of money directly employing theories of antifragility, he would. But second best is making a ton of money writing about how theories of antifragility could make the reader a ton of money.
I believe that his thinking and writing is in large part motivated and informed by his success as a trader.
Bless your heart
I haven't done a thorough investigation, but I think he's actually an independently wealthy crusader for his particular hobby horse? Like, even before his books?
Taleb makes tons of terrible arguments on social media, so I'm not surprised he also does so in his books.
> So think of this less as a sober attempt to quantify antifragility, and more as an adventure through Taleb's intellectual milieu.
Am I incorrect in interpreting this as 'Taleb takes refuge in unfalsifiability'? So many of these examples seem to hinge on their specific framing and level of focus; you point out a few and contradict a few more with the Fact Checks. Antifragility is a powerful concept to keep around, but I'm *extremely* skeptical of the prescriptions that are coming out of how it's being used.
I think it's sometimes fair to have philosophical principles that you can't immediately reduce to falsifiable facts or studies, but I'm not sure Taleb does this responsibly.
Sure. I'm reminded of Continental philosophy - ideas can be unsuitable for testing and still extremely valuable... but at the same time, it's hard to build very far off of a shaky foundation.
I'm more concerned by things like the Syria v. Lebanon GDP comparison. Mistakenly interpreting a signal from noise is one thing, but that looks more like a case of deriving a signal from *error*. Worse (maybe?) when the signal doesn't seem to be much larger in magnitude than the initial mistake. I'm feeling something like a philosophical version of Gell-Mann amnesia: I see that someone's being irresponsible where they can be checked, therefore when they are difficult to check I conclude that... I should check out the book to hedge against selection bias? ¯\_(ツ)_/¯
Would the Ten Commandments be another example of "teaching birds how to fly"?
Which ten?
The 10 out of 15 of course.
According to Mel Brooks, they weren't so antifragile after all....
I've quoted that scene so many times that it's become the true version of events in my mind.
Prescriptions are not theories.
In many cases a prescription implies a theory, and in the case of the Ten Commandments I think it does. The theory is "following these prescriptions leads to favorable outcomes." If you didn't have that as a theory, then there'd be no point in giving the prescriptions.
Anyway, the pattern holds for prescriptions regardless of whether there's a theory attached. The pattern is, "People already had this idea. The now-canonical written form didn't teach them the idea; it was a formalization of the idea they already had."
It's not a theory if you're God and you know for certain you're going to punish whoever doesn't follow your prescriptions.
It's unclear whether whoever actually came up with codifying these really believed they were sent from God, or whether they were trying to predict optimal social outcomes based on factors other than pleasing the Almighty.
Although, even if you generalize to laws in general, I think at least some if not most laws have "please the lawmaker" as much as "do whatever is optimal for the whole society" as the intended outcome.
"Please the lawmaker" is a perfectly legitimate outcome, even if it's not one we'd wish to aim for.
But again I don't think the connection between prescriptions and theories is necessary for the "teaching birds to fly" concept to apply.
Some of them are fairly obvious, but the Sabbath and the ban on idols aren't, and one could argue that codifying the prescriptions gives them greater power.
The Ten Commandments are immediately followed by a load of less intuitive rules about diet and clothing, I suspect there's a benefit to putting some relatively uncontroversial rules against murder and theft up front before you start on the mildew regulations.
No, that would be teaching the birds to cage themselves so that the cage-wardens have an easier time managing them.
If you buy an option and you’re wrong you lose all your money (they expire worthless). This seems similar to lottery tickets and insurance. There are worse risks than that, but it seems like the risk reduction comes more from the ability to hedge, and this hedging happens when you don’t spend much money on such things.
If you buy/sell combinations of options, you can pretty easily hedge against pretty much any scenario short of “the entire options exchange collapses”, even without reserving any money outside of options.
I think the difference is that it's cheap enough to be wrong that you can be wrong 99 times and right once and still come out ahead.
So the risk reduction comes from getting to make lots of small bets rather than one big one to get an expected level of return.
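A back-of-the-envelope sketch of that point (the payoff numbers here are invented purely for illustration): splitting the same total stake across many independent convex bets keeps the expected value identical but shrinks the volatility by a factor of sqrt(n).

```python
import math

# Hypothetical convex bet: stake 1, pays 150 with probability 1%, else expires worthless.
p, payoff, stake = 0.01, 150.0, 1.0

ev_per_bet = payoff * p - stake          # expected net per small bet
var_per_bet = payoff**2 * p * (1 - p)    # variance of the payoff term

# Strategy A: 100 independent small bets of stake 1 each.
n = 100
ev_small = n * ev_per_bet
sd_small = math.sqrt(n * var_per_bet)

# Strategy B: one big bet of stake 100, with the payoff scaled up by 100 too.
ev_big = (n * payoff) * p - n * stake
sd_big = math.sqrt((n * payoff)**2 * p * (1 - p))

print(ev_small, ev_big)   # identical expected value
print(sd_small, sd_big)   # the single big bet is sqrt(100) = 10x more volatile
```

Same expected return, a tenth of the standard deviation, which is just diversification restated for lottery-ticket-shaped payoffs.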
That would seem to violate no-arbitrage conditions
It doesn't violate no-arbitrage because it exchanges risk for return.
For example, historically there's been a really easy way to beat the S&P 500: invest in the S&P 500 with moderate amounts of leverage. Returns go up, but volatility also goes up with the leverage ratio.
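A rough sketch of why that doesn't break no-arbitrage (the numbers here are illustrative assumptions, not historical estimates): leverage scales both excess return and volatility, so the risk-adjusted return is unchanged.

```python
# Illustrative, assumed numbers: risk-free rate, index return, index volatility.
rf, mu, sigma = 0.02, 0.08, 0.15

def levered(L):
    """Borrow at the risk-free rate to hold L times the index."""
    mu_L = rf + L * (mu - rf)       # excess return scales with leverage
    sigma_L = L * sigma             # so does volatility
    sharpe = (mu_L - rf) / sigma_L  # excess return per unit of risk
    return mu_L, sigma_L, sharpe

for L in (1.0, 1.5, 2.0):
    print(L, levered(L))
# Return and volatility both rise with L; the Sharpe ratio stays flat.
# More return, but no free lunch.
```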
Then I misunderstood the original claim (in fact not quite sure what the claim is then)
A no-arbitrage principle would mean that there is no _risk free_ way to earn a profit in excess of the risk-free interest rate (something like US Treasury Bonds of the appropriate duration).
This means, for example, that a stock that trades on two different exchanges should trade for the same price on both.
This is not a constraint of the market, but rather an outcome in that any arbitrage opportunity gets quickly snapped up by parties who can exploit it.
Yes and more generally, risk arbitrage should also ensure that the same amount of risk earns the same expected return.
I took the original strategy to imply that the small bets would yield the same expected return at lower total risk which in my view shouldn't happen - assuming diversifiable risks are diversified away properly in either case.
"At some point you have to do a thing, which usually means using some system but also being aware of its limitations." Or some heuristic. That sometimes seems to make Taleb's distinction indeterminate.
My takeaway has always been, let different people try different things with volunteer participants, whenever that is possible. Then the hard cases are just those where it is difficult for pluralism to work, because the circumstances absolutely demand a unified response. Of course, as Covid has demonstrated, we do not currently have an alternative better approach to such situations, although sometimes people try to use compulsion to approximate it. Compulsion is fragile?
I have to question Taleb's statement on jet engines. The first patent on a gas turbine was issued in 1791, and the thermodynamics behind them were worked out by 1900, AIUI. I'm sure there is some aspect of their operation which was solved empirically before the theory was worked out, but it absolutely was not a matter of "people just tinkered with it before they understood how it worked".
OK. This bothered me enough to go looking for Taleb's source, and Taleb screwed this one up. The source in question isn't claiming that nobody understood what was going on at all. There was definitely theory for the basic operation of the jet engine. But a lot of the problems of making a jet engine work had to be solved practically, which surprises nobody who knows about this kind of stuff. He cites his son not knowing this as evidence of something, but I'm also an aerospace engineer, and we didn't talk a whole lot about history in propulsion class. What I know about this comes from personal reading.
Isn't there the problem that fluid dynamics follows the Navier-Stokes equations, which are really hard to solve, and so you have to use experiments?
Yes. Lots of fluid dynamics is fundamentally unsolved, and possibly unsolvable. We use CFD which takes an approximation with a lot of points, and things like wind tunnels and experiments. But this is obvious to anyone with a passing knowledge of fluid dynamics. If that's what Taleb is claiming, then he's saying something banal that doesn't mean what he thinks it does.
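For flavor, here's a toy version of that idea: just 1D diffusion (the viscous term in isolation), not Navier-Stokes itself, but the same basic move CFD makes. You approximate the field at a grid of points and march forward in small time steps.

```python
# Toy 1D diffusion solved on a grid: an initial spike of heat/velocity
# smooths out over time. A crude stand-in for what CFD does at scale.
nx, nt = 50, 500
dx, dt, nu = 1.0 / nx, 1e-4, 0.1       # grid spacing, time step, viscosity

u = [0.0] * nx
u[nx // 2] = 1.0                       # initial spike in the middle

for _ in range(nt):
    u_new = u[:]
    for i in range(1, nx - 1):
        # Central-difference approximation of nu * d2u/dx2
        u_new[i] = u[i] + nu * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u = u_new

print(max(u))   # the spike has diffused and flattened
```

(The step sizes are chosen to satisfy the explicit scheme's stability condition; pick them badly and the "solution" blows up, which is its own little lesson about theory guiding practice.)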
Why do I get the feeling that might apply to a number of his claims?
As someone who used to do quant finance, he has a pretty bad record for his claims in that area too, despite the fact he used to be a trader. He definitely has a recurring issue of accusing others of being wrong without understanding their field.
I've heard that contrary to Taleb's claims, Value At Risk and other fragile formulae are no longer actually used much to make decisions in the finance industry. They've moved on to better methods. True?
The Navier-Stokes equations may be absolutely insoluble in the strict sense. This isn't a problem, because we *can* use experiments. And simplified models. And, yes, our understanding of fluid mechanics. "Understanding" and "rigorous analytical solution" are two different things.
The people who developed jet engines were not just "playing around with aircraft engines until jet engines sort of happened"; there's no plausible amount of "playing around" with a reciprocating-piston internal combustion engine that gets you a gas turbine optimized for exhaust thrust. The people who built the first jet engines understood what they were doing.
They probably understood jet engines before they machined their first turbine blade, better than Taleb understood antifragility after he finished writing a book about it. Taleb is in the business of Thinking Real Hard with his Mighty Brain until he has come up with something he believes is true and important and that he can sell; I speculate that he thinks this is what other smart people like scientists and engineers ought to be doing (possibly with a side order of Feeding the Numbers into a Computer), and if we're not doing that then we must be just flailing around randomly. In which case, no. You have to do experiments if you want to design useful engines, and you have to understand the problem if you want to design useful experiments. Michelson and Morley weren't just playing around with mirrors until they accidentally vanquished the luminiferous aether.
Yes, the theory is fairly old, and steam turbines were well understood by 1900. The hard part for a jet engine is that the compressor blades have to move at least 3/4 of the speed of sound in order to compress air efficiently, which requires spinning very fast. For the compressor to not fly apart at that speed & temperature it has to be made of exotic high-temperature alloys. These only became available starting around WWII.
Which is why the turbojet engine was separately created twice: once by von Ohain in Germany (hydrogen fueled!) and later (albeit patented prior to von Ohain's work) by Whittle in Britain (hydrocarbon fueled).
On the discovery of things, I would argue that we need to be very cautious. Accidental discoveries make for fun stories, and are thus remembered. But we don't remember the thousands of things needed to make cars evolve from what they were to what they are.
We don't know who, when, or even how many times discoveries were made by following theory, because following a theory to find something makes that something "not a real discovery/invention": if you follow a map that tells you there is a river here, and there is indeed a river here, nobody cares. We remember how, and by whom, the first vaccine was made, but most of us are incapable of naming those who used Pasteur's ideas to eradicate other diseases.
I can think of a few examples where theory definitely preceded invention:
Maxwell's theory of electromagnetism predicted radio waves, which were later confirmed
Einstein (?) predicted the feasibility of atomic bombs
Actually... I'm gonna stop the list here already, because basically every technology I can think of was preceded by theory: the computer (see eg Turing and von Neumann), camera sensors, ...
(Also, think about how precise your theory of optics has to be in order to produce glasses that correctly correct vision deficiency without chromatic aberration and all the other problems. Is optics fragile?)
Einstein is generally credited with E=mc^2, which shows how much energy is locked up in matter. Though Oliver Heaviside came up with the same equation fifteen years earlier under the less general assumption that the only fundamental force was electromagnetism.
However, the energy released by hydrogen bombs is only a hundredth of this amount, and atom bombs about four times less again. Atom bombs were made because means were found to encourage 'autocatalytic combustion' of unstable elements. In principle, their relation to E=mc^2 is no more than that of a coal fire, though a coal fire releases only about a millionth as much of the fuel mass as an atom bomb. The equation just indicates the maximum conceivable power of any bomb of the same mass, even if it uses technologies unknown to us.
So the theory didn't predict the bomb - it incentivised it and limited the parameter space, perhaps. But the bomb was developed out of observing the properties of uranium and heavier elements.
The theory did predict the bomb. The atomic bomb wasn't discovered by accident or by experimenting randomly. What Thomas is referencing is, I think, the letter Einstein wrote about the feasibility of the atomic bomb, a letter that proves the atomic bomb was applied theory, not a chance discovery.
The "properties" of uranium you speak of can't be measured without an advanced theory of how matter works.
The famous letter was written by Leo Szilard in collaboration with Edward Teller and Eugene Wigner, they just pulled Einstein in at the last minute to sign it because he was more famous and better connected.
Oliver Heaviside is so underrated!
From https://en.wikipedia.org/wiki/History_of_Maxwell%27s_equations:
> Oliver Heaviside studied Maxwell's A Treatise on Electricity and Magnetism and employed vector calculus to synthesize Maxwell's over 20 equations into the four recognizable ones which modern physicists use.
"The Maxwellians" (B. J. Hunt) is a semi-fun read if you don't know it.
E=mc^2 *did* show that the loss in mass during radioactive decay came out as energy; that was the key insight that got people thinking "oh shit, this is a doomsday device if harnessed", rather than just "huh, this is weird".
The energy from radioactive decay was observed before the mass defect was adduced.
Apologies.
That's not what E=mc² is used for. Sure, the E of a nuke is much less than the m (times c²) of the nuke, but you're applying the wrong m.
E=mc² applies to the difference in mass between the reactants and the products. To use a standard example, one of the nuclei used to make nukes is ²³⁵U (uranium-235). The reaction is ²³⁵U + n (neutron) --> ¹⁴¹Ba (barium-141) + ⁹²Kr (krypton-92) + 3n.
m(²³⁵U) + m(n) > m(¹⁴¹Ba) + m(⁹²Kr) + 3m(n)
The *mass deficit* becomes energy, and that is what is released in a nuke.
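Running the numbers with approximate tabulated isotope masses (standard values, quoted to a few decimals) makes the scale obvious:

```python
# Back-of-the-envelope mass-defect calculation for the fission reaction above.
# Masses in atomic mass units (u), approximate tabulated values.
u235 = 235.043930   # uranium-235
n    = 1.008665     # neutron
ba141 = 140.914411  # barium-141
kr92  = 91.926156   # krypton-92

defect = (u235 + n) - (ba141 + kr92 + 3 * n)   # about 0.186 u
energy_mev = defect * 931.494                  # E = mc^2, with c^2 = 931.494 MeV/u

print(round(defect, 4), round(energy_mev, 1))  # roughly 173 MeV per fission
```

About 0.08% of the mass goes up in energy, yet that's roughly 173 MeV per nucleus versus a few eV for a chemical bond: tens of millions of times more energetic than burning coal.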
Though of course, this doesn't directly predict nuclear weapons, but this was a necessary first step. It along with the discovery of nuclear fission motivated the Manhattan Project. Pretty theoretical if you ask me.
The only thing needed to see the possibility of any bomb is (1) a large energy release, and (2) a way to rapidly auto-catalyse it (or just catalyse it, I suppose, to be maximally general). It's as true of the atom bomb as any other. If nobody knew that fission released a lot of energy, Einstein's mass equivalent of energy would have implied a large energy release (for the fusion bomb, we can say the prediction came before the experiment, but by that stage the issue was certainly well understood). However, it seems obvious that it was already known that fission released a great deal of energy.
"Though of course, this doesn't directly predict nuclear weapons, but this was a necessary first step."
How is this a necessary first step for the atomic bomb, without also being a necessary first step for gunpowder?
If you've got Meitner and Fermi, then you've got atom bombs even if you mistakenly believe the mass remains unchanged during nuclear fission and the phlogiston fairy is just extra-generous when free neutrons get involved in the chemistry. Bomb-makers don't care where the energy comes from, so long as there's experimental proof that some process reliably generates lots of heat quickly. They also don't care that some theory says that a golf ball contains a megaton of city-busting energy, if there *isn't* an experimentally verified process for getting that energy out in a hurry.
²³⁵U + n (neutron) --> Kaboom, is the only equation that matters if what you care about is blowing up cities; the rest only matters if you care *why* there's a crater where the city used to be. And I'm pretty sure Leslie Groves didn't give a damn about that, any more than the Chinese Prometheus who tinkered about and invented gunpowder.
The laser is another very good example, where we knew in theory that it could work before we managed to build the first one, fifty years after it was theorized.
In real life, theory feeds on experimentation, and vice versa.
Let me just say that my experience working in geointelligence showed me acutely the importance of theory. We could only do what we did because of extremely precise earth models and ephemeris readings from the vehicles combined with a level of understanding of physics that is awe-inspiring to see. I remember Trump rather callously declassifying an image from a system I spent the better part of three years developing the image formation software for and supposed "experts" in imaging not believing we could achieve GSD at that resolution just due to atmospheric effects. They have absolutely no idea what we can really achieve and it's all possible because some extremely smart people out there have spent the last four decades perfecting the physics. There is no practical way to achieve this kind of thing with tinkering alone when it costs billions just to get a single vehicle into orbit.
Granted, the cost is coming down a lot with smaller form factor vehicles and reusable rockets, but we've been doing this for a long time.
Why "callously"?
>the computer (see eg Turing and von Neumann)
Lovelace & Babbage
I don't see how relativity (Einstein) has boo to do with atomic bombs. Everyone knew at least by 1917 or so, when Rutherford first showed that nuclear transmutation was possible, that the force holding the nucleus together had to be tremendously strong (to be able to hold identically-charged protons within a femtometer of each other -- the Coulomb repulsion is staggering). If it could be released, obviously it would be energy on a scale that dwarfed anything chemical. The fact that this energy release implies a mass defect (what special relativity tells us) is kind of neither here nor there; it would still be true and important even if relativity was nonsense or undiscovered.
But up until Meitner and Frisch worked out (in 1938) that what Otto Hahn had unexpectedly observed was a *natural* process of fission, which could take place *without* the enormous input energy per particle everyone had had to use before, there was no plausible idea on how to unlock that energy.
So that one actually is a good example of serendipity. If Hahn hadn't been bombarding *uranium* with neutrons, and had instead been using one of the 90-odd natural elements that *don't* easily fission, the discovery wouldn't have been made and nobody would've had a clue that deliberate nuclear fission on a military scale was possible.
Although one could argue that people were messing around with neutrons all over the world anyway and sooner or later someone was bound to stumble over fission by thermal neutrons, but it could have happened later, perhaps many years later depending on the unexpected twists and turns of what interested people.
Special relativity was 1905, and his E=mc² paper came out the same year. 1917 was after general relativity, so E=mc² was definitely published by then.
Yes of course, but neither special nor general relativity say anything at all about the strong nuclear force, and both are classical theories from which it is impossible to deduce that mass could transform to energy *within the same reference frame* (which is what we're talking about when we talk about mass defects and radioactivity). The only way m turns into E in a classical theory like pure relativity is when you change references frames.
The theoretical applicability of E=mc^2 to nuclear reactions only becomes apparent when (1) you have an idea nuclear reactions are possible, because you recognize the existence of the strong nuclear force -- Eugene Wigner proposed its existence in the 1930s -- and (2) you have a quantum mechanics which you can make relativistic -- Dirac did this in 1928 -- and discover that particles can transform into other particles, i.e. mass is not conserved *even in the same reference frame*.
Relativity, or more precisely relativistic quantum mechanics (which was the work of people other than Einstein, since Einstein didn't really like QM), was largely *retrofitted* onto the observations of radioactivity and nuclear transmutation (in the 1890s through early 1910s) and later fission (1930s-40s) to provide a satisfactory theoretical explanation. But from what I understand it played no role in driving the initial recognition that (1) nuclear reactions could release a lot of energy (which one might reasonably attribute to Rutherford's experiments on radioactivity and nuclear transmutation), and (2) nuclear reactions could be sparked by low-energy particles (which should be attributed to Otto Hahn's lucky choice of an experimental substrate, and Lise Meitner's and Otto Frisch's realization that he had observed fission).
As I said, I think this is one case where experimental noodling around led the way, and at that there is a nontrivial element of random chance involved, since natural fission is a pretty rare form of nuclear decay, and it was just luck Hahn stumbled across it when he did.
That's not to say theory played no role at all, of course, but if anything it would be the early theories of the structure of the nucleus and what was holding it together, which are rooted more in early quantum mechanics than relativity per se.
That's just not true. You don't have to go quantum to convert mass to energy, as mass IS energy in its rest frame. An example would be a box of relativistic classical particles. The mass of the box of particles would be greater than the masses of the individual particles and the box added together.
"Einstein (?) predicted the feasibility of atomic bombs"
Good that you included the question mark. As Gerry Quinn notes, Einstein "predicted" the feasibility of the atomic bomb in the same way that he "predicted" the feasibility of the hand grenade, the antimatter bomb, and the hafnium bomb.
The people who made the actually useful predictions were primarily Lise Meitner and Enrico Fermi. Einstein added name recognition and gravitas when it came to convincing mundanes like FDR, who had a big checkbook but probably didn't know who Meitner and Fermi even were.
Another couple of examples of theory preceding engineering.
Shannon's noisy-channel coding theorem. Shannon established that it was mathematically possible, first, to compute the capacity of a noisy channel and, second, to exploit essentially all of that capacity. The next 40 years were the engineering half of the field slowly marching toward the Shannon limit.
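To make the ceiling concrete, here's the Shannon-Hartley capacity formula with illustrative channel numbers (the bandwidth and SNR below are just plausible made-up figures for a phone-line-like channel):

```python
from math import log2

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley limit: C = B * log2(1 + SNR)."""
    snr = 10 ** (snr_db / 10)       # convert dB to a linear power ratio
    return bandwidth_hz * log2(1 + snr)

# A phone-line-like channel: ~3 kHz of bandwidth, ~30 dB SNR.
c = shannon_capacity(3000, 30)
print(round(c))   # a hard ceiling of roughly 30 kbit/s
```

Theory drew that line decades before the engineering could touch it; late dial-up modems got remarkably close to it.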
The RSA cryptosystem rests on mathematics that was hundreds of years old by the time it was employed (i.e. the Chinese remainder theorem and Fermat's little theorem).
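A toy sketch of the idea with tiny illustrative primes (real keys use ~2048-bit moduli; the modular-inverse form of `pow` needs Python 3.8+):

```python
# Toy RSA. The correctness of decryption rests on Fermat/Euler --
# centuries-old number theory put to work.
p, q = 61, 53
n = p * q                     # 3233, the public modulus
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi

msg = 65
cipher = pow(msg, e, n)       # encrypt: msg^e mod n
plain = pow(cipher, d, n)     # decrypt: cipher^d mod n
assert plain == msg           # the old theorems guarantee round-tripping
```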
As with every bit of writing I have seen on this blog, I'm thrilled by the original perspective, the beautiful language, the humor... I am also excited to find that in a previous collection of your essays, you have addressed the question of fats (the different types, healthy/unhealthy etc).
I mean, he kind of completely misses the point of why we science (TM) which is not at all to build stuff and create new technology.
At least, not for all of us.
I think he's perfectly aware that there are a bunch of nerds who do science as a hobby. But if you tell Bobby Taxpayer "hey, I'll take a chunk of your paycheque and use it to give Dr Science a salary and some expensive machines so he can cultivate his hobby", that might not go over so well.
Does anyone have info on how black swan-ish funds have performed generally? I understand Spitznagel publicized his amazing performance at the start of COVID, but Taleb writes like black swan investing is a billion dollar bill lying on the sidewalk. Yet my sense is black swan followers have not by and large made a killing.
That's one of Falkenstein's critiques of Taleb: Taleb's own funds haven't done that well.
I didn't know what you were referencing so I looked it up and boy was that great. Falkenstein really nailed the stuff which bothered me about Taleb's books.
http://falkenblog.blogspot.com/2012/11/taleb-mishandles-fragility.html
An analysis of whether Taleb made more money from the black swan strategy or *The Black Swan* book:
https://www.reddit.com/r/slatestarcodex/comments/eot23o/is_there_proof_that_nassim_taleb_is_a_succesful/fefck9e/
Using a not-very-serious methodology, it comes up with a tie.
Black Swan funds like Universa are not designed to make a killing. Comparing them to any index misses the point. They are designed as an insurance policy and are supposed to make up around 3% of your portfolio.
A 97/3 split with the S&P 500 has done better than 100% in the S&P 500 since 2008, at least.
I was happy to read a review of this book, because there is no chance I’ll ever pick it up myself. I tried to read The Black Swan a few years ago and quit halfway through. I’m used to reading pompous academics, but Taleb was just over the top. Plus there were weird contradictions, like how he would go on and on about how useless and stupid philosophers are, and then praise Karl Popper and Bertrand Russell. Some people have laser intellects. Taleb is more like an old blunderbuss stuffed full of nails, rocks, and too much gunpowder.
His over the top praise of Popper made me think that he's really trying to get a jab in edgewise at George Soros, who is supposed to be the famous investor whose ideas magically all came from Popper.
"Medieval European architecture was done essentially without mathematics - Roman numerals (the only numerals anyone had at the time) were too unwieldy to add or subtract"
Not an expert in medieval architecture, but I am pretty sure this is total nonsense, as long as geometry is included as part of mathematics. Getting two ends of an arch to meet requires decent geometry. And making two lengths of wall match without adding is probably impossible.
Doing basic arithmetic with Roman numerals isn't hard (in fact, adding in particular is super easy!) - you aren't any good at it, but that's cause you haven't practiced it ever. How many times have you added Arabic numerals? Do it that many times with Roman numerals, then tell me it's "too unwieldy".
It's true that they were built with rules-of-thumb and principles-of-practice rather than a defined theory of weight, mass, gravity, and structural engineering (in fact, a lot of stuff in the 19th and early 20th century was built with pretty ad hoc theory to back it up - it was extensions of stuff that had previously worked and been well measured). But to say it was built without mathematics seems ludicrous, and I'd want to read something with a LOT of evidence to back that up.
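To back up the claim that Roman-numeral addition is easy: it's essentially "expand, pool, and carry," with no place-value gymnastics at all. A sketch (assuming standard-form numerals like IV, not sloppy forms like IL):

```python
# Adding Roman numerals: expand subtractive forms, pool the symbols
# of both numbers, carry upward, restore the subtractive forms.
VALUES = ["M", "D", "C", "L", "X", "V", "I"]
SUBTRACTIVE = [("CM", "DCCCC"), ("CD", "CCCC"), ("XC", "LXXXX"),
               ("XL", "XXXX"), ("IX", "VIIII"), ("IV", "IIII")]
CARRIES = [("I", "V", 5), ("V", "X", 2), ("X", "L", 5),
           ("L", "C", 2), ("C", "D", 5), ("D", "M", 2)]

def roman_add(a, b):
    # 1. Expand subtractive forms so every symbol is purely additive.
    for short, long in SUBTRACTIVE:
        a, b = a.replace(short, long), b.replace(short, long)
    # 2. Pool the symbols of both numbers -- this IS the addition.
    counts = {sym: (a + b).count(sym) for sym in VALUES}
    # 3. Carry: five I's make a V, two V's make an X, and so on.
    for small, big, k in CARRIES:
        counts[big] += counts[small] // k
        counts[small] %= k
    # 4. Write largest-first, then restore subtractive forms.
    out = "".join(sym * counts[sym] for sym in VALUES)
    for short, long in SUBTRACTIVE:
        out = out.replace(long, short)
    return out

print(roman_add("XIX", "XLII"))    # 19 + 42 -> "LXI"
print(roman_add("MCMXCIX", "I"))   # 1999 + 1 -> "MM"
```

Done by hand on a wax tablet, steps 2 and 3 are just "write the symbols together and trade up," which is plausibly easier to teach than carrying in positional notation.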
I'm glad I ctrl-Fed "Roman" before commenting, because I was about to say something very similar! I got curious about this exact topic several years ago and yeah, Taleb is very wrong on this. Arithmetic was difficult and they didn't know algebra, but they were great at geometry.
A military engineer called Vitruvius wrote what is considered the Big Book of Roman Architectural Theory called *De Architectura*. There are all sorts of little mathematical tricks for architecture in it, all based on geometry. (Though I would say that's the minority of the content; if I remember right, there was much more about which materials to use for what purpose and so on.) This book was really influential even after the Romans were dust! In all, a pretty poor example for someone trying to argue that theory is useless and ineffective.
Alright, even though nobody cares I decided to fact check myself. Corrections (that I think actually reinforce the broader point):
-De Architectura is ten books, not just one.
-Book nine is ENTIRELY about the science and math underlying architecture!
-Even in the other books, he's very concerned with theory in addition to practical advice and descriptions of techniques/tools.
Excellent research. Two points for people reading this who may not be familiar with the ancient world:
1) The word "book", when referring to ancient Greek and Roman texts, is basically equivalent to a modern chapter or section. The Iliad is 24 "books", but it's still only one book in the modern sense.
2) For the most part, the Romans didn't use their numerals for mathematical operations the way we use ours. Simple calculations were likely memorized and complicated calculations would be done using an abacus.
"[A]ccording to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division."
I don't buy that. You don't have to understand any math to divide a basket of apples in three, and you can adapt the same principles to Roman numerals if you have to.
Most people don't know how to do long division even now. As for extracting square roots, probably 1% at most know the 'official' method. I know there is a formal method for cube roots but I never learned it myself - that doesn't mean I can't calculate a cube root by a series of approximations if I have to. So can millions today. And maybe millions couldn't have done cube roots in medieval Europe, but more than five could have done division.
Division is easy if you don't weirdly insist on decimal notation. What's 17 divided by 69? 17/69. Done. What's 17 divided by 69 multiplied by 3? (17x3)/69 = 17/23. All very easy and known back to the Romans, at least. It's only when you absolutely insist that all your fractions have denominators that are powers of 10 that things get computationally challenging.
Mind you, it's true that living within a realm of rational numbers means you can be bemused by some nasty little mason impertinently asking you to write down the corner-to-corner distance of a 1 cubit square block.
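Python's `fractions` module makes both halves of this point nicely: the rational arithmetic stays exact and self-reducing, and the mason's diagonal is precisely where it breaks down. (The 577/408 value below is a classic ancient approximation of sqrt(2).)

```python
from fractions import Fraction

# The arithmetic from the comment above, done exactly as written:
x = Fraction(17, 69)
print(x * 3)              # 17/23 -- reduction to lowest terms is automatic

# ...and the mason's impertinent question: the diagonal of a 1-cubit
# square is sqrt(2), which no fraction can hit exactly.
approx = Fraction(577, 408)
print(float(approx * approx))  # very close to 2, but never exactly 2
```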
This works fine in the example you mentioned, but as soon as your numbers are large enough, you bump your head into prime factorization, which is notoriously computationally intensive. Comparing fractions is also not particularly easy. So yeah, there is nothing "weird" in insisting on decimal (or positional) notation.
I can't imagine any practical engineering problem which would result in serious difficulty reducing your fraction to lowest terms. That implies staggering levels of precision needed.
I also disagree that comparing fractions is difficult. It may be so for people who are very used to decimals, but that's just a QWERTY v. Dvorak argument and a priori unpersuasive. The fact that the ancient world *entirely* used fractions in their everyday practical engineering problems is all by itself pretty decent evidence that fractions are very easy to compare -- if you're used to them.
> It's only when you absolutely insist that all your fractions have denominators that are powers of 10 that things get computationally challenging.
This isn't true at all. Traditional Egyptian mathematics found fractions challenging while allowing denominators to have any value. (The only numerator allowed was 1.)
What makes you say the Egyptians found fractions challenging?
My first argument that fractions are easier than decimals is simply the evidence that fractions dominated noninteger math throughout the centuries when most math had to be done in your head. There's a very good reason ancient number systems had duodecimal annexes (like Roman fractions) or even sexagesimal (the Babylonians), and why so many systems of measurement are base 12. People didn't do that because they were dummies who couldn't imagine the obvious benefits of decimal math.
I'd say a decent argument can be made that the architects and engineers of the time did a lot of stuff empirically less because they *couldn't* do the math than because the precision of the math wasn't sufficiently matched by the precision of the materials and instruments available at the time.
I mean, it's not much good calculating the perfect proportions of your stone arch if your model of the properties of granite differs nontrivially from the properties of the granite *you can actually source* and if the instruments the builders must use to cut, dress, and build it won't allow tolerances of 1mm to be specified anyway.
I think anyone who does practical amateur carpentry or stonemasonry himself understands this. Sure, you can calculate exactly and precisely on your computer the dimensions of each piece, to 20 decimal places if you like, but unless you are using some kind of phenomenally expensive precision-cut lumber and/or stone, it's pointless. You might as well do some approximate calculation with paper and pencil, because you're going to have to fudge things a little when it comes time to actually build anyway. You can't guarantee a cut (with your home table saw) is going to be sufficiently accurate, that the wood won't have some tiny warp to it, the stone might be a little off here and there, et cetera.
Re: strategies that succeed by taking things away instead of adding them.
I agree that this isn't always the right approach, but I like the idea so much that I've been trying to collect where it applies. So far I have:
* Probabilistic conjunctions (Occam's razor)
* Mindfulness meditation (to reduce thoughts that cause suffering, intrusive thoughts, etc)
* Conciseness in writing
* Software written with suckless / unix philosophy in mind
* Simplicity in mechanical systems
* Exercising the 5th amendment to avoid self-incrimination
* Exercising restraint in art to increase impact (examples: powerful film scenes lacking score; also I write down the song "Trio" by King Crimson as an example where the drummer was praised for not playing anything on the track)
* Tidying up your room
* "Too many cooks" -- in arguments, in artistic endeavors, etc
* Traveling light, allowing for traveling faster and freer (applies anywhere from taking a plane trip, to photons which literally "travel light" and move faster than anything else).
* Operational Security -- Reduce the number of components in your identity to avoid associations that compromise you
* InfoSec -- Reduce the number of components in your system to reduce your attack surface.
* Martial Arts -- Sometimes the best strategy is to wait for your opponent's actions and use their momentum against them.
* "One bad apple spoils the bunch" -- So reduce your number of apples.
* "Nothing to Lose" -- Freedom resulting from having little.
* Large concentrations of population as ripe for epidemics.
Edit: been trying to collect *situations where it applies.
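The first bullet can be made quantitative: assuming independence, each extra conjunct multiplies in a factor at most 1, so a claim's probability can only fall as hypotheses are added. A toy sketch (the per-hypothesis probabilities are made up for illustration):

```python
from functools import reduce

# Hypothetical per-hypothesis probabilities for a three-part claim.
parts = [0.9, 0.8, 0.7]

def conjunction(ps):
    """P(A1 and ... and An) assuming independence: the product of the parts."""
    return reduce(lambda a, b: a * b, ps, 1.0)

# Probability of the claim as conjuncts are added, one at a time.
running = [conjunction(parts[:k]) for k in range(1, len(parts) + 1)]
print(running)  # each step multiplies by <= 1, so the sequence never rises
```

This is the probabilistic core of the razor: every hypothesis you can remove without losing explanatory power strictly raises the prior of what remains.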
These are good, but they are often the opposite of anti-fragility - many of these are about producing a distinctive and unique work that stands out in one way, even if it's hated by most and ignored by others, rather than a crowd-pleasing thing, which I would think anti-fragility is about.
Ah yeah these are just specific to the idea of "succeeding by reduction", I didn't have antifragility in mind.
Although, does your comment apply only to the artistically-oriented points on the list? Some of them, for example the software and mechanical engineering examples, I think clearly succeed better in their functionality because of their lack of components.
These are all instantiations of the Delphic Oracle's central maxim: "nothing too much (or too little)." It applies for all action -- since "too" is by definition to be avoided.
The implication is that all action entails some sort of balance -- if I flip too much or too little, I do not actually flip, and so on.
Maybe so, but it seems to me the Delphic maxim is a little too broad for this idea, for two reasons:
1) I think "nothing too much" and "nothing too little" deserve to be their own classes of guidelines, to be analyzed separately
2) The situations I've described I think are more generally described as "minimizations" than as "balances"; ie "get as close to 0 as you can". I think you raise a good point though, that even in most of these things you can't go *fully* to 0 or the thing doesn't work (ie "too many cooks"... you need at least one cook). But still, this is a special kind of balancing act, in which you aim for the most minimal balancing point that works, as opposed to the most maximal one that works.
I think these occupy an important sub-category of the Delphic maxim, because if given a choice between maximizing and minimizing to solve a problem, all else being equal, we should *prefer to minimize*, in most circumstances. There are multiple arguments for why that is, but I guess the broad main point is that it simply costs less in resources to acquire and maintain fewer things than more things.
In summary, I think you're right that these fit under the Delphic maxim, but they fit even tighter under a more specific, preferred subset of it.
1) the primary insight of the maxim is grouping too much and too little imo. The Greek word used is ἄγαν [agan], which strictly just means "too" and can function as an adjective and an adverb. The practical implication here is that all failure is alike. Drinking too much water and drinking too little water are, in a certain way, wrong for the same reason -- both ignore the deeper reality of human hydration requirements.
2) The most minimal and the most maximal will be the same if you are strict about what "works" means. There is a proper amount of cooks you need in the kitchen to do the job of cooking well. That is both the maximum and the minimum (maybe better, the optimum) for cooking well, and you can calibrate to that optimum by asking whether you are "agan" in number of cooks.
3) If I hear what you're saying, you're saying that in some cases, it's better to err on one side of agan or another. E.g. better to buy bananas when they're too green rather than too brown! I agree that this works, but note that this will imply a parallel subset that should be just as powerful.
* Occam's Razor -- most possible explanation in fewest possible hypotheses. Parsimony in itself is not valuable if it does not explain.
* Mindfulness meditation (to reduce thoughts that cause suffering, intrusive thoughts, etc) -- a balance between peripheral and conscious awareness. Too much mindfulness leads to mind wandering, too much focus leads to getting lost in thought. Cf. Scott's review of "The Mind Illuminated"
* Conciseness in writing -- same as Occam. Conciseness only good when it conveys meaning. Otherwise it's just terse.
* Software written with suckless / unix philosophy in mind -- dk
* Simplicity in mechanical systems -- I imagine they actually have to do the job -- i.e. pure simplicity is worthless unless you eg move the rock. But given that you move the rock, simplicity is a virtue.
* Exercising the 5th amendment to avoid self-incrimination -- Silence is a virtue but only when paired with speech. One can talk too much or too little. It seems like proper speech is a balance aimed at communication again perhaps?
* Exercising restraint in art to increase impact (examples: powerful film scenes lacking score; also I write down the song "Trio" by King Crimson as an example where the drummer was praised for not playing anything on the track)
* Tidying up your room -- a room can be both too tidy and too messy. Funnily, both seem to lead to stress (cf. ADHD and OCD).
* "Too many cooks" -- in arguments, in artistic endeavors, etc -- covered, but to be clear, teamwork produces great dishes; there is a proper amount of cooks to create any given dish (and always at least one) :)
* Traveling light, allowing for traveling faster and freer (applies anywhere from taking a plane trip, to photons which literally "travel light" and move faster than anything else). -- out of my depth, but a general and valuable point is that freedom is not good simpliciter, nor is order, and that good government is a balance between freedom and order (among other things). Libertarian philosophy seems to demonstrate the "leaning" principles you want -- i.e. as little order as possible, while still preserving freedom. (There's a point at which not having a police force is actually detrimental to order.)
* Operational Security -- Reduce the number of components in your identity to avoid associations that compromise you -- ...while still expressing your identity :)
* InfoSec -- Reduce the number of components in your system to reduce your attack surface. --- dk!
> you're saying that in some cases, it's better to err on one side of agan or another ... I agree that this works, but note that this will imply a parallel subset that should be just as powerful.
I think this is where our disconnect is. I suspect that, not just sometimes, but in most cases we look at, the optimal point will be far closer to minimization than to maximization, and that those "minimization" cases deserve special attention. I feel this way for two reasons:
1) The fact that, in the physical world, acquiring and maintaining more stuff generally entails more cost to the holder. It's less expensive to have less than to have more, in almost any situation you can name. This will cause a natural asymmetry in favor of "prefer to minimize".
2) Empirically, looking at my list above, these things really do comprise a huge proportion of the things that are materially relevant to my life. If there's an equally large and powerful set of things where it's best to get as much as you can (without hitting some threshold), either it doesn't apply as much to me or I'm just missing it.
I see you've given responses to each of my things above. I'll try to reply to that in a separate comment.
Some of the examples seem a little weird to me. It seems like the term "anti-fragile" should be applied to a system of some sort. For example I'm not really sure how exercise is anti-fragile; surely the claim should be about the human body? In which case it is anti-fragile in some ways, as lying in bed all day isn't great, but not in others, as raising the internal temperature a mere 5 degrees can be deadly.
The case of the banker/taxi driver also doesn't seem right. He frames John as doing fine until something bad happens and he gets laid off, while George can adapt his business if it's slow in one neighborhood or something. But those seem to be very different scales of hardship. Something that causes a bank to fail very likely will affect taxi drivers pretty badly too (think of the current pandemic! Taxi drivers are much worse off than bankers). And while he allows George to change his business to a courier, he neglects the possibility that John can also get a different job.
I guess he's using this as a parable to show the benefits of allowing volatility, so there may be 50 Georges and some will go bankrupt while the others will flourish. But it seems a little odd especially in light of his real world examples. It's also not clear when you can turn the term "anti-fragile" back on itself. In the case of the forest fires, the periodic small burns prevent a large all-encompassing fire, so he presumably calls the forest anti-fragile. Alternatively, without these regular, periodic burns, everything will burn down at once. Does this make it fragile with respect to the small burns? A few measly humans with water hoses came in and ruined everything. Others have pointed out that this is also apparently the case for Sparta.
I'm not sure how much weight it's given in the book, but it's also worth remembering that anti-fragile systems work well in volatility, while fragile systems work better in stability. It's not said outright here, but he seems to imply that we should be making our systems more anti-fragile. And given black swans and all that, it's probably not bad to keep an eye on it. In the end though, they do have a cost, just like drinking out of a cup of silly putty would be a pretty terrible experience.
As a side note on the Lindy effect: he seems to conflate "lasting longer" with "better". For classical texts, people in antiquity were probably about as good at stories as people now, so you'd expect the stories that last to be preselected to be good. For a physical object, it probably undergoes about the same amount of stress regardless of how old it is, so older ones are probably sturdier. But I'd rather use a phone from today than one from 1960, even if that one is sturdier. Same thing with studies. It feels like the classic economics joke of not picking up money on the ground because if it was real, someone else would already have done so. If this older thing weren't as good, someone would already have discarded it. Maybe, but sometimes that someone is you!
The taxi driver is more fragile than the banker. If the bank goes under, the banker can probably get a job at another bank. If the taxi driver loses his car, he's screwed. And sure, he could switch to some other kind of job, but so could the banker.
It's about the system
Many small agents in a system bring with them volatility and incremental or random enhancements, making it more antifragile than a centralized system such as wage-earning. For the individual actors maybe it isn't as clear-cut, but if you compare a Bank Teller vs a Driver (which I think is the better comparable than a Bank*er* vs Driver) it makes more sense.
Banks can go under; if you're a 55 year old bank teller you don't stand much of a chance to get another job at a senior wage level if your employer goes under.
Drivers are (more-or-less) independent and can manage their own fate. Some fail, some lose everything, but the overall system improves as the failures (those who drive drunk, don't take care of their car, etc.) fall out.
Antifragility is a cool concept and it makes me feel like going out and exposing myself to disorder to get stronger. But aside from coolness I don't think antifragility is necessarily better than plain old robustness. For example, my bones get stronger under stress (eventually) and titanium gets weaker. But slightly fractured titanium is still probably stronger than a weightlifter's bones. And taking the idea to the extreme could lead to hoarding canned water and VIX instead of profiting from a calm period.
Well you know what they say, what doesn't kill you makes you stronger. And of course the corollary, that which doesn't make you stronger, kills you.
The corollary to that is that, while alive, it's impossible to not continuously get stronger.
One thing bugging me here is Taleb's insistence that "antifragile" is different from "robust" -- I mean, certainly, antifragile is different from Taleb-robust, because he's defined them that way. But I don't think Taleb-robust is the same thing as robust-in-the-ordinary-sense, which seems to have quite a bit of overlap with what Taleb calls "antifragile" (e.g. the options example -- benefitting from upside but being protected against downside would ordinarily be called "robust"). This wouldn't be a problem, except that as best I can tell, Taleb doesn't seem to notice that his use of the word differs from common use, and so just says "antifragile is not the same as robust", leading to a lot of confusion.
Presumably he's just making the observation that there are different kinds of "robust." You can build a Maginot line or you can build mobile armor -- they are robust in different ways. You can build fast fighter jets, well-armored fighter jets, or stealthy fighter jets -- they are robust in different ways. You can build your muscle strength and mental endurance, or you could build your knowledge of mechanical advantage and set of handy sharp tools -- also robust in different ways.
After that, you can observe that there are situations in which one type of "robust" is...well, more robust than another, since "robust" at its core has a purely functional definition -- that which survives the challenge better.
> according to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division.
This struck me as very wrong. A post on Skeptics Stack Exchange agrees: https://skeptics.stackexchange.com/questions/15130/did-only-a-handful-of-people-in-europe-know-how-to-do-division-before-the-13th-c
The Byzantines, and the Muslims in Spain, both certainly knew arithmetic, and higher math as well. But even if Taleb means "Latin Christendom" and not Europe per se, basic arithmetic (as part of the quadrivium) was part of a 'standard' higher education. There wasn't any progress made, and architecture regressed, but people didn't forget how to divide integers!
I really don't get how Taleb could have claimed this with a straight face. I tried to look up Guy Beaujouan, but he wrote in French (which I don't speak) and before the age of the e-book, so I can't easily find a reference.
I suspect it may be a "Shakespeare invented half of the English language" kind of thing, where all we have to go on is written accounts so we assume that Shakespeare was constantly inventing new idioms and that only five people know how to divide. That kind of assumption always annoys me.
It may just depend on what we mean by division. I think humans intuitively understand division on some level, but we're generally not great with large numbers, so we do need techniques like long division for 53/7, not so much for 10/3.
> Instead of reading the latest studies, read older studies!
Contra Scott's "As practical advice, this suffers from a certain having-obvious-transparent-flaws," I think this is generally very good advice, at least as far as it goes.
If you're looking to understand a field, you absolutely should read the older, foundational, most-cited papers before diving into the newest ones. If you want to learn about something on the cutting edge, taking the paper you think is interesting and going through its bibliography and first reading the oldest paper you see is probably a better play than reading the paper you want to learn about.
Books are the same way. In most circumstances, you're better off reading an older writer who everyone agrees is a classic than hoping the new hotshot will live up to their impressive debut novel.
News media and cultural commentary is the same. There's a reason the subreddit doesn't allow discussion of current events in real-time. I'd much prefer a world where the stories about "news" were all written with the benefit of a week's hindsight instead of a mad rush to be 'first'.
This is actually one of the things I liked about the "old internet". 10-15 years ago, the results at the top of your Google search were, nearly without fail, the best things about the topic you searched for. The most comprehensive. The best-written. These days, the internet (google, reddit, youtube, etc.) is biased towards the new, ongoing, and "engaging" in the social-media-analytics sense of the word. It's much harder to find the thing that was clearly the best article/essay/review of the thing you want to learn about, because instead you're directed to the scads of newer things, most of which are far worse.
Perhaps I've missed Taleb's point here. I certainly agree that reading the most recent research can be important for some academics, but unless you're trying to publish in the specific subfield of the stuff you're reading, you're probably safe ignoring it for at least a few years.
What field are you in?
In psychiatry and psychology, any study older than a few decades was done with such terrible statistics as to be almost meaningless. There are often smart people and good books from before then, but the further they are away from statistics-reliant formal studies, the better.
Statistics, though I didn't stay in the academy after graduation.
It's actually a pretty interesting literature, since you basically had "no computers", "computers-but-they-kinda-suck", and "everyone has computers and your new method better have an associated R/python package or no one is going to use it" periods.* You get different types of problems that folks are interested in and different approaches/solutions in each period.
My perspective is that the newest, most whiz-bang things might be really cool, but most publications are useless outside of a pretty narrow application. Much more interesting to look back on foundational works and the articles that demonstrated methods that people would continue to build on, develop, and use. I might view things differently if I had to publish regularly.
*There was also an "everyone's a eugenicist" period, but we don't like to talk about that. And currently there's a "Wait, why is everyone focused on AI/ML instead of us??" period.
It must depend on the field of study or endeavor. If you're trying to learn physics, there is no way you would want to start with Maxwell, Newton, Boltzmann, Heisenberg, etc. It's not that the foundational material is wrong, just that the expression, summary, and notation is vastly improved. Learning Newtonian mechanics by reading Newton is like learning vector arithmetic with Roman numerals - it's possible, but why would you do it?
In the sciences at least, you don't read the original materials, but the classic reviews. The annual review articles, Feynman's lectures, the textbooks like Griffiths, Jackson, or Goldstein. I don't know if the difference between "original" verses "classic" works operates the same in other fields, but it seems like a useful distinction.
Certainly depends on the field. But if you had a PhD in a specific sub-branch of physics, and you wanted to learn something about a different branch of physics, you probably wouldn't reach for the latest publication in that subfield.
Physics is interesting because its literature goes back centuries. The distinction I was trying to draw was between "published last year" vs. "published 20 years ago", not "published 50 years ago" vs. "published 500 years ago".
Anyhow, I think we basically agree on this.
I think that misses the point a bit. If you want to learn a different branch of physics or mathematics, you will not start with an (old or new) research article, but with an introductory book or a survey; but within this category the more recent ones will generally be preferable to old ones.
Based on my experience in math stuff:
Textbooks are definitely the best place to learn a subject, but good ones don't always exist, especially with niche or newer topics. Surveys oftentimes don't go into the details you want--that's kinda what "survey" means, after all--but at least they're very useful as a guide to the literature. Like Matt A, I've often found original papers to be the most readable presentation of the idea, probably because since the idea was new at the time, they really focus on what the new insight is and don't accidentally assume you know stuff already. I'm having trouble thinking of examples on the spot, but there were definitely math topics that never fully clicked for me until I read the original paper.
I was going to disagree with this, but now I think it totally depends on the field.
If you are starting in Statistical Mechanics or E&M, then I'd bet the old Berkeley series (Reif on SM and Purcell on EM) will be better than a random new intro text. On the other hand, when I wanted to learn more Cosmology this year I picked a modern intro text, 'cause the field has changed so much in the past ~50 years.
There's a pair of schools in the United States (St. John's College in Annapolis and Santa Fe, NM) that tries to do exactly this. They have the students learn proofs by reading Euclid and learn calculus by reading Newton. They don't even have majors. Everyone studies the same thing. Classics only. My ex-girlfriend from back when I was in my early 20s went there. I'm not really sure how it worked out. She ended up becoming the only person I have ever known who became a primatologist, which took forever because so few universities even offer a PhD in primatology.
Fun fact: In China the Lindy effect is state-sponsored through the official designation of "Time-honoured brand" - https://en.wikipedia.org/wiki/China_Time-honored_Brand
Fun fact: The Netherlands has exactly such a designation, being that "medium-to-large companies, associations and institutions with a very good reputation, which have existed for at least 100 years" can call themselves 'Koninklijke' (translation: Royal) as in 'Koninklijke Philips NV'. I wouldn't be surprised if there's something similar in other countries too.
UK has it with the Royal Warrant Holders seal on products but it's a bit more arbitrary
Singapore vs Malaysia is a matched counterexample to Lebanon vs Syria. All four countries started out as Islamic kingdoms, albeit at opposite ends of the crescent. To the extent that either Malaysia or Singapore was a country in 1920, they were the same one. It was in 1965 that Singapore won its independence, or, if you bought your newspaper at the other end of the causeway, a certain cancer was excised from the Malay body politic.
Lee Kwan Yew is a paragon of authoritarian high modernism, Malaysia is where James C. Scott spent his 18 months as a padi farmer. But, on any material measure, Singapore is winning.
James Scott is much more concerned with Upland Burma than Malaysia. Also, Malaysia and Singapore weren't the same: Malaysia was a collection of historical small sultanates, including cities and countryside, inhabited mostly by Malays.
Singapore was a created cosmopolitan port with very little countryside and no history, inhabited by Chinese immigrants.
Ethnic Chinese are the majority but there were and are significant Malay and Indian ethnic minorities.
The idea of anti-fragility is very important, but this is really an example of someone having a Big Idea.
Ironically, by application of his own argument, this Big Idea is itself fragile. It is exactly the kind of theory he complains about.
The problem is that he is just flat-out wrong about it in many ways, and we already have a much more useful model that is more generally applicable - natural selection.
Natural selection is where environmental pressures act on a system and result in "survival of the fittest". The result is higher efficiency.
But if you look at what actually results in the best results, it's actually *artificial* selection. Artificial selection works many orders of magnitude faster than natural selection does. We have made crops that are vastly better than wild plants, and genetic engineering has allowed us to make even better ones in just a few decades.
Many good systems are irreducibly complex and will never arise naturally as a result. Likewise, natural selection doesn't always select for positive traits -- take the example of the dodo: it evolved the way it did because it would have been wasteful for it to evolve otherwise. The fact that so many island species evolved this same way shows exactly this. Natural selection is no defense against going down a blind turn and smashing into a wall.
Indeed, natural selection works at its best with a moderate level of pressure - too high and the animals tend to die out before selection can even really affect them. When a gigantic meteorite struck the Earth 65 million years ago, most things didn't adapt - they just died.
By way of analogy, if you have an event that destroys most businesses, you might not be promoting only the best businesses; you might be promoting businesses which happened to have a characteristic which protected them from that event. That doesn't mean those businesses were necessarily "better" in a macro sense. For example, the COVID-19 pandemic has killed a lot of in-person things and promoted online things - but that doesn't actually mean in-person stuff is *bad*, it is just that the selective pressure on them forced people in a certain direction. If we spent a year under severe cyberwarfare conditions that almost shut down the Internet, then in-person businesses might thrive.
Blind selective pressure is not "good" or "bad". Evolution lacks foresight. An island population might be very fragile to outside invasion, but it is also less likely to get external pathogens in the first place. If a pathogen gets introduced to Maine, it will likely spread to Florida; if a pathogen gets introduced to Hawaii, it is less likely to be introduced to Midway.
Indeed, there's little evidence that being on a large landmass even makes you antifragile in the first place; the "fragility" of island ecosystems is really because humans got there recently enough to see the effects. Humans already killed almost all the North American and Eurasian megafauna in prehistoric times.
Really, the fact that more advanced, sophisticated, interconnected societies tend to dominate their neighbors is a strong point against the idea that they are inherently fragile; indeed, the supposedly "anti-fragile" city states have almost entirely died out or become much bigger countries.
His whole thesis is really just scattered and full of motivated reasoning.
Competition *is* desirable, but he is trying to connect a lot of disconnected ideas because he has this Big Idea, and so he is awkwardly cramming everything into it, no matter whether or not it makes sense.
I don't disagree with your actual point, but I disagree on natural vs. artificial selection. Artificial selection is better at producing plants that are useful to us because natural selection isn't trying to do that.
Also, while artificial selection can work pretty quickly, so can natural selection if the environment changes suddenly. There's a famous example of British moths evolving darker camouflage in response to pollution staining trees black. In the past century or so, African elephants have increasingly become tuskless, making them unattractive to ivory hunters.
Edward Luttwak wrote "Give War a Chance" along similar lines, but I think that was less about controlled burns than "war making as state making".
Erik Falkenstein also said that Taleb's theories imply that selling insurance should be a terrible business that frequently results in bankruptcy, which doesn't actually fit our reality of relatively long-lived insurance sellers.
https://falkenblog.blogspot.com/2009/03/review-of-talebs-black-swan.html
"Roman numerals (the only numerals anyone had at the time) were too unwieldy to add or subtract"
I don't think that's actually true for people used to using them. It's really with large numbers that they get too long compared to a base-10 numeral system.
Willmoore Kendall, the "wild Yale Don" involved in National Review's early days, argued that Socrates' death was justified... based on Socrates' own beliefs (and that he willingly drank the hemlock rather than escape with his supporters because it was his only philosophically permissible action).
Robin Hanson has also noted that mergers tend to be value-destroying, and thinks that they are undertaken anyway for reasons of internal corporate politics (similar to his reasoning for management bringing in "consultants" to recommend the thing they wanted to do anyway).
https://www.overcomingbias.com/2014/05/big-signals.html
First, evolution and exercise are processes, not systems; the systems are the ecosystem and the muscles. When an environment is stable, life does not lose the ability to evolve; it just evolves to the stable system. When things change, the process of evolution will still occur. If the environment is volatile, species will adapt to the specific nature of that system, and may need to evolve differently should the volatility patterns change.
Likewise, with exercise; if muscles were truly antifragile, why would trainers, physical therapists, and orthopedists be so busy? Muscles grow in response to the proper stresses; if the type of "volatility" is wrong injury occurs.
The common ground between rationality and Taleb's project is an area well worth exploring - I'm glad you raised it in the last couple of paragraphs. Taleb's natural tendency to aggressively dismiss attempts to understand systems probably obscures how mutually beneficial the two philosophies can be to each other.
I actually wrote a blog post on the relationship between the two almost exactly a year ago!
https://atlaspragmatica.com/combining-rationality-and-antifragility/
On mergers, some of the diseconomy of scale that results seems to be due to big companies turning into mazes: https://thezvi.wordpress.com/2020/05/23/mazes-sequence-summary/ I think there's a real institutional design / corporate governance problem to be solved here -- how can you scale up without this happening?
Over 80 years ago Ronald Coase wrote about exactly why firm sizes equilibrate at certain levels. Economies of scale imply upward pressure on the size of firms, and transaction costs imply downward pressure. The optimal size of a firm is at the intersection of these lines. The transaction costs are overhead, limitations to management, and basically the "maze of middle management".
It makes sense that your average firm is roughly optimally sized, and that a merger would send the firm into disequilibrium over economies of scale and transaction costs - in other words, the maze is too comprehensive and the gains from economies of scale aren't enough to offset the growing transaction costs.
Are you sure "transaction costs" is the correct term for what you're talking about? My understanding was that "transaction cost" usually referred to the cost of *market* transaction -- i.e., transaction costs lead to larger firms, not smaller. I don't know what the word is for these sort of organizational or coordination costs, but I don't think it's transaction costs.
Accounting wise, they're referred to as transfer pricing, but not sure what the economic term is.
I'm not sure it's 'correct' but 'transaction cost' seems to cover the idea that merging two companies involves real (and often) significant costs that, apparently, often overwhelm the benefits of the formerly-separate companies cooperating as a single company.
What is the principle that connects the Lindy effect and the anthropic assumptions of the Carter Doomsday argument? I can kinda glimpse something, but I don't really see the connection.
The Doomsday argument says that there's a 95% chance we're in the last 95% of humans to exist. So if there have been X humans so far, there's a 95% chance there are no more than X * 20 humans total.
The Lindy effect just expands this to how long things have existed. If something's existed for 10 years, there's a 95% chance it doesn't last beyond 200 years total. But if something has been around for 100 years, there's a 95% chance it doesn't last past 2000. You can make a similar argument for the minimum bound too, so things that have lasted longer are more likely to continue to last.
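The arithmetic behind both versions can be sketched in a few lines (my own toy illustration, not from either argument's literature; the factor of 20 just falls out of the 95% confidence level):

```python
def lifespan_bound(observed_so_far, confidence=0.95):
    """Copernican-style upper bound: with probability `confidence`, what
    we've observed is not the first (1 - confidence) sliver of the total,
    so the total is at most observed_so_far / (1 - confidence)."""
    return observed_so_far / (1 - confidence)

# Lindy version: the older thing gets the larger bound.
print(lifespan_bound(10))   # ~200 years total
print(lifespan_bound(100))  # ~2000 years total
```

The same function covers the Doomsday case if you feed it a count of humans born so far instead of years elapsed.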
This seems very different to me. The Lindy effect is like saying "if Alice plays Russian Roulette 5,000 times in front of you and survives, but Bob only plays it once and survives, then it is reasonable to bet that Alice's chance of surviving the next N rounds will be higher than Bob's chance." (In other words: the implicit statement is that if someone survived the past 5,000 iterations it is likely that there is something about their situation that produced this result -- e.g., you have more confidence that Alice's bullet is actually a dud, while Bob's might be a live round.)
On the other hand, the Carter Doomsday argument is weird. It uses the Copernican principle to assume the number of Roulette experiments that have happened or will happen. I'm not comfortable enough with it to say that it's wrong, but it feels very different.
I'm not sure that I see how religion is antifragile. Organized religion appears, in particular, to exist for the purpose of shielding morals and ethics from the memes du jour, for the sake of protecting them as they contain deeper wisdom that may not be apparent at the surface. Every virtue in an organized religion is a Chesterton Fence, but wouldn't the theory of antifragility say something like "you can get rid of all the Chesterton Fences and this thing should get better" ?
It's plausible that an appropriate amount of persecution can strengthen a religion.
Ah I suppose you are right that the religion itself can be an antifragile thing, but I think what I was thinking is that the religious adherent is perhaps a fragile entity, as they are steadfast in their morals and traditions and seem to "not thrive" when those traditions are removed from their life, e.g. church closures during the pandemic.
Re part 1: Did you really not get that antifragility is all about Jensen's inequality?
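For anyone who wants that connection spelled out: Jensen's inequality says E[f(X)] ≥ f(E[X]) for convex f, i.e. a convex payoff gains from variance even when the mean input is unchanged. A quick numeric sketch (my own toy example, not from the book):

```python
import random

random.seed(0)

def convex_payoff(x):
    # Convex function: squaring exaggerates big deviations in either direction.
    return x * x

# Two input streams with the same mean (0) but different volatility.
calm = [random.gauss(0, 1) for _ in range(100_000)]
wild = [random.gauss(0, 3) for _ in range(100_000)]

def avg_payoff(xs):
    return sum(convex_payoff(x) for x in xs) / len(xs)

print(avg_payoff(calm))  # ~1: mean payoff equals the input variance
print(avg_payoff(wild))  # ~9: same mean input, more volatility, higher mean payoff
```

Swap the square for a concave function and the inequality flips, which is Taleb's "fragile" case.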
So... by the same token, moving from a personal WordPress site (SSC) to a guaranteed-income model (ACX) makes Scott more fragile, the opposite of his stated goal.
Starting his own clinic also makes him more antifragile. He swapped his guaranteed income as a psychiatrist for the guaranteed income from the blog, and the uncertain income as a blogger for the uncertain income of his own clinic, so arguably there is no net change in fragility.
“I think part of its response would draw on Taleb's previous arguments that people underestimate the risk of black swans, so the world will be more volatile than they think.”
On my reading of Taleb, the point is that there are two relevant distributions: first, there’s the probability distribution for events, including the tail events he focuses on in Black Swan (e.g. the probability of a big stock market crash); and second, there’s the distribution of outcomes, which is the focus of Antifragile (e.g. the price of your investment).
The first is taken as a given—or, more precisely, it’s taken to be never ever understood properly no matter how hard you try; tail events include things that have never happened yet and no model will capture the probability of things you’ve never seen or thought of. Our failure to model this usually leads us to underestimate its likelihood (hence, the whole Black Swan book).
The second is the focus of this book. The distribution of outcomes has two tails (good or bad, right or left), and the point of antifragility is to open oneself up to the right tail while not being subject to the left. The banker is subject only to left-tail events and is therefore fragile; the taxi driver is antifragile because he is open to right-tail events (the worst he can do in a week is make no money, but the best he can do is “infinite”). Ideally you set yourself up so that the distribution is right-skewed like this; even if your mean outcome is worse (or, looks worse because your model doesn’t properly account for tail events), an increased access to right-tail events is worth it. Hence, Taleb’s “barbell” investment strategy, etc.
If the space of outcomes is non-negative, like [0,\infty), it’s even more important to guard against the left side, because if you go to $0 then you don’t get to keep playing the game any more. (I don’t remember which book this point is from (I don’t think it’s Antifragile, maybe Skin in the Game?).)
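The banker/taxi-driver asymmetry in this framing can be caricatured as two payoff distributions (a toy simulation of my own; all the numbers are made up): the fragile one looks steady but carries a rare ruinous left-tail event, while the antifragile one is floored at zero and carries a rare right-tail windfall.

```python
import random

random.seed(1)

def banker_month():
    # Steady salary, but a rare blow-up (layoff, crash) costs years of pay.
    return -120.0 if random.random() < 0.01 else 10.0

def driver_month():
    # Variable income floored at zero, with a rare windfall fare on the right tail.
    base = max(0.0, random.gauss(8, 4))
    windfall = 100.0 if random.random() < 0.01 else 0.0
    return base + windfall

months = 100_000
banker = [banker_month() for _ in range(months)]
driver = [driver_month() for _ in range(months)]

print(min(banker))  # -120.0: the banker's worst month is catastrophic
print(min(driver))  # 0.0: the driver's worst month is just "no fares"
```

The means can even favor the banker; the point of the caricature is only which tail each distribution exposes you to.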
The problem I have with this is that Taleb never gives us an indication of when we ought to ignore his advice, so it devolves into a Pascal's Mugging.
Like if we can't assign subjective probabilities to any outcome, how do we know which imaginable downsides we should treat as low probability in our day to day lives?
I have a model of how likely it is that there's a dragon lurking outside my front door. But if there were one there, it would be a disastrous outcome for me. It seems Taleb would say "you can't trust your model so focus on the downside", but in that case I would never go outside. Obviously Taleb ignores all kinds of extreme downside risk in his daily life, and he uses some model of the world to do that. We all do.
I think Taleb's cautions about not falling in love with your model of the world and paying close attention to skewness in the outcome distribution are important, but he never seems to give us a limiting principle.
Yes I agree with you on this. He says "pay close attention to rare but catastrophic downside risk" but does not worry about dragons. Presumably there is some level of low probability that is low enough that we can ignore it? I don't know if Taleb would agree with that statement but it seems clearly true.
That seems exactly backwards though right? The taxi driver has a pretty hard upper-limit on what they can make in a week since revenue scales linearly with number of fares and there is only so much time in a week and limited demand for taxi rides. Also the taxi driver and banker have the exact same lower bound on income, $0 per week. The banker on the other hand could get promoted into a position where they get a bonus or get a job at an investment bank or as a trader where their upside is (while still in practice bounded) much less tied to scaling up effort. It is a bit strange since he spends a lot of time talking about this in Black Swan.
A better example than the taxi driver (and one Taleb uses iirc) is the stripper. Looking only at the income distribution, a stripper is protected from the downside by having income fluctuations, while also getting exposed to the tail event of a billionaire client providing 10x lifetimes of income due to his infatuation.
Taxi drivers describe the limitation of the downside but fail to describe the upside.
It still doesn't really make sense to me though. Both the stripper and the banker have the same downside risk (making $0 per month). The stripper does maybe have more of an upside but the banker also has an upside. More to the point, if we assume the cab driver is working for himself then he has MORE downside risk because he has to invest in a car and keep it maintained. So he can lose not only the revenue stream of cab fares, but also lose his invested capital.
I think the point he is trying to make is that the person with variable income ends up living a lifestyle that is more robust to income shocks because having unsteady income forces them to not take on certain financial commitments. But doesn't that just mean that they live a lower average quality of life for a given level of average income (or else they have to finance the same quality of life by smoothing out income with debt, making them more prone to negative shocks)? Maybe Taleb thinks that is better but it honestly has the ring of a rich guy waxing poetic about the nobility of poverty.
Perhaps my own story can help here.
I worked in restaurants for a decade, much of it as a delivery driver or waiter, receiving minimum wage plus a variable income. A great day of tips was $100, or about 2x the base wage.
I was working at a Denny's on 9/11, and I made $4 in tips. Anyone who saw the news knew that something awful had happened, but I had the benefit of immediate evidence that my income would be affected.
Fast forward to 2007, and I'm working as an intern for a well established public tech company. As the absolute lowest member of the department, my starting salary is more per hour than I made in any job in my life. I'm delighted, of course, and ready to settle into the tech middle class life.
The crash hits, my company's stock goes from $20 to $2 overnight, and the board decides to lay off anyone who is classified as a contractor. As an intern, this includes me.
I get worse than laid off - I'm notified that I will be laid off, three times over two years, but never actually laid off. Instead my pay is cut to just above unemployment compensation, and I have to continue working to receive it.
One mentor was fragile: he lost his job simply because he was the only PM without a critical deadline coming up. A 20 year professional, he would be unemployed for three years, carrying a mortgage and a family.
One mentor was antifragile: a freelancer making $120/hr, she had a new contract making $150/hr in two weeks, as companies looked to reduce the liabilities of fixed salaries while still keeping the business going.
The lesson I received, which I find Taleb codifies well:
Life contains disorder and downside risks, and often the jobs we think are protecting us from these events are actually only isolating us from the signals of these events. An antifragile job is one where you receive more signal about these events, so you have more room to maneuver and possibly profit.
The best job is robust, but this is rare, and we often can't distinguish between a robust job and a fragile job until it's too late.
Given the option of a robust/fragile job or an antifragile job, the antifragile job is better because at least you know the score.
Interesting thanks for sharing.
I agree in some ways that there is a reasonable point to be made. The freelancer has to be resourceful and build contacts so when faced with a negative shock they are able to adapt more quickly (because that is what they are used to doing). So the freelancer maybe already had other contacts and opportunities that they could capitalize on immediately while the FTE thought they had a stable job so never built a network or learned skills outside of their narrow domain.
And for what it's worth I think that is a good idea. I guess I just disagree on three levels with how Taleb in particular represents it:
1. He conflates fragility of a system with fragility of an individual within a system. Which I think leads to some perverse outcomes when it comes to individual decision-making.
2. A lot of what he puts forth as anti-fragile at an individual level reeks of drawing incorrect conclusions from survivorship bias. To take your example: the freelancer is obviously better off, but it doesn't follow that the FTE would have been better off being a freelancer. It may be that people who are talented enough to make it as a freelancer are just better at this stuff, so of course when there is a shock they are better able to adapt. I find this particularly frustrating because Taleb talks so much about survivorship bias in Black Swan.
3. After reading the book I have no idea how to change my decision-making at the relevant margin. Your immune system is anti-fragile, and exposing yourself to pathogens at a certain level is good and protects you from more dangerous pathogens in the future. But that doesn't mean I should go around licking doorknobs to try and infect myself with everything. Clearly there is SOME level of exposure that is good, but just as obviously there is some level of exposure that is bad. So where does that tipping point happen? It's not just that Taleb doesn't answer that question; he seems openly disdainful of even framing the decision that way. To again take your example: I can definitely see how the anti-fragile freelancer gained useful skills by exposing themselves to volatility and so was able to adapt to a shock more easily. But in my 15 year career in the tech sector, the pattern I see most often is that when companies hit bad times the first people they let go are the freelancers and contractors (because they are easier to fire, and also it is better for morale of the remaining FTEs).
So I think there are a lot of good lessons buried in the book but it suffers badly from Taleb just trying to fit everything he doesn't personally like into the fragile category and everything he does like into the anti-fragile category.
IIRC he framed business owners as fragile but entrepreneurship as antifragile. I recall him even writing something to the effect of "we should celebrate the sacrifice business owners make which contributes to the whole system." Would that change your perspective on the conflation of the two levels?
Yes, both the cab driver and the banker can make $0 in a given month, but I think part of Taleb's point is that it signals something different in both cases: if the cab driver has a bad month, he gets to keep playing and try to make up the lost money next month; whereas if the banker makes $0 some month, it means he lost his job completely, or worse, maybe the stock market crashed and he's unlikely to find another job in his field.
For the banker character in the book, this is even more devastating than a mere job loss, because now he can't make his mortgage payment; when your life is built around knowing exactly how much you're going to get paid and knowing where you will spend it, any shock to the system breaks it. I think this is why the banker character is fragile in Taleb's story.
I agree that a stripper may be a more salient example, but I can also tell a story where the cab driver gets a windfall fare (a cross-country trip or some such) and makes 10x what he normally makes in one month.
Tail event.
Is Taleb really suggesting that you can invent something as complex as, say, the modern MRI machine, just by tinkering around with wires and things in your garage? What?
I mean, yes, all engineering requires a certain amount of experimentation; but it's guided experimentation, not just random guessing. The theory is the guide.
Taleb isn't a hard scientist so he underestimates the actual complexity of scientific theory and practice.
I agree. Even if Taleb is locally correct both about new discoveries being made by tinkering and about theorists coming in later to systematize what the practitioners already know ("teaching flight to birds"), it seems obvious that the systematic understanding is invaluable to the *next* generation of tinkerers.
You can't get from discovering fire to building space shuttles JUST by tinkering. You've got to periodically consolidate your knowledge along the way.
"But the interesting constant is that when a result is initially discovered by an academic researcher, he is likely to disregard the consequences because it is not what he wanted to find - an academic has a script to follow."
I'm a researcher in experimental biology, so this got me thinking.
My first reaction was to strongly disagree. Scientists love accidental discovery stories. "I noticed unexpected thing X, and I had enough breadth of mind to realize that meant Y might be true, and that led me to make a major discovery I hadn't been looking for" makes you a real hit at conferences. To the extent there is a script (hypothesis-driven research in your discipline, I suppose), the ability to improvise when things go off-script is widely admired. You might imagine granting agencies would be upset if you take your research in unplanned directions, but usually if you get a high-profile paper out of it they are perfectly happy.
Then it occurred to me that sometimes my students have made unusual observations and, as Taleb predicts, I've discouraged them from following up on them. The first reason is that an accidental discovery and an experimental artifact can be hard to distinguish. The second is that when a project drifts too far from your own area of scientific expertise, you have to learn a lot of new literature and you are prone to making stupid beginner errors. When you supervise a bunch of people and have to keep a bunch of projects on track, it's a big time expense to pursue a new field. You don't see many labs with one virology project, one chromatin project, one metabolism project, etc. Most professors can't keep up with all those fields of literature well enough to direct them. So there's a natural tendency for projects to be scuttled when they drift too far from the lab's core expertise.
How do you solve this? Collaborations can help: show your weird finding to someone with more specialized expertise and go from there. A few months ago a colleague got a strange result and didn't know what it meant, but he realized it involved a gene that I studied and had his student talk to me. Now it's the most exciting project my lab is working on. Even if you don't have a lot of different expertises in one lab, you will have them in one department or university.
I wonder how one could study this question rigorously "do scientists stick to planned paths too tightly". Unfortunately I lack specific expertise in this area and will not pursue it further.
I often hear repeated that the greatest scientific breakthroughs aren't signaled by "Eureka!" but by "That's strange..."
I'm but a humble code-monkey, not a scientist, but I work with scientists a lot. In my experience, accidental discoveries are indeed somewhat common; but "accidental" does not mean "totally random". What happens often is that the scientist is pursuing some area of research, devises an experiment to distinguish between multiple possible hypotheses, gets an unexpected result, then tries to understand it. But these multiple hypotheses don't just arise out of a vacuum, or a voice heard in a dream, or divine inspiration; instead, they are the result of applying detailed understanding of scientific theory to the subject at hand. And interpreting the results -- no matter how surprising -- requires a lot of hard work in organic chemistry/physics/etc.; plus of course the baseline knowledge of statistics and data science/machine learning. You don't just light a random chemical on fire and go, "wow, it turned blue, I guess I'll build an MRI machine with it!"
Collaborations can help, but venturing outside of your own organization to deal and negotiate with other labs and outside experts is costly. People want to keep work inside their own organization for good reason.
when you got nothing, you got nothing to lose
That doesn't mean you should write a book about how it's good to have nothing.
Also, if you got nothing, you can still lose your life, which is very likely if you got nothing.
It's not possible to be directly long or short the VIX. The VIX has mean-reverting behavior: when it's low, it's expected to rise over time, and when it's high it's expected to fall over time. Since this is common knowledge, a security whose price tracked the VIX wouldn't clear, because there would be more buyers than sellers whenever the VIX was below the long-term historic average and more sellers than buyers whenever it was above it. What you *can* do is trade cash-settled VIX futures. If the VIX is at 10 today (representing a very placid market), futures settling several months from now might be trading at 15, so you could buy those futures and be long volatility, but if the VIX only rose to 14 in that period, you'd be losing money even though the VIX went up just like you predicted. This is what prevents betting on volatility during seemingly-placid times from being an easy market-beater.
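A worked example of the drag described above (the $1,000-per-point multiplier is the standard VIX futures contract size, but treat all the level numbers as illustrative):

```python
# A long-volatility trade via a cash-settled VIX future, per the comment above.
MULTIPLIER = 1000        # dollars per VIX point (standard contract size)

spot_today     = 10.0    # placid market
future_price   = 15.0    # the future already prices in mean reversion upward
spot_at_expiry = 14.0    # VIX rose 40%, just as the buyer predicted...

pnl = (spot_at_expiry - future_price) * MULTIPLIER
print(pnl)  # -1000.0: right about the direction, still lost money
```

The buyer pays the futures price, not the spot, so the VIX has to rise by more than the priced-in mean reversion before the trade profits.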
> And everywhere else, people really do underestimate volatility, and antifragility systematically is underpriced?
Haven't read the book, I suspect Taleb's point is that Lindy / folk wisdom tends to ~correctly price antifragility while "legible" / intellectual wisdom tends to underprice it. And that the latter kind of thought controls an increasing amount of modern society and is making a play for more (see also: rationalism). At least, that's the argument I'd make if I were him.
My favorite example of antifragility in traditional societies, because it's such a big one, is how premodern farming practices optimize for "least risk of starvation" rather than "highest average production". See e.g. https://acoup.blog/2020/07/24/collections-bread-how-did-they-make-it-part-i-farmers/
Isn't that robustness rather than antifragility? (On the other hand, aren't half of the examples from the book just robustness rather than antifragility?)
I do like the example though, because it drives home the point that optimising for robustness is done at the expense of something else.
Good point, robustness would be a better way to describe this.
I'm confused about the Syria/Lebanon plot. Was that a version of what was in Taleb's book, or a snide rejoinder to Taleb by Scott? The plot clearly shows that Lebanon was way ahead of Syria for the duration of the measurements, going back to 1820. And the additional divergence around 1950 was not that something happened in Syria to suddenly depress growth - Syrian growth continued as before. Instead, there was a massive increase in economic activity in Lebanon in the 1950s, which a little Googling shows was due to Beirut being the financial center of the post-WWII Middle East's connections to Europe.
It's a snide rejoinder by Scott.
On the other hand, since the plots just look like a plain old exponential to the 1950 value for everything prior to 1950, I'm unconvinced that anything prior to that date is all that reliable.
It also only has 3 data points before 1950, which makes it look smoother than it otherwise would.
I'm not convinced it has any data points before 1950. It looks suspiciously like it has a dubious extrapolation before 1950, which someone has drawn dots on.
It even clearly says under the graph that it is not suitable for comparing income levels between countries. I will count this in Taleb's favour. When fact checking goes wrong the author gains extra points.
The examples of evolution and collections of city-states vaguely reminded me of a metaphor from the video game Obduction. I can't find the actual text, but it went something like:
Once there was a gardener who carefully separated their seeds into separate plots, and tended and pruned their plants dutifully to keep the whole garden neat and organized. But despite the gardener's dedication, the plants grew sickly, and their garden never flourished. Eventually, they gave up and stopped tending the garden. Then one day, much later, they came back and found the garden was lush and filled with thriving plants, growing wild in every nook and cranny.
They argued that allowing seeds to be scattered to the wind is often bad for the individual seeds, but it's good for the species. Lots of independent big risks taken by individuals leads to a lot of individual suffering but allows the collective to capitalize on opportunities they couldn't have found otherwise and thereby expand the total resource base of the species.
(Warning: Generalization from fictional evidence. Pretty sure competent agriculture has higher food yields per acre than gathering-from-wilderness does.)
Well, the tale never said all the thriving plants growing wild were *edible*.
“This chapter (and honestly the rest of the book) only makes sense with an assumption that antifragility is systematically mispriced”
There is no “proper” price.
In financial markets, prices constantly change, sometimes drastically! Prices are not static, they’re dynamic. One could say prices are always wrong, thus always changing, trying to be less wrong.
Investors seek to own what another investor will pay more for in the future. Wise investors are long-term investors.
In the long term the winning investments are antifragile.
Economics classes teach the Efficient-Market Hypothesis, the idea that prices reflect all available information and you can't "beat the market".
Haha, that is false. Humans misprice *all the time*
Humans may misprice things all the time, but that doesn't mean the market isn't approximately efficient. People sing out of key all the time, but a large enough group of untrained people can sing notes correctly. The usual response to someone saying the EMH is wrong is, so why aren't you a billionaire?
You’re right, I’m not a billionaire. But I am a millionaire. Just turned 23.
Look at TSLA, GME, ETSY, etc.
These price changes look efficient?
right... there is a price at which the option is no longer antifragile. so the antifragility is not an inherent property, but a function of the price
Scott says options are antifragile. I disagree.
I was referring to investments in real businesses. Ownership of a business. Capital allocated to produce value.
Not a zero sum numbers game.
Not an options contract.
> In the long term the winning investments are antifragile.
That's not true, though, and this is a common criticism of Taleb's writing on finance. (There may have been some truth to it when Taleb was a trader, but markets have evolved a lot since then!) Put options tend to be overpriced compared to their expected payout. The money-making strategy is to be _selling_ put options...but that's a fragile strategy because of nasty tail risks. Indeed this is basically the business model of insurance: sell lots of tail options that people are willing to overpay for, and hope that you're diversified enough to endure the risks.
I’m not an options trader and can’t comment on put options pricing.
What I refer to as an antifragile business (to invest in) is one that will *benefit* from adversity because they (the humans and tech) will adapt, cope, innovate and grow more robust, intelligent, wise.
It’s easier to identify these companies in hindsight than looking forward.
It also makes more sense to discuss antifragility in relation to the variance.
Is your investment antifragile to a recession? Pandemic? Innovation? War? Climate change?
Ah yes, sorry I misunderstood. Fragility of the business model is definitely an important thing to consider when choosing companies to invest in (see also: cyclical vs countercyclical industries). I don't know anything more about whether this factor tends to be under- or over-priced by the market.
Legacy corpos with stiff, slow moving, entrenched, bureaucracies are fragile.
Nimble, flat, malleable, innovative corpos are antifragile
“Healthy people are fragile” ... “very sick people are antifragile”
Can you elaborate?
If very sick people benefit from increased variance, why do they rest all day?
I’m a healthy person and increased variance makes me strong, resilient, happy, wise. But you say “increased variance can mostly make them worse”
Can you explain?
“Reasonable to give a terminal cancer patient an experimental drug - the worst that can happen is they die”
Do you mean - they suffer immensely from unforeseen effects and die sooner than otherwise?
Antifragile does not mean nothing left to lose.
Antifragile means what doesn't kill me makes me stronger. A healthy person is antifragile to COVID-19, a sick person fragile.
I'm thinking of an experimental drug as increasing volatility - it might cure you, or it might have side effects that kill you. A healthy person has little upside (a drug can't make them any healthier) but high downside (a drug could kill them). A terminally ill person has little downside (doesn't matter if it kills them, that would have happened anyway), but high upside (it might cure them).
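Scott's reply above can be put as a tiny expected-value calculation (all the numbers are entirely hypothetical, chosen only to show the asymmetry): the same volatile gamble is negative-expected-value for the healthy person and positive for the terminal patient, because the patient's baseline is already near the floor.

```python
def expected_gain(p_cure, cure_gain, p_kill, death_loss):
    # Crude two-tail expected value, in good years of life;
    # the middle of the outcome distribution is assumed to be a wash.
    return p_cure * cure_gain - p_kill * death_loss

# Hypothetical drug: cures 10% of takers and kills 10%.
# Healthy person: death costs ~40 good years, a "cure" adds ~1.
# Terminal patient: death costs ~1 year, a cure adds ~30.
healthy  = expected_gain(p_cure=0.10, cure_gain=1,  p_kill=0.10, death_loss=40)
terminal = expected_gain(p_cure=0.10, cure_gain=30, p_kill=0.10, death_loss=1)

print(healthy)   # ~ -3.9: big downside, almost no upside
print(terminal)  # ~ +2.9: big upside, almost no downside
```

Only the relative sizes of the tails matter here, not the particular probabilities.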
Drugs can certainly introduce volatility. I think humans in general are antifragile to a lot of drugs.
Let’s consider a real experimental drug in testing today - psilocybin.
I tried it when I was healthy, experienced discomfort and volatility, then experienced benefits in the following days. Fresh perspective, hugged a stranger.
A terminally ill person is likely to also experience volatility with psilocybin, then benefits. Could even cure their existential angst!
So regardless of the amount of upside or downside, a sick and a healthy person can both be antifragile to an experimental drug.
Humans appear to be tremendously successful in dominating the biosphere despite being extremely fragile in evolutionary terms:
1. We reproduce slowly and in small batches of offspring compared to lots of other mammalian species, or species in general. That means selection in general happens much more slowly for us than with, say, rats.
2. Our survival is highly dependent on a socially transmitted set of knowledge that takes years to learn. Take that away, and we're creatures that can freeze to death outside of tropical areas because we don't have any fur.
It's worse than just freezing. We can't survive in *any* environment without training.
No
Doesn't the success imply that we are not fragile?
Yes
On one hand, there are all the arguments about us being robust because of our ability to modify the environment/adapt with our brains. Those are boring, though.
The more interesting point is... well, fragility doesn't mean "unsuccessful". It means "tremendously successful until something goes wrong". I won't bet on humanity going extinct in the next 100,000 years - that's the kind of bet on which you can never collect - but surviving that long wouldn't even be halfway to the average lifespan of a mammal species.
I think a lot of the threads around here are coming down to the same thing: we're discussing "fragility" without specifying fragility with respect to _what_, which is a pointless discussion.
There's no such thing as a generalised fragility or robustness with respect to all possible disturbances. A rat can survive many things that can destroy me, and I can survive many things that can destroy a rat. Likewise if you replace the rat with lion, or a cockroach, or an elephant, or a stone wall, or a delicate Ming vase.
Maybe not the sun. I'm struggling to think of anything that would kill the sun but not me, so perhaps there _is_ some kind of generalised notion of fragility in which I am in general less robust than the sun.
Well...we've only been tremendously successful for a maximum of 40,000 years. Rather an eyeblink in evolutionary biology terms. The sauropods could've made the same argument after their first 10 million years of dominance with much greater evidence in its favor. "Clearly massive size and armor plate are the keys to success...."
The taxi driver vs. banker example seems to have been disproven by the current pandemic. The taxi driver is hosed, because the massive reduction in personal travel has outlasted his ability to survive a smaller income. The banker is still collecting his salary while working from his home office. Individual bank branches might be fragile, but banking as an industry seems pretty anti-fragile - it will survive at least as long as capitalism does.
Beyond that, of course, taxi drivers never made as much as bankers. So the bankers could buy themselves some volatility protection via savings and investments that the taxi driver could not afford. The banker might have an income of 100% or 0%, but if he can live for a couple years on 0% while the taxi driver will go broke on 6 months at 50% pay, the banker is less fragile.
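The savings-buffer point is really just a runway calculation; a minimal sketch, with every figure invented purely for illustration:

```python
def months_of_runway(savings, monthly_expenses, monthly_income):
    """Months until savings run out under an income shortfall
    (infinite if income still covers expenses)."""
    shortfall = monthly_expenses - monthly_income
    if shortfall <= 0:
        return float("inf")
    return savings / shortfall

# Hypothetical numbers: a banker at 0% pay with two years of expenses
# saved, versus a cabbie at 50% pay with three months of expenses saved.
banker = months_of_runway(savings=240_000, monthly_expenses=10_000, monthly_income=0)
cabbie = months_of_runway(savings=9_000, monthly_expenses=3_000, monthly_income=1_500)
```

On these made-up figures the banker absorbs a 100% income shock for 24 months, while the cabbie goes broke after 6 months at 50% pay.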
The banker working from home in a pandemic does not mean it's anti-fragile. It's like the rock: robust. The institution is robust insofar as it remains unaffected by a pandemic.
Remember: fragile = harmed by variance
anti-fragile = strengthened by variance
robust = unaffected by variance
In that case, the taxi driver is both fragile and antifragile - there are some forms of volatility that make the cabbie worse off and some that make him better off.
But if you have to split hairs over the exact *type* of shock, that seems to be giving up most of the value in the idea. There's no generalized factor of "preparedness" or "adaptability" that makes you more antifragile against every type of risk.
Yeah, I thought about that after I posted - probably would have added an edit if the comment system here allowed it. Still, even if not “anti-fragile”, the banker seems less fragile than the cabbie.
It actually seems to be a bit of a stretch to say the cabbie is antifragile. I would say he is also “robust”, but he gains his robustness through flexibility rather than strength. Not a rock but a willow branch, or something. The cabbie would be strictly better off in a world with consistent high demand for taxis where he could have a predictable, high income stream. He just adapts (out of necessity) to a world with volatile demand.
Yea I agree. Cabbies are robust, not anti-fragile. But it might be true that cabbies are generally more robust than bankers, and that pandemics are just the exception to the rule. I think that you can still say that one occupation is more robust than another *on average*. I just think the cabbie example isn't very good.
Taleb is right about a lot of stuff and also needs a good dick punch. He talks about skin in the game and grit and such, but his books and tweets are all a good example of Matt Levine’s definition of a great hedge fund manager: one who collects more in fees than the investors’ initial capital. Which Taleb does - his fund loses ten years in a row, collecting fees all along, and then in year eleven profits enough to make up for all the losses. He also makes a lot then too. A bit like a bodega owner (antifragile?) who makes money selling lottery tickets and then gets a payout when one of its patrons hits the Mega Millions pot. Which is all fine! But like “news” is really advertising with some news attached, Taleb’s books are really hedge funds with some book ends. Doesn’t make them bad books, but probably explains their heft.
Shameless plug: https://thepdv.wordpress.com/2019/06/03/a-general-theory-of-bigness-and-badness/ is my attempt to specify explicitly why the pattern seen in Book Five w.r.t. organizations and countries happens. I think it has more gears than Taleb's take and therefore is more likely to be useful. (Which does not imply it's more likely to be _correct_, TBC.)
> He praises Switzerland, which is so federal that it's barely a single country at all, and argues that its small size (or rather, the small size of each canton) has helped it stay one of the world's most stable and prosperous areas (also, Venice!).
> So, a glib take you’ve probably heard is that the problem with Big Government, Big Business, Big Etc. is not the government or the business or the etc. but the “Big”. This is extremely superficial and is essentially elevating a trivial idiosyncrasy of the English language to an important structural principle of the universe, which makes about as much sense as nominative determinism. I think it’s true anyway. Here is my theory of why:
I’m a bit surprised you don’t mention Karl Popper here. If I recall, Popper’s thoughts on induction are behind a lot of Taleb’s thinking. I’m no expert in Popper, but I am curious about how to reconcile Popper’s thinking with the rationalist way of thinking. Anyone thought about this?
Popper made bad attacks on Bayesianism for his entire career. His stupidest one was actually published in Nature (he and David Miller argue that the confirmation E gives to H can be factored into the contribution E gives to HvE and the contribution E gives to Hv~E, and the former is all deductive, and the latter is negative, so there can be no such thing as positive Bayesian confirmation).
I don't think it's quite true that Lindy = Doomsday. The Doomsday Argument uses one specific generating process: sampling a point on a finite interval, and gets Lindy as a result. But you can get Lindy from lots of generating processes:
- A geometric series with unknown rate and uniform prior.
- A Poisson process with unknown rate and exponential prior. (This also explains hyperbolic discounting: see https://scholar.google.com/scholar?cluster=13790279530154362968&hl=en&as_sdt=0,5.)
- Nick Bostrom's x-risk model of drawing balls from an urn.
- Time until you beat your current highest sample for any given distribution.
- Time to return from a random walk. (Probably. I haven't worked out the details of this one yet.)
Some of these are different representations of the same process, but I'm not sure all of them are. So I suspect Lindy's Law is deeper than the Doomsday Argument.
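As a sanity check on the Poisson bullet: with an Exponential(1) prior on the rate, surviving to time t with no event gives an Exponential(1 + t) posterior, so the predictive survival for an extra wait s is (1 + t) / (1 + t + s) - a Pareto tail whose median remaining wait is exactly 1 + t. Median remaining life grows linearly with age, which is Lindy. A quick Monte Carlo sketch (the prior scale and sample sizes are my own toy choices):

```python
import random

def median_remaining(t, n=50_000):
    """Sample a rate lam ~ Exp(1) (the prior), an event time T ~ Exp(lam),
    condition on surviving past t, and return the median additional wait."""
    remaining = []
    while len(remaining) < n:
        lam = random.expovariate(1.0)   # prior draw of the unknown rate
        T = random.expovariate(lam)     # event time given that rate
        if T > t:
            remaining.append(T - t)
    remaining.sort()
    return remaining[n // 2]

# Analytic prediction: median remaining wait = 1 + t.
random.seed(0)
for t in (1, 5, 20):
    print(t, round(median_remaining(t), 1))
```

The printed medians should land near 2, 6, and 21 respectively: the longer the process has already survived, the longer you should expect to keep waiting.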
I waver between specialization and antifragility.
On the one hand, I should specialize in narrow fields to increase efficiency: I don’t know anything about agriculture, I cannot start a fire by myself, and I would be incredibly fragile if left alone in the wilderness - our civilization gives me every incentive to ignore these things and focus solely on good performance at the workplace.
On the other hand, antifragility requires me to diversify my skills, expose myself to volatility in my environment to maintain my capacity to survive, and be a jack-of-all-trades - and then lose my job to young enthusiasts who go all-in, given the competitiveness of my industry.
I'm not an expert, but it seems to me that COVID is a pretty strong refutation of the "theory isn't any help in medicine" theory, at least in its wider sense. Even if the story about Moderna developing its vaccine in literally two days (https://www.businessinsider.com/how-moderna-developed-coronavirus-vaccine-record-time-2020-11) wasn't quite true, we still saw the development of multiple vaccines which turned out to be effective within weeks or months of the emergence of a new virus. I don't know whether there's a reason to think vaccines are very different in this respect from other drugs (or inventions as a whole), but it does seem to be a striking success story.
The danger of black swans isn't just that they're rare, unpredictable, and large. It's that we don't know how large they can get, even after studying past black swan events in the space.
We tend to talk about the Carrington Event as though it's a worst case event that might be repeated. It's not the worst case. According to the math and physics we know today, we have no idea how big the worst case might be. (Source: A keynote talk at the 2020 New England Complex Systems Institute conference.)
We look at deadly wildfires and think that the fire in Paradise, CA was shockingly horrible (it was) and so it must be a worst case. It's not close to a worst case. A fire tornado followed the 1923 Great Kantō Earthquake. That fire tornado killed 38,000 people. What if the Paradise fire had started upwind of a major city in similar conditions? Could we see 100,000 dead? A million?
My takeaway is: We are not prepared, and perhaps we can't be prepared, for some of the actual plausible worst case events.
Well the good news is: there's an upper limit to how bad disasters can get; they can kill everyone.
We already know a few low (but not _that_ low) probability events that can kill everyone, so we don't have to worry that we're neglecting something that's a thousand times less likely but ten thousand times worse.
Also according to my several minutes of research on the subject it looks like the 38,000 people who died in the fire tornado were all in the same building or building complex, having fled there after the earthquake.
Good point on upper limit of disaster magnitude. However, short of extinction, there are a number of underappreciated disasters.
Yes, the people were all in one shelter area. If a wind-driven wildfire swept a major city, there would be lots of people in shelter and last-stand areas. You can't evacuate that fast.
The nice thing about Taleb is it's easy to install his brain-module. He helps make sense of one's experience a little, perhaps.
I feel like my own field of academia -- astrophysics -- runs counter to a lot of Taleb's "theory vs practice" argument.
A lot of the 20th century's major discoveries were not accidental bolts from the blue, but resulted from people being guided by theory in order to design their experiments just right.
Einstein came up with General Relativity - the most important theory in modern cosmology - by immersing himself in the theory, incorporating work by lots of other scientists (people like Maxwell and Lorentz), and trying to attack a particular problem. And he famously succeeded.
Accidental discoveries did happen, of course. Like Hubble discovering that the Universe is expanding. But still, Taleb's version of the process - someone makes a practical discovery, and theorists come in along later and hastily try to explain what's going on -- just doesn't really fit here. Even before Hubble, scientists were aware that Einstein's equations really seemed to imply an expanding Universe (and people came up with all kinds of kludges to 'fix' the problem). Hubble showing that the Universe is really expanding caused a feeling of 'oh thank goodness, the theory was right all along'.
Or take the discovery of the Higgs Boson. Or gravitational waves. Or the first exoplanet. In all of these cases *theory came first*, and experimenters, guided by the theory, knew where to look.
I don't think Einstein's equations require an expanding universe any more than Newton's. In either, you have the question of why a static gravitationally bound universe would not be visibly collapsing. Einstein's equations allowed for a constant of expansion, but it is still not really used. And exoplanets are kind of obvious. The other two I will give you.
The truth is that both theory and observation go together, and a field lacking either will be sick.
The point is that, to produce a universe as we see it, it has to be evolving. Which is the obvious answer if you take Einstein's equations with no cosmological constant. The initial conditions implied were computed by Lemaître.
In a Newtonian universe, there was just no solution possible.
>I don't think Einstein's equations require an expanding universe any more than Newton's.
That's not true, I'm afraid. Newton did of course realise that his law of universal gravitation made the Universe prone to collapse (in 1692 he wrote to a friend, saying the whole Universe might "fall down to the middle of the whole space & there compose one great spherical mass"), but it was possible to just say that the Universe was infinite, and therefore didn't have a centre of mass.
In Einstein's Universe, change is inevitable. Alexander Friedmann was the first to realise this, in 1922. The equations of GR directly imply a Universe that is either expanding or contracting -- which is why Einstein invented the 'cosmological constant' kludge to hold the Universe static.
When Hubble discovered the expanding Universe, Einstein threw the cosmological constant away and embraced the dynamic Universe his equations implied.
Well, perhaps Newtonian gravity has more wiggle room for a static universe. But in any case Hubble's observation that the universe was visibly expanding was really just a synthesis of earlier observations - such as those of Vesto Slipher, who observed around 1912, before Einstein's theory of general relativity was published, that most distant galaxies were receding.
You know, I'm always bemused by the number of people (even within education) who assert that the purpose of formal education is to imbue the student with a beautiful theoretical framework that will allow him to easily predict and calculate all he needs to know about the Real World that he is about to enter.
This is deeply and even obviously silly (although plenty of people doing the educating think this way, perhaps to pamper their own egos). The only rational purpose of education is to summarize and distill the past, so that the student learns all that has been done before (in some relevant area) far more efficiently and quickly than if he had to stumble upon it himself by chance in the Real World.
It is (or ought to be) a *past-focussed* process to greatly shorten the time of complete n00b apprenticeship, so that the student can become a journeyman in the Real World much sooner and achieve mastery at a younger age. That is, its logical foundation *is* skepticism about theory versus Real World experience. It says "learn all the ways people have tried X and Y and theory Z and T and why they didn't work as fast as possible, in a planned firehose of information dump, so that you can go out in the Real World sooner and NOT repeat any of the umpty-six dumb mistakes people have made since AD 800 or so."
That doesn't, of course, mean that the real purpose of education has remained uncorrupted, or that nitwits, both within and without education, haven't enthusiastically debauched it. That they have -- the Church of Education cultists are almost as obnoxious as the Church of Science cultists. But in principle education should be a big buffer against volatility -- an "antifragile" enterprise -- because by allowing students to learn of the experience of far more people, in far more situations, than would easily be possible in any Real World situation of equivalent duration, it makes far fewer of the curveballs life and Nature throw at us come as an utter surprise.
Personally I view the modern fashionable disdain for the institutions that helped us tame and ride chaos (as well as the descent of those same institutions into fossilized rococo courtly competitions) as a kind of broad late-Empire intellectual decadence, the kind of hazy sentimentality that might've led the late Empire artisan chafing under Imperial taxation and corruption to fantasize that the life of a medieval village smith would turn out to be a tremendous improvement for his grandsons. Ah! The fresh air! The simple joys of the peasant life in the harmonious shtetl nestled in the bucolic countryside, free of any distant scheming Senators. From which mistake follows 1000 years of muddy plague-eaten misery, but maybe that's what happens when (a) some of us mistake a rational system for a religion, and (b) the rest of us are too impatient to scrape away the barnacles and decide to just go all Canticle for Leibowitz on the whole thing. If rationality has been so thickly coated in ritual that it is hard to recognize any more -- why not treat *all* rationality as ritual and just give yourself over to impulse? That'll work out well.
> to imbue the student with a beautiful theoretical framework that will allow him to easily predict and calculate all he needs to know about the Real World that he is about to enter.
> to summarize and distill the past, so that the student learns all that has been done before far more efficiently and quickly than if he had to stumble upon it himself by chance in the Real World.
I don't think these two things are as mutually exclusive as you seem to think they are. That beautiful theoretical framework (ideally) *is* a distillation of everything we've learned by doing things the hard way.
Yes. But it's not so much a framework that we think is robustly predictive, but rather one that rationalizes a ton of experience. Theories are required to summarize the past very, very well. Do we also construct them to predict the future? Kinda sorta. We usually use them to rule out experiments or ideas that can be shown to be too similar to what has not worked in the past. But we wouldn't *do* research at all unless we hoped and expected the theories to *not* be accurately predictive in some area or another.
That is, if the academy were equal to its caricature, something that professed to believe it could precisely predict anything anywhere on the basis of its theories, then it wouldn't attract intellectually curious people at all. If you *believe* there's a Theory of Everything (or at least Everything Of Interest To Me) and once I learn it I can just run a computer program or something and calculate anything -- why bother? Why study, why think about things, why even hope to have the idea that nobody else has had yet?
The comments on small versus large nations made me imagine the US as 50 sovereign countries. Imagine the diversity of culture, social systems, and economic systems in such a world. Of course, who knows how many "intra-US" wars would have been fought over a couple centuries. If you had a choice between one bigass US (as today), or 50 sovereign nation states, which would you choose?
In a vacuum? The intra-US wars. I already suspect (although I hope I'm wrong) that we're in the early stages of gearing up for another civil war that will be way worse (even more so than the last one) than any interstate bickering. However the US exists in the world, and I'm not sure the "Pax Americana" hasn't been worth it from a purely utilitarian standpoint. Although per the whole subject of debate maybe that too is a false tranquility. It is (as Taleb repeats ad nauseam to an uncaring world) hard to say.
Yes, in a vacuum. Imagine there was never a union and each state developed its own sovereign history. Perhaps Massachusetts would still be Puritan. Louisiana would have an unrecognizable language. Some states might have open borders with others. Others might have border walls.
The question is more epistemic than utilitarian, since it's quite impossible to model the net utilitarian impact of such a massive divergence. So, everything being equal in the two scenarios - avg. GDP; avg. life expectancy; overall lives lost in the last 2.5 centuries to wars, disease, famine - which sounds more appealing?
Doesn't seem like a possible comparison. You can ask what we might see if the 50 present states decided to all split into separate nations now, but they wouldn't exist if we'd never reached the present of one nation. Which of the 13 original states was going to buy or conquer the rest of North America if they'd never united? Borders and ownership would look a lot different. Presumably some of the original 13 would have merged anyway. New England as X distinct nations doesn't make much sense. Louisiana Territory is unlikely to be subdivided in such a way as to balance slave with non-slave US Senate when there is no US Senate. Former parts of Mexico are most likely still a part of Mexico. Hawaii and Alaska would probably just be part of Japan and Russia.
Net effect is pretty hard to predict. I guess maybe better for former plains tribes that might still exist? Vastly different Europe if we're not spending two and a half centuries clearing the frontier of natives so they can send their huddled masses. Most of the Pacific Rim probably belongs to Japan, which likely doesn't make a huge material difference to the people there. No Pax Americana, but there's probably something like a Pax USSR anyway. Maybe communism even works without a vast capitalist beast to force them into an arms race to bankruptcy? Does the whole swath of world from Egypt to Afghanistan look a lot different with only Russian and European meddling but no American meddling or does it look basically the same?
What the hell does East Asia look like? Seemingly Japan can probably conquer China in the 1930s if they don't have to fight an eastern front, but they can't seriously hold on for 90 years after that, right?
If you just mean me personally without thinking about how the rest of the world gets impacted, I rather like the union. I've been able to live in and freely move between many different states without ever having to go through an immigration process. But man, you can go levels deep with this. I'd be Mexican, not Mexican-American, which is worse in the real world, but is it worse when all the oil riches of California and Texas belonged to Mexico? My family might be oil barons. Does Spain buy Louisiana instead of the US doing so? Oil-rich bread basket of the world Mexico doesn't become a craphole narco state and comes to dominate North America while Virginia and Pennsylvania wage centuries of petty cross-border squabbles, the eastern seaboard being balkanized and plagued by never-ending religious wars and dictators rising that Britain, France, and Germany periodically have to send in squads to put down to help Mexico keep its northern border safe so the beef, grain, and oil keeps flowing?
Surely, the rest of us couldn't just stay European colonies forever, right? They lost all the other ones too at some point. I'm kind of just assuming Napoleon still invades Spain and Mexico takes advantage to win its independence at about the same time. Do we ally with Virginia and Maryland in a great Catholic alliance against the protestants like Saudi Arabia and Iran fighting their proxy battles for cultural dominance in the middle east?
Wow. Ok. You took it there. Btw, sounds like the makings of a pretty cool game.
"Only makes sense with an assumption that antifragility is systematically mispriced" - it is. Antifragility benefits systems over cases and the collective over individuals: again, evolution. *Individuals* don't want big shocks, and it is certainly anti-humanitarian to say that the weak and the unlucky should die for the benefit of the strong and lucky, as both you and he point out. So we tend to seek the stable, the predictable, the smooth, meaning that such things are overpriced due to (misguided) demand.
Nobody (for the most part) *likes* the idea, I think Taleb is just arguing that it's a better model of the world than the ones currently being used, and that it *matters* because the current at-odds-with-reality models are disaster-prone. I also agree that I don't think he'd take umbrage to the Rationalist movement: the whole thing (and I realize I'm grossly oversimplifying and unlike you I wasn't there at the beginning so correct me if I'm wrong) seems to me to have started when Yudkowsky looked around and said "hey, why do all of these intelligent educated people believe in and do all these patently absurd things? There must be some important thing here besides intelligence and education that we're failing to reify".
"Taleb never makes this claim, and I think it would be hard to argue that an entire category of instrument has been consistently mispriced since forever. But then what is he trying to say here?"
I think this is exactly the claim he makes. Just in a reverse "picking up pennies in front of the steam roller" sense that it will take a long time. I don't think Taleb believes in the EMH.
Re mergers: "The combined unit is now much larger, hence more powerful, and according to the theories of economies of scale, it should be more "efficient". But the numbers show, at best, no gain from such increases in size [...] There seems to be something about size which is harmful for corporations."
Ronald Coase did work on this in The Nature of the Firm (1937). In a nutshell, the size of a firm is a function of economies of scale (favoring expansion) and transaction costs (favoring contraction). Firm sizes equilibrate at the intersection of these lines. In other words any given firm is probably roughly as big as it ought to be, and if you merge two firms you're likely to introduce higher transaction costs, which your gains in economies of scale are not large enough to offset.
Transaction cost in this case is basically the friction with which information flows inside the firm. So overhead, the likelihood of managers poorly allocating resources etc.
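That equilibrium can be sketched as a toy model, with the cost curves and all parameters invented purely for illustration (Coase's argument is qualitative; these functional forms are mine):

```python
import math

def cost_per_unit(size, scale_k=100.0, friction=0.5):
    """Toy Coase-style model: per-unit production cost falls with
    firm size (economies of scale), while per-unit internal
    coordination cost rises with size (transaction costs of
    running a bigger hierarchy)."""
    return scale_k / size + friction * size

# The two pressures balance where the marginal effects cancel:
# size* = sqrt(scale_k / friction). Pushing the firm above or
# below that size raises unit costs - merging two at-equilibrium
# firms overshoots it.
optimal_size = math.sqrt(100.0 / 0.5)
```

Under these assumptions, doubling or halving the firm relative to `optimal_size` strictly increases cost per unit, which is the "firms are roughly as big as they ought to be" intuition.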
I think in some cases, mergers and acquisitions actually reduce transaction cost via vertical integration (i.e. company A is your largest supplier, and you are by far company A’s biggest customer. It might be a net reduction in transaction costs to merge and turn your external purchases into internal transfers)
I think the economies-of-scale vs transaction-cost model assumes mergers of similar firms and “mature” firms (that is, they need to have had time to grow to the size they “ought” to be). Also no major leaps in technology - certain forms of transaction are of course cheaper and occur with less latency than they did in 1937.
Yea, it's true. Integrating a supplier can reduce transaction costs, because there are also costs associated with bargaining, adverse selection, and keeping trade secrets etc. That's actually also part of Coase's work. The reason integration can reduce transaction costs is why we have firms in the first place; technically a perfectly efficient market would have every worker be a freelancer, where they bargained over compensation every time they performed a task, and they hopped between employers all the time to optimize talent. But because we don't live in a zero transaction cost environment, that becomes prohibitively expensive and we create firms. Mergers, where suppliers are successfully integrated, are cases when transaction costs are on net reduced even though the organization grows.
So in short firms form because integration is a way to reduce transaction costs, economies of scale puts more upward pressure on optimal firm size, and then transaction costs ultimately constrain firm growth at the top-end.
Also an interesting observation is that leaps in tech, as you mentioned, are what made transaction costs sufficiently low as to allow freelance work in what we today refer to as the gig economy. It's an extension of Coase's framework.
This distinction between discovery by "accident" and by research seems very arbitrary to me. What do you think those scientists/engineers were doing when those accidents happened? Most scientists (most good ones anyway) are well aware that the direction of research sometimes has a life of its own, but that doesn't mean you can eliminate research while keeping the "accidents" to which it leads!
The Portuguese most likely discovered Brazil by accident in the context of their programme to find a maritime route to India by following a coherent theory that you could just sail around Africa. Does this show that all their work trying to map the African coast and trying to model the Atlantic wind patterns could have been ignored in favour of just sailing aimlessly around the Atlantic? Or does it illustrate that a deliberate programme to discover/explore one thing/aspect/field/whatever will often yield unexpected results with unforeseen benefits - but which wouldn't have happened if they weren't doing the "research" in the first place? Hint: without the wider programme to find the maritime route to India, the Portuguese wouldn't even have developed ships capable of making it to Brazil.
Now, there is an argument to be made that maybe currently there's too much effort dedicated to incremental research compared to looking for breakthroughs (which would arguably make these "accidents" more likely). It's still all research though, and it doesn't invalidate that both types of approaches are important - even if one is obviously sexier.
"John fancies himself protected from volatility. But he is only protected from small volatilities. Add a big enough shock, and his bank goes under, and he makes nothing. George is exposed to small volatilities, but relatively protected from large ones. He can never have a day as bad as the day John gets fired."
Of course he can. George gets into a car accident - pretty likely when you spend all your time driving - and not only has he lost his job, he's lost his cab, which he needs to get further employment as a cab driver. If John gets fired, the only thing he needs to find another job as a banker is his brain.
Everything is antifragile until it encounters a risk that wasn't included in the model.
And no amount of hedging can protect you from the risk that a grand piano falls on your head
Kierkegaard said in a few places that the real test of your philosophy is when a roof tile hits you on the head out of nowhere.
"according to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division."
That seems like crazy talk. Any time you have N items, and M people who want to share them, you divide N by M. Even if you do it like a Turing Machine would do it (going around the M people and having them take one until you have less than M items left all the while incrementing a counter for the number of rounds), you're still dividing.
Is Guy's claim that you never have N items and M people? How is that even possible?
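For what it's worth, the Turing-machine-style procedure described above really is division; a minimal sketch:

```python
def share(n, m):
    """Divide n items among m people by repeated subtraction:
    go around the group handing one item to each person until
    fewer than m remain, counting the completed rounds."""
    rounds = 0
    while n >= m:
        n -= m
        rounds += 1
    return rounds, n  # (items per person, items left over)

# 17 items among 5 people: 3 each, 2 left over.
```

No positional notation, no long-division algorithm required - which is why the "five persons in Europe" claim is presumably about written arithmetic with large numbers, not about splitting a pile of goods.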
I'm imagining a family blankly staring at a pie, baffled by the unsolvable problem of cutting it into equal slices.
This is the "paradox" of Buridan's Ass.
In the up-front trichotomy between fragility, robustness, and anti-fragility, it is not at all obvious to me why one would prefer anti-fragility over robustness. I suppose the argument is that most of modern society has a false confidence about how robust it really is, but it doesn't follow that the solution is "embrace volatility" as opposed to "anticipate under-appreciated possible sources of volatility and take steps to avoid them."
A book full of common sense in a world that sorely lacks it.
I would say you're right about hoplite and phalanx formations - they're quite powerful, but also fragile, and once they start to crack, it's often all over from there.
If you absolutely wanted to force the fragile/antifragile pattern, then the Roman legion would be at least the less fragile one - while a phalanx has all the tactical flexibility of a thrown brick, legions were designed to be maneuverable, to swap units in and out to combat fatigue, and so on (this is why we get the original Pyrrhic victory - even when they kinda lost, the legions inflicted a ton of punishment, because they could be defeated without things just snowballing from there).
It's also worth mentioning that Spartan _society_ was incredibly fragile, and couldn't maintain itself one bit when subjected to nothing more than the passage of time.
The notion of 'antifragility' seems to completely break down when applied to military formations. Battle tactics and traditions arise in response to particular situations, enemies and experiences, and cannot be expected to succeed outside of those parameters. The legions looked great against the centuries-old phalanx, which in turn looked fantastic against the chariots and light infantry fielded by its Persian enemies. Presumably, these same Persian formations were highly effective at defeating whatever enemy they evolved to face. I know next to nothing about Persian history, yet strongly suspect that had Taleb written during the Persian golden age, he would have described the invincible armies of King Darius as 'antifragile' without hesitation.
On the other hand, you might argue that the Roman Republic during the Punic Wars was antifragile. Their institutions absorbed immense pain and uncertainty, seemingly growing mightier with each new curveball thrown at them. But then again, maybe this is 'adaptability', much more than it is antifragility. I'm still iffy on the distinction.
What came to mind for me with the hoplite example was that they were citizen soldiers who equipped themselves and fought for their own benefit, rather than slaves or conscripts as was usually the case for opponents. Contemplating how this increases antifragility was informative.
IOW it wasn't the phalanx tactic that defined hoplites - that was a common approach used long before Sparta - and it wasn't the particular weapons and armor since those varied and evolved with circumstances and opponents.
Would a society fielding self supported citizen soldiers be antifragile by Taleb's definition?
"Evolution is antifragile. In a stable system, animals won't evolve. In a volatile system, they will. At times I became concerned Taleb was getting this wrong - animals will evolve to more perfectly fit whatever niche they find themselves in."
If I recall correctly, Taleb adopts a gene-centric, "selfish gene" perspective on evolution. The antifragility of evolution seems pretty straightforward under that view. For example, a population of animals will have genetic variation related to in which temperature they thrive. If the temperature stays the same for long periods of time, genetic variants associated with fitness at that temperature will become more common. If the temperature then changes, those rare individuals with variants associated with fitness at the new temperature will thrive, while the majority adapted to the previous temperature may go extinct. Or if there's no genetic variation left and no fortuitous mutation occurs, the whole population may go extinct, with other animals taking over the newly vacant habitats. (The high polygenicity of many traits could be thought of as an antifragile mechanism: even under strong selection, not all variation is exhausted, meaning that if the environment changes, organisms can still evolve towards the new optimum.)
I believe Taleb says something to the effect that no individual or population or even species is antifragile in the evolutionary scheme. Rather, it is life itself (or genes embodying life) that is antifragile.
I read this around the time it came out and have been thinking about revisiting it, but this review may have scratched the itch. I too enjoy Taleb, but I realized with Antifragile that part of the reason I'm so engaged is that I love to hate his arrogant tone, and because it's a challenge to accept that he's seemingly correct on so many of his points while also contradicting himself terribly throughout the book - e.g. warning about the halo effect, then acting as if he's an expert in exercise physiology when he brags about his weightlifting routine in the middle of the book.
I've long wanted to see an experiment in which Taleb and Pinker switch exercise routines (Pinker starts lifting Taleb's deadweights and Taleb starts riding Pinker's racing bike) to see if their attitudes and opinions reverse as well.
I like that the critique of this book is exactly what you'd expect from Taleb's intellectual attitude - he doesn't have a grand overarching theory of antifragility, but instead a series of anecdotes and thought experiments with some grounding in the real world that you can chew on in order to improve your thinking about the subject.
Writing from a farm in central Kansas, I wonder what Taleb would say about agriculture. There is immense variance involved in the practice of agriculture. However, rather than fostering antifragility, nearly all agricultural practices I can think of are designed to stamp out variance, in order to permit fragile, but hugely efficient practices.
Grain prices jumping up and down? Why be antifragile when we can kill the variance by building silos and storing our grain.
Weather getting you down? Why be antifragile when we can tame the variance with irrigation, state-of-the-art forecasting systems (the daily forecast is probably the highest-rated show around here), and genetic engineering.
Random calving complications? Why be antifragile when we can flip variance off by hiring a vet to oversee tough cases?
Not to mention the reliance on increasingly-complex machinery (combines and trucks, of course, but also increasingly GPS, and many others) which requires parts, fuel, maintenance, an uplink to space, incomprehensible supply chains, and a million other things without which the whole thing comes crashing down.
A world where we designed agriculture to be antifragile, is almost certainly a world where Taleb goes hungry.
I imagine Taleb would point to the Famine in Ireland which seems to have been due to an over reliance on two high yielding types of potato, both which turned out to be fragile to the blight.
A system that has 1000 varieties of crop will yield less on average, but the minimum yield will be higher.
So the core question is whether you optimize for the highest average return, or the highest minimum return.
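A toy simulation (my own illustrative numbers, not from the book or the thread) makes the average-vs-minimum tradeoff concrete:

```python
import random

def harvest_stats(n_varieties, n_years=1000, blight_p=0.05, seed=0):
    """Each variety normally yields 1.0; each year, each variety is
    independently wiped out by blight with probability blight_p.
    Diversity carries an assumed 10% efficiency penalty (made up)."""
    rng = random.Random(seed)
    overhead = 0.9 if n_varieties > 1 else 1.0
    yearly = []
    for _ in range(n_years):
        surviving = sum(rng.random() > blight_p for _ in range(n_varieties))
        yearly.append(overhead * surviving / n_varieties)
    return sum(yearly) / n_years, min(yearly)

avg_mono, min_mono = harvest_stats(1)       # monoculture
avg_poly, min_poly = harvest_stats(1000)    # 1000 varieties
# monoculture wins on average yield; diversity wins on worst-year yield
```

With these made-up parameters the monoculture yields more on average but hits zero in blight years, while the diverse system yields less on average and its worst year stays close to its average.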
Taleb would also argue that his dying from hunger is not the worst thing that can happen to his bloodline.
I might be getting the nomenclature wrong, but I would presume that planting many diverse crops makes the system more stable and redundant. The word 'resilient' comes to mind, but I'm not sure that 'antifragile' is it.
Limited mechanization, small acreage farming, or foraging seem like clearer examples of an anti-fragile food supply. I'm just not sure they're anywhere close to desirable given the efficiency tradeoff, and I suspect the same is true in most settings.
Agreed! There isn't enough conversation here about what the actual tradeoff is. Fwiw Taleb is introducing the possibility that there is a tradeoff at all (maybe common in your industry but less common in others) and noticing that we consistently overvalue efficiency over resiliency.
The intellectual demands of being a farmer these days are striking. Being a farmer in Kansas increasingly demands a STEM undergrad degree and the equivalent of an MBA.
No doubt! (I should clarify that my ties to the Kansas farming community are through marriage and affection, rather than profession or upbringing).
It takes an impressively broad skillset to run a family farm, let alone do so profitably. You are so right about the often overlooked 'MBA' part of it: successfully marketing a harvest takes serious strategic thinking, and can greatly impact a year's profits.
I have no idea what it takes in general, in spite of actually living next door to a few farms several times in my life but not being a particularly friendly neighbor, but the one farmer I have known, who was a girlfriend's dad, was an electrical engineer who decided he'd rather be a farmer and bought up some land in Amish country. Only reason it worked in such a shitty market for small timers is he had the skill to build his own generators and run the entire operation off of waste vegetable oil he got for free from the restaurants he sold his vegetables to.
I tried to read this book after hearing much praise for Taleb. I had to give up in frustration pretty quickly as, to me, it was just a lot of arm waving. In particular, he employs the pseudo-intellectual practice of first creating, and then discussing, his own private terms of art. But the terms are never defined, and they change at will to fit whatever point is supposedly being made. There is no clear hypothesis that could ever be tested, and no useful rule or insight is ever forthcoming.
Shorn of all the jargon, he seems to be saying nothing more than: "stuff happens, it's hard to predict, act accordingly." I don't get what people think they are getting from his books.
Making up your own terms like "lindy" seems to be a way to get an enthusiastic audience. I guess it creates a community of people who know what Taleb means by "lindy."
He did not make up this term. It was coined in the 60s by Albert Goldman.
I looked it up just now and apparently it's a version of the "test of time" heuristic. You know, like if the Pyramids have been around a long time, they will probably last a while longer. I guess it's named after Lindy's, a restaurant in NY that is famous for being around forever despite having objectively crappy food.
So invoking the "Lindy" effect sounds like a cute way of saying "past trends tend to continue, until they don't."
One of my (many) problems with Taleb's writing is that it doesn't lead to any sort of practical model or decision procedure. The fragile/robust/antifragile classification suffers from not being mutually exclusive nor well-defined, and so it's mostly useless when it comes to applications. The comments here have already highlighted many problems with the classifications Taleb gives in his book, suggesting that the classifications are not well-defined enough that people can agree how to classify things. Moreover, something might be antifragile to small changes, and fragile to large changes: an example is muscles, which Taleb points out are antifragile to small stresses (getting stronger with exercise), but they are fragile to large stresses (strains and tears can cause permanent damage).
Without the ability to clearly classify things as fragile/robust/antifragile, the theory lacks any predictive ability and greatly limits its usefulness. There's definitely interesting things to be said about systems that take advantage of natural disorder, but I feel like the framework Taleb sets up falls short of a working theory.
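The muscle example can be made concrete with a toy dose-response curve (entirely invented numbers) for a system that is antifragile to small stresses and fragile to large ones:

```python
def muscle_response(stress, threshold=10.0):
    """Toy model: below the threshold, stress produces adaptation that grows
    with the load (antifragile); beyond it, stress produces injury whose
    damage also grows with the load (fragile). All numbers are made up."""
    if stress <= threshold:
        return 0.1 * stress            # mild stress -> proportional gains
    return -(stress - threshold)       # overload -> escalating damage
```

The point being that "fragile vs. antifragile" is a property of the (system, stress level) pair, not of the system alone - which is exactly the classification problem raised above.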
I totally agree with the idea that anti-rationalism isn't opposed to rationalism - it seems like a natural result of using rationality on itself. This is where 'why philosophers should care about computational complexity theory' feels relevant:
https://www.scottaaronson.com/papers/philos.pdf
If you aren't consciously thinking about how accurate your model might be, what its limits are, and where it's going to be wrong, you're probably assuming some naive model of computation in which totally accurate answers are cheap and easy to compute really fast.
Likewise, if you ignore the fact that people are computers, you might naively think we should be able to scale societies arbitrarily. Once you understand that human beings _are_ computers and that our societies are networks of computers, it becomes reasonable to conclude that governance systems are network topologies, and not all network topologies scale arbitrarily well.
If he's opposed to anything, i think it's something like a blind faith in experts, and trust in an existing system, rather than a willingness to prioritize evidence-based thinking, and skin-in-the-game predictions, over "what those smart people think."
I'm very curious about this. What insight about human societies requires the explicit description of humans as computers? Does Scott believe sociologists are unable to effectively contemplate issues arising from the size of human settlements, without appealing to big-O notation?
If you assume
- human relationships require energy to maintain
- people can only have close relationships with ~150 other humans (Dunbar's number)
- people will look out for friends of friends, and friends of friends of friends, but consider anyone beyond that a stranger
This implies an upper limit of 150^3 = 3,375,000 human beings can interact with each other, because you eventually have people who are connected so distally that they aren't really able to care about each other.
I have no idea what Scott believes on this issue.
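The arithmetic behind that upper bound, taking the assumptions above at face value (constant names are mine):

```python
DUNBAR = 150        # assumed cap on close relationships per person
CARING_HOPS = 3     # friends, friends-of-friends, friends-of-friends-of-friends

max_mutual_care = DUNBAR ** CARING_HOPS
print(max_mutual_care)   # 3,375,000 - beyond this horizon, everyone is a stranger
```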
I can't tell if you're being snarky about big o here, but there are a bunch more examples.
Sorry, I must have sounded that way, but I did not mean to be snarky! Appreciate the clarification
Thanks for clarifying - the internet is weird. The idea that computational limits affect the human experience has been my main blogging focus. Here's another example:
https://apxhard.com/2019/05/02/popular-culture-as-distributed-lookup-table/
Joscha Bach has talked a bunch about this, too:
https://www.youtube.com/watch?v=P-2P3MSZrBM
The thing I find most notable about Taleb is how many clearly intelligent people hate him and his theories, without reading one of his books.
If you hate him, I wonder: compared to whom?
Taleb is generally more right, more insightful, and more actionable than Malcolm Gladwell and the TED crowd.
Taleb is more correct and useful than most academics in social sciences.
Taleb is approximately as correct and insightful as Ted Kaczynski, and likely less dangerous on net.
Taleb is likely less accurate on hard sciences and the patterns of pure invention. His discovery here is a phenomenon where you can win without being right, yet he's seemingly preoccupied with being right.
I suspect the biggest source of umbrage is not that Taleb is a bully, but that he packages his philosophy with just enough math that those who live and breathe math find they have to deal with the ramblings of a mad philosopher when interacting with others.
I'm not sure I understand your last sentence - math-savvy people are annoyed by this mad philosopher? But how is this annoyance specifically triggered when interacting with "others" (other philosophers? other "math people")? Can you rephrase it for me, please?
By the way, as is the case with many people who have a point but are widely disliked, it's not really a mystery - Taleb just seems to act a bit obnoxious as a person. You'd be surprised how much even highly intelligent people care about that kind of thing, beyond whether someone's technically right. Also, many of the "intelligent people" are the sort of people he directly antagonizes; plus, in most current intellectual/academic spheres it's good practice to frame your ideas as adding to and furthering discourse, rather than stating that most of the people before and around you are basically idiots and you are now going to enlighten them.
I personally don't hate Taleb by the way - I read Black Swan and found it interesting (if too long) and I was a bit taken aback when I discovered his online blogging and tweeting persona, but I'm not invested enough to feel strongly either way. I just don't think it's surprising at all that many people would ("hate" him).
I was extremely disappointed to learn that the Carter in the Carter Doomsday Argument was not Jimmy.
I would love for Scott to offer a review of some of Chapman's work, and if I could pick, I'd ask him to write about https://meaningness.com/
2nd this. I read this stuff and thought some of it good, and most of it interesting.
I think the basic thesis in the first few paragraphs falls apart as soon as you try to pick apart what is meant by "the better it does" and "does well" here. These imply value judgements or objective functions of some kind, which glasses and rocks do not inherently have. What does it mean for a glass to do well? Why assume that glasses have an inherent goal of continuing to be vessel-shaped rather than transforming into entropy-maximizing piles of shards?
Intuitively, glasses "do better" by being vessel-shaped because that makes them more valuable to conscious, value-judgement-having observers. But if you define "better" as a value judgement on the part of conscious observers, rather than of the entity itself, then the hydra example falls apart, because arguably a hydra growing more heads is a *worse* state of affairs for the hero fighting it.
If instead you try and salvage this by replacing "better" with "more stable or resistant to being altered", then both the hydra and the evolution examples fall apart.
Am I missing something here?
Maybe I'm overthinking this and it's fine as long as fragility is always assessed for a (object, objective function) pair instead of just for the object itself.
Exercise is antifragile until you overdo it. And it has to be appropriate. A lot of people shouldn't run anywhere, except to the orthopedist's office.
Everything is anti-fragile, until it breaks. Taleb just traffics in circular reasoning, IMHO.
Hah! Good one.
The paragraph "For example, if some very smart scientists tell you that there's an 80% chance the coronavirus won't be a big deal, you thank them for their contribution and then prepare for the coronavirus anyway. In the world where they were right, you've lost some small amount of preparation money; in the world where they were wrong, you've saved hundreds of thousands of lives," bothers the crap out of me, because it seems completely at odds with the message of your post "a failure, but not of prediction." The point is that if the scientists gave a 20% chance of a pandemic, you don't "prepare anyways;" you prepare because a 20% chance of hundreds of thousands of lives being saved justifies an 80% chance of wasting a small amount of preparation money on something that wasn't going to be a big deal.
you prepare anyway because...
My point is you're preparing *because* the experts estimated a 20% chance. If they estimated a 0.2% chance, you wouldn't make any special preparations because that wouldn't significantly increase the chance of a pandemic more than usual. You are preparing because of the experts' prediction, not in spite of it.
Yes. But my point -- which I know I didn't spell out, sorry -- is that I read what Scott said as exactly what you are saying. That is, you are preparing because it makes sense given the predictions (but in spite of the fact that 20% sounds low, and a naive system-1 response that doesn't take payoffs into account could lead you not to prepare). He is not contradicting his other post. I wonder if many other people interpreted this as you did.
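The decision logic being argued over here is just an expected-value comparison; with hypothetical costs (the units and numbers are mine, purely for illustration):

```python
p_pandemic = 0.20          # the experts' estimate
cost_prepare = 1.0         # small, paid in every world (arbitrary units)
cost_unprepared = 1000.0   # assumed catastrophic loss if caught unprepared

ev_if_prepare = cost_prepare                 # 1.0 either way
ev_if_skip = p_pandemic * cost_unprepared    # 200.0 in expectation

# Prepare because 1.0 < 200.0 - the 20% estimate is doing the work,
# not being ignored. At p = 0.002 the same arithmetic says don't bother.
```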
I'll finish up last year's pasta stockpile sometime later this year. The opportunity cost of stocking up was minimal, so why not do it in the face of real uncertainty? Same thing with the toilet paper (although it's been thoroughly debunked that the shortage was because of hoarding - rather, it just took a while for supply chains to switch from corporate to consumer).
I'm not convinced that "prediction" in the sense of putting a probability on an outcome is so totally divorced from recognizing tail risk. It's not possible to be robust or antifragile to every conceivable event, and it's certainly not cost effective to protect against all of them equally. Some tail risks are more likely than others, or harder to protect against. A "tail risk" of 1 in 100 or 1 in 10,000 is very different. There are steps you can take to be protected against broad categories of events (e.g. stocking your house with nonperishable food could help in case of pandemic, natural disaster, social unrest, military attack, or a variety of other events that make leaving home or acquiring food difficult) but inevitably most agents will have to choose what to prepare for and that necessarily implies a model about how the world works.
> This is one reason (among many) Taleb disagrees so strongly with Steven Pinker's contention that war is declining. Pinker's data shows far fewer small wars, but does show that World Wars I and II were very large; he interprets the World Wars as outliers, and notes that since WWII the trend has been excellent.
FWIW, Pinker has acknowledged that if WW3 were to happen the death count would probably be astronomical, and he still allows for the possibility despite arguing persuasively in The Better Angels of Our Nature that violence is decreasing.
I ran the same Lebanon v. Syria comparison on the same website as you did, and it shows only a roughly 10% difference in GDP per capita in 1913. Same Maddison data, very different picture when I generated the chart.
Same here. I suspect the review might have been written long ago and that Our World In Data probably changed its methodology since then. One difference between both graphs is that Scott's screenshot says the data were "adjusted based on a single benchmark year (2011) which makes them suitable for comparisons over time but unsuitable for comparison between countries", while the current OWID graph now says "These series are adjusted for price differences between countries using multiple benchmark years, and are therefore suitable for cross-country comparisons of income levels at different points in time" and shows near-identical GDP per capita for Syria and Lebanon until 1913.
good catch, thanks
Taleb has seen this post and he was *extremely* unamused by your joke about Lebanese proverbs, to the point that he is blocking people for sharing you: https://twitter.com/RichardHanania/status/1374798853493288961/photo/1
Antifragility, the idea that some systems benefit from volatility, seems like an insightful idea. But every example I work through in my head tells me that robustness is the ultimate goal, and antifragility is useful only as a reminder that not all systems need to depend on stability.
Exercise is a great example. The goal is fitness, to be able to physically overcome a wide variety of situations. That's robustness. Exercise is an antifragile system that helps you get fit. But eating, hydration, and sleep are also important, and those are fragile systems. (Maybe eating is antifragile? Intermittent fasting says yes.)
Evolution? Robust is another way of saying fit. Natural selection is definitely an antifragile system, but there are plenty of fragile systems like symbiosis or food chains too.
What about computers, or even smart phones? Fragile in the volatility sense, but not in the practical sense! Are there any antifragile systems in a smart phone? Seems to me it's just fragility surrounded by a robust case. Sure, sometimes there's a catastrophic failure like dropping on a sidewalk or going through the wash, but the cost of those is small compared to the value of the working phone.
So I like the idea of antifragility, but I don't believe it's the best idea ever. It's just one more tool in the toolbox.
I have a notion that anti-fragility exists over ranges, and in particular, that enough stress makes an anti-fragile system stronger, but too much stress will break it.
I *think* this is important because Taleb is so much in love with anti-fragility that he doesn't want to think about his favorite systems having limits.
Which gets to my point that Taleb's boasting and insults might be part of his charm*, but they carry real epistemological risks - his style makes it hard for him to notice whatever errors he might be making.
*I've pretty much become immune.
"Taleb is so much in love with anti-fragility that he doesn't want to think about his favorite systems having limits."
This isn't good for his marketing.
"But haven't theories given us all sorts of useful things, like science, which leads to technology?"
I massively recommend the book Shock of the Old, dedicated to this subject. It is very brief and dense and really enjoyable. It persuasively argues that the answer is "no" by arguing both the history of how important inventions arise and fall (in the 20th century) but also that our ideas about which technologies are important to us are wrong. A novel (to me) example he gives of how technology works is that he claims that poor mechanics in India understand American cars much better than the people who designed and built them: because in much of India they have to understand how to keep them running for many times their designed lifetime.
https://www.amazon.co.uk/Shock-Old-Technology-Global-History/dp/1861973063/ref=sr_1_3?dchild=1
I got a bit disillusioned with science in university when it became clear that the epistemology of science was actually to keep fiddling with your model (adding more and more free variables, appropriately justified by this or that idea of reality) until your predictions matched reality. And that the prediction of novel phenomena from these models (that is, models that teach us things we didn't already know, rather than just allowing us to make accurate predictions in line with statistical or machine-learning methods) is really the exception rather than the rule.

You can really see this when you note that Newtonian mechanics as a whole has no scientific justification given quantum mechanics - you learn them entirely separately, wave your hands (or hold them over your heart), and take it as an article of faith that if we were omniscient we would understand how quantum mechanics gives rise to Newtonian dynamics. Honestly, there is no other way to describe Newtonian mechanics today than as statistical curve fitting, where concepts like "force" are just meaningless free variables (pleasing to our intuition) that we're using to fit reality. Dark matter is another example of this (a thing that makes up the vast majority of the universe, but is only detectable as a magic free variable that helps our equations better fit reality). My current understanding of physics is that it's statistical curve fitting, but where everyone involved is constantly lying to themselves about what they're doing, even though nobody apart from those on the current bottom level (I guess string theory? About which I know nothing, sorry.) has any reason to believe they're doing anything else.
Anyway, sorry -- the point is that once planes exist, we can curve-fit the behaviour of planes and use that to guide our development of better planes. But before planes existed, we had nothing to curve-fit to -- so engineers just had to try stuff out until planes were invented. And then the same thing happened once we got to supersonic planes -- scientists weren't much help, except when they were being engineers. Shock of the Old makes this point about the Manhattan Project: it was an engineering project employing well-known engineers (who are generally known as scientists because that is a more fashionable title -- compare Galileo, who went by the title Philosopher because "Mathematician" didn't command respect).
"Medieval European architecture was done essentially without mathematics - Roman numerals (the only numerals anyone had at the time) were too unwieldy to add or subtract, and "according to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division.""
One thing that confuses me in statements like this. Is it implicit when people talk about "Europe" or the "whole of Europe" in these days that they are talking about Christian Europe? Or do people making statements like this have a blind spot about Islamic Europe in these days?
In any case I think this statement is unfair, because there were plenty of Muslims who were into maths and technology, and there were quiet imports of technology into Christian Europe.
For example, officially, the Catholic Church believes The Pope invented mechanical clocks in 963AD (an accurate pendulum clock that rang bells for specific hours)... and that it was a pure coincidence that this was after an extended trip to spend time conversing with some Muslim experts and various things.
The internet is presumably anti-fragile-- it considers censorship to be damage and routes around it. Pretty good censorship (as in China) is still possible, though.
****
Unrelated question: Has Taleb influenced enough people that he's affected what investments get made?
Speaking of the final note on 'anti-rationalism', I think Taleb rather thinks he belongs/actually belongs to the tradition of critical rationalism alongside Hume, Popper and Hayek. Many of his remarks on antifragile systems seem to me to relate to Hayek's on 'spontaneous orders', just as his praise of risk and adaptation to volatility is slightly reminiscent of Popper's point about making conjectures as bold (and therefore unlikely and specific) as you can. I think it's in Objective Knowledge where Popper deals with how people seek regularities in daily life - stability, balance - then don't find them and become unhappy because of that.
History person chiming in here - Spartan Warriors were the definition of fragile. The problem is we see them as being lone soldiers, or in a phalanx with other Spartans, and don't look at the society as a whole. Spartan soldiers were essentially idle - they fought, but they didn't 'work', and the society as a whole was structured with a vast, vast underclass of helots and semi-independent Greeks supplying a tiny elite at the top, maintained by terror.
Any serious disruption to this system did far, far more damage than it would to an equivalent state like Athens or Thebes, for almost no benefit to the society at all - there is no Spartan art, poetry, music, drama or even architecture.
I kind of wish substack would implement something like what webnovel has for comments, where you can comment on specific paragraphs, and to keep them from getting in the way, it just shows a number at the end of the paragraphs with comments on them that represents how many comments there are for it, and clicking the number opens a pop-up with all the comments, in branched format.
Mostly so I could have left a quick comment on the large vs small stone paragraph: the method of gain is from the square-cube law. For (my favorite) example: going from a cube with edges of length 4 units to length 5 units almost DOUBLES the volume.
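That cube example checks out - volume scales with the cube of linear size:

```python
# Square-cube law: a 25% increase in edge length (4 -> 5 units)
# nearly doubles a cube's volume.
v_small, v_large = 4 ** 3, 5 ** 3    # 64 and 125 cubic units
print(v_large / v_small)             # 1.953125 - almost double
```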
"Taleb is much more ambitious (some would say less careful and scholarly) "
Isn't part of his argument that it's not that good to be "careful and scholarly"? In which case, he practices what he preaches.
Evolution works because the sun is continuously throwing lots of energy on this planet. Doesn't "anti-fragile" just mean "things that eat energy from the entropy of others"?
>>> "he suggests theory is much less important for technology than we give it credit for. He makes the same point I made in Is Pharma Research Worse Than Chance - a whole lot of drug design seems to happen more by accident (or more politely, through tinkering and investigating) than by smart people using theory to discover drugs" and
"I was surprised to see Taleb point out the same effect in fields like physics and engineering. For example, he argues that jet engines just sort of happened when engineers played around with airplane engines enough"
Matt Ridley's recent book How Innovation Works goes into this in depth; it's almost the thesis of it.
I took part in a Model UN representing Libya in late 2011, and quite clearly remember the Syrian delegation refusing to take part in any negotiations because they'd been sanctioned by the entire world as a result of the uprisings there. So as far as I'm aware and can remember (Wikipedia backs me up), the Syrian civil war began in 2011, meaning Scott's comment that "Antifragile was published in 2012, before the Syrian Civil War" isn't quite true.
> For example, suppose I am long (or short) VIX. If something unpredictable changed to make the world much more volatile, that would be a positive black swan. If something unpredictable changed to make the world much less volatile, that would be a negative black swan.
I assume Taleb would say there is a tail of volatility/fuckery so insane that it rocks the entire financial system to the point where your position on VIX is just paper. VIX operates, like the banker, within a region and just trails off to zero at the point of counterparty risk.
Hi, there is no such thing as "anti-fragile." All things in the universe are fragile, material and immaterial. Some things may gain from disorder, but they are not "anti-fragile": too much disorder will make them fragile. Put options may gain if volatility increases, but beyond a certain point exchanges may go bust due to counterparty risk. Taleb is inventing artificial images to attack. He is excellent at marketing and at convincing crowds he's got something. He's got nothing. He is confused, his books are repetitive, and they can be summarized in 500 words.
I think my new unifying theory is this. In the face of uncertainty, it pays to be Talebian. Otherwise, you should be Alexanderian (Scott).
I have sometimes toyed with the idea of reading Taleb.
This post finally cured me of that. He seems a lot like a stupid person's idea of a smart person?
A populist trying to generalize personal experiences, who thinks the whole universe is options trading. Antifragile is a stupid idea because everything is fragile; even the whole universe could collapse down to a singular point with everything in it. His examples comparing taxi drivers to bankers are beyond stupid. I tried reading Fooled by Randomness many years ago. I never finished the book because it was a painful read. He must have used a novel editor from the UK or Australia or something. You must admit the man has a top-notch marketing department. For him, anyone who doesn't agree is an idiot who gets blocked on Twitter. All his books are full of repetitions and could be compressed to a five-minute blog read. Finally, he was a fierce proponent of lockdowns while advising a tail-risk fund that benefits from stock market pain. This is pure "skin in the game."
"Roman numerals (the only numerals anyone had at the time) were too unwieldy to add or subtract, and "according to the medieval science historian Guy Beaujouan, before the thirteenth century no more than five persons in the whole of Europe knew how to perform division.""
OK about the Middle Ages, but the Romans themselves had very advanced abacuses that could do math in their weird system of decimal for integers and dozenal for fractions. (Also, contrary to the legend, they obviously knew about zero.)
Taleb is full of contradictions because his "principles" are just generalized personal experiences. He's a sort of populist. He advocates anything that can boost his book sales.
As a reminder, the "rationalist" community is very anti-rationalist too (which you kind of suggest); wouldn't the proper term for it be "empiricist"?
"Mild empiricism"
the proper term for the rationalist community is techno-libertarian.
For some reason I missed this post when it came out; I am only commenting here on how you presented evolution. If Nassim Taleb presented his material as you indicate, it is inaccurate. For one thing, there is no such thing as a stable environment. Secondly, there is no such thing as a stable niche. One of the oversimplifications of Darwin's approach (though he was actually far more complex than he is made out to be, and did not say most of what people think he did) is adaptation to static niches. In actual fact, environment and organism continually alter each other. The environment is not a background with holes in it into which organisms fit themselves; it is more akin to a living field that organisms adapt to. The environment then changes, causing changes in the organisms, and so on. Oversimplifying, the entire ecological scenario is a self-organized, nonlinear, emergent dynamic which operates best when close to the moment of self-organization. If it moves too far from that orientation, it becomes static and begins to fail; if it moves too close to the line across which self-organization occurs, it falls apart. The healthiest situation maintains a balance point where constant change occurs, neither too far from nor too close to the line across which self-organization occurs. Western science has too long oversimplified evolutionary thinking for the masses, which has resulted in a great many misconceptions, in part because most scientists never really understood it themselves.
Isn't your concern about Taleb's understanding of evolution misplaced, since evolution tends towards punctuated equilibrium rather than incremental improvement?
If antifragility is so great, why not just invest your life savings in lottery tickets?