
When governments spend more than they collect in taxes, they do something that everyone refers to as "borrowing", which increases the "debt". But during the Covid pandemic, pretty much every country was "borrowing". But if everyone is borrowing, who is the lender? It seems to me now that those words do not have their ordinary meaning. "Borrowing" turns out to be code for "printing money", and "debt" is "the amount of money we've printed".

Well, not quite. I have the impression that governments nominally "borrow" from private corporations and individuals, but of course the way they "pay back" this money - with interest - is not by intermittently switching between budget deficit and budget surplus. Rather, they simply "borrow" even more money and use the new money to pay off the old debts. Which makes no sense to me: wouldn't it be better to print money to avoid paying interest? (and to avoid the risk of hyperinflation, have some sort of limit on the money-printing?)

I wonder if the whole system is set up in some modestly idiotic way - inefficient and difficult to understand, but not bad enough that the government is forced to change it. I also wonder if all the governments of the world use basically the same system, which would be a surprising "coincidence".

In any case, I've never seen an explanation that I could entirely follow. Aside from things like "fractional reserve" being hard to wrap one's head around, I find that virtually everyone who tries to explain macroeconomics takes for granted that their audience understands concepts like "buying debt" and the distinction between "fiscal", "monetary" and "financial". Does anyone explain this stuff like I'm 5?

Still, I would like to share a flash of insight I've had recently about macroeconomics that no one has ever even attempted to explain to me. It's about the value of money.

I assert that the value of money is (approximately) the total amount of production divided by the total amount of spending. "Amount of production" is real-world goods and services, so it has no particular unit of measurement. "Amount of spending" is the amount of money that changes hands, and it could be measured in dollars.
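If I'm reading this right, it's essentially the equation of exchange rearranged. Writing M for the money stock, V for how many times it turns over per period (so "spending" is MV), Q for real output, and P for the price level - standard symbols, not terms from the comment itself:

\[
MV \approx PQ
\quad\Longrightarrow\quad
\text{value of money} = \frac{1}{P} \approx \frac{Q}{MV} = \frac{\text{production}}{\text{spending}}
\]

Money that sits idle lowers V rather than M, which is why hoarded wealth drops out of the spending term in the scenarios below.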

A key point here is that money which doesn't change hands doesn't enter into the equation. Nor does the world population. Hypothetically, then, suppose Jeff Bezos finds a way to gobble up most of the world's wealth and he becomes a 50-trillionaire. If production stays the same during this time (I guess it's more likely to increase, but let's pretend) and he spends almost none of this money, the effect of this wealth accumulation should be deflationary: the denominator (spending) decreases because Bezos is not spending his earnings (while production is flat or increasing), so the value of money increases. Everyone's money is worth more! Yay! However, those who are in debt effectively find themselves with bigger debts. Wages fall in response to the constricted money supply, so indebted people will have trouble paying off their debts. (I heard somewhere that this was a major problem during the Great Depression.)

But now, suppose that suddenly Bezos decides to spend 6 trillion dollars for a vacation on the moon three years from now, and suppose world production responds mainly by *moving* resources to the moon mission (due to structural limitations that prevent total production from increasing very much). Thanks to the increasing denominator, the effect will be sudden inflation. Price increases are likely to be concentrated in moon-mission-related industries, but there will be inflation everywhere due to the loss of production in other sectors, and the supply chains feeding the moon mission will also be affected, which can cause price increases to bleed into other areas of the economy.

Getting back to the debt issue, while nominally the U.S. and many other countries have huge public debts and also huge private debts, none of this matters in practice, *as long as it doesn't affect production or spending significantly*. Indeed, perhaps big debts can be good by stimulating production, though I wonder if they can lead to instability (and if so, why).

Also, anyone want to predict the overall stock market trend over the next few years? I am not aware of any mechanism by which a major crash should occur, so I tentatively expect a minor crash at worst. However, US stocks are probably overpriced, so I expect that price increases will level off pretty soon and investor returns will be relatively poor over the next few years. Of course, though, I'm no expert and I don't really understand why the stock market rose so much in the first place. Did a lot of those stimulus dollars somehow get dumped straight into markets? Or was it caused more by regulators using their poorly-explained mechanisms to increase the money supply in a way that increased average stock prices?


> the impression that governments nominally "borrow" from private corporations and individuals, but of course the way they "pay back" this money - with interest - is not by intermittently switching between budget deficit and budget surplus. Rather, they simply "borrow" even more money and use the new money to pay off the old debts.

:) Everyone's happy until the music stops.

On a more serious note, you ask many good questions and I'd love to see a domain expert answer them.

> I don't really understand why the stock market rose so much in the first place

Between the rising inflation and terrible ROI on safe investments, there is massive pressure for capital to get parked _somewhere_. Hence concurrent stock market and real estate bubbles.


> there is massive pressure for capital to get parked _somewhere_

While market cap isn't the same thing as money invested, my question is, to the extent more investment did go into the markets, what route the extra money took to get there. For example, maybe there is a story you can tell where the (multiple) stimulus policies worked great by staving off poverty, but *incidentally* profits and stocks rose indirectly via money being spent on products and services, and another story where the stimulus didn't do its job efficiently and the majority of it went straight into pockets of people who didn't need it, and these people of course dumped the extra money into markets.


So what's up with cognition in schizophrenia?

I have schizoaffective disorder, minimal negative symptoms (zero to my awareness), above-average IQ plus some problems with attention and processing speed, but I'm scared of cognitive decline.

Many cross-sectional studies show that cognition does not decline in schizophrenia between ages 18 and 65. But two recent longitudinal studies (Zanelli, Kotov) do show cognitive decline after 10 and 20 years respectively.

So what should I do? I do not smoke, I'm managing my weight, I'm controlling my blood pressure, I do go on walks. Is this enough? Is this the best that I can do?

I'm aware that there are no drugs or supplements for cognition in schizophrenia (something does exist in the pipeline, but who knows about that).

There is some data on "cognitive training", but Cohen's d = 0.4 for this, and some folks have cognition which is 2 SD below normal. Sounds like a joke. Seriously?

Also, one of the promising agents in the pipeline, "BI 425809", has an effect size of around d = 0.45. Is it even remotely possible to have something with a 1 SD effect on cognition?
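For readers unfamiliar with the notation, Cohen's d is just the difference between group means in pooled-standard-deviation units, so the arithmetic behind the frustration here can be made concrete:

\[
d = \frac{\bar{x}_{\text{treated}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}},
\qquad
-2\,\text{SD} + 0.45\,\text{SD} \approx -1.55\,\text{SD}
\]

That is, an intervention with d around 0.4-0.45 would, on average, move someone starting two standard deviations below the norm to only about 1.5 SD below it.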

Right now I'm extremely butthurt that psychiatry for 100 years ignored the obvious symptom of schizophrenia, hinted even by Kraepelin.

Another interesting statistic ("Three cognitive trajectories", "Tale of three trajectories") is that schizophrenia does not universally affect cognition. There are probably 3 cognitive trajectories, with approximately 30% of people having intact cognitive function. So they should be able to work!

There should be 30% of schizophrenics in the workforce. But the real number in the first world is 10-15%. So what's up with them? Too lazy? Underdrugged? Overdrugged?

The obvious consequence of this is "no schizophrenics in the workforce" => "no exposure for society" => "weird ideas about schizophrenia" => "stigma".


We all know that [Steven Pinker](https://www.reddit.com/r/slatestarcodex/comments/gp8wv8/goddamn_it_pinker/) is [directly responsible](https://www.reddit.com/r/slatestarcodex/comments/6ggwap/steven_pinker_jinxes_the_world/) for all the world's problems over the last decade or so. But the man simply cannot be stopped:

[**Steven Pinker Thinks Your Sense of Imminent Doom Is Wrong**: *“It is irrational to interpret a number of crises occurring at the same time as signs that we're doomed.”*](https://www.nytimes.com/interactive/2021/09/06/magazine/steven-pinker-interview.html)


I've read somewhere (I think it was old SlateStarCodex) that fracking is now cleaner than most other energy sources. This doesn't align with conventional wisdom. It also doesn't align super well with Wikipedia's introduction, which I would summarize as something like 'greenhouse effect similar to coal, other effects worse'.

Has Scott ever written about this? If not, any other rationalist-adjacent people? If not that, either, but if you have strong opinions on this (in either direction), where is the best evidence?


It's common knowledge that compared to coal, natural gas releases about half the CO2 per unit of energy (https://ourworldindata.org/emissions-by-fuel#coal-oil-gas-cement-where-do-co2-emissions-come-from), and it releases much less pollution as well (see deaths caused per unit of energy: https://ourworldindata.org/grapher/death-rates-from-energy-production-per-twh ). Some quarters are worried about CH4 leaks from the natural gas system, but you can see in the AGGI (Figure 3) that greenhouse warming from CH4 is relatively low, and not rising much, compared to CO2, in the last 20ish years since fracking became a big thing: https://gml.noaa.gov/aggi/aggi.html

Now, about the fracking used to access the natural gas... I'm no expert, I know it can cause some tiny earthquakes, and I know that there are local regulations in Canada around how frackwater "flowback" from a well must be dealt with (I recently chatted with a guy at a local oil & gas company who explained how the industry has responded to environmental concerns by cleaning up its water management... I proposed that this wouldn't affect public perception, just as all the extra regulations that were piled on after Three Mile Island improved safety, but in the long run, almost no one noticed; instead people just went from treating nuclear as "dangerous" to "dangerous and expensive". He didn't seem to believe me, probably because he didn't like nuclear power.)


> but you can see in the AGGI (Figure 3) that greenhouse warming from CH4 is relatively low, and not rising much, compared to CO2, in the last 20ish years since fracking became a big thing: https://gml.noaa.gov/aggi/aggi.html

I don't understand how this answers the question of relative damage without also knowing how much energy was won per amount of methane vs. CO2. If I understand this data, it shows that methane increased warming by 0.114 whereas CO2 increased it by 1.084. That's 9.5 times as much. However, if 95% of our energy comes at the cost of emitting CO2, then methane would only account for 1/20th of the energy gain at 1/10th of the warming cost, and thus be twice as harmful. These numbers are made up, I'm just trying to illustrate the point that we're missing an additional variable to calculate the relevant quantity.
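To make that illustrative arithmetic explicit - the warming increases are the AGGI figures quoted above, while the energy shares are this comment's made-up hypothetical - a quick sketch:

```python
# Illustrative only: warming increases are the AGGI figures quoted above,
# energy shares are the commenter's made-up hypothetical, not real data.
warming = {"CO2": 1.084, "CH4": 0.114}
energy_share = {"CO2": 0.95, "CH4": 0.05}

for gas in warming:
    print(gas, round(warming[gas] / energy_share[gas], 2))
# CO2 -> ~1.14, CH4 -> ~2.28: roughly twice the warming per unit of energy,
# which is the "relevant quantity" the comment says is missing.
```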

Or is that also in the report? I haven't looked at it in detail.


I assume your hypothetical is "5% of energy comes at the cost of emitting CH4", but virtually none of our energy requires emitting CH4; if CH4 reaches the atmosphere, it means we *failed* to use it to produce energy, i.e. it leaked. I'm afraid I don't have a reference handy, but my impression is that the warming effect from leaked natural gas is much smaller than the warming effect we would get from using coal instead.

An important thing to note when analyzing this issue is that CH4 has a short lifetime of only about 12 years before it is destroyed in the methane cycle. That makes the methane problem less serious, as we expect nature will clean up the mess eventually when we reduce our methane output.

Methane has 28x the warming potential of CO2 over a 100-year time horizon (https://cdiac.ess-dive.lbl.gov/pns/current_ghg.html) but I think it's like 80x over a 20-year horizon or something like that. A noteworthy result of this is that if our CH4 output is stable, then CH4 levels won't increase in the long run. By contrast, CO2 accumulates, so in the long run the CO2:CH4 ratio should continue increasing. If we look at the 100-year time horizon, then, natural gas producers would have to leak about 3.7% of all their CH4 in order to produce the same amount of global warming via leaking as they produce via burning (remember, coal produces about 2x the CO2, so a 3.7% leak would put it on par with coal in terms of greenhouse emissions). But a 3.7% loss would be a significant loss of revenue; producers want to avoid that, and it probably doesn't cost a lot of money to reduce leaks below 1%. Also, there are regulations against leaking.
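A minimal way to formalize the "stable CH4 output means stable CH4 levels" point is a one-box model with methane lifetime τ ≈ 12 years (a textbook simplification, not a claim about the full methane cycle): with constant emissions E,

\[
\frac{dC_{\mathrm{CH_4}}}{dt} = E - \frac{C_{\mathrm{CH_4}}}{\tau}
\quad\Longrightarrow\quad
C_{\mathrm{CH_4}} \to E\,\tau,
\]

a fixed equilibrium concentration. CO2 has no comparably short removal timescale, so under constant emissions its concentration keeps climbing, which is why the CO2:CH4 ratio grows as described above.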


I'd be impressed if its other effects managed to be worse than coal (doubly so if you count lignite)


Initially, when I found out a friend of mine hadn’t been vaccinated (male, ~22 years old), I was worried that:

a) he was prolonging the pandemic

b) his behavior, at scale, is producing variants that could be avoided

c) he risks dying if he gets it

Now, I’m not so sure. From an article I read recently, the widespread nature of Covid seems inevitable. Variants seem to be here to stay. At endemicity, these variants seem rather insignificant. Covid is forecasted to become a permanent, cyclical virus that infects the population in waves for all of future time. (If this is news to you, please consider reading this article before responding: “Why Covid is Here to Stay and Why You Shouldn’t Worry About It” https://cspicenter.org/blog/waronscience/why-covid-19-is-here-to-stay-and-why-you-shouldnt-worry-about-it/)

Further, my friend is unlikely to get it now that a significant percentage of people are vaccinated. His risk of transmitting it to anyone vaccinated, assuming he gets it, is low. And even if he does transmit it, they’re extremely unlikely to have any serious complications. Finally, if he gets it, hospitalization is unlikely. Most probably, at worst he’ll be extremely sick for two weeks, so I can’t even rationally use his own self-preservation as a very compelling argument.

Also, everyone is pointing out that “98% of covid hospitalizations are in unvaccinated people,” but nobody talks much about the base rate of hospitalization being low (around 1-4%, from what I’ve found, sometimes less, at ~500 per 100k cases).

It’s hard for me to build a compelling case for a young person (eg. < 30 years old) to get vaccinated now. I’m pretty sure it’s a smart thing to do, but I can’t confidently say it’s wrong not to.


> I’m pretty sure it’s a smart thing to do, but I can’t confidently say it’s wrong not to.

This is the strange thing about this whole debate, to me: the most important part of it is the one that no one seems to deal with.

Let's say you could confidently say it's wrong for someone else to choose not to get vaccinated. We all know that just because you can confidently say something doesn't mean you're actually correct; in fact, most controversial issues are controversial precisely because a reasonable person can confidently take mutually exclusive, contradictory positions.

But there's this implication that if you, personally, or you and a large group of people you agree with, are confident about something, that it is *objectively* or *cosmically* true that it's a moral imperative that people behave the way you want them to. Sure, when it's made explicit, most people would back off and say that no, they don't hold the belief that it's an objective, cosmic imperative that people that disagree with them change their behavior to be in line with what they want it to be. And yet, when it's not made explicit, people still *behave* that way, en masse - people apply all sorts of pressures and manipulations to change the behavior of people they think are wrong.

PERHAPS, before we talk about whether we can confidently say anything about whether other people should or should not inject some drug we would like them to inject, we should settle the question about the precise circumstances under which it's okay to pressure people into injecting a drug that has not been approved by the FDA (which was happening almost everywhere, with great intensity, for months). Or, more recently, the precise circumstances under which it's okay to pressure people into injecting a drug that has *very recently* been approved by the FDA in an incredibly accelerated way, and for which the makers of the drug are specifically and uniquely immunized from any civil or criminal consequences due to harms done by that drug.

Ironically, I can confidently say that your question is premature pending the resolution of the above. I can also confidently say that your question effectively supports a great moral evil. This is NOT a rhetorical device - I can truly confidently say that.

So, then, what does it really mean that someone can confidently say something about an ought?


>We all know that just because you can confidently say something doesn't mean you're actually correct; in fact, most controversial issues are controversial precisely because a reasonable person can confidently take mutually exclusive, contradictory positions.

I honestly didn't expect my use of the word "confident" to be read in this way, or I would have used a different word. Sorry. I'm using the word as a proxy for certainty, eg. in statistics how people say "we are 95% confident that..." — this isn't just a statement of our emotional state, but is an empirical reflection of certainty in the data by a particular metric. Granted, I haven't run numbers and don't even have the expertise to calculate my conclusion's confidence in a quantifiable way, but this empirical certainty is what I'm alluding to when I talk about "confidence." Even the most unreasonable person can say things confidently, but this isn't what I mean.

>when it's not made explicit, people still *behave* that way, en masse - people apply all sorts of pressures and manipulations to change the behavior of people they think are wrong.

I agree fully. I don't even think people behave that way solely *en masse*, though; Nietzsche stated this idea far better than I could: "every great philosophy up till now has consisted of ... the confession of its originator, and a species of involuntary and unconscious autobiography; and moreover that the moral (or immoral) purpose in every philosophy has constituted the true vital germ out of which the entire plant has always grown." I've often found myself arguing for something which I later, upon reflection, realize says more about my own beliefs than about anything objectively true in the world. It doesn't take a group for this behavior to emerge, just some unexamined unconscious thought and a strong enough motivation to voice it.

>There's this implication that if you, personally, or you and a large group of people you agree with, are confident about something, that it is *objectively* or *cosmically* true that it's a moral imperative that people behave the way you want them to.

Actually, that's exactly where I'm operating from — although, replace the word "confident" with something less subjective and more like "we're as certain as we can be, given the evidence, that this is the correct conclusion and we are on solid ground." If we have overwhelming evidence that the vaccine is only positive, and that the effects of not getting it are all negative, then we would have reasonable grounds to believe that it is a moral imperative for people to get it. Similarly, if we had overwhelming evidence that the vaccine didn't work and only gave people bad reactions, we'd have reasonable grounds to conclude a logical (if not a moral) imperative to steer clear.

>we should settle the question about the precise circumstances under which it's okay to pressure people into injecting a drug that has not been approved by the FDA ... or has been approved by the FDA in an incredibly accelerated way, and for which the makers of the drug are specifically ... immunized from any civil or criminal consequences

That's certainly a valid question. I don't know what the right answer is, but the world seems to have weighed in by its actions: something to the effect of, "it's okay to pressure people [to do this risky thing] when the risks of not doing it far outweigh the risks of doing it." There's a certain amount of paternalism there, to be sure, but it's the same as how if you're in a public square and everyone starts running away from something, you don't wait to see what it is — you run. There are some times where enough sensible people have run the calculation and reached the same conclusion, where it makes sense to follow along and see those who don't as misguided. Social pressures are a valid heuristic tool and indicate some "social proof" that dissent is legitimately misguided.

>Ironically, I can confidently say that your question is premature pending the resolution of the above. I can also confidently say that your question effectively supports a great moral evil.

That's an extremely bold position to take. I suppose I can understand your "confidence" if we're using it how you seem to have been. (ie. You might be confident, but this is just a claim regarding how you feel about your position; that is, it has little to do with whether your claim is actually true.) Anyway, in an academic sense, my question does hinge on the resolution of yours, but in a pragmatic sense, it urges a speedy and potentially errored conclusion to your question. We don't have all the time in the world to figure this out, so we let people vote with their actions, and as I said before, the world has clearly spoken. This isn't to say the world is right, but it does suggest that we actually *can* discuss my question without having resolved yours.

Personally, I don't think that what has occurred with "pressuring" is a "great moral evil," and I might caution you to revisit the Nietzsche quote from above. Your stupendous confidence and highly polarized choice of moral expression make me wonder if you reasoned your way to your conclusion, or if you had it all along.


> I'm using the word as a proxy for certainty, eg. in statistics how people say "we are 95% confident that..."

Yes, but you're saying you can be 95% certain of, essentially, an ought. Which is impossible.

You're not *essentially* concerned about the measurable things here, you're concerned about the ought. You collect a bunch of measurable things you have determined in some way fully encapsulate the ought, but that's really an impossible task. You can never measure an ought because it's qualitative, not quantitative. You can't even do it by proxy.

There is ALWAYS a qualitative aspect to the question "Should someone who isn't me behave how I want them to behave based on my judgement, rather than how they want to behave based on their judgement?" because the question itself is qualitative. You cannot be 95% certain that someone ought to do anything, E S P E C I A L L Y change their behavior to suit your judgement, rather than theirs.

> I don't know what the right answer is, but the world seems to have weighed in by its actions: something to the effect of, "it's okay to pressure people [to do this risky thing] when the risks of not doing it far outweigh the risks of doing it."

This exact argument supports the essential moral correctness of American racial slavery and Nazi genocide. It is therefore false unless we want to reconsider the moral correctness of those historical moments, and it cannot be correct unless we find, upon reflection, that they were actually net moral goods.

Every sensible person *ought* to reject out of hand every argument that one ought to act as the arguer judges rather than as he himself judges, if that argument cannot be consistently applied.

> Your stupendous confidence and highly polarized choice of moral expression

My confidence and choice of moral expression are exactly the same as yours.

It's trivial to see how bad it is when applied to other people when you're the person it's applied to, isn't it?


>"You're saying you can be 95% certain of, essentially, an ought. Which is impossible... You collect a bunch of measurable things you have determined in some way fully encapsulate the ought, but that's really an impossible task. You can never measure an ought."

>Earlier: "I can also confidently say that your question effectively supports a great moral evil. This is NOT a rhetorical device - I can truly confidently say that."

If the first claim is true and the collection of measurable things is an impossible task, then your judgement from earlier is irrelevant. How do you know *you* haven't missed the mark on collecting all of the information relevant to your moral judgement? If it's truly an impossible task, then I see little reason to take your moral conclusion seriously.

In any case, I don't think the premise, while technically correct, is relevant to making a judgement. Sure, you can't fully encapsulate the ought, but what would you do otherwise? I'm not claiming to know the objective "ought," and never did. (I referred to "heuristic tools," a "potentially errored solution," "as certain as we can be given the evidence" — all indications that we're doing our best to aim toward the objective ought, but zero claims that we know what it is.) But we have to aim at something, and claiming that it's impossible to know what to do is a non-starter.

Oh, right, the Nazi/slavery thing... The usual "obviously those were bad," but also it's a straw man of my argument. What I said was this: "We don't have all the time in the world to figure this out, so we let people vote with their actions ... the world has clearly spoken. **This isn't to say the world is right**, but it does suggest that we actually *can* discuss my question without having resolved yours."

I said right there that this isn't an indication of what's right. I'm not supporting moral correctness of past atrocities, because I was discussing heuristics, which can be (and have been) incorrect. I think you were trying to give me a repugnant conclusion I wouldn't want to accept, but my claims don't lead to that conclusion anyway. My point still stands.


As long as the absolute reduction in risk of infection, hospitalization, and death is greater than the risk of health complications from the vaccine, I'd say there's cause to take the vaccine.

However, I'm guessing that the reason 'Covid is here to stay' is that, with a super-transmissible virus, a vaccine that reduces but does not eliminate transmission will still result in everyone getting infected in the long term.

People who took the jab early in hopes it would ease the hysteria or open things up or allow them to walk around without masks will be disappointed, if not betrayed, when the goal posts get shifted again.

From a consequentialist standpoint you'd imagine most everyone (barring a few people who may be uniquely susceptible to vaccine side effects) getting the vaccine as quickly as possible would be the best case outcome. This is true given an honest assessment of the state of knowledge as it exists now.

*However*, for that very reason I think there's a hesitancy to allow public discourse to touch anything that would make a wishy washy person excuse themselves from getting jabbed (however factually accurate), and this cumulatively generates a dishonest and contradictory narrative:

So you don't want to say that vaccine effectiveness wanes over time or that it doesn't provide full immunity but you do want to say that the vaccinated need to be protected from the unvaccinated (which are basically equivalent statements). You don't go around saying that the severity of the virus on the population will wane over time, you don't talk about the interaction of comorbidities so that healthy people excuse themselves. You don't publicly acknowledge natural immunity in calculations of herd immunity. You assume the worst of treatments for infection because the more treatable the virus is assumed to be the more people will feel they don't need to worry about getting jabbed. etc. etc. etc.

The cynical calculus is that the apathetic will be mobilized in favor more than the paranoid will be hardened against.


If we expect Covid to stick around and be a perennial nuisance like the flu, then getting vaccinated now is the equivalent of "we inoculate all children against measles". The measles is mostly harmless (I had it myself) but in some cases the complications can be very bad:

"Complications

Common complications from measles include otitis media, bronchopneumonia, laryngotracheobronchitis, and diarrhea.

Even in previously healthy children, measles can cause serious illness requiring hospitalization.

One out of every 1,000 measles cases will develop acute encephalitis, which often results in permanent brain damage.

One to three out of every 1,000 children who become infected with measles will die from respiratory and neurologic complications.

Subacute sclerosing panencephalitis (SSPE) is a rare, but fatal degenerative disease of the central nervous system characterized by behavioral and intellectual deterioration and seizures that generally develop 7 to 10 years after measles infection"

So for the sake of prudence, and to maintain herd immunity, we vaccinate babies. Even if the risk of children catching measles is low, and even if they do get the measles they may recover just fine, we consider it more beneficial to have mass vaccination programmes.

Same with your friend: he probably won't get it, and if he does he'll probably get better just fine, but it's more prudent to avoid getting it in the first place by preventative measures. And the problem is less "what about a healthy young man, what risks does he have of bad outcomes?" and more "if he gets sick, he's contagious, and what about the people he passes it on to?"


I agree with your point about complications, and vaccination being equivalent to the future inoculation of children. However, I'm not sure that your final point is as persuasive as I'd like. It's certainly correct, but I'm thinking of Jonathan Haidt's "The Righteous Mind" here: the point of "what about others?" is rooted in a value of care or fairness, which I believe is not held as singularly important by conservatives as it is by liberals. (My friend, and most unvaccinated individuals I know, are conservative.)

Haidt's point is that when we try to reason with other groups, we need to think in terms of their moral foundations/pillars, but we typically end up making points that are rooted in our own. An argument of care & fairness is strongly motivating for me, but I don't think it addresses conservative pain points as much as other arguments (which you or I may not find as motivating). I wonder if an argument based around cleanliness/hygiene might be more motivating, or if there's some connection to loyalty & authority that could be leveraged. (Eg. Donald Trump recently started saying to get the vaccine -- although he's no longer president, so idk if people still view him as an authority. Hm..)


Re. COVID risks:

Does your friend like being able to taste or smell things?

Does he like being able to become erect?

Does he like having a fully functional brain, and not having post viral syndrome?

These are all things that you can lose with even a mild case of covid where you don't even get close to needing a trip to the ER.

You don't need to die to get damaged by covid. Scott has a post on this very blog re. long covid, and the numbers on that are actually very high (at least, to me they are).

Re. Endemic-ness: That is pure assumption, at this point. It is possible it will become endemic, but it is equally likely that it won't. Considering the massive population of virus in the wild, it has had surprisingly slow mutation.

Likewise, it is very likely that it will never become less harmful to catch. There is no reason that covid HAS to decrease in danger to flu/cold level.

You weigh this against: your choice of vaccines, of more or less traditional manufacture, that are free to the user and very efficacious and have been tested on about 1 billion other humans.

He is free not to get vaccinated, and I'm free to think that makes him a cretin.


> His risk of transmitting it to anyone vaccinated, assuming he gets it, is low.

What about risk of transmitting to unvaccinated people, e.g. small kids?


"Variants are here to stay" is not a compatible belief with both "my friend is unlikely to get it now that a significant percentage of people are vaccinated" and "His risk of transmitting it to anyone vaccinated, assuming he gets it, is low" remaining true in the medium to long term.

For Covid to become endemic requires a continuous supply of people vulnerable to infection. There are three ways for that to happen: immunity can fade with time, the disease can mutate to escape immunity, or it can become a childhood disease relying on population turnover for new hosts.

Fading immunity and mutation escape both undermine the idea that your friend is highly protected by the vaccinated immunity of those around him, since the people around him will become less immune over time.

And the childhood disease option relies on the endemic variants being very, very contagious, like measles, chicken pox, or RSV. Less contagious diseases that produce lasting immunity tend to hit in epidemics decades apart since it takes time to rebuild the vulnerable population. Traditional childhood diseases like chickenpox, measles, and mumps have r0 values in the 10-20 range, meaning that 90-95% of the population needs to be immune at equilibrium. If 60% of the population has vaccinated immunity, then an additional 30-35% of the population must be unvaccinated but naturally immune (infected but recovered) in order to achieve equilibrium. This implies your friend's long term risk of infection would be 75-88% if he remains unvaccinated. Even if the risk of serious illness or death to your friend is negligible, in this scenario he's trading the trivial inconvenience of a vaccine and a moderate risk of feeling kinda cruddy for a day or two vs. a 75+% chance of contracting a live case of Covid.
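(For anyone wondering where the 90-95% figures come from: under the usual homogeneous-mixing approximation, the herd-immunity threshold is

\[
p_{\text{crit}} = 1 - \frac{1}{R_0},
\]

so R0 = 10 gives 90%, R0 = 20 gives 95%, and Delta's estimated R0 of 6-8, mentioned below, gives roughly 83-88%.)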

Delta's estimated no-lockdowns r0 of 6-8 isn't that far from the typical childhood disease range, so that's a fairly plausible scenario.

The article you linked did suggest that Covid is likely to become a mild cold-like disease as it becomes endemic. That's plausible, perhaps even probable, but it hasn't happened yet and there are precedents of diseases that remained potentially deadly to immunologically naive adults in the long term: the aforementioned classic childhood diseases, plus deadly recurring epidemics like smallpox.


I am pretty certain that this is correct and well-reasoned. However, I've read it multiple times and I think I lack the expertise to understand it fully. Hopefully it's not too obtuse to ask you to "dumb it down" a bit?

I understand your main point in the first three paragraphs. It's the calculation I'm having trouble with. You mention that diseases with r0 of 10-20 require 90-95% immunity for equilibrium. (I take this to mean that at equilibrium people are infected at the same rate as they recover; more than 10% susceptible allows for transient "spikes" of sudden outbreaks, right?) So it doesn't matter how you get there, whether it's 90% vaccinated, 60% + 30% naturally immune, etc.

But then how do you get to 75-88%? Is there some calculation I'm missing?


By assumption, 60% of the population is immune due to vaccination and 40% is not. If the herd immunity threshold is 90% immune, then 90% - 60% = 30% of the general population must have immunity due to prior infection for prior-infection immunity + vaccination immunity to total 90%. Since all of that 30% with prior-infection immunity but not vaccination immunity come from the unvaccinated population, they constitute 75% (30 / 40) of the unvaccinated population.

Repeat the calculation with an assumed 95% herd immunity threshold, and you'll get 35% with prior-infection immunity but not vaccinated immunity. 35% / 40% = 87.5% of the unvaccinated population, which I rounded up to 88% to avoid implying too much precision for the result of a back-of-the-envelope calculation.
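The same back-of-the-envelope in a few lines of Python, using only the assumptions stated above (60% vaccinated, thresholds of 90% and 95%), just to make the arithmetic reproducible:

```python
vaccinated = 0.60  # assumed vaccinated share of the population

for threshold in (0.90, 0.95):  # herd-immunity thresholds for R0 of 10 and 20
    unvaccinated = 1 - vaccinated
    prior_infection_needed = threshold - vaccinated      # share of the whole population
    share_of_unvaccinated = prior_infection_needed / unvaccinated
    print(f"threshold {threshold:.0%}: ~{share_of_unvaccinated:.0%} of the unvaccinated infected")
# prints ~75% and ~88%, matching the 75-88% range above
```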


Ah, that makes sense. Very cool, thank you!


+1. Note that since vaccines are not as effective on Delta, and since immunity wanes over time, the long term risk of infection for the unvaccinated should be more than 75-88%, and the presence of so many unvaccinated people also raises the risk of infection for the vaccinated.

Saw news that a minor Republican official died of Covid. Checked conservative sources, among which I only found one mention of his death, which left out the part about Covid. So, this is why. They don't get vaccinated because their narrative is "Covid's not dangerous" and "you probably won't catch it anyway". I am guessing they picked that narrative because it was Trump's narrative. https://twitter.com/DPiepgrass/status/1423703403528663046


In that situation, the most compelling case is that it will quite literally look bad on his resume. The strictly medical benefits are of course real, but far less significant than the chance that potential employers might notice and not like it.


Does that not feel like a bad situation, where the only compelling reason to get vaccinated is that someone would not hire you otherwise? If that were the case, then employers should not require vaccination. It would be like requiring someone to be baptized to work at a Catholic hospital. To the employer it makes a lot of sense, but as a society we don't just accept an employer's desires as acceptable simply because they exist.

This is in regards to the OP's point. If it does make sense to be vaccinated (and for health care and some other professions it likely does for purely health reasons), then obviously that's a different scenario.


Well, from the perspective of personal self-interest, it does not matter whether he thinks that it is wrong for employers to discriminate on the basis of vaccination. They might do it anyway, his personal moral objections notwithstanding, and if it happens, he, as an unvaccinated person, will be a victim of this.

And if he is concerned with moral questions, then he should get vaccinated in order to minimize a risk that he will spread covid to someone else.


This reminds me that I've never actually done a cost-benefit analysis of getting vaccinated, for covid or for anything else. There are a bunch of fuzzy factors floating around in my head, but explicitly listing them might make it clearer, and also make it clearer where the margins sit.
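In case a skeleton helps with that explicit listing, here is a minimal sketch. Every value is a placeholder for your own estimates rather than a claim about actual risks, and it leaves out anything hard to quantify:

```python
# All numbers are placeholders to be filled in with your own estimates.
p_infect_unvaxed = 0.0   # chance of catching it over the period you care about, unvaccinated
p_infect_vaxed   = 0.0   # same, vaccinated
cost_infection   = 0.0   # expected harm of a case (illness, long-term effects, time lost, ...)
cost_to_others   = 0.0   # expected harm from onward transmission, if you weigh it
cost_vaccination = 0.0   # side effects, time, and any other downsides you weigh

ev_unvaxed = p_infect_unvaxed * (cost_infection + cost_to_others)
ev_vaxed   = p_infect_vaxed   * (cost_infection + cost_to_others) + cost_vaccination
# Getting vaccinated comes out ahead whenever ev_vaxed < ev_unvaxed;
# the interesting part is seeing which inputs the comparison is most sensitive to.
```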


One thing I'm hypersensitive to is that these analyses aren't scale-invariant. As far as I can tell, this logic works for an individual not getting vaccinated, but it assumes most other people are. If more than a handful of people draw the same conclusion, though, then it breaks down and there are larger consequences due to interacting individuals.

In fact, the only argument I can think of is, "What would happen if everyone did as you're doing?" But the reality is that everyone **isn't** doing those things. Hypothetical extrapolation is a fun game, but the current data point is the world we inhabit.


I'm a long time lurker (since 2014 or so) and many years ago used to post on the subreddit. I am thinking of blogging on substack. I've set up a "dummy" page already under this name. If I were to blog I would mostly discuss politics, current events, some pop culture, maybe some religion. I'm not interested in earning money, just want the intellectual stimulation. I have never blogged before, but I've been lurking and posting anonymously on message boards, others' blogs, etc since the late 1990s.

This is the most intelligent and kind community I've encountered in nearly 25 years online. I value all your feedback tremendously and it will have a decisive role in any decision I make about blogging here on Substack. Here are some questions I have that I would greatly appreciate the community's feedback on

1. How likely am I to be doxxed? I am terrified, to put it mildly, of being doxxed. I am especially concerned about the consequences it would have on my employment and on my family (including minor children). FWIW I have no plans to discuss things and people from my day-to-day life on my blog - it just doesn't overlap in many interesting ways with what I'd want to discuss online

2. Would the content of my blog create issues (esp relating to doxxing)? I mostly want to discuss politics. To give you an idea of where I'm at politically, I was the only 14 year old girl in the country who wanted to talk about how awesome Newt Gingrich was, but when I see things like Jack Dorsey's congressional testimony or corporate logos all over BLM stuff, I find myself muttering "workers of the world unite you have nothing left to lose but your chains......." You met me at a very strange time in my life :) I am still working a lot of things out and I have a feeling I'm not the only conservative who's doing the same. I'd like to have a blog spot where that kind of conversation can happen. I'm debating whether to get involved in local GOP politics and thought maybe a blog could overlap/springboard to that. But I don't want to bother if Substack is not the right environment, if it will get banned due to content, or will cause more trouble than it's worth (see 1).

3. While I've been lurking/commenting on blogs for ages, I've never blogged before myself. Would it be advisable to start by commenting on other Substack blogs for a while and then starting my own?

Well I've rambled along enough.....many thanks for your feedback!

- Cloud Possum


I blogged for a long time (on blogspot), but stopped writing since the number of people reading and commenting changed from "few" to "none". Thing is, even if your writing is really good, building an audience is hard. When I want to read something, I don't have good aggregation software so I tend to go straight to a favorite site such as ACX. Substack has the advantage of letting users subscribe and receive new posts via email, so it might well be easier to build an audience there.

Your chance of getting doxxed is proportional to your audience size, so there's nothing to worry about at first, but in addition, the potential doxxer has to figure out who you are, so just don't reveal it. Scott Alexander is a poor example: not only did he originally write under his real name, but when he switched to using a "pseudonym", the pseudonym he picked was... his real first and middle names. Even though I'm a nudist with unusual opinions, I use my real name because the lesson of the last 15 years has been that nobody f**king cares who I am. However if you're a schoolteacher or something then you've probably got a good reason to use a pseudonym.


For the doxxing thing, it might help to read https://www.gwern.net/Death-Note-Anonymity (Death Note: L, Anonymity & Eluding Entropy) to get a sense for how people can be identified using a surprisingly small amount of information - it doesn't take much to pare down a pool of millions of people to a pool of ~10 people who match your observable hobbies, opinions, age, time zone, occupation, education, et cetera. And once it's down to a pool of ~10 people, it's doable for a determined group of vigilantes to just follow them all around in person to fill in all the remaining details.
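A rough sketch of the arithmetic behind that (the linked essay works it out properly): an attribute shared by a fraction p of the candidate pool reveals about log2(1/p) bits, and the bits add up fast. The attributes and fractions below are made-up round numbers purely for illustration:

```python
import math

def bits(fraction_of_pool: float) -> float:
    # An attribute shared by this fraction of the pool reveals log2(1/p) bits.
    return math.log2(1 / fraction_of_pool)

# Hypothetical attributes with made-up prevalences among potential authors:
attributes = {"country": 0.05, "age bracket": 0.15, "occupation": 0.02,
              "niche hobby": 0.01, "active time zone": 0.3}

total = sum(bits(p) for p in attributes.values())   # ~21 bits
pool = 3e9                                          # assumed starting pool of candidates
remaining = pool / 2**total                         # ~1,000 candidates left
print(f"{total:.1f} bits narrows {pool:.0e} people to ~{remaining:.0f}")
# A couple of distinctive opinions more (~7 bits) and you're down to ~10 people.
```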

To protect against this, you basically need to slip in a few bits of misinformation here and there so any would-be doxxers will get lost on wild goose chases. You're safest when you're not even in the pool of people they're looking at, and you're still pretty safe once they catch on to the deception because they have no idea where you lied and how much you lied to juke them. Once they have to look at all the people who *somewhat* match your observable details, the pool of suspects expands back to hundreds of thousands of people and you're safe once more. Just make sure to have a consistent cover story, so you don't accidentally reveal which of your bits and pieces of information are actually red herrings made up to throw them off the trail.

See also, https://www.gwern.net/Death-Note-Anonymity#security-is-hard-lets-go-shopping & https://www.youtube.com/watch?v=pT19VwBAqKA [Protecting Privacy with MATH (Collab with the Census) - Minute Physics]. You can't be tracked down if the person they're trying to find isn't you.


That's a really cool read - thanks! Unfortunately in light of today's announcement my concerns about doxxing might become moot at least where employment is concerned. I am not vaxxed and I very much DO NOT WANT to be vaxxed - for several reasons, some I might want to write about on a blog, some way too personal to share online even anonymously. I'm anticipating my employer mandating the vaccine soon and I'm at a loss for what I'm going to do. But I guess if I'm unemployed and unemployable doxxing isn't quite such a threat, and maybe the paid subscriber Substack is worth looking into some day.


Make your own website and write on it. Do not write on Substack or any other platform, because then the platform holds you hostage.

1. Unlikely. 5% chance.

2. Absolutely. 100% it will create issues. Politics is a cesspool.

3. Learn how to make your own website, launch it as fast as possible. Start blogging immediately.


Fwiw I disagree on the make-your-own-website point. I think it's best to just get started. Paying Substack isn't so bad if it lets you start when you otherwise wouldn't. Plus, you can export the emails so once you are up and running you can move to a site if you build one.


Thanks for the feedback!


If you make your own website btw, read up on how to do it privately. I believe that by default your contact information is public when you buy a domain name.

I also disagree about Substack. You can write on Substack, and then back up all your posts locally so you don't lose anything in the off-chance that Substack deletes you. I think the lower the barrier to entry for your writing, the better. And making a website is a pretty high barrier.


thanks Alexander. I would have no clue whatsoever how to start a website of my own. What's your take on the likelihood of being doxxed or just the content of the blog causing trouble on Substack?


https://www.theguardian.com/environment/2020/jun/24/seasteading-a-vanity-project-for-the-rich-or-the-future-of-humanity

I was going to make a snarky remark about not sending a libertarian to do a statist's job, but a more accurate or insightful assessment would be something to the effect that the seasteaders in the article were trying to do something that the existing, mature system doesn't really accommodate.

It would be as if someone introduced a Tesla forty years ago, but without any of the infrastructure needed to operate a Tesla. There are no stations where an owner can charge it away from home, nobody knows how to fix it, the DOT isn't sure how to crash test a vehicle with none of the internal combustion stuff, the environmental testing people can't test tailpipe emissions on a vehicle with no tailpipe, etc.


I thought it was a really cool project and was disappointed by its failure, even if some of the founders had hokey beliefs.


Well, I was trying to avoid snark, really, I was.


Does anyone here have advice for RSI? I recently bought a brand new laptop, and it's really nice except that my hands begin to hurt after using it for more than a few hours. For whatever reason it's only on this new laptop; my old laptop does not have this problem. Obviously it would be super lame to have to just use the old laptop or return the new one, so I am wondering if you guys know what could be causing this or how to fix it.

For context, the new laptop is a Zephyrus G14, and the old laptop is a 13-inch MacBook from 2017.


Set them side by side, put your hands in normal use positions, and observe. The difference between what you’re doing with one and the other is your issue, obviously, so look for material differences in the way you hold your hands, fingers, arms, and the way you sit and lean forward. Look out for crooked wrists (laterally), bent wrists (vertically): in particular, your hands should be level with your wrists or lower, and your palms should be straight in line with your forearms.

I should note here that Apple is pretty renowned for their obsessive focus on ergo.

My guess, and this is just a guess and I’d need to see you at work to be sure, is this: the MacBook Air is a very thin device, especially at the leading edge which is like a tenth of an inch thick, whose top case (containing the keyboard) cants pretty dramatically towards the user. The G14, while not a brick, is a tenth of an inch thicker at its thickest point, the leading edge appears in pictures (can’t find stats) to be much thicker, and the keyboard certainly does not angle as much. Which probably puts you in the awkward position of having to cross a thick edge and angle your wrists upward to type or hold your entire arm off the desk, exactly the stuff you shouldn’t be doing.


If possible, use a separate keyboard.

If you can, go to a physical therapist.

If you game, use a controller instead of M+k.

If it's real bad, I've seen other tech dudes use split keyboards with some success.


Any time someone complains about this sort of thing at work, the corporate ergonomic assessment is that they should use a wrist rest. Never needed it myself, but some people have found it to work. Lots of options online, along with discussions on whether they're effective.

Admittedly, this is more appropriate for an external keyboard if your laptop has a touch pad where your wrists would go.


Find a good physical therapist / massage therapist who explicitly talks about posture. I had issues that I kept trying to fix with gear, when it was a posture/muscle tension problem cured by stretching.

It may seem weird, but muscles are connected in odd ways and stretching something a joint or two away can lead to immediate relief of chronic symptoms.

And for me it was caused by a slight change in setup. A slightly different posture has knock on effects.


I'd recommend an external keyboard and mouse (and raise the laptop to eye level whenever possible). If you keep running into this issue, try an ergonomic keyboard.


This seems like a 100% good reason for returning the laptop; I would encourage doing this.

Though maybe you simply use it more?


I use an external vertical mouse when using a laptop, which may not be viable in your case.


What if the Egyptian Pharaohs had devoted less effort into building the Pyramids and more into building canals to bypass all the Nile cataracts?


I think the First Cataract at Aswan was usually a political boundary. Modern Egypt extends to a bit short of the Second Cataract, but I think that's an artifact of either British or Ottoman rule. As far as I know, the First Cataract was pretty consistently the southern boundary of Egypt from the original kingdoms of Upper and Lower Egypt until the Mamluk Sultanate. The only exception I know of is the 25th Dynasty, when the Kings of Kush mounted a successful invasion of Egypt from the traditional Kushite heartland between the Third and Sixth Cataracts and reigned over part or all of the lower Nile valley for about a century (747 BC to 656 BC).

In that light, keeping a natural hydrological barrier at the political boundary makes a degree of sense. Having the canal span two polities complicates the building project, and at any given time at least one of the two polities on opposing sides of the cataract would prefer to keep the cataracts intact as a speed bump for raiders and invading armies.

There are also potential engineering challenges, which might explain why the Kushites and other polities that did span various Nile cataracts didn't build canals. The Canal of the Pharaohs connecting the Nile to the Red Sea was pretty close to being a sea-level canal project, connecting the easternmost distributary of the Nile Delta to the Bitter Lakes and the Bitter Lakes to the Red Sea. Elevation changes over the canal route would probably have been gentle enough that locks wouldn't be required, as the Red Sea is at sea level and the Delta isn't much higher (Cairo, near the upstream point of the delta, is 75' above sea level). Cataracts, on the other hand, imply a steep enough change in altitude as to require locks to be navigable. I think primitive forms of canal locks first appeared in China in the 10th century AD and in Europe in the 14th century. I don't know when the technology became available to Egyptians, but I'm guessing it was too late for Dynastic-period builders to take advantage of it.


They dug a canal all the way from the Nile to the Red Sea. If they didn't dig canals around the cataracts, I imagine it was because it didn't pass cost-benefit analysis.

I think it also bears mentioning that Egypt had 170 pharaohs and only three of them bothered with really big pyramids.


How would they have quarried underwater? They quarried on land using spear-chisels.


Dig a small canal around the cataracts while the waters are low, dam the cataracts, dig them out, then fill in the little canal. (This might not work, given that the Inundation was when you had a lot of farmers with nothing better to do than dig a canal.)


Many quartz at a time.


Are there any migraine sufferers here who have tried microdosing psilocybin, or any other tryptamines? Did they have any effect on the frequency of your migraines? Have you ever tried to dose as you feel a migraine coming on, and did it have any effect on your symptoms?


I have experimented with psilocybin microdoses (in the shape of mushrooms, usually dried) in the past, and I get occasional migraines.

I can't really comment on whether the frequency changed. I don't remember getting any migraines during the times I was on a regular dosing schedule (eg every 3rd day), but I get only maybe a dozen or less migraines a year, with slight clustering in time. It's therefore not unusual for me to experience several months without any migraine episodes, let alone the few weeks that were the longest time I tried a regular schedule.

As for the acute use of a psilocybin microdose when a migraine episode is starting, or has already come on fully (I don't usually have very distinctive prodrome symptoms and only experienced obvious aura with visual disturbances maybe three times, and therefore rarely realise I'm about to get hit before the pain begins):

Yeah, I did that a few times.

After trying it for the first time, I did it every time I had a migraine, available mushrooms, and remembered both *that* I had the mushrooms and *where* I had stored them.

I would say that for me, a microdose (I generally used something around 0.3-0.4g of dried mushroom, though I wasn't always super precise in those situations) is probably about as effective as 50mg sumatriptan - which works pretty well for me - at halting, or at least strongly weakening, the development of a starting migraine.

When the headache has already developed, the psilocybin may actually be a bit more effective at reducing the symptoms for me than 50mg sumatriptan, but roughly on par with 100mg sumatriptan.

"Reducing the symptoms" (or halting their development) in this case includes the non-headache symptoms, eg nausea, although I usually keep some light sensitivity even in the best case (talking about psilocybin).

I also seem to feel less tired when I've stopped a migraine with psilocybin than I usually do when I've stopped it with sumatriptan.

Expand full comment

https://www.newyorker.com/magazine/2021/09/13/can-progressives-be-convinced-that-genetics-matters

"This fall, Princeton University Press will publish Harden’s book, “The Genetic Lottery: Why DNA Matters for Social Equality,” which attempts to reconcile the findings of her field with her commitments to social justice. As she writes, “Yes, the genetic differences between any two people are tiny when compared to the long stretches of DNA coiled in every human cell. But these differences loom large when trying to understand why, for example, one child has autism and another doesn’t; why one is deaf and another hearing; and—as I will describe in this book—why one child will struggle with school and another will not. Genetic differences between us matter for our lives. They cause differences in things we care about. Building a commitment to egalitarianism on our genetic uniformity is building a house on sand.”

Harden understands herself to be waging a two-front campaign. On her left are those inclined to insist that genes don’t really matter; on her right are those who suspect that genes are, in fact, the only things that matter. The history of behavior genetics is the story of each generation’s attempt to chart a middle course. When the discipline first began to coalesce, in the early nineteen-sixties, the memory of Nazi atrocities rendered the eugenics threat distinctly untheoretical. The reigning model of human development, which seemed to accord with postwar liberal principles, was behaviorism, with its hope that environmental manipulation could produce any desired outcome. It did not take much, however, to notice that there is considerable variance in the distribution of human abilities. The early behavior geneticists started with the premise that our nature is neither perfectly fixed nor perfectly plastic, and that this was a good thing. They conscripted as their intellectual patriarch the Russian émigré Theodosius Dobzhansky, an evolutionary biologist who was committed to anti-racism and to the conviction that “genetic diversity is mankind’s most precious resource, not a regrettable deviation from an ideal state of monotonous sameness.”"

Expand full comment

"On her right are those who suspect that genes are, in fact, the only things that matter"

These people are as real as counterrevolutionary saboteurs in the USSR.

The range of actual opinion runs basically from those who estimate the heritability of psychological traits at zero or epsilon (sometimes explicitly, sometimes not) to those who estimate heritability well above zero, with high-end estimates around 80% and low-end estimates around 30%, depending on the trait. I know that ground-up polygenic scores explain a much smaller portion of the variation.

No one has unironically taken the heritability estimates gained from twin studies to mean that the world needs to start revving up the Auschwitz train cars. This is simply taking one side's account of the nature of the debate at face value.

The problem is that most of what is done to address the kinds of inequalities people care about is... well, empirically, attempts to affect the long-term course of someone's life through interventions are usually ineffective. "The alternative hypothesis" provides a neat explanation as to why. (Arguably less that heritability is high than that the shared-environment component of traits is low.)

Expand full comment

Add in the likelihood of future genetic enhancement through CRISPR technology and this topic has really important implications for social equity

Expand full comment

I would have thought the sine qua non of social justice crusades was building on sand. It's like the surfer dude seeking the perfect wave -- if he ever actually found it, his life would be over, meaningless.

Expand full comment

That is what happens when you strawman your outgroup.

Expand full comment

Yes well I might have given that argument the time of day 45 years ago, when these social parasites and three-card monte hucksters first started promising utopia Real Soon Now, if they were just given umpty $trillions and we agreed to "temporarily" set aside the great principles of color-, sex-, and creed-blindness first written down by Jefferson. As it is, nearly half a century later, after nothing to show, I now lump 'em in with all the snake-handlin' tongue-speakin' religious con-men you find in a free society and who indeed occupy the first rank of my "out" group.

Expand full comment

Jefferson? Like, "Notes on the State of Virginia" Jefferson? Raped his wife's fourteen-year-old one-eighth black sister and enslaved their mutual children Jefferson? Maybe use a different historical figure for color-blindness?

Expand full comment

A better choice: Robert Carter III, who freed his slaves in 1791 and did a meticulous job of it, so they stayed free and prospered.

https://www.cnn.com/2021/09/05/us/robert-carter-iii-deed-of-gift-slavery-anniversary/index.html

Expand full comment

Social Justice goes back 45 years?

Expand full comment

Most members of the movement trace it back to the civil rights movement, which would be more like 60 or so.

Expand full comment

Most wiccans would trace their movement back to pre-Christian Europe, but that doesn't make it so. An awful lot of Social Justice looks to me like people cosplaying the march from Selma because they think it was the awesomest thing ever (pretty close) and can convince themselves that it's still 1965 (not even close).

Expand full comment

I think Social Justice has some different premises, in particular, that marginalized communities are the only important thing.

Civil rights was focused on policy changes (desegregation, affirmative action) rather than changing background reflexes.

Expand full comment

What's the technical difficulty level of something like this:

I have a strong preference for keeping moving. I ultimately want to get the bus, but because I don't want to stand around waiting for it, I'd rather keep walking until the bus catches up with me.

A plug-in/adaptation for Google Maps/similar that says "Walk this way, walk until this point, and the bus should be there (small amount of time) after you get to that stop".

Expand full comment

One hard part here is that it relies on busses showing up on a fairly tight schedule. In places where I've lived that I've tried using public busses, their schedules have been more aspirational than descriptive. I'm sure this is at least partially a function of mostly being in places with shitty public transit (suburban SF Bay Area, San Luis Obispo, and Orange County). In the Bay Area in particular when I was using busses (~20 years ago), busses were frequently up to 10 minutes early or late relative to their schedules. The worst-case was a bus route that was supposed to have a bus every 10 minutes where I once had to wait almost two hours and then four busses showed up at once; I'm not sure what happened to the other ~8 busses that were supposed to have arrived.

But I expect it's also an inherent problem with a mode of transit that may or may not stop at a given stop due to the presence or absence of would-be riders, where the vehicle needs to be stopped while fares are collected, where occasional need to take on bicycles or wheelchairs can make loading unpredictable in length, and where the vehicle's progress is impeded by varying amounts of car traffic. Even in places I've visited with good public transit (e.g. Montreal), the "good transit" aspects of the busses seem to be more a function of short intervals between busses than of busses sticking tightly to schedule.

Expand full comment

In San Luis Obispo in particular (where the public bus system was aptly named "SLO Transit"), the optimal algorithm in most cases would be to ignore the busses and just give walking directions. The town when I lived there (early 2000s) was relatively small geographically (about a 2-3 mile diameter), the streets had low speed limits and frequent four-way stops, and the bus system favored circuitous routes to minimize transfers, so walking was almost always practical unless you had enough baggage that busses would also be difficult, and walking was usually faster than driving unless you were taking a direct bus route between opposite ends of town (e.g. from the University to the airport).

Expand full comment

Should have written "walking faster than taking the bus". Driving a private car would almost always be faster than driving except for absurdly short trips or when you're fighting particularly bad traffic or pedestrian crowds are frequently stopping traffic to cross the street (e.g. trying to drive through downtown on a Friday evening when half the college students were hitting the bars).

Expand full comment

Ugh, "almost always be faster than walking". Driving is very rarely faster than itself. Curse Substack's lack of an Edit button, and my own impatience in hitting "Post" prior to proofreading comments!

Expand full comment

Making something that works is not arduous.

The tricky part is maintaining compatibility with various fragmented feeds of real-time bus data across the world.

Given the relatively large initial and ongoing work, people's unwillingness to pay for apps directly (or willingness to pay only minimal amounts), and the small user base, it doesn't seem viable as a commercial project.

You would either need to code it yourself or hire someone to do this.

Expand full comment

Can you walk up and down the block with the bus stop on it, till you see the bus coming? Or is it important that you are always moving towards your destination?

Expand full comment

Always towards the destination. I live in a compact UK city with bus stops every 300-500m, so if the bus is 3 minutes behind you, you can walk to the next one.

Expand full comment

Doesn't sound hard to me, assuming you have real-time data on where the bus is, or the bus is supernaturally (by American standards) on time all the time.

But unless the bus stops are quite close together or the bus is unusually slow (relative to your walking speed), it seems doubtful the answer will often be any different from "walk to the closest bus stop".

Expand full comment

Buses in my city tend to be broadly reliable and stops are generally 300-500m apart in most parts of the city. So if the bus is 10 minutes behind you, you can probably walk 4-5 stops before it catches up to you.

Hmm

Expand full comment

Wait what...? You are going to walk 2000m in 10 min? That's an 8min/mile pace, which while not exactly booking it is rather a brisk jog.

Expand full comment

Well if the bus is 10 minutes behind you, as in, the bus when you get to the first stop isn't due for 10 minutes, you walk along for ten minutes and the bus is still going to be 5 minutes behind you, and then another 5 and the bus is a minute or so behind you.

That's what I meant. But I also very much didn't pay attention to the maths there.

Expand full comment

Isn't the problem here simply that Google Maps assumes you will walk to the nearest applicable bus stop and wait, so the preferences need simply to be changed to prioritise walking to the furthest possible bus stop to catch the fastest bus rather than the closest one?

Expand full comment

"simply" is not simple here, unless poster can decide how Google Maps are working (and in such case they would not ask the question here)

Expand full comment

Correct. But I don't think this is an option in Google? If it is, that's rather exciting as I'd imagine it'd turn into a huge learning problem, hence the question.

Expand full comment

My idiot-level solution to this would be: (1) how fast/far can you walk? (2) what is the schedule of the particular bus you want to get?

If there is a twenty-minute gap between buses on the route, you have not taken the first bus, you know that the bus usually waits two minutes at each stop, and you know that you can walk to the point three stops down by the time the next bus gets to that point, then that's where you go. (Twenty minutes for the next bus to start the route plus six minutes for each stop gives you twenty-six to thirty minutes for you and the bus to sync up at the third stop on the route).

Depending how fast you can walk, what the distance between bus stops is, and what route/time the bus schedule runs on.
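As a rough illustration of that arithmetic, here's a minimal Python sketch; every number in it (walking speed, bus speed, stop spacing, dwell time, the bus's head start) is a made-up placeholder rather than data from any real timetable or transit feed.

WALK_SPEED_M_PER_MIN = 80    # ~4.8 km/h on foot (assumed)
BUS_SPEED_M_PER_MIN = 250    # ~15 km/h average including traffic (assumed)
STOP_SPACING_M = 400         # distance between consecutive stops (assumed)
DWELL_MIN = 2                # minutes the bus waits at each stop (assumed)
BUS_HEADSTART_MIN = 20       # minutes until the next bus reaches stop 0 (assumed)

def my_arrival_min(stop_index):
    """Minutes until I reach the given stop on foot."""
    return stop_index * STOP_SPACING_M / WALK_SPEED_M_PER_MIN

def bus_arrival_min(stop_index):
    """Minutes until the next bus reaches the given stop."""
    travel = stop_index * STOP_SPACING_M / BUS_SPEED_M_PER_MIN
    dwell = stop_index * DWELL_MIN  # it pauses at every stop it passes
    return BUS_HEADSTART_MIN + travel + dwell

def best_stop(max_stops=20):
    """Farthest stop I can still reach before the bus does."""
    best = 0
    for i in range(max_stops + 1):
        if my_arrival_min(i) <= bus_arrival_min(i):
            best = i
    return best

stop = best_stop()
print(f"Walk to stop {stop}: you get there at ~{my_arrival_min(stop):.0f} min, "
      f"the bus at ~{bus_arrival_min(stop):.0f} min.")

With these particular placeholder numbers the answer comes out a long way down the route, because the bus only gains about 1.4 minutes per stop on a walker once dwell time is counted; shrink the head start or speed up the bus and it collapses to "walk to the nearest stop", which matches the intuition elsewhere in the thread.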

Expand full comment

It would work, but I'm looking for something taking into account other factors like additional bus routes, additional stops near your destination you could use, a bus route you could join that is in the direction you're travelling but out of your way enough that you're not going to intersect it accidentally.

Becomes a major travelling salesman type problem I think?

Expand full comment

Not too hard to get a prototype that mostly works, but there's no obvious way (to me at least, after 2 minutes of thinking) to do it provably-optimally. And that's before you take into account bus delays being possible and whatnot.

Expand full comment

I'm thinking of doing it live - in a lot of UK cities, mapping services have access to live data on buses, so delays and so on could be handled. And if you're walking from the periphery of the city to the centre and out towards the periphery on the other side, bus route density will increase and then decrease.

I imagine it's a really complex problem, at least to scale effectively across cities and countries and so on?

Expand full comment

There isn't just that problem, your effective speed depends on how much traffic lights and traffic get in your way.

Expand full comment

The live times for UK buses which Google uses are pretty accurate. Certainly not more than 3-4 minutes delayed normally, so probably a tolerable error since I'd estimate bus stops in urban environments at 3-5 minutes walking normally (well tolerable unless you're managing the Swiss public transport system but that's another story...).

Expand full comment

(It's easy to do optimally if there's only one bus route that gets you where you want to go, but difficult if there's lots, I think?)

Expand full comment

Yeah, exactly. And if you're walking from one side of a city to the other from the suburban to densely urban/CBD to suburban, you're going to come across additional bus routes that would work for you. So you go from 1/2 that work to 5-8 over the space of 2-3km.

Expand full comment

Can anyone recommend a good intro book to Bayesianism?

Expand full comment

From a machine learning/computer science perspective: http://web4.cs.ucl.ac.uk/staff/D.Barber/textbook/090310.pdf

Expand full comment

Practical or philosophical? Bayesian statistics is, in some sense, very simple: Model everything using probability; condition on the data. But, that may not be so easy to do. Andrew Gelman's books are probably good, although I haven't looked at them in detail. His blog is good reading: https://statmodeling.stat.columbia.edu/. I learned Bayesian statistics long ago: Optimal Statistical Decisions by Morris DeGroot (one of the first books on Bayesian statistics and a classic). The Likelihood Principle by James O. Berger and Robert L. Wolpert (philosophy). Statistical Decision Theory and Bayesian Analysis by James O. Berger (both practical and philosophical). Dennis Lindley's works.

Expand full comment

Someone likely can, based on the priors.

Expand full comment

Probability Theory by E.T. Jaynes

Expand full comment

What do you want to use it for? General reasoning? Estimating hierarchical models? Etc?

For general reasoning, this is a great introduction if you program: https://greenteapress.com/wp/think-bayes/

(...or if you're willing to learn to program, you could first work through his "Think Python" book...)

The goal is to teach Bayes in a discrete setting, with a heavy emphasis on how to bring Bayes to complicated real-world problems. Very clever, very approachable strategy. I'd highly recommend it. It's not a last book on Bayes, but a great first book for building some intuition.
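If a concrete taste of the "discrete setting" approach helps: below is a minimal, self-contained Python sketch of a single Bayesian update. The two-jar setup and its numbers are my own invention for illustration, not an example taken from the book.

priors = {"jar_A": 0.5, "jar_B": 0.5}            # P(H) before seeing any data

# Likelihood of drawing a red marble under each hypothesis (assumed mixes:
# jar_A is 75% red, jar_B is 25% red - numbers invented for illustration).
likelihood_red = {"jar_A": 0.75, "jar_B": 0.25}

# Bayes' rule: P(H | data) is proportional to P(H) * P(data | H).
unnormalised = {h: priors[h] * likelihood_red[h] for h in priors}
total = sum(unnormalised.values())
posteriors = {h: p / total for h, p in unnormalised.items()}

print(posteriors)   # {'jar_A': 0.75, 'jar_B': 0.25}

The whole discrete approach is essentially that three-step recipe - enumerate the hypotheses, multiply each prior by its likelihood, renormalise - applied to progressively messier real-world problems.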

For things like hierarchical models I'd look into Gelman's books.

Expand full comment

There’s a new book called “Bernoulli’s Fallacy” by Aubrey Clayton. It’s a bit polemical and spends a fair amount of time on the history of Statistics, which may not be to everyone’s taste, but it’s a very readable overview. (FYI Clayton argues for a particular flavor of Bayesianism, following from the work of E.T. Jaynes. It’s basically a logical interpretation of probability.)

Expand full comment

Hey Scott,

You should write about the effectiveness of condoms against STDs. I haven't found any detailed, condensed articles about whether or not to wear a condom.

This has a great parallel to wearing masks. It would be interesting to see what contradictions exist between people with different risk tolerances in each category. The demographics would be interesting to compare.

Expand full comment

Here you go - the Guttmacher Institute:

https://www.guttmacher.org/fact-sheet/contraceptive-effectiveness-united-states#

2004 report on condom use and STIs:

https://www.guttmacher.org/journals/ipsrh/2004/09/consistent-use-crucial-efficacy-condoms-prevention-stis

"Comparing the sexually transmitted infection (STI) prevalence rates of condom users and nonusers may not be as relevant as comparing those of consistent and inconsistent users, according to a U.S. study of STI clinic visits.1 Fifty-four percent of clinic visits were by patients who reported having used condoms in the previous four months—38% sometimes and 16% at every intercourse. Risky sexual behaviors, such as having ever had more than 10 sexual partners or recently having had new or multiple partners, were reported at a significantly greater proportion of visits by condom users than of those by nonusers. In analyses comparing condom users with nonusers, any condom use did not offer clear protection against STIs; however, in analyses comparing consistent and inconsistent condom use, consistent use significantly reduced the odds of gonorrheal and chlamydial infections among men and women (odds ratios, 0.7-0.9), of trichomoniasis in women (0.9) and of genital herpes in men (0.7)."

So it is complicated by the behaviour of condom users, that is, they engage in "risky sexual behaviours" (being promiscuous and not too careful about sexual histories of partners). If you're going to be catting around, use condoms *all the time* - only using them sometimes is not much good. Consistent use and not being damn stupid where you stick your dick are what will lower the risk of contracting STIs.

(Don't say I never do anything for you).

Expand full comment

Seconded, especially because both have the proper use / compliance wrinkle.

As I understand it, *proper* condom use is reasonably effective against pregnancy but *typical* condom use is only marginally more effective than pulling out. One would imagine that typical use would likewise be poor protection against most diseases.

Likewise, observing typical face mask use (e.g. surgical mask pulled under nose or chin) it seems highly unlikely to provide much of any protection to oneself or others but supposedly proper mask use with N95s worn properly and changed regularly does provide some protection.

Expand full comment

I find it difficult to believe that you haven't been able to find any articles about the efficacy of condoms in preventing STD spread. Did you actually look?

Google these four keywords: condom hiv prophylaxis studies

Or substitute std for hiv if you want to reassure yourself that condoms are effective against other STDs.

Anyway, you can find a lot with Google.

Expand full comment

Yes, I have looked into various studies. The issue is that the problem is complex and requires the synthesis of multiple layers of data. Okay, so I know that one study found that the efficacy of condoms against herpes is X. But do I trust this study? What were the demographics of the study? If someone is having sex within a particular population (let's say people with graduate degrees), do the probabilities change? What are the actual long-term consequences of herpes?

I think most of the literature out there is basically "condoms good". There's no real cost-benefit analysis or application to more specific populations and situations.

Expand full comment

I can't help but feel you're asking for Scott to treat condom use here like his surveys of the efficaciousness of drugs. Which is silly: we know how effective condoms are if used properly, and we know the mechanism by which this works (it's an impermeable barrier). The population surveyed makes no difference here: a population might have varying levels of condom use, proper usage or even latex allergies, but unless we have a certain section of the population more likely to have sharp objects attached to their genitals then none of this changes how effective a condom is if used properly.

Also, I don't think the long-term consequences of any common STI need much investigation. There's not much debate on this front - depending on your carelessly-acquired disease it's some combination of infertility, possible health complications, short or long-term medical intervention and ethically having to be much more cautious sexually. It's hard to see any rational case that could be made that these risks are minimal enough to allow unsafe sex, especially because the exact risk you are taking is always uncertain. Note too that the risk of pregnancy exists in heterosexual relationships (you might have more of a case asking for a survey of the efficacy of the pill...).

Expand full comment

Getting HIV/AIDS or HPV related cancers is very expensive. I hear herpes is very annoying, and can be lifelong. Spreading STDs to other people is considered a dick move where I'm from, so if you contract any of them then you'll be stuck wearing condoms for the rest of your casual encounters anyway. Meanwhile the cost of a condom is nearly nil (most cities in the world have somewhere literally giving them away for free), so are you asking if some marginal hedonistic value is worth the cost of increased STD risk? I think at that point it's up to you versus how much you value how good sex feels without a condom.

If you're in a 100% monogamous sexual relationship, then not using a condom is probably fine, otherwise it seems unnecessarily risky

Expand full comment

I'm intrigued by the parallel to masks, since my understanding is that the difference in efficacy is a matter of orders of magnitude. Also, condoms protect the wearer as well as the partner, don't they?

Plus, coronavirus may be comparable to an STI, but I think the entire getting-pregnant thing is without parallel here (unless you want to argue long COVID is similarly tiring, which would be amusing if silly).

Expand full comment

Isn't the answer "always yes unless with a monogamous partner"?

I'm not sure if you need much math to arrive at this conclusion.

Expand full comment

If I'm being blunt and crude, this sounds like "give me an excuse not to use condoms because I don't like having sex with a condom on, but women seem to want their casual sex partners to use them so I need a convincing argument to tell my next hook-up I'm not going to give her a dose of the clap".

Expand full comment

A bit uncharitable? You could read the tone as a typical rationalist looking for evidence to back a pre-held belief. Either reading seems possible.

Expand full comment

I have now read his second comment and honestly, it just reinforces my impression that what he *really* wants is "condoms don't do much, if you make sure to only have sex with nice people you're safe so you don't need them" ("population with graduate degrees", really? having a PhD is no guarantee the person is not a pox-ridden mess, if they were young, dumb and full of - vigour - during their college years).

Look, it's not complicated: all the old mores around sex are mostly due to (as the saying has it) "fuck around and find out". Fucking around means finding out that unintended pregnancy and sexually transmitted infections do, in fact, happen to you more than the Nice Girl or Good Boy who got married and only has sex with their spouse (well, the STIs at least, whatever about pregnancy). Casual, unprotected, and frequent sex with high-risk populations (like prostitutes, because this is a hazard of the work) means greater chance of picking up a little bonus.

You don't want to get infections. Then (1) either be chaste before marriage and continent within it or (2) don't fuck a ton of partners, don't fuck a ton of new partners at the same time, don't engage in risky behaviour, and use some form of protection. Right now, if you don't want to pick up STIs, condoms are your only way to go.

So, your choice.

Expand full comment

It does, doesn't it? This is one of the open threads, though. This guy may be a troll, but maybe he was sleeping during his high school health classes.

Expand full comment

Fun fact, my Mormon guardians did not allow me to attend sex ed and the school punished me with incredibly boring non-sexual homework.

Expand full comment

But you're on a rationalist discussion group. I think if you're attracted to these sorts of discussions that you'd know how to use Google.

Expand full comment

No need to move the goalpost just to keep discussion going. Anyway, rationalists are a small minority of SSC survey-takers, and OP said "I haven't found any detailed, condensed articles about whether or not to wear a condom" which doesn't make it sound like Google helped, not that one is required to use Google before asking a question here anyway.

Expand full comment

Dr. Scott Alexander,

Could you please write about the booster? Clarifying the science and how to think about it? If it has been 6 months since dose 2, what's the MEDICAL advice for various people, in different circumstances (immunocompromised, NOT immunocompromised but high risk of serious problems with covid, healthy low risk people, etc)? Same mRNA you got before, be it Moderna or Pfizer, again? Is Moderna not approved for this yet?

Expand full comment

Oh, thank you for doing this for Long Covid. Btw, I read a nytimes article that came out after your article on Long Covid, saying that even mild covid led to kidney problems. How often?!

Expand full comment

What is the etiquette regarding when to reply to a Tweet? Is there any etiquette (other than "don't reply to people who block you" and similar software limitations)?

Expand full comment

Who in the world could possibly care about this?! Reply if you want; no one has to read or respond to you.

Expand full comment

I have found there is an epidemic on Twitter of people screeching about "you are unfairly requiring women to engage in emotional labor" in response to an on-topic reply to a tweet that Twitter promotes to me. I would prefer to cite a reliable source on etiquette to prove they are wrong, rather than simply claiming it myself.

Expand full comment

There is no such thing as a reliable source on etiquette about this. It's a new phenomenon and there is no consensus.

Think for yourself, and if someone disagrees, own the disagreement.

Expand full comment

Okay, I take it back — you're right, I do care.

At the same time, I feel like if pointing out "that's not a cogent reply to this relevant and reasonable argument" doesn't work, saying "by the way, this is considered rude" won't either. Probably they'll just screech something about tone policing or male tears or something.

Expand full comment

Etiquette? It's the internet lol ;)

Expand full comment

So what's, like, the verdict on Iraq after the Iraq War, and the surge, and then Islamic State, and the next surge, etc. etc.? What kind of shape did Iraq end up in, as a country? I don't think I'll ever stop being fascinated by how short the American attention span is, and even in the middle of The World Discussing Afghanistan To Death - has anyone even mentioned Iraq?

My vague impression is that they're a quasi-functional democracy, basically about as competent and with as much state capacity as they had during the Saddam era/pre-Iraq War. Does that sound correct? Like, they're roughly at the same level as other 2nd world countries in the Middle East and North Africa, but now they have elections that are at least kind of legitimate. Did they successfully drive Islamic State out? The American media I guess decided to stop covering that particular issue at some point. They're still in an awkward federalist structure with the Kurds?

I guess my point is that, even with wall-to-wall wailing about how we 'lost' in Afghanistan, I just haven't heard any summaries or lessons learned from Iraq. Aside from the WMD issue, I suppose we could say the left was wrong in that democracy apparently *can* be forcibly imported to a developing Muslim country, but also that the left was right in that it probably wasn't worth the cost (a couple trillion $)- yes? I mean Iraq is not even a particularly important or strategic ally to the US in the Middle East, if I'm not mistaken.

Does inspire some reflection that we were able to make Germany & Japan stable democracies, it sort of worked in Iraq, and yet the Arab Spring seems to have done absolutely nothing in the end. State capacity something something, culture & institutions something something

Expand full comment

I've been reading about Afghanistan recently, and based on that, my bet is that the lessons we should learn from Iraq are very similar to the lessons we should learn from Afghanistan, i.e. it's likely that the U.S. made the same huge mistakes in both. But in the case of Iraq I have a sense there are extra lessons like "be suspicious about the justification given for a war" and "don't remove rank-and-file staff who know how basic infrastructure and systems work (remove only the leaders.)"

I greatly recommend these sources on Afghanistan with choice quotes:

1. New Yorker: "On average…each family lost ten to twelve civilians in what locals call the American War…By 2010, many households…had sons in the Taliban, most of whom had joined simply to protect themselves or to take revenge" ... "Mohammad’s brother traveled to Kandahar to report the massacres to the United Nations and to the Afghan government. When no justice was forthcoming, he joined the Taliban. On the strength of a seemingly endless supply of recruits, the Taliban had no difficulty outlasting the coalition."

https://www.newyorker.com/magazine/2021/09/13/the-other-afghan-women

I was sad about the Syrian civil war, but was glad the US didn't get too involved because there was no group of "good guys" to back in the fight. In Afghanistan, it appears there were horrible warlords that the Taliban replaced by being slightly not-as-bad, but, if we can extrapolate from the central example in the New Yorker article of Amir Dado, when the Americans swooped in, they returned power to such warlords because they were "allies" against the enemy Taliban. I guess this was understandable in the short term since the U.S. rushed so quickly into war, but incomprehensible in the longer term.

Here's a survey that gathered a ton of information, and maybe the best way to read it is to look at the bottom for nationwide 2019 survey results (and Appendix 1 on methodology; notably "34% of women and 23% of men were inaccessible to random walk interviewing")

"If formal peace negotiations begin, who do you believe must be most trusted to defend your needs and interests at the negotiating table? (Allow two)": of 15 options, the bottom four were America, Taliban, Russia and NATO. The top two were National Unity Government and President Ashraf Ghani." (Too bad the US went with "just let it collapse" instead.)

Around 40% of Afghans would leave Afghanistan if "given the opportunity" (and that was before the collapse).

Questions about women suggest Afghans are mostly on board with educating girls and letting women work "outside the home", and strongly against "baad", meaning giving away daughters to settle disputes. I wonder if "outside the home", for some, just meant farming. The Burka was considered the most appropriate attire for women (32%) but other head coverings had the majority (no head covering: 1%)

https://asiafoundation.org/publication/afghanistan-in-2019-a-survey-of-the-afghan-people/

There's also a data viewer for getting more details, but it weirdly cuts off the meat of the text of many of the questions, as if they didn't care about making a usable tool.

There's a disconnect between the New Yorker article (where the Taliban and Americans are both portrayed as bad, while the Afghan government is mostly ignored) and the Asia Foundation survey (in which people seem to like the US-backed government and dislike the Taliban, but the Americans are mostly ignored). The survey largely doesn't ask questions about the US military or Americans, but I found one relevant question: "Please tell me how you would respond to the following activities or groups. Would you respond with no fear, some fear or a lot of fear?"

Encountering ANA (National Army): 61% no fear ... 11% a lot of fear

Encountering Western Military: 20% no fear ... 35% a lot of fear

Encountering Taliban: 6% no fear ... 35% a lot of fear

So Western military was perceived as less scary than the Taliban, but not by a wide margin.

"Afghanistan might well be the single most “subsidized” state in the world" - and while Afghan income was rising over the last 15 years, it looks like much of that income came from U.S. sources and was just cut off. It's also not hard to see that money spent in Afghanistan could have been used *ridiculously* more efficiently than it was (even if it was all still spent on Afghanistan.) https://www.unz.com/akarlin/where-are-the-afghanis/

Expand full comment

1. I would be very leery of any survey results coming from Afghanistan, just as I would of any survey sponsored by the HR Department.

In both cases, the subjects are likely to tell the survey-taker what they think the taker wants to hear. Regardless of any guarantee of anonymity, there is no real upside for doing otherwise, and telling unwanted truths has the potential for a whole lot of downside, such as a night raid by an American or American-backed death squad*, or even just attracting a whole lot of unwanted heat upon your district.

2. Notwithstanding any survey results, no guerilla army can survive without popular support. Most Taliban weren't paid so much as a single Afghani to fight, and the Taliban held on and fought for twenty years. By contrast, few ANA and police were willing to fight, and most surrendered, fled or joined the Taliban the first chance they got.

3. Further along those lines, the United States has had a continuing problem in recent years with its proxies, most of whom are thugs (ANA or any one of several Latin American police or militaries), cranks (MEK in Iran or various Ukrainian neonazi paramilitaries, although they also are pretty thuggish) or opportunists (the "Iraqi National Congress"). They aren't people who share our values and are willing to fight for them.

4. Considering our track record in recent decades, human rights and democracy are but a pretext to make war on countries we don't like. The carnage is as much a feature as it is a bug, or at least we are indifferent to it. All empires do this; ours is no worse than any other empire in this regard.

* I used the term "death squad advisedly. For are we not rationalists? Should we not call things by their proper names, regardless of country, team or tribe?

https://www.google.com/search?q=us+death+squads+afghanistan&rlz=1C1GCEA_enUS886US887&oq=american+death+squad+afghan&aqs=chrome.1.69i57j0i22i30.5720j0j7&sourceid=chrome&ie=UTF-8

Expand full comment

A friend of mine who did a couple year-long stints in Iraq tells me that Iraq is pretty much a failed state, that, other than a few collaborators, nobody is better off now than they were under the Baathists. My friend reserves some choice words for them and for the Iraqi military.

And we have to hold Iraq's oil revenues hostage to ensure that the government does not disobey.

https://www.wsj.com/articles/u-s-warns-iraq-it-risks-losing-access-to-key-bank-account-if-troops-told-to-leave-11578759629

Expand full comment

I think it's a mistake to see either Afghanistan or Iraq as wars of democratisation. They were realpolitik wars, which were deliberately muddled together with the liberal neo-Wilsonian notion of 'humanitarian intervention' largely to help sell them to the Western public.

The foreign policy advisors and decision-makers surrounding George W. Bush in the early 2000s were not Kouchner- and Bettati-inspired liberals. Their aims for Afghanistan were to apprehend or kill Osama bin Laden, dismantle al-Qaeda, create a sense that 9/11 was avenged, and demonstrate American might to deter other regimes from 'harbouring' terrorists. The war in Iraq was an opportunistic power play on the back of the 'something big must be done' momentum of 9/11, to dismantle a regime that has long been a regional thorn in the side of the United States and Israel.

(Leaving aside the vexed question of Iraqi WMD, the stated doctrine of pre-emptive war to stop hostile states from giving weapons of mass destruction to terrorists did not stand up to basic scrutiny on its own terms. With the exception of some biological agents, it's notoriously hard to do, even harder to maintain deniability, and not particularly more effective than, say, coordinating a few suicide trucks.)

The main problem with the democratic angle of American interventions in the Middle East and Central Asia is that American governments don't like democracy very much when it doesn't lead to 'American-compliant' or 'non-Islamist' outcomes. Mutatis mutandis, the same is true for American interests in South America. And lest we are being too hard on Americans, the same is true for the EU establishment - see e.g. Syriza and Podemos.

Take the celebrated 2004 presidential election in Afghanistan - those smiling girls holding up purple fingers and all. It had a certain spark. Granted, there had scarcely been much of a campaign and there was a lot of diddling about and horse-trading on the council side of things, but the turnout was quite respectable, about 70%.

Unfortunately, Americans kept bombing weddings, and the 'democratically' elected leadership could do nothing about it. The second election, in 2009, had a turnout of ca. 30% and ended in a fraudulent muddle which legally ought to have produced a run-off election but didn't, for reasons. Addressing the main national issue - you know, the ruinous ongoing guerrilla war - was not within the power of the re-elected president, anyway. The US and NATO declared that their troops were staying no matter what and (somewhat unhelpfully) tried to foist an unelected 'shadow liaison' with broad executive powers on the elected government, relenting only when their chosen candidate for that position, Ghani, refused.

During his second term, President Karzai, who had little to lose because he couldn't run again, tried to negotiate with the talibs (a position supported by the vast majority of Afghans), and refused to sign the Bilateral Security Agreement with the US unless certain limits were included, like giving up the Americans' immunity from prosecution. For a brief moment, it actually looked like NATO would have to leave Afghanistan, and indeed there was some planning for that 'zero option'. But then the 2014 election rolled around, inspiring a dismal turnout (particularly outside Kabul). Ghani won, John Kerry negotiated a power-sharing agreement with Ghani's runner-up rival, and they immediately signed the BSA, allowing upwards of ten thousand US and NATO troops to stay in-country with impunity.

The Ghani government (reelected in 2019 with <20% turnout!) was famously, hideously corrupt. That Pentagon joke about 'vertically integrated criminal enterprise' just about said it all. While the incoming Taliban government will be in dire economic straits (perhaps heroin will help) they will certainly be able to claim they're draining the Kabul swamp. But Ghani was pliant, and that's what mattered most to Americans. Only, in the end, no one bothered dying for him.

You mentioned the Arab Spring. It's instructive to contrast the events in Tunisia (the model outcome) with those in Egypt. In both cases, following popular uprisings, fairly moderate Islamist parties with Muslim Brotherhood roots became major players (Ennahda in Tunisia, FJP in Egypt). However, because Tunisia wasn't strategically important to the US and Israel, basic politics were allowed to play themselves out without foreign interference. The secularists pushed back against the Islamists and a political equilibrium emerged. In Egypt, however, Mohammed Morsi's democratically-elected FJP government was removed in a classic tanks-surrounding-palace coup d'etat (which, hilariously, the Obama admin refused to call a coup for legal reasons - "We’re just not taking a position," said Jen Psaki) and military dictatorship was restored. Egypt was deemed too important to leave to democracy with potentially inconvenient populist outcomes.

Libya is a not dissimilar story. Same goes for the Palestinians whenever they have the temerity to democratically elect Hamas to something.

To finally get around to Iraq: it's tough to render a verdict because the system is currently in rapid flux, with electoral reforms passed recently and a parliamentary election to be held in October. Until now, the crippling feature of Iraq's American-imposed 'democracy' has been sectarian quotas. The president is a Kurd, the prime minister is a Shi'ite, the parliamentary Speaker is a Sunni, and other posts are similarly distributed via a points system. The quotas were actually a well-meaning idea, aimed at preventing the domination of any of Iraq's three main ethnoreligious groups over the others. But in striving for balance, they formalised division and gave rise to extremely corrupt systems of patronage. Elections in Iraq are technically decent, especially considering the two civil wars and ISIL. They've just never meant very much in practice, as the same power blocs remained in place, doling out jobs and contracts. Following the last two years of protests and the fall of one barely legitimate government, this muhasasa system might actually be going away, although, notably, Iran finds it useful and doesn't want it to.

Meanwhile the Kurds just want to secede and have near-unanimously voted in a referendum saying so, to which the central government responded militarily in 2018. The various militias enlisted to fight ISIL have now entrenched themselves as political actors with local power bases. Iraq exports tons of oil, but the country's basic infrastructure is a ruin.

So before we start evaluating the relative success of 'installations of democracy', we have to get into the weeds of whether that's what's actually being done. At the heart of it, the US wants stability that favours its interests and those of allied regimes, just as it always has. Those priorities poison any democracy installation scheme in places where large segments of the population have (or perceive) a reason to be aggrieved against Americans.

Expand full comment

https://www.newyorker.com/magazine/2021/09/13/the-other-afghan-women

One of the better articles I have seen on the subject. It's not that Shakira and other Afghan women so relished life under Taliban rule, it's that what temporarily replaced it brought them little good, and much that was worse, certainly more corrupt, more arbitrary, more spiteful, more pointlessly brutal.

Expand full comment

I don't know if I exactly have any formed thoughts around this, but I've been sort of thinking about soft power, especially in its cultural manifestation, and how this ties in with debates over "Is the West/America a declining force? Is China going to be the new global superpower?"

Now, this is usually in terms of economic and military power, but the interesting thing is the huge, huge, huge influence American culture has had. People talk about Coca-colonisation https://en.wikipedia.org/wiki/Cocacolonization but I don't know if the sheer appeal of American culture is really recognised.

And of course, what leads on from that is when there is an attempt to co-opt this and replace it with your own cultural influence. Again, China is the one here; the Belt and Road Initiative and the influence within African nations are often discussed: https://en.wikipedia.org/wiki/Belt_and_Road_Initiative

But what about the influence of American culture on the youth of China? I think the CCP is becoming somewhat concerned about this, or at least trying to co-opt it as a propaganda opportunity. I started thinking about this due to two things:

(1) The really horribly translated potential crackdown on "sissy pants males" - that is to say, what would probably be called "metrosexual" here, at least when that was the fashion a while back: young, attractive, well-groomed men who are actors and models and pop idols and have a huge fan base (mostly if not solely women). It sounds vaguely silly to Western ears, but there's something more at the base of it than "we don't want our young men to be nancy boys!" It's precisely because of the huge fanbase, the economic importance (these idols have all kinds of endorsements from a range of Chinese and Western companies selling everything from makeup to instant ready meals to watches and fashions) and, more concerning to the CCP, their influence on how the fans think. There's a minor scandal over one actor whose career has been pretty much totalled by the government; it kicked off because of jealousy from one set of fans who set out to cancel him by digging up "oh shit" moments from his social media history, and because a set of his fans defended him, this was seen as Bad Influence: instead of falling into line with the CCP judgement, they were holding out. This is not the job of media celebrities as the CCP views it; they are supposed to be Positive Role Models for the Youth, which includes being (at least publicly, whatever the views in private) 100% in line with the government and leading their fans the same way.

https://www.sixthtone.com/news/1002883/xinhua-mocks-sissy-pants-male-idols

https://en.wikipedia.org/wiki/Zhang_Zhehan#Controversy

(2) I am currently watching online something called "Street Dance of China". It's the fourth season of a dance competition. This year they have international dancers as well as Chinese dancers. The theme is "the battle for peace and love".

Now, I never imagined I'd end up watching hip-hop etc. dance (it's a long story), but I am, and I'm enjoying it. However, it is (a) an example of how hugely influential American culture, from rap to hip-hop, is globally, and (b) there's a subtle but definite angle about showcasing Chinese culture, how Chinese dancers are interpreting these styles of dance, and showing how China is catching up with and indeed surpassing international stars.

https://www.youtube.com/watch?v=h3wcZkmbB1w

It's attracted an international audience as well, building on from success of last season, such that the TV channel is providing a range of subtitles:

"The first drafts of & English & Vietnamese & Thai & Spanish & Arabic & Indonesian & Japanese & Korean captions are up."

So, yeah. Huge and unknown(?) influence of American popular culture on modern Chinese youth, and possibly an attempt by China to leverage that popularity into cultural influence within its own geographic sphere - Chinese hip-hop dancers instead of American ones as the role models for Asian youth?

(Or I could be talking out of my hat. The dancing is great entertainment, though).

Expand full comment

There are American schools all over the world, and elites everywhere like having their children educated at American Universities, even if they pretend to dislike America. But if English wasn't the global Lingua Franca then our influence would be less. But the Chinese might have a big advantage in that it is less easy for foreign powers to target their populations with specific propaganda since we don't have enough Chinese language speakers to crank out a lot of news media. Plus they have the great firewall. The supposed Russian meddling in US elections would not happen in China. As super capitalists we have built a good industry for creating art, music, film etc that appeals to people everywhere. Maybe this is because our culture is more an individualistic melting pot, and Chinese culture is a bit more steeped in traditions and things we don't understand and will not appeal to us. Bland culture can appeal to everyone.

Expand full comment

Cultural influence lags behind other forms of power. Whatever the next big power is, their culture will eventually become the dominant one.

Expand full comment

Counterpoint: Latin culture survived the fall of Rome pretty well for 1000 years and arguably (although the dominant language has changed to a weird Germanic dialect with Romance features) is still pretty important.

Yet the Huns and Mongols didn't produce anything similar. I'd suggest there's more to soft power than just following hard power.

Expand full comment

You would, at least, need to consider the longevity of the power as well as the cultural markers produced during the time in power. I imagine a fully illiterate group of people who produce no art, music, writings, etc. (not to say the Mongols fit this, but they definitely produced less) would have a much smaller effect on culture than a power who produces all of those things. That's likely why the Chinese have been creating their own movie and pop culture industries, and why they've pushed Western producers to bend to meet Chinese demands before their materials can be released there.

A power in place for 5 years will likely have little to no lasting influence, no matter how strong they were militarily. Similarly, a power that lasts 1,000 years would likely be a large influence far into the future even if relatively weak. A country that has both, like the UK for a while and the US now, can project their culture across the whole world.

Expand full comment

A counter-possibilty is that powerful memes spread regardless of their origin.

Thai food is available in every city in the world now, not because Thailand is a rich and powerful country but because Thai food is delicious.

American music is everywhere, it's true, but it's not the music of rich and powerful Americans, but music that originated in the poorest and least powerful parts of American society.

Rome conquered a lot of places, but in cultural terms wound up getting conquered by its provinces; Romans spent a lot of time trying to imitate the more cultured Greeks, and eventually Rome wound up speaking Greek, worshipping an Israeli religion and moving its capital to Asia Minor to be where all the cool people are.

The Chinese government will spend a lot of time and money trying to promote Chinese cultural memes overseas, but I doubt they'll have much success (beyond the few things that instantly appeal to the rest of the world like certain food items and martial arts movies).

Expand full comment

Educated Romans spoke Greek and worshipped Levantine cults in the Republic, even after they had established dominance over the Hellenistic states. So it might not be the provinces conquering Rome so much as earlier soft power winning out.

Expand full comment

"and eventually Rome wound up speaking Greek"

Primarily after the death of Theodosius I. Before, the Greek language died out in Italy and Latin was spreading in Tunisia and the Balkans. Your point on Christianity is completely correct, though, and it's true the City of Rome under the Late Republic and Early Empire was inundated by Greeks and Syrians, and native Italians were a minority of emperors after Commodus.

Expand full comment

I would add that, as America's relative power & wealth have declined over the last 20 years, its cultural influence over the rest of the world seems to have actually increased. Some of that is about the Internet, which is very American-centric, and some of that is just vague cultural appeal. Look at how the BLM protests spread globally last summer, even in places that, uh..... don't really have black people? Soft power seems to hang around for a while, look at all of the world-changing British bands post-war.

One thing I think about a lot is how much 'American' culture is really just the product of fairly small groups within the US. The main example here is African American culture, which has just dramatically shaped global music, dance and entertainment culture in the 20th century, from like 13% of the US population (which is like 4% of global population). From jazz through rock to hip-hop, plus *so many* words & phrases we take for granted today. The other main group are Irish Americans, who seem to have had pretty outsized influence for probably being much smaller in numbers than African Americans

Expand full comment

Also, the sheer amount of advertising and sponsorship, and the range of products on display, show that the Chinese marketplace has enthusiastically embraced capitalism.

Maybe *too* enthusiastically, to quote from this article:

https://www.sixthtone.com/news/1002861/cctv-apologizes-for-bombarding-student-show-with-ads-#

"China’s state broadcaster has publicly apologized for flooding its annual back-to-school television show with back-to-back commercials, after outrage from parents.

Jointly produced by China Central Television (CCTV) and the Ministry of Education (MOE), “First Class for New Semester” has become a mandatory viewing activity for parents and students on the first day of the fall semester. But on Saturday, millions of parents watching the show disapproved of the “endless” ads delaying the show by 12 minutes.

“We have broadcast too many ads before ‘First Class,’ which has prevented the parents and students from watching it on time,” CCTV said in an apology statement Sunday. “We will strive to provide better service for our audience.”

Expand full comment

(I WANT AN EDIT BUTTON SO BADLY GAAHHHH) For the median citizen of these places, do you think quality of life will become better or worse by 2040, as compared to 2021? How confident are you of these beliefs? Places: Your country, USA, China, France, currently developed world as a whole, currently developing world as a whole.

"Quality of life" is of course subjective, so use whatever metrics seem reasonable, but lean toward measurable ones like life expectancy, median income vs. affordability of basic goods and services, homicide rates, suicide rates, etc.

Expand full comment

Hmm, U.S. politics are still horribly bonkers, but "slightly better" is my best guess (low confidence). My country of Canada (America's hat) should be "modestly better" but will be dragged up or down by whatever the U.S. does. The risks posed by China are clearly greater under Xi Jinping, but it seems they don't want a war with the U.S... the world order seems stable. In which case, global development / industrialization should continue, and quality of life should increase at a variety of rates in the third world, while the first world makes smaller gains. If UBIs in the $600-$900/mo range become popular in the first world, I'm fairly convinced they will work well at reducing homelessness and poverty without harming the economy.


F--- knows, slightly worse, slightly better, about the same, ditto, a lot better


Better, better, better, better, better. Each has a confidence level of about 90%. They're not independent (if the US tanks somehow, the odds that France, for instance, tanks as well go up. Same for China and basically everyone else on the list.)

However, pretending they are independent, that comes out to about a 40% chance that at least one of those categories will be worse in 2040. Which sounds about right to me as well.
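
For reference, the arithmetic behind that figure, taking five places each at roughly 90% confidence and treating them as independent:

P(at least one turns out worse) = 1 - 0.9^5 = 1 - 0.59049 ≈ 0.41

which is about the 40% quoted above.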


It will be better in every mentioned place, with 80% confidence. You might laugh at me after 2040.


Nineteen years isn't a long time. It's hard for me to see whether things have got better or worse, overall, over the past nineteen years even in my own country. Whatever has happened, it's been within the error bars.

So in 2040 I think that things will either be basically the same (90% confidence) or noticeably worse (10%).


However, in most of the developing world, the past 19 years has produced a very easy to notice improvement in quality of life, well outside error bars...


Better, Worse, Worse, worse, worse. I expect a hot conflict between now and then and I expect us to win


What kind of hot conflict? A limited war over Taiwan is unlikely to affect living standards in the PRC much, let alone the entire developing world. An unlimited war including nuclear exchanges is unlikely to leave us better off, even in victory. The window of conflicts that leads to your projected outcomes is incredibly narrow, possibly infinitesimally so.


Quality of life in developing countries has been increasing remarkably rapidly. I'd be surprised if a hot conflict was enough to reverse that trend.


Similar, Worse, Similar, Similar, Worse.

This is assuming that current trends re. climate, drought, and food insecurity continue, but don't get worse; and that China hits the top of the exponential curve and drops back into line with their demographics.


For the developing world (and China) you are forecasting the reversal of a secular trend that has held for the past century, and has been arguably the most significant human story of the post WW2 era. That's bold, and would seem to require more justification than just some mumbling about climate, drought and food insecurity (notwithstanding which, the median citizen in both China and the developing world has seen transformational improvements in quality of life over the past century).


I think I answered the wrong question.

What I actually answered was: will things be getting better at the same rate as they are now?

As for the actual question then:

Better, Better, Better, Better, Better, Better; unless climate change models are WAY the hell off, in which case,

Better, Better, Better, Better, Better, Worse.


better, better, worse, unclear, better.


Are we discounting the fact that 2021 was a plague year? Because things like "being able to send your kids to school" and "being able to socialize in person regularly" would seem like a major improvement in QoL for most people...


Good point. Yes, I meant to discount that, so maybe I should have used 2019 as a baseline year instead.


Do you guys know what were the fastest declining major languages of the 20th century? I can think of Yiddish, Occitan, and Plattdeutsch, but I wonder what you guys can come up with.


What do you consider to be a major language? I assume you'd define it by the numbers of speakers? — i.e. one million speakers? 100,000 speakers?

Yiddish lost the majority of its speakers during the Holocaust. There are arguments going on right now about whether it's still continuing to decline or whether it's holding its own.

Iceland is worried about Icelandic, because the majority of Icelanders already speak English, but the authorities are blaming social media for young people preferentially speaking English over Icelandic. I don't know if this is a real problem, or if it's just a bunch of nationalists who are fretting about their mother tongue.

The authorities in China have been pushing Putonghua (i.e. "Common Speech" aka "Mandarin") on Guangdonghua (Yue, aka Cantonese) speakers as well as the other seven or eight major language groups in China. The government refers to these languages as "dialects" to downplay this program — but most of these "dialects" are mutually unintelligible. Cantonese and Mandarin are about as similar as English and German. So, there are about nine different languages that are under threat in China.

Despite the efforts of the Irish government, Irish Gaelic has been in decline through the 20th Century. However, its numbers of speakers as a primary language may have stabilized.

And Scots may be making a comeback after three centuries of slow decline.


"I assume you'd define it by the numbers of speakers? — i.e. one million speakers?"

Ten million.


I suggest non-prestige Chinese languages (Min, Gan, Hakka). Mainland China standardized on Putonghua in the 20th century to dramatic effect; the other languages (especially ones other than Cantonese) must have suffered.


Yeah, I was surprised to learn that actors in dramas get dubbed over by voice actors who speak the "standard" language, so that there is one version understood by everyone viewing the show; the actors in question might speak a regional dialect and have an 'accent' because of that.

https://www.quora.com/Why-do-some-Chinese-dramas-appear-to-be-re-dubbed-in-Chinese


I think this will definitely be the case for the 21st century, but China's population probably grew fast enough to counteract most of the dialect leveling during the 20th century.


I would guess that there are some languages of Indonesia that got swept away with the invention of Bahasa Indonesia in the middle of the century, but Wikipedia mentions that Bahasa Indonesia is mainly a second language still, so perhaps not: https://en.wikipedia.org/wiki/Languages_of_Indonesia


I would guess that other candidates are regional languages of China or India, and perhaps major indigenous language of South America like Quechua and Aymara. European regional languages mostly got swept away with the rise of print media in the 18th and 19th century, and standardized education systems, and these things (along with broadcast media) came to Asia, Africa, and Latin America in the 20th century. Southern India has had a large resistance movement against Hindi, but I expect there are large regions of northern India where the local language was similar enough to Hindi that it could have gotten replaced without as much of a fight.


"and perhaps major indigenous language of South America like Quechua and Aymara"

Definitely not them; Peru grew in population fast enough to counteract any decline in speakers of those languages.

" I expect there are large regions of northern India where the local language was similar enough to Hindi that it could have gotten replaced without as much of a fight"

Maybe; idk.


I don't think you're correct about Quechua. My stepmother, who is a linguist, was involved in a Quechua language preservation project. Now she's working on a Mayan language preservation project. It does look like Quechua is declining (link below). Definitely the different Mayan languages are in decline.

https://www.quechua.org.uk/Eng/Main/i_DANGER.HTM#:~:text=For%20Cajamarca%20Quechua%2C%20the%20total,167).


It didn't decline in absolute numbers over the 20th century.


I will admit that I've never researched this question. And I may have made a mistake by taking my step-mother's linguistic word for it. Could you back your statement up with a link that shows the percentage of Quechua speakers over time?


If you mean by percentage instead of by absolute number of speakers, it would be any of the languages that went extinct in the 20th century: https://en.wikipedia.org/wiki/List_of_languages_by_time_of_extinction#20th_century


I meant by number of absolute speakers.


Albion's Seed had a section at the end about diasporas other than the 4 main English groups. One of them was a group of Scottish settlers (not Borderers, another group) who lived in coastal North and South Carolina. They apparently still requested that Gaelic-speaking ministers be sent from Scotland in the late 1800s.

I wonder if there are any Native American languages which were spoken in 1900 but have since vanished.


Another book I'd recommend on the spread of languages over time is Nicholas Ostler's "Empires of the Word".


Could people fill me in on Sinovac? I gather it's not a very good vaccine, but better than nothing.

Any thoughts about why China bet on an inferior vaccine? Bad luck? Cheaper to develop and make?

Does China seem to be getting some good out of it in terms of foreign relations?


Excellent comments from Eric and Neal! China also has an mRNA vaccine coming out soon called ARCov (aka Walvax). I think they've completed some or all of their Phase III trials for ARCov. They also have a recombinant protein vaccine in the works, called VO-1, which is in Phase III trials. And there are a couple of others in development whose ETA is further out. China has now surpassed India as the largest manufacturer of vaccines in the world. Both those countries' production numbers leave the US and the EU in the dust. ;-)


My understanding is that Sinovac was an easy, fairly safe, relatively low-tech vaccine candidate that was mass-produced and distributed as quickly as possible with only basic testing, with the expectation that it would at least be better than nothing.

It's an inactivated whole-virus vaccine, which is the second or third oldest vaccine technology, after deliberate live infection with a related virus that produces cross-immunity (i.e. inoculation with variola minor or cowpox to guard against variola major). Inactivated whole-virus vaccines are a well-understood technology, and we've got a pretty good handle on how to predict and mitigate risks, so it's a defensible candidate for rushing into production with minimal testing. At worst, it's going to be ineffective.

I can see both practical and political arguments for rushing a low-risk vaccine. Pragmatically, even a marginally effective vaccine will slow the spread of the epidemic, and the benefits of rolling it out ASAP arguably outweigh the risks of waiting for phase III trial data. And politically, rolling out a vaccine quickly will make it look like you're doing something promptly to deal with the crisis, especially in an authoritarian state that can suppress bad news about effectiveness and side effects.

As it happens, it turns out Sinovac was indeed much better than nothing in terms of effectiveness, although it seems to be much less effective than Adenovirus or mRNA based vaccines. I haven't seen good data about side effects, but qualitatively it seems similar to other Covid vaccine types (i.e. moderate risk of injection site pain and minor illness symptoms shortly after the dose, but extremely low risk of serious side effects).

As for sticking with Sinovac instead of switching to an Adenovirus or mRNA vaccine, that's also probably a combination of pragmatic and political concerns. Pragmatically, they've already built out the infrastructure to produce and distribute Sinovac, so they can make money and/or diplomatic capital by selling or giving it to other countries, and consuming domestic production of Sinovac and Sinopharm is probably a faster and lower-marginal-cost strategy for finishing fully vaccinating the domestic population than importing Pfizer, Moderna, AstraZeneca, or Sputnik-V. And politically, admitting the domestic vaccine is less effective than foreign alternatives would be a massive loss of face domestically and would also embarrass the governments of other countries that also adopted Sinovac.


> At worst, it's going to be ineffective.

There have been vaccine-candidates that ended up making the disease worse. Was this not a possibility with Sinovac?


I don't know. My guess is that it was a potential issue, but they did enough testing to rule it out as a major problem before rolling out the vaccine.

I was actually coming back to amend my comment about "rushing into production with minimal testing" when I saw your response. I double-checked and found I'd misremembered the degree to which Sinovac was rushed out: the vaccine manufacturers did publish Phase I and II trial results and Phase III trials were underway before the Chinese government authorized the vaccine domestically, and several other countries did their own Phase III trials of Sinovac. Emergency approval for high risk groups in China was granted in August 2020, and full approval for general use was granted in Feb 2021.

That's several months faster than US approval of the mRNA vaccines (emergency approval in Dec 2020 for both, and full approval in July 2021 for Pfizer with full approval for Moderna still pending), but not the near-immediate rollout I thought I'd remembered.


Sinopharm is much better than Sinovac and is the "flagship" of Chinese vaccine diplomacy. I think there's a lot of focus on Sinovac in Western media because of anti-China bias and/or general bias towards reporting bad news or putting a bad spin on things in order to fuel doomscrolling.

Hate to cite my least favorite academic here but Steven Pinker correctly points out that there seems to be significant media bias against reporting good news. For example, none of my usual news sources told me when China and the UAE started offering Sinopharm to children ages 3 and up - I had to find out in one of my semi-regular googling sprees. I'd think the UAE (which licensed Sinopharm from China and manufactures its own version, I believe) running trials on a cohort of kids 3-18 - including members of the royal family - would be the kind of great global covid news media outlets would want people to know. I'd think it would be relevant that kids in Abu Dhabi can be safe at school while kids in Texas and Florida can't, and maybe the American public would want to pressure the US to get a move on with testing our vaccines in that age cohort if they knew we were falling so far behind other countries.

In my country of residence, Georgia, Sinopharm is quite popular - in so much demand that supplies had to be restricted and the government is negotiating a new round of purchases from China - while at the same time, the government just rejected a gift of 50,000 doses of AstraZeneca from Lithuania because no one in the country wants AZ and the government thought the doses would just expire. This makes sense because from what I understand AZ is less effective than Sinopharm and with a much higher rate of side effects.

I'm not sure why China is exporting both vaccines when one is clearly better than the other - it might be something to do with production facilities or supplies; I have no idea - but at least here in Georgia few people are paying Sinovac any mind, and many people - myself included - are quite happy with China because of Sinopharm.


Thank you, and also to @Eric Rall and @broblawsky. I'm biased against China, and I try to compensate for it.

I've formatted my reply this way because I don't want to clutter the feed with three identical replies, but all three of the comments were good. I have no idea whether the @'s will help replies to get to people.


My pleasure.


I'm curious, what makes Pinker your least favorite?

Comment removed

But you're OK with Lyme Connecticut releasing Lyme disease to the world, and Zaire releasing Ebola to the world? Or Europeans releasing Measles to the New World? And why would China be happy with it? It killed their own people and impacted their own economy.

Comment removed

I can imagine that I hear you sputtering in indignation. Lol!

Comment removed

I suspect that releasing the virus was an accident, possibly the sort of accident any advanced country might make. Also, the lab leak hypothesis is plausible, but not the only possibility.

Comment removed

Why shouldn't they block any investigation? Going down the investigation path is a fool's game. The conspiracy theorists and the people with anti-China political agendas will keep moving the goal posts. And China will never be able to prove that they weren't covering something up.

Comment removed

Comment deleted

I believe the big deal about children in school is the risk of them bringing the disease home, not that they'll get significantly sick themselves.


New studies in Scotland and England show that B.1.617.2 (aka Delta) is creating twice the rate of hospitalization compared to Alpha (B.1.1.7). And percentage-wise it's attacking a younger demographic. No data from the US, yet, but anecdata seems to indicate kids are being hospitalized at a higher rate than with earlier variants.

Comment deleted

Indeed, but vaccinated doesn't mean invulnerable.

Comment removed

> Could people fill me in on Sinovac? I gather it's not a very good vaccine, but better than nothing.

CoronaVac (the vaccine developed by Sinovac BioTech) is an inactivated virus vaccine, unlike the Pfizer/Moderna vaccines (mRNA-based) or the AstraZeneca/J&J vaccines (modified virus vector). That means that Sinovac took COVID virus samples, replicated them in vitro, then chemically inactivated them to make them incapable of reproduction. The immune system can still recognize them as a pathogen, so it produces an immune response, giving the immune system a head start when it gets exposed to live COVID virus.

> Any thoughts about why China bet on an inferior vaccine? Bad luck? Cheaper to develop and make?

Probably speed, cost, and pragmatism. The technology behind CoronaVac is essentially similar to that used in the original Salk polio vaccine. It's easy to grow viral particles in a culture and chemically inactivate them; we've been doing it for a long time.

Also, there are a bunch of attempted Chinese COVID vaccines, including at least one mRNA vaccine (https://en.wikipedia.org/wiki/Category:Chinese_COVID-19_vaccines). CoronaVac is just the one that took off first.

> Does China seem to be getting some good out of it in terms of foreign relations?

Hard to say. They've been exporting it to huge chunks of the global South, southeast Asia, and Eastern Europe. Whether that will result in long-term diplomatic gains is hard to say.


Is it possible to take pictures of outside while keeping camera sensors inside a Faraday cage?


Sure. You need some small hole to let the light in.


You might be able to train an AI to edit out the cage. You'd end up with low-quality images, though.


If the camera can move a little and especially if you're taking pictures of something that isn't moving, you could have the AI stitch different photos together and not lose much.

Or have two or more cameras in slightly different positions taking a movie and make them into a single movie to edit the cage out.


So long as the holes in the Faraday cage are bigger than the wavelength that you're taking photos in.
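
As a rough back-of-the-envelope illustration of that rule of thumb, here is a minimal Python sketch; the 1 mm mesh opening is an arbitrary assumed value for an ordinary wire-mesh cage, not a spec for any particular design:

# Compare an assumed mesh opening against two wavelengths of interest.
mesh_hole = 1e-3            # metres; assumed typical mesh opening
visible_light = 500e-9      # metres; green light
wifi_2_4ghz = 3e8 / 2.4e9   # metres; roughly 0.125 m for 2.4 GHz Wi-Fi

for name, wavelength in [("visible light", visible_light), ("2.4 GHz Wi-Fi", wifi_2_4ghz)]:
    status = "passes through" if mesh_hole > wavelength else "is blocked by"
    print(f"{name} (wavelength {wavelength:.3g} m) {status} a {mesh_hole:g} m mesh")

Visible light sails through openings of that size, while radio-frequency signals, whose wavelengths are far larger than the holes, are strongly attenuated.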


I don't see why not. Faraday cages don't block photons. You just couldn't upload your pictures until the camera was out of the cage.


Technically a Faraday cage blocks electrical fields, and photons are electromagnetic fields in action, so an ideal Faraday cage would block all forms of electromagnetic radiation, which includes light as well as radio. What you are probably meaning to say is that a practical Faraday cage, e.g. an ordinary metal mesh, doesn't function very well at sufficiently high frequencies (like the optical). But if you had a pretty dang ideal Faraday cage, like a layer of plasma, then it would block photons quite as well as radio waves.


Hmm I think of a Faraday cage as mostly blocking electrical fields. I would say that goes from frequencies of zero to 100 kHz or so. You can make a F. cage out of wire mesh.


Or just a metal box


Well, but the metal in that case is blocking visible light for reasons other than being an approximation to a free electron gas, so I thought that was kind of cheating. I was trying to think of a "true" Faraday cage, meaning something with electrons that could freely respond to even the very high frequency EM field of visible light. A plasma is what came to mind.
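
For a sense of scale, here is a rough Python sketch, using the standard cold-plasma frequency formula, of how dense a plasma would have to be before it reflects visible light; the 500 nm (about 6e14 Hz) figure is just a representative choice for green light:

import math

e = 1.602e-19      # elementary charge, C
eps0 = 8.854e-12   # vacuum permittivity, F/m
m_e = 9.109e-31    # electron mass, kg

f_light = 6.0e14   # Hz, roughly 500 nm light
omega = 2 * math.pi * f_light
# Plasma frequency: omega_p = sqrt(n * e**2 / (eps0 * m_e)).
# Waves below omega_p are reflected, so solve for n at the threshold omega_p = omega.
n_required = omega**2 * eps0 * m_e / e**2
print(f"Electron density needed: {n_required:.2e} per m^3")   # roughly 4e27 per m^3

That is an enormously dense plasma by ordinary laboratory or ionospheric standards, which is some indication of why a light-blocking plasma shell is more of a thought experiment than a practical Faraday cage.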


Triple-skinned mu-metal.


Technically they do block photons, just not the wavelengths you care about when taking pictures.


Normal faraday cages tend to have gaps in the metal that a camera could see through, so probably? Don’t really get the motivation for the question tho


If the communication is allowed to be quantum communication then the task is easy: One of the participants prepares the equal superposition of all states of the form |i_1>|i_2>|i_3>...|i_n> where the i_j are all distinct and run from 1 to n. This is a state of n qudits where d=n. The preparer then sends each participant one of the qudits. Everybody measures their own qudit in the computational basis. The result of the measurement is that everybody has distinct numbers and nobody knows anything about anyone else's numbers.

That probably falls foul of the "isolated" rule, so here's a (much less efficient) answer where you only use classical communication:

(1) Rirelobql pubbfrf n ahzore sebz 1 gb a ng enaqbz jvgubhg gryyvat nalobql.

(2) Jr abj qb n cebprqher gb purpx cnvejvfr jurgure rirelobql'f ahzoref ner qvssrerag. Pubbfr gjb crbcyr jubfr ahzoref lbh jbhyq yvxr gb purpx sbe qvfgvapgarff. Pnyy gurz N naq O. Pubbfr n guveq crefba, qrfvtangrq gur "purpxre". N pubbfrf n pbqr jbeq sbe rnpu bs gur vagrtref sebz 1 gb a. Gurl erirny guvf pbqr gb O, ohg abg gb gur "purpxre". Abj N naq O erirny gurve rapbqrq ahzoref gb gur purpxre, jub choyvpyl qrpynerf jurgure N naq O unir tvira uvz gur fnzr pbqr jbeq be qvssrerag pbqr jbeqf. Vs gurl ner gur fnzr lbh erghea gb fgrc 1. Vs qvssrerag, lbh ercrng guvf fgrc sbe n erznvavat cnve gung unfa'g orra purpxrq lrg. Vs nyy cnvef ner purpxrq guvf jnl gb pbafvfg bs qvfgvapg ahzoref, lbh ner qbar.
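
To make the target of these protocols concrete: the quantum scheme above amounts to dealing each participant one entry of a uniformly random permutation of 1..n. Here is a minimal classical simulation of that outcome in Python (illustration only; it has none of the privacy properties, since whoever runs it sees every number):

import random

def deal_distinct_numbers(n):
    # A uniformly random permutation of 1..n; participant i receives entry i.
    numbers = list(range(1, n + 1))
    random.shuffle(numbers)
    return numbers

print(deal_distinct_numbers(5))   # e.g. [3, 1, 5, 2, 4]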


Gur cnegvpvcnagf fbeg gurzfryirf ol anzr nycunorgvpnyyl naq fgnaq va n pvepyr. Gura, fbzr ahzore bs ebhaqf sbyybj. Sbe rnpu ebhaq, rirelbar jub unf abg lrg nffvtarq gurzfryirf guvaxf bs n ahzore sebz v gb a (gubfr jub unir orra nffvtarq vafgrnq guvax bs gurve bja ahzore, juvpu boivbhfyl jvyy abg or v). Crefba 1 (nycunorgvpnyyl) pbzzhavpngrf fbzr neovgenel vagrtre gb crefba 2. Jura fbzrbar urnef n ahzore, vs gurl cvpxrq gur ahzore v gurl vaperzrag gur vagrtre ol 1, ryfr gurl yrnir vg nybat. Gura, ertneqyrff, gurl cnff guvf vagrtre gb gur arkg crefba. Bapr crefba 1 urnef n ahzore ntnva, gurl purpx gb rafher gung rvgure gurl cvpxrq ahzore v naq gur vagrtre jnf arire vaperzragrq be gung gurl qvq abg cvpx ahzore v naq gur vagrtre jnf vaperzragrq rknpgyl bapr. Vs guvf vf abg gur pnfr, crefba 1 naabhaprf gung rirelbar (abg lrg nffvtarq) fubhyq cvpx n arj ahzore naq erfgneg gur ebhaq. Vs vg vf, crefba 1 vaperzragf v naq naabhaprf gung n arj ebhaq ortvaf. Guvf pbagvahrf hagvy nyy ahzoref unir orra nffvtarq.

Gur xrl vafvtug urer vf gb ernyvmr gung fbzrubj "guvf ahzore vf gnxra" zhfg or pbzzhavpngrq jvgubhg nalbar rire yrneavat jub gbbx vg, bgure guna jurgure gung crefba vf gurzfryirf. Gur hfntr bs na neovgenel vagrtre engure guna 0 vf pehpvny urer fb gur bgure cnegvpvcnagf pnaabg qrgrezvar jurgure crefba 1 gbbx gur ahzore.


I was trying to come up with some variation of Diffie-Hellman but this is really good and easily understood.


I remember working on this problem back in 2012 :).

I think I just worked out a solution which is different from what I had back then.

Rot-13’d:

Unir bar qvfgvathvfurq fghqrag juvfcre gb rnpu bgure fghqrag n qvfgvapg enaqbz ahzore orgjrra 1 naq a. Gur bayl vffhr abj vf gur vasb gung svefg fghqrag unf.

Arkg pubbfr n frpbaq qvfgvathvfurq fghqrag, naq unir uvz funer bar enaqbz crezhgngvba jvgu nyy fghqragf orfvqrf gur svefg. Gurfr a-1 fghqragf pna rnpu gura nccyl gung crezhgngvba gb gurve bja ahzore. Gur bayl vffhr abj vf gung gur svefg fghqrag qbrfa’g unir n ahzore.

Gur svefg fghqrag pna vasre jung uvf ahzore fubhyq or vs ur xabjf gur fhz bs rirelbar ryfr’f ahzore. Gb trg uvz gung fhz jr pna unir rnpu bgure fghqrag trarengr nabgure enaqbz ahzore naq gryy uvz gurve ahzore cyhf gung enaqbz ahzore. Svanyyl, jr pna unir rnpu bgure fghqrag gryy gur frpbaq qvfgvathvfurq fghqrag jung gurve yngrfg enaqbz ahzore jnf, naq gur frpbaq fghqrag pna tvir gung fhz gb gur svefg fghqrag.

Pretty sure that works :).


Consider every possibility when there are three students. Student 1 can always determine which number each other student has by which number he is left with.

So student 1 says, without loss of generality, "2" to student 2 and "3" to student 3. Now student 2 says "1 3 2" or "1 2 3" to student 3... it can be shown every permutation is identical to one of these two if "execute the permutation" means to choose the next number. Student 1 is then left with 3 if and only if student 2 has 1 and 3 has 2. Otherwise 1 is left with 2, 2 with 3, and 3 with 1. Therefore the number student 1 has can predict the numbers the other two are left with.


I misunderstood "execute" I think; the above critique isn't accurate if execute means a[i]=i for all i.

That stated, there's still a cheap shot that can be made at this, and that's that (for this example of n=3) person 1 has received p2+rand2 and p3+rand3, and rand2 and rand3 presumably are chosen from similar distributions, or at least some prior exists on what the distribution of the random numbers will look like. Hence, whichever sum is smaller has a higher probability, however miniscule it may be, of belonging to the person who got the lower number.


All arithmetic can be performed mod n, blocking this attack.
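
A quick sanity check of that point in Python, with toy values: adding a uniformly random mask modulo n makes the masked value uniform no matter what the underlying number is, so the masked sums no longer leak which person holds the smaller number.

import random
from collections import Counter

n = 5
secret = 2   # any fixed value in 0..n-1; the output distribution doesn't depend on it
masked = [(secret + random.randrange(n)) % n for _ in range(100_000)]
print(Counter(masked))   # counts come out roughly equal across 0..n-1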


Yeah that sounds like it works.


Here’s a fun thought experiment - if you distributed buttons to every human which, when pressed, deposited $1,000,000 into their bank account at the cost of randomly killing a person in the world, how many people would die?

What would the death counts look like with payouts of $100k, $10k and $1k?


After the first few million people push the button, that $1M will be worth less. Maybe it self-limits once the dollar becomes worthless.


Once people start noticing the dead, it's going to become dangerous to let anyone know that you're suddenly wealthy.


There's too many psychopaths, sadists, deep ecologists, anti-natalists etc. for this to even be an interesting thought experiment. Deranged people would just keep pressing the button until they off themselves. Maybe there would be about ten thousand people left in the end; that's small enough to not have anyone too crazy. It doesn't matter if you get $1,000,000 or $1k or $0, the result would be the same. And money would of course be worthless anyway after the total collapse of civilization that follows such a massive extinction event. Humanity would probably revert back to a hunter-gatherer stage.

It could be interesting to speculate how unpleasant you would have to make the button to avoid this. A button that gives a small electric shock (and no money) would probably give the same result. A button that causes excruciating pain for an hour may give the normies enough time to stop the weirdoes killing everyone. At least you would know if someone pressed the button.

There wouldn't be laws or social norms around button pressing (as some commentators seem to believe): society would collapse within the first day or so as half of humanity dies. There's not enough reaction time.


Surely this thought experiment works better if the button only works once... and kills the second person to push it.

Comment deleted

> I would do it because it's effective revenge against all the hostiles in the population.

Holy shit. Hostiles? Like, killers, like yourself?


High enough that the question of whether it kills without or with replacement matters.


Extracting out the people too young or feeble to push buttons, like babies and small children and people on their deathbeds, I'd say practically everybody.

There are some people who wouldn't push a button to kill a random stranger in order to get a million dollars into their bank account, but you have people (1) very desperate and in need who would make this decision (2) criminals, whom we already know don't mind killing even people they do know (3) very cold-blooded people who think 'it's probably some one of the faceless billions in the Third World who are better off dead than with the horrible life they already lead' (4) the rest of us, who probably don't need too much pushing to be convinced that the benefits of one million smackeroos to us far outweigh somebody the other side of the world we don't know or care about.

I'm torn about whether or not to include the senselessly rich here, the kind of people who make more money simply by waking up in the morning and strolling through their day doing nothing in particular, because their wealth grows so fast, than any of us can even dream of making in our entire lives. On the one hand, if you've got billions, a mere million is nothing at all. On the other hand, very seriously rich people didn't get that way by turning down any opportunity at all to make money, and even a measly million is money.

I can't give an estimate on how much of the global population would survive the day after everyone got their hands on "make yourself very rich now" buttons, but I don't think it would be even half the current population.


See this thought experiment in narrative form here: https://www.youtube.com/watch?v=TBEC2A1uwt4


Your optimal strategy is seemingly to not worry about the money at that point, since it will quickly become worthless anyway. Knowing that most people on the planet are going to die now, you need to instead try and get there as quickly as possible, so spam the hell out of the button until every city is down to a few hundred people, but before we get widespread wars that destroy all the material wealth. Now whoever survived can live lives of fabulous opulence since the property around them is still intact. At least until they realize they accidentally killed all the farmers and they starve to death.


Perhaps more in the spirit of the thought experiment, though not so apocalyptic: what if only a million people got a button, and it only worked once?


Does it make wealth or money? If there's nothing new of value backing up that million, all you've done is deflate the dollar a bit


That's hardly all you've done. There's all the dead people, of course. Also, the new dollars aren't distributed the same way the old ones were, so wealth will get moved around.


How fast do we start seeing inflation?


my own personal intuition is that at the 1 million dollar mark, the entire human race is buttoned out of existence.


Does a button keep working after its owner's death? Does it need to be pushed by a conscious human? Can I just hook it up to a clicker machine?

https://2.bp.blogspot.com/_3pzXydA_9sg/SlhHgKN_68I/AAAAAAAAAFg/HmzjvmGLbnA/s320/Drinking-Bird-Simpsons-01.jpg


Is an individual allowed to press it more than once? "Yes" is more interesting.


Yes, the button is spammable and comes with instructions detailing what happens if it is pressed.


I make a bet with you that everyone dies, then I keep on pressing the button, not caring about the money any more.

Nothing is more important than proving I am right.


Doesn't everybody just die? The first press is the hardest. If you allow more presses to the people who have already valued life below that money, and have already made themselves murderers - why should they stop?


Largely depends on details that influence the social perception of the act of button pressing.

Taking the scenario literally, it'd seem so absurd to people that lots would press the button out of sheer curiosity. Still more would press it because it is so easily *defensible* as innocent curiosity - plausible deniability.

In this interpretation, 1-2 billion people at least, before coordination has a chance to set in.


FTR I was assuming max 1 button push per person


I think in general the thought experiment and similar ones sort of devolve into absurdity, as the practical consequences will diverge from your intended idea of “how much do people value random lives”. Like, at an extreme, you just violated conservation of mass, so now we have free energy? What happens when 1 billion * 1 trillion of “real cash” pops into the economy? They’re sort of silly.

And I just realized you can press the button more than once - and like what happens when an Indian or a Nigerian gets the button? If one in 100 people press the button 100 times just for fun (wow I found a button on the ground!) it’s plausible even more than 1/8 would die


Well, if that much money popped into the economy and caused hyperinflation, you'd have to keep pressing the button to maintain your standard of living right? Would you keep spamming it to keep your kids from starving? Seems like a vicious cycle.


People would quickly realize they didn’t like slaughtering everyone else. In general the thought experiment isn’t well founded lol


"People would quickly realize they didn’t like slaughtering everyone else."

Only if/when people around them started dying, or family members living in another country. I think people won't think of it as 'real' until people near them start dropping dead, which means *they* could be the next one to drop dead. *Then* pushing the button seems a lot less attractive than when it's "there are seven billion people on the planet, who will know or care if one of them dies because I push a button that makes me rich?"


No most people genuinely don’t like the idea of killing people.

Or, to put it another way - if you’re a friend of someone and you ask them to help you with an assassination scheme - even if it’s unlikely they’ll get caught - most will say no.


I'm with you. My son, when he was about 13, posed this one to me. He was very ambitious to “make out” and he presented it to me as a no-brainer.

It took me a few days and then I suddenly said to him,

“OK. What if you’re on the wrong end of the button? There’s maybe a starving kid somewhere that would erase you for a can of beans.”

It seemed to make an impression.


I also think it takes GiveWell a few grand to save a life, so lol EA would have a new dilemma


This makes apparent the moral distinction between letting die and killing. Utilitarians should hit the button and donate to save lives. The correct utilitarian response might be to hit the button until the price of saving a life rises to $1,000,000.


At least if the utilitarian ignores the consequences of precedent, etc. Popularizing non-utilitarian decisionmaking just may be the utilitarian choice


If utilitarianism is a form of consequentialism, the utilitarian cannot ignore the consequences (by definition). They might fail to think through the consequences, but missing obvious consequences is amazingly dumb when the question is "do I commit murder".


Hmm, average-utilitarians might not mind pushing the button so much, but having such a button surely makes one question one's objective function.


If this button becomes a common new cause of death, then it will be one that costs quite a lot to prevent, and will interfere with most of the current methods of cheaply saving lives.


Why will it interfere with current methods of saving lives?


Because the buttons would destroy the world economy.


It used to be that pulling the switch to move the trolley from the track with five to the track with one would reliably save four lives. But if people are being killed left and right by the button, then one of those people on the track with five might still be killed by the button, so you've only saved three lives.

If the ambient annual mortality rate becomes 20%, then anything that used to reliably save X lives per year now only saves .8 X lives per year.


lol this is the objectively correct response


Depends on whether everyone knows that everyone else has a button or not.


Also, you can probably put a lower bound factoring in people who don’t have bank accounts, unless you postulate the button will instantly grant them 1,000,000 USD equivalent in culturally specific fungible wealth


Would the response be different depending on whether people are familiar with classic sf tropes? Deals with the devil?


What is the rationalist take on the Hegelian argument that the human condition is constituted by a struggle for recognition?


I don't think there's a "the" rationalist take: each will have his own take, except those like me who haven't read Hegel and don't have a take. (Rationalists don't tend to think of ideas as "better" by virtue of being older or traditional; I, for instance, think some modern philosophers - including Scott Alexander & Yudkowsky, notwithstanding their job titles - tend to think and communicate more clearly than 200-year-old philosophers, so I expect reading new stuff to be, on average, a better use of time. The trouble with reading modern stuff is that there's not yet a consensus on which works are "greats" or "classics", so it's harder to figure out what I ought to read.)


My 68 year old cousin died of COVID last week, a week after her roommate died of COVID. They both were apparently fully vaccinated. They both recently went back to their lives: going to the gym, work and family gatherings. They both had pretty major health issues before all this. Her roommate had COPD and was undergoing long-term antiretroviral treatment, and she was undergoing immunotherapy for Hep C. We think. The details are kinda fuzzy; lots of unnecessary secret-keeping (the 28 first cousins on that side of my family were pretty close growing up), and we’re all more or less fear-based neurotics : )

Here are some recent testable prediction numbers:

1. Did they get one shot - failed to get the second shot - got COVID and died from complications because they were immunocompromised?

The family myth creators: 8%. Me: 1%.

2. Were they both vaccinated, and got COVID and died from complications because they were immunocompromised - a failure of an unnecessary, unknown, and unreliable vaccine?

The family myth creators: 94%. Me: 1%.

3. Were they afraid of the vaccine (due to their various health concerns) and lied about getting vaccinated?

The family: 6%. Me: 98%.

I am not making light of their deaths; my cousin was one of the cooler cousins, and I certainly share mental health issues with her, and am therefore highly empathetic.

If you believe that they were both fully vaccinated, I’m sincerely curious if there could be another explanation?


did they get vaccinated together? probably a long shot, but perhaps they both received vaccines from the same batch which had been inactivated for whatever reason (not refrigerated properly for example)


What are "family myth creators"?

I've heard about grocery stores in red-tribe areas offering private rooms where you can get secretly vaccinated... presumably there are also reds in blue-tribe areas lying about getting vaccinated.


For a 68 year old who was receiving immunotherapy for hep C to die of a breakthrough case of covid doesn't sound that implausible to me. The fact that she had "gone back" to gym, work, and family gatherings after having stopped seems like evidence that she really did get the vaccine.

However, the fact that it happened to both does seem like a piece of evidence a bit more consistent with a joint lie about the vaccine.

Not knowing the people involved, I would be somewhere in the fuzzy middle, somewhere between 10-90 and 90-10.


Being immunocompromised shoots up your rate of death for both vaccinated and unvaccinated for any disease. Although it depends on how immunocompromised and all the other stuff. However ... does antiviral treatment or “hep c immunotherapy” indicate being immunocompromised? Not an expert there. It looks like COPD may significantly increase Covid related mortality.


A doctor friend told me she had a diabetic patient she told to not have soda. The patient showed up to an appointment holding an open can of soda, but insisted that he was just holding it for someone else. This doctor friend told me this story when I asked her how good patients are at following directions to cut back on sugar. I would raise the probability that your cousin didn't get vaccinated.


My mother had this conversation with her doctor some years ago:

Doc: You shouldn't drink caffeine in the morning.

Mom: Sorry, I'm going to keep doing it.

Doc: Well...that's better than most patients, who just lie to me.


My wife does that, too.

Doctor: "Don't you care about your health?"

Wife: "Yes, that's why I'm telling you I'm not going to follow your instructions."


I've heard from various sources that fully vaccinated people are roughly ten times less likely to die from Covid than non vaccinated people. I don't know whether that sounds like big difference to you, but it is vastly less than the difference between two unvaccinated people - one healthy 20 year old, and one 70 year old with 'major health issues'.

Does there need to be another explanation? - a disease which was killing unvaccinated people at a rate of about 10 million a year becomes one which kills vaccinated ones at something like 1 million a year. In both cases predominantly people who are older and in poorer health.

It's a shitty disease causing much misery.


I guess a better question would be:

What are the chances that both of them - considering their immunocompromised status (he was 70), and that they were apparently both fully vaccinated - die weeks after getting COVID, from COVID-related complications?


If they were both in poor health and older, then the chances are a lot higher than you'd think. Death rates from COVID have a lot more to do with age and health than with random chance among those who catch it. If they got COVID from the same source or one of them gave it to the other, the chances that they would both die are actually very high (presuming that their medical conditions are relevant to the chance of COVID death).
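
To put the same point in probability terms (purely schematic, no particular numbers implied):

P(both die) = P(first dies) * P(second dies | first dies)

and that conditional factor sits well above the unconditional P(second dies) when the two people share the main risk drivers (age, serious pre-existing conditions, the same exposure), so the joint outcome is far less of a coincidence than multiplying two small independent probabilities would suggest.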


A friend who is immunocompromised and taking part in the vaccine trials had zero immune response after two AZ doses. He will now be given a third dose, likely of another vaccine. This is in the UK and he is on immunosuppressing medications for a long-term condition - recognising there is probably a scale to what people mean by immunocompromised, and different vaccines have different effects. But this is a real situation for many people.


A friend in that situation stopped immunosuppressant meds for a few weeks before and after getting her vaccination but she is relatively fortunate that her disease (rheumatoid arthritis) is one where you can safely stop the meds for a stretch. For other conditions they’re basically just not going to be protected by vaccines and that’s a big part of the original argument for everyone else getting vaccinated — herd immunity.


The way I'd think about the odds is to simply ask what are the chances of a single person in their position dying after contracting (and having been vaccinated against) Covid.

It's worth adding that vaccine efficacy is reduced considerably for older people receiving the treatments you described so the chance of the 'coincidence' is closer to the chance of one elderly unvaccinated person succumbing to the virus i.e. not very surprising at all.


I really think that there's something to the cliché idea that there has been a decline of intellectuality in culture and the arts (weighted by popularity) over the past century or so. That is, it seems to me our culture and art is less and less produced and defined by intellectuals and mirrored in their (our) aesthetics, at least if you appropriately weight each cultural and artistic production by its impact and popularity.

My guess is that in the past the intellectual elite had much stronger ties to the social and economic elite (and these really are very distinct elites!), perhaps because those latter elites used the intellectual aesthetics as a way to distance themselves from the common lowly people. With the rise of the bourgeoisie and the optimization of capitalism, it was realized this was a nonsensical move -- you can sell a lot more culture and arts if you dumb it down. Slowly but surely the intellectual aesthetic was torn down in favor of a 'democratized' common man's aesthetic, which is obviously less intellectual given that most people are not intellectuals.

I don't want to get into the value judgement of whether this is a good or bad thing, but I'd like to hear whether people agree this decline of intellectuality took place and whether what I've written is a plausible mechanism.


I don't think we've gotten to a definition or even a sketchy idea of what intellectuality in art means.

There's one way narrative art has gotten more intellectual-- there's much more elaborate world-building than there used to be. Tolkien made a huge difference.

So did recorded tv. Instead of having a bunch of stories with a reset button so that little or nothing changed from one story to another, there were long narrative arcs.

I'm not sure why extended stories became popular in comic strips. Possibly the influence of other art. I don't think there was a technological shift.


I was under the impression that it was the introduction of bound paperbacks with high quality color printing that drove the shift to long form stories in comics. It's not really technological so much as business related.


Judging from the anecdotes around me, I'd guess the mechanisms described here are real, but you may be overestimating the intellectuality of past artistic endeavours, at least as you weigh them by popularity. When you read about artists from the relatively distant past, they're always in this intellectual elite, producing art for other elites (with notable exceptions like Shakespeare). But more likely than not the majority of the population would be listening to popular folk artists and poets and generally dedicating themselves to far less intellectual pursuits than common folk these days. And back in the present there are still very high class levels of art that most people are ignorant about but are likely to fill many museums in the future. So I'm inclined to side with the opposite view overall, even if the mechanism you describe may be making the most popular genres simpler and less intellectual in more recent decades, potentially showing a very recent change in trend.


There's a lot more analysis of art, some of it good, than there used to be. This strikes me as the culture getting more intellectual.


Popular culture is the new high culture, internet culture is the new popular culture


I do not agree that there has been a decline of intellectuality in culture or arts. I believe that the arts and culture I consume are vastly more intellectual than anything which could even have conceivably been created 100 years ago.

I'll give you an example. I'm currently rewatching Fringe, which is a show that features, among other things, a mirror universe. The concept of simultaneous "parallel" universes goes back to about 1952, and first appeared in fiction in 1963. The most impactful mirror universe in scifi is probably the Trek mirror universe (1967). But really you can see them all over the place at this point. Sliders went through parallel universes in the 90's and was quite popular. Fringe made it to five seasons, which was four more than Firefly.

Or take time travel. Sure, the concept is older than 100 years, but it's only relatively recently that you could make a cultural product for a general audience and expect them to not only understand time travel, but understand related problems (like the grandfather paradox, time loops, etc.) and have multiple points of reference for different "rules" of time travel (including, again, Star Trek, but also Back to the Future, maybe Looper, maybe 12 Monkeys, etc...). Heck, you can even make an incredibly lowbrow movie like Hot Tub Time Machine or TV series like Future Man and the target audience gets it - no need to dumb down the concepts at all.

It's not just science fiction. An entire genre of intellectual culture was born from Lord of the Rings, which was about as intellectual as you could get - the guy designed his own language for worldbuilding. Based on that, you have not only derivative works (like the Peter Jackson films) but also fantasy roleplaying games like D&D, which created a new pastime that is essentially an intellectually more formal and rigorous method of group storytelling.

And people love to rag on the MCU films but look - they're incorporating mythology, and science fiction (parallel worlds, time travel) and fantasy, all of which are so intellectual that only 25 years ago one could be bullied and branded a "nerd" just for associating with any one of them. Like I was the loneliest kid in the world reading my Norse mythology and no one would talk to me about Thor and Odin, but now Thor has his own series of movies and Odin is a leading character in an American Gods TV show. And before you tell me they're dumbed down, like, read the original myths. They're full of drinking contests and stupid pranks and all kinds of nonsense. Mythological Thor never once said anything nearly as clever as MCU Thor's "all words are made up" line, and like, the fact is that Marvel just casually threw a joke about *the origin of human language* into a movie that Very Serious People like to claim is for kids. So be it, but that means the average dumb kid these days is about twelve times as intellectual as Scorsese or any of his peers.

See also: Arrival. And Westworld. Westworld! If you'd told me 10 years ago that the work of Julian Jaynes would make it into a prestige drama on television I would have said you were crazy.

On to music, and I'll be brief. Musical theory has advanced. Musical technology has advanced. Now I like folk music more than most, but modern music is so much more complex it's mind-boggling. Music is produced by whole teams of people who do nothing but study music, and how to make music, and trends in music, and musical technology - it's all a deeply intellectual pursuit. I recently watched a video about the tresillo and it revealed structural similarities between a bunch of songs I had no idea were related, apparently due to this mid-2010's trend in pop beats. But look - a genre where a bunch of artists say "okay, everyone's doing tresillo songs now, I'll try one" and then you get a bunch of tresillo songs and then they all move on to something else is not a dumbed down genre. People don't realize that pop artists are experimental, and don't give producers enough credit (or look at the existence of pop producers as some kind of condemnation of pop, as though "real" music is produced by lone geniuses). In reality they're studying and experimenting with an art form, in constant musical dialogue with one another.

It's pretty clear to me that nerds - people who take pride in interests which engage their intellect - are absolutely dominating pop culture in movies and music, in a way that is steadily raising the intellectual level of pop culture over time and has been for decades.


Swing music was the popular music of its time, and it was more harmonically complex than most modern pop music. It was rhythmically more complex in that swing is not a metronomic division of the beat. Bands employed composers, arrangers, lyricists, and of course teams of skilled musicians to perform the music. Musicians improvised and many bands had improvisers who had developed their own sounds and styles. If you listen to the top streamed songs of today, you get something less complex.


To be fair you have to have a very high IQ to understand Rick and Morty, but...

> It's pretty clear to me that nerds - people who take pride in interests which engage their intellect - are absolutely dominating pop culture in movies and music

It seems to me that "nerdiness" is in some sense the antithesis of intellectualism.


I think you're playing on the difference between "nerd" and "dork." Nerds are the intellectual side, whose interests (at least in the past) were considered non-cool. Dorks used to be in the same space as Nerds and the same person could be both, but they are distinct. Dorks are decidedly uncool no matter what topics they are interested in.


In a sense that "nerds" are perceived to be low status while "intellectuals" are perceived to be high status?


To be less snarky, go read some good literature - to name some popular fancy ones, Moby Dick, Dostoyevsky, or, for actual mythology, the Iliad; or maybe go for Shakespeare. It’s significantly more ‘intellectual’ and much, much more illuminating and interesting than Marvel...


You think implying that I am illiterate is "less snarky" than your previous comment? That's odd. People are behaving oddly in this thread.

I think I've adequately presented my understanding of why I believe that modern media properties are being made by and for intellectuals. You, on the other hand, have not made any argument whatsoever - just repeated a tedious prejudice against modern media, and, what's worse, given me a list of "good literature" that reads like a caricature of what not terribly smart people think intellectuals would like. You appear to have a shallow conception of intellect.

Level with me, bored-anon. We both read Scott Alexander for pleasure. You're telling me that if, given no other context, I had asked you for recommendations of some interesting "intellectual" writing, knowing that the writing we like in common is ACX - you would have told me to go read the Iliad? Really?

I've read the Iliad. I've read Moby Dick. I've not only read, but directed, performed, and taught Shakespeare. Intellectualizing Shakespeare is a rookie mistake made by people who name-check old dead white dudes as a *substitute* for critically engaging with them. I'll just quote Chaostician's excellent comment from below: "Many of his plays were written in less than two weeks and premiered in a theater where poor people would literally throw rotten fruit at the actors if they didn't like it. Shakespeare's target audience was (usually) the masses. Shakespeare only became high-brow after a few hundred years."

Okay, you don't think MCU is "illuminating and interesting" - whatever. That wasn't the question. The question is who is making it and who is consuming it and who is adopting it as an aesthetic.

Expand full comment

"Intellectualizing Shakespeare is a rookie mistake made by people who name-check old dead white dudes as a *substitute* for critically engaging with them."

To a significant degree, the same is true for the Iliad, the Odyssey, and probably the rest of the Epic Cycle. The poems were basically superhero tales replete with their own tapestry of instantly-recognised easy-gratification clichés. They were recited for entertainment, not illumination or intellectual stimulation.

Expand full comment

Doesn't Shakespeare include some French & Latin which the common people of his era would not understand?

Expand full comment

Not sure whether you meant to ask this of Neil Zupancic (who is likely much better on Shakespeare) instead of me, but there's a fair bit of Latin in Love's Labour's Lost (which was written specifically to be presented before Queen Elizabeth at a party held by the Inns of Court, an association of lawyers) and even there it's played for humour - pretentious people (Holofernes, Don Adriano) using bad Latin to sound fancy and learned, just as in our day.

The French in Henry V Act V is also played for humorous effect, as Henry tries to woo a French princess, badly, while the princess herself tries to learn English and ends up saying some bawdy stuff. ("Ainsi dis-je; de elbow, de nick, et de sin. Comment appelez-vous le pied et la robe? / De foot, madame; et de coun./ De foot et de coun! O Seigneur Dieu!").

So... you tell me whether all this is 'intellectual' and who the outgroup is here.

Expand full comment

Children are consuming it.

Expand full comment

I'm not sure a pop show that includes the concept of time travel is any better than a show about wizards and dragons, or especially compared to a story exploring and fleshing out interesting and complex parts of life and people's struggles without flashy science stuff.

> Mythological Thor never once said anything nearly as clever as MCU Thor's "all words are made up" line, and like, the fact is that Marvel just casually threw a joke about *the origin of human language* into a movie that Very Serious People like to claim is for kids.

zany_face woozy_face :think_sphere:

Past fiction absolutely had serious themes. The surface level jokes and surface level reappropriating of scientific sounding ideas into modern media doesn’t make it interesting or complex or intellectual. Pop media is nerdy, but not super intelligent.

Expand full comment

Pretending(?) to be amazed by that line by calling it a "joke about *the origin of human language*", complete with awestruck italics, is like something I'd write as a parody of the Very Intellectual pop-culture journalist. "But consider the Themes! It subverted your expectations!"

It's like breathlessly describing a fart joke as "a joke invoking *the complexity of human life and biology!*"

Expand full comment

There are measurements showing modern music is less complex:

https://slate.com/culture/2012/07/pop-music-is-getting-louder-and-dumber-says-one-study-heres-what-they-miss.html

That article says the study is missing rhythm, but rhythm has also become more homogenous:

https://youtu.be/yDUSFpek0PM?t=238

And this homogeneity shouldn't be surprising because of super-producers writing such a large share of the hit songs:

https://pudding.cool/2018/05/similarity/

Regarding Marvel's Thor, back in the 80s a young character in "Adventures in Babysitting" was really into him. The older characters deride her for that. So pop culture has essentially gotten more childish :)

Expand full comment

"Regarding Marvel's Thor, back in the 80s a young character in "Adventures in Babysitting" was really into him. The older characters deride her for that. So pop culture has essentially gotten more childish"

I remember that. My impression was that they derided her for liking Thor because he was kind of an obscure Marvel hero, but then I didn't read comics so I have no idea if that is true.

But it's not that pop culture has gotten more childish. It's that kids who were her age in the 80s grew up to become producers and consumers of culture.

"There are measurements showing modern music is less complex"

So many issues with that study and its interpretation I don't know where to start. Selection bias from using a user-generated database of music? Selecting indicators of musical complexity? Not hard to think of what a decent study would look like - compare only songs from a given Top 100 (e.g. the Billboard Hot 100) list over time across all widely-recognized indicators of complexity.

"And this homogeneity shouldn't be surprising because of super-producers writing such a large share of the hit songs:"

Right, and my point is, the super-producers are musical intellectuals, and it's weird that we don't seem to recognize that. As I said: "People... don't give producers enough credit (or look at the existence of pop producers as some kind of condemnation of pop, as though "real" music is produced by lone geniuses)". A person who has studied music extensively - probably multiple instruments and technologies - and mastered a formula to make music that lots of people will love, and then applied that formula successfully across a broad spectrum of artists - in what sense is that person not an intellectual?

Expand full comment

The thesis is that the intellectual elite used to have more influence over the arts. If you look at who is influential today in shaping popular music, it is certainly not intellectuals; when you watch some of these guys talk, they do not come off as intellectual at all - Max Martin or Dr. Dre, for example. I don't know if Stalin would be considered an intellectual, but he heavily influenced the music of Shostakovich, and that music has had a huge audience over the years.

Expand full comment

The youtube video I linked is of Rick Beato, who is himself a producer. He talks about how as a producer he can appreciate the work that goes into assembling all the tracks that make up a modern pop song... but as a musician he notes they are still far less musically complex than the popular music of decades ago. Here is another video from him on the most musically complex hit pop song (from 1983):

https://www.youtube.com/watch?v=ZnRxTW8GxT8

George Lucas grew up on serials like Buck Rogers. But he didn't then go on to just make a Buck Rogers movie, he created a new fictional universe that Disney is still minting money from now. And he's one of the more child-oriented IP-mining directors of his peer group. If you look at the films that topped the US box office in the last decade, the only one that wasn't a sequel was American Sniper. In the 1980s, the only sequels to be the top film of their years were Empire Strikes Back and Return of the Jedi, while in the 90s the only ones were Terminator 2 & Phantom Menace. In 1979 the top film was Kramer vs. Kramer!

Expand full comment

Time travel as a concept is more than 100 years old - see the publication of H. G. Wells's The Time Machine. Arguably A Christmas Carol involved time travel and a parallel universe - since there were two Christmas days. Talking of Dickens, he was popular with the masses, many of whom left before completing secondary school, and yet Dickens is considered difficult today.

I get the impression here that you don't understand the complexity of what was written and produced in the 20th century and before.

Expand full comment

"Time travel as a concept is at least more than 100 years old"

I literally said that in my comment.

"I get the impression here that you don’t understand the complexity of what was written and produced in the 20 C and before."

The question was not about complexity. It was about whether art and culture are being produced by and for intellectuals less now than in the past, and the relationship between art and the aesthetics of the intellectuals.

It's interesting to me that you seem to equate intellectualism to "complexity" and are unable to engage with, or even acknowledge, someone else's conception of intellectualism. Why do you think that is?

Expand full comment

You were the one talking about complexity. And then you went on to decide that "it was only recently" that something like time travel got complex. This is a weak argument for complexity - the grandfather paradox is just not that difficult. And it first appears in literature in 1929.

You are discussing this from a very limited cultural palette anyway; even your nerd knowledge is recent. There are plenty of other books to read, and the standard 19th-century canon is a good one.

The phrase “all words are made up” is fairly banal by the way.

Expand full comment

Survivorship bias is a big part of creating this impression. So is a general "Get off my lawn you damn kids" effect, where every generation ever looked at differing tastes among the succeeding generation as signs of moral and aesthetic decay: cf. the old quote "Times are bad. Children no longer obey their parents, and everyone is writing a book." This has been apocryphally attributed to Roman, Babylonian, and Sumerian writers, but the actual earliest known source of the quote is over a century old (1908).

Another dynamic feeding this is that increasing societal wealth, near-universal literacy, and vastly improved communications technology have allowed more and more arts and entertainment to escape a niche as an expensive highbrow luxury product. This has produced mass-marketed entertainment (typically extremely well produced, but often bland and formulaic in content in order to ensure broad appeal), long tails of niche products that appeal strongly to a relatively small number of devotees but are often actively disliked by lots of people with differing tastes from the target audience, and "tastes as attire" products where professing to like or dislike a piece or genre serves as a signal for group affinity.

Expand full comment

I think you have articulated a plausible explanation for the dumbing down of popular arts. In music, it used to be wealthy and powerful people (including the Church) who patronized composers and funded concerts. The composers were able to influence their patrons and vice versa. With the rise of the consumer, recording technology, and mass media, musicians and composers quickly figured out how to create instantly appealing sugary music that sells. Forty years ago making a record was expensive, so music producers had a big say in what got recorded and distributed, therefore producers still influenced what got heard by the public. Now pop artists are discovered on youtube after they get a lot of views, and this is a form of crowdsourcing. So, yeah, in music the elite have almost lost all influence. Popular music has been democratized, and it is simpler structurally, harmonically, and in dynamics and nuance than before.

Expand full comment

I disagree here, as there was a lot of "instantly appealing sugary music that sells" in the past too, and the same goes for art, etc.

What we see are the works that survived because of quality. All the J.D. Somebodies who were massively successful *in their day* and whose work fell off a cliff into obscurity after they died aren't the ones being played or read or viewed today.

Give it a hundred years, and the 'classics' of today's pop music and art will be a selected few works, while the majority of the best-sellers will be 'who?' as far as the connoisseurs of the future are concerned (will Damien Hirst be any more than a footnote in "selling crap to insanely rich people with no taste" histories of art?)

My own personal hobbyhorse: I *hate* John Rutter's music. It's exactly that "sugary instantly appealing" you're talking about. I don't think it's going to survive into the future, while music of a different quality will (I'm not going to forecast what composers of today will survive).

And 'the elite' works and artists that today we view as self-evidently excellent underwent long periods of obscurity or disfavour themselves; Shakespeare pretty much foundered in lack of appeal to the taste of the day during the 17th and 18th centuries, and it was only when certain actors started reviving the plays during the 18th century, and when the Romantic movement got into full swing, that his reputation was the glowing one we know today. The same for many other musicians and artists.

We're the beneficiaries of time winnowing out the wheat from the chaff, and the changes in popular taste mean that today's wheat is tomorrow's chaff (and vice versa).

Expand full comment

The top song of 1969 was literally called "Sugar, Sugar". Nostalgia is incredibly selective.

Expand full comment

I think technologies for mass production/copying of information change things relative to the past. Centuries ago it wasn't possible for anyone to have the reach of the biggest producers/songwriters of today:

"[...] more songs have been produced by fewer and fewer topline songwriters, who oversee the combinations of all the separately created sounds. Take a less personal production process and execute that process by a shrinking number of people and everything starts to sound more or less the same."

https://pudding.cool/2018/05/similarity/

Expand full comment

Why push the boundaries with popular music today (and see the boundaries and gatekeepers pushing you back) when you can write whatever you want, put it online and see a niche scene grow up around it?

The top 40 will get more generic as everything else gets more interesting.

Expand full comment

I asked the following related question in r/AskHistorians a few months ago and didn't get an answer:

Near the beginning of The Great Divorce by C. S. Lewis (1945), there is the following passage:

"However far I went I found only dingy lodging houses, small tobacconists, hoardings from which posters hung in rags, windowless warehouses, goods stations without trains, and bookshops of the sort that sell The Works of Aristotle."

Lewis is clearly trying to make you think of a lower class neighborhood, one you might not want to find yourself walking through at twilight. One of the distinguishing features of this sort of neighborhood is ... Aristotle?

Today, Aristotle does not seem like lower class literature. Even if this is referring to a cheap, sparknotes-like version of Aristotle, that still doesn't seem like lower class literature. Today, we might expect celebrity magazines and generic action or romance novels, not Greek philosophy.

Was Aristotle characteristically lower class literature in the early-mid 1900s?

Expand full comment

This might be the answer to your Aristotle question:

https://en.wikipedia.org/wiki/Aristotle%27s_Masterpiece

"Aristotle's Masterpiece, also known as The Works of Aristotle, the Famous Philosopher, is a sex manual and a midwifery book that was popular in England from the early modern period through to the nineteenth century."

Expand full comment

That would explain it.

Expand full comment

I think he means the sort of shabby, second-hand bookshops that sell old volumes of collected works that are picked up from estate sales or handed in when Grandfather Smith died and his family are clearing out the house. It's not a swipe at Aristotle, it's the kind of mass-market Victorian volumes that collected dust as they mouldered unread on neglected bookshelves and were then destined for the dump or the second-hand bookshop.

Expand full comment

I wonder if it also has something to do with those works being in the public domain.

Expand full comment

I think the emphasis in that passage isn't on "Aristotle" so much as on "The Works of". It's looking down on bundling things, a kind of vulgar idea that you'll be getting all of Aristotle in one convenient book. What a savings! It's certainly a bit snobbish to look down on that, but I imagine Lewis would be thinking that anyone who really cared about Aristotle wouldn't buy "The Works of Aristotle", while someone who just wants to look cultured would buy it, put it on their shelf, and forget about it. If you go to a thrift store like Goodwill and go to the book section you'll find lots of old book "collections" of this type from Lewis's era; apparently it was a significant gimmick of the time to bundle "intellectual" works for the common man's home library. "The Complete Works of Shakespeare", in one convenient giant brick of a book with tiny type. That sort of thing.

Expand full comment

I own the Complete Works of Aristotle (in two volumes), but I also have a complete set of the Harvard Classics, which I guess is even more prole. Fortunately, I've never had a guest who was sufficiently classy to take me to task for it.

Expand full comment
Comment deleted
Expand full comment

Indeed I have. I'm not classy enough to know whether that makes it better or worse.

Expand full comment

"you can sell a lot more culture and arts if you dumb it down"

This is only true if the poor have significant disposable income. I suspect that this is not the result of capitalist values - it is the result of capitalist wealth creation. (Assuming that this decline actually happened.)

Expand full comment

One of my points is not to conflate the intellectual and the economic elites. It's not like every rich person consumes only high brow art. Much to the contrary I believe that's getting to be less and less common. I believe though that in the past these disparate elites had closer ties: we imagine old-timey aristocrats smoking cigars to the tune of a string quartet or something, but we really don't imagine the current mega rich entertaining themselves like that (I'm not rich enough to know how the mega rich actually entertain themselves, I'm just guessing here). I can only vaguely speculate why the intellectual and economic elite parted ways though.

Expand full comment

I don't know whether it's a change, but the rich pretty much go to the same movies as everyone else, and it's not like the rich have special amusement parks.

Expand full comment

Someone has mentioned survivorship bias: only the best art is remembered. There is another dynamic worth considering. Art which began as low-brow, if it survives, becomes high-brow, simply because it was old.

An example: Shakespeare. Many of his plays were written in less than two weeks and premiered in a theater where poor people would literally throw rotten fruit at the actors if they didn't like it. Shakespeare's target audience was (usually) the masses. Shakespeare only became high-brow after a few hundred years.

Expand full comment

Right. Dickens wrote potboilers in weekly installments; Liszt was a rock star of the keyboard ("Lisztomania", and all that).

Expand full comment

But fans of modern potboilers find Dickens to be slow-paced, dull and impenetrably hard to read compared to Jack Reacher. Fans of modern pop music find Liszt to be dull, twee and tinkly compared to Wet Ass Pussy.

This seems to support the idea that society's standards are slipping lower and lower. Modern entertainment is a massive super-stimulus compared to anything that has come before. Strip out all the complexity to make more room for hyper-palatable fructose-enhanced goo.

Expand full comment

I don't see how to distinguish (from this information alone) between writers getting better (at writing things that readers enjoy) and readers getting worse (at appreciating things that have more potential enjoyment in, for sufficiently discerning readers).

If Dickens was trying to write stuff that everyone could enjoy with zero effort, and he just didn't know how to do it as well as Lee Child does, so much the worse for Dickens.

(But if he was trying to write stuff that everyone could enjoy _and that also repays closer attention_, and the price of that was that sufficiently degraded readers would be unable to appreciate it, then so much the better for Dickens and so much the worse for modern readers. I take it that's your position; it might be right; but I don't know whether it is.)

Expand full comment

I didn't know that about Shakespeare, but I'm not sure it disproves my point. Shakespeare definitely was an intellectual, so there you have an example of an intellectual producing art consumed by people, which is what I'm arguing was much more prevalent in the past.

Expand full comment

Was he an intellectual? From Wikipedia: "Shakespeare was born and raised in Stratford-upon-Avon, Warwickshire. At the age of 18, he married Anne Hathaway, with whom he had three children: Susanna and twins Hamnet and Judith. Sometime between 1585 and 1592, he began a successful career in London as an actor, writer, and part-owner of a playing company called the Lord Chamberlain's Men, later known as the King's Men. At age 49 (around 1613), he appears to have retired to Stratford, where he died three years later. "

Nothing he did is what I consider typical of an intellectual. As far as I know, he didn't write philosophical or theological treatises, work on geometry, or do scientific research. He was an actor turned writer and business owner.

Expand full comment

I guess this depends on one's definition of intellectual. I'm just saying he was intelligent and his works are (at least generally) subtle and requiring of effort to appreciate, rather than being immediately digestible cheap stuff.

Expand full comment

We think that *now*. But for something like a century or so, Shakespeare was a minor figure and Jonson was considered the genius of the age. Shakespeare was doing puns and dick and fart jokes, and became "The Bard" because he did it in great language and because, for whatever reason, he became the fashion 150 years or so after he died. You can easily imagine a parallel universe where, for whatever reason, that didn't happen, and every summer people attend Jonson in the Park instead.

*IIRC, I'm getting this from the book From Dawn to Decadence, but I can't leaf through the book right now and my quick googling didn't confirm it. If I've got something wrong here I'd appreciate correction.

Expand full comment

There's a contrarian book titled "William Fortyhands" arguing that he was a producer who didn't write anything. Modern intellectuals regard certain versions of his plays as the "good" versions, but the book argues that he bought those from struggling writers and then had them turned into the "bad" versions that audiences actually wanted to see.

Expand full comment

Lots of people saw his plays, but he still had elite patronage. Hence being "The Lord Chamberlain's Men" then "The King's Men".

Expand full comment

A few things.

First, I somewhat doubt the relative prevalence of high art versus forgettable trash serving as entertainment has really changed. But court gossip doesn't exactly survive much in the public record, nor whatever fart jokes and bare-knuckle brawls people were using to stay entertained in pubs. So it's not visible to you, but it was there.

Second, there is a technological element to the ability to scale certain "lower" forms of entertainment. Arena rock requires acoustic technology we didn't have in the Mozart days. You can't stage Michael Bay style explosion fests on a theater stage. Plot-based spectacle is the only kind you had, so innovations had to be mostly in storytelling, not in special effects. Sex has probably always been the single most preferred form of human entertainment, but public viewing of sex acts is going to embarrass all but a few kinksters. Mass photo printing, video, and now the Internet have made pornography available to virtually everyone. Similarly, sports have probably always been pretty popular for whoever could make it to the field to watch, but that is inherently limited to a small number of people. Video recording and telecommunications now make it possible for everyone everywhere to get their fill.

Third, it just seems like incentive structures have changed. A whole lot of great art, literature, and philosophy from the past was created by people who were either wealthy themselves or patronized by the wealthy. They had no requirement to appeal to a broad audience to make a living at it.

Expand full comment

I would seriously question this premise.

Expand full comment

Bad art/entertainment is less likely to be remembered after a century. So we have a skewed sense of old art/entertainment, thinking it better than it was on average.

I am inclined to think that popular fiction has gotten more sophisticated lately, because people consume so much fiction these days that they are more jaded. But the effect is small because popular fiction is aimed more at young people, and they are not so jaded yet.

I think music has gotten more complex in the last century, firstly because we have better technology to make music. But secondly because the internet allows niche-music to become mildly popular.

Expand full comment

"Bad art/entertainment is less likely to be remembered after a century. "

I've been reading Pepys diary. He attends a lot of plays. Most of them, including ones he thinks highly of, are plays I have never heard of by playwrights I have never heard of.

Expand full comment

And weren't his favorite parts of Shakespeare's plays the music & dancing bits not included in modern performances?

Expand full comment

Just a quick note that my local Shakespeare in the Park productions sometimes include musical numbers.

Expand full comment

Here's a proponent of re-integrating theater with dance who approvingly cites Pepys:

https://www.dancemagazine.com/dance-theater-2440402975.html

Most people now mock him for his middlebrow taste. The thesis of "William Fortyhands" is that the bits theater-goers like Pepys liked and which people now disassociate from Shakespeare were precisely those which Shakespeare himself added to the plays.

Expand full comment

I'm not buying this. Pepys was born in 1633, about 17 years after the death of WS.

WS's plays were reinvented from the mid-to-late 1600s into the 1700s into all sorts of spectacles, and deprived of "sad" endings. Macbeth in particular became a great vehicle for elaborate staging and pyrotechnics (what with the Witches and all). It is still a common superstition among actors that you do not utter that name in the theater (he is referred to as "the Scots King"), the reason being there were so many mishaps in those juiced-up 18th-century productions full of fire and other dangerous stunts. They weren't doing that at the Globe. They were spitting out raw liver because their tongues were getting cut off, and cool stuff like that.

When WS was rediscovered in the 19th century it wasn't "high brow". People were passionate about it… see the great riot on NY's Lafayette St when two competing actor-managers were performing the same play across the street from each other.

The only thing highbrow about Shakespeare is the effort it takes to decode Elizabethan language; after that it's all soap opera - human insight, blood and guts, and tragedy.

George Bernard Shaw hated him. He thought his ideas were utterly common, but his poetry was unassailable.

Expand full comment

My understanding is that the Folio versions of the plays are more intellectual and contain material (sometimes in French or Latin) that would go over the heads of many uneducated people. The quarto versions are shorter (and Alexander Pope thought they were better as performance pieces) and have more errors most people wouldn't notice.

Expand full comment

There's certainly some truth to your first argument (I guess it's a form of survivorship bias?), but I don't think it's all of it. I can give you some facts about Rio, since I recently read a book about that: in the early days of radio, there were only classical music stations. People used to attend public readings of poetry, operas and the like. Music in casinos was played by big orchestras, and Rio was known as the 'piano city' due to the high density of households with pianos. All of this was high in intellectuality in the sense that it was intellectuals producing these art forms and deciding upon the aesthetics to judge them (who writes poetry, operas, composes for orchestras etc.?). I'm surely not saying that the average person was more intellectual than now -- probably they were much less! This is before the Flynn effect, universal education and everything else the 20th century provided for us. All I'm saying is that the small intellectual elite had much more of a hand in defining the aesthetic standards of the population at large.

Expand full comment

I think there’s something important wrong here, as lay / commoner art and poetry was also much better X00 years ago vs now :)

Expand full comment

Although the barrier between the two was potentially quite porous, and elite art also “went bad” recently (especially poetry, lol)

Expand full comment

I'd ascribe the former more to the selection effect mentioned above. Also, a lot of (half-baked) intellectuals nowadays take part in the bandwagon of denouncing intellectuality as elitism, without noticing how paradoxical the whole thing is.

Expand full comment

I mean, I don't think most agree, but IMO even just random casual songs in, say, Russia or some random "shithole" are much better than any American popular music. So I don't think the decline was just a loss of intellectuality, exactly. American intellectuals love the modern bad music, and don't love classical as much, I think. Plenty of classical pieces took from "popular" music at the time, although the uses and groupings of music were different then.

And holy shit edit button please

Expand full comment

(can't edit) So I think that's behind a lot of the silliness in modern art.

Expand full comment

High culture used to be more prominent. Prior to mass affluence, there wasn't as much of a market for popular culture. So in order to make a living producing it, you needed an aristocratic patron. This culture then filters down to the masses, fading out over time as mass affluence means more culture appears targeted squarely at the masses.

Expand full comment

OK, seems plausible.

Expand full comment

An argument that our forgetting of popular fiction from the past has systematically misled us as to how popular female authors were:

https://www.takimag.com/article/distaff-writers/

Expand full comment

I'm not going to give my verdict, but I am going to recommend some books. Charles Murray's "Human Accomplishment" attempts to put some numbers on this by checking which works are considered great in standard texts on the subject. And Jacques Barzun's "From Dawn to Decadence: 500 Years of Western Cultural Life" is the magnum opus of a man who'd been around a long time (he was nearly a century old) and knew his subject as well as anyone.

Expand full comment

I can definitely recommend "From Dawn to Decadence", not necessarily because I agree with all that Barzun says, but it's great to get a Continental view rather than the usual Anglo-Saxon one and he is very entertaining when he really gets going on how everything has gone to hell in a handbasket (up to 2000, when it was published).

If he could see what has happened culturally in the intervening twenty years, it'd take a whole new book 😀

Expand full comment
Comment deleted
Expand full comment

Monopolization seems like it would increase the intellectual level of art. Don't like how high-brow our movies are? Too bad, it's all you get.

Expand full comment

I think the argument is something like "Companies that are successful enough to become monopolies are the ones that focus on selling things to the masses. In an environment that allows them to *become* monopolies, they take over the market and focus everything on producing stuff to sell to the masses."

Expand full comment

You could have said:

"Monopolization seems like it would decrease the intellectual level of art. Don't like how low-brow our movies are? Too bad, it's all you get."

Expand full comment

Right, because that version can actually work. If you produce only high-brow movies, all the low-brow people go back to entertaining themselves the way they did before movies were invented, and your studio and theater conglomerate goes broke. Or becomes some small art-house operation without the market or political power to sustain a monopoly.

If you produce only low-brow movies, the high-brow people don't buy tickets, but you can make a fortune selling movies to just the masses. And plausibly monopolize it through anticompetitive dealmaking and regulatory capture. Laugh yourself all the way to the bank, and then on to the opera with your highbrow friends - you only *sell* the stuff, you don't *use* it, that's just not done.

TL,DR: Sell to the masses, live with the classes. Sell to the classes, live with the masses.

Expand full comment

It's not monopolization that gives the monopolists control over the production - it's monopolization where the monopolist knows best how to channel the popular demand.

If only one company had the technology to make films, then you could get the kind of intellectualization you imagine from that kind of monopolist. But if a competitor could easily swoop in and take the monopolist title, then you can't. And I think this might be exactly what happened when you moved from broadcast media, where the airwaves are controlled by a limited set of monopolists, who give you things like the Texaco weekly opera broadcasts, to cable and internet media.

Expand full comment

Is your argument that monopolization makes it harder for niches, like high brow art, to be fulfilled?

Expand full comment

Does anyone here listen to the Huberman Lab podcast (a Stanford professor providing detailed discussion of neuroscience, health, and similar topics)? I've been really enjoying it, but got very concerned with accuracy after listening to episode 19.

In episode 19, Huberman describes a study where cooling your palms in-between sets radically increased the number of reps a person can do (300% for dips!). The idea is that hyperthermia limits performance and cooling through the extremities prevents that.

Huberman claims that this methodology is being used by lots of sports teams and that they've been getting great results.

Frankly, this sounds way too good to be true.

There are a number of peer reviewed papers demonstrating the effect, however:

- They're from the original authors, who started a company to commercialize the technology

- The company was founded in 2000 and doesn't appear to have been able to bring anything to market in the last 20 years.

- I found a couple examples of replications that don't show anything like the effect size the original authors see, plus the results don't show a proper dose/response.

I'm tempted to try this myself to see if there's any validity to the claims (easy enough experiment and claimed gains are enormous).

1) Has anyone tried palm or extremity cooling themselves or know of any independent publications that either prove or debunk it?

2) Has anyone run across any other claims of concern from Huberman? Trying to decide how much I should update on my assessment of his credibility.

Expand full comment

I’ve listened to most of his episodes and wondered the same thing. I certainly don’t have the required expertise to judge which claims are most likely to be accurate.

Expand full comment

Is the proposal more complex than just holding a bottle full of solid ice for a few seconds before each set? If not, I will try that tomorrow.

Expand full comment

The authors claim that if you go too cold, you get vasoconstriction, which then prevents cooling of the core. To get around this, they use a complicated vacuum setup, but Huberman suggests you could get a similar effect using a water bath set to a temperature not too far below body temperature.

Expand full comment

Do you have a link available to the episode? Want to bookmark it so I don't forget to trial it

Expand full comment

I recall something about how one of the main ways amphetamines help performance is by temporarily increasing the body's temperature setpoint before exhaustion sets in, which would lend some support.

I play competitive dodgeball, season is starting in a couple weeks, I'll see about doing some self-reporting.

Expand full comment

I have a friend who's doing a PhD in neuroscience, and he called Huberman's appearance on Joe Rogan his least favorite Joe Rogan episode. He said Huberman spoke with a level of certainty far beyond what's justified by the data, and that for example, nobody really knows what increasing norepinephrine does, or exactly how the norepinephrine or acetylcholine circuits work. Even Joe Rogan called out his bullshit when he said his VR simulation of a shark attack is exactly as scary as a real shark attack. In short, my friend thinks Huberman is sensationalistic, egotistical, and irresponsible.

Expand full comment

Thanks for sharing this--I just ran a search for Huberman across the ACX community specifically to see if anyone had some informed criticism. I listened to a bunch of Huberman's episodes and was initially fascinated, but then became increasingly incredulous. He just seemed to have way too many high-value life-improving insights that I had never heard of before, and I wondered if he wasn't being wildly optimistic about the efficacy of many of the interventions he brought up, especially once they were outside of his immediate expertise.

I think I gave him too much free trust for being a Stanford neuroscience prof. I should remind myself--if Linus Pauling can confidently give terrible advice about supplements and life hacks, no scientist is immune.

Expand full comment

Yeah. My experience has been that experts are usually reliable in their own field, but once they move outside of that, they're often as susceptible to irrationality as anyone else. After I finished grad school, I switched research disciplines. I was amazed at how many of my old profs hold very confidently incorrect opinions about the new field...

Expand full comment

Hmm, that's what I was afraid of and consistent with my digging into the exercise episodes. Guess I'm going to have to double check everything...

Expand full comment

I was tempted to try back when the original study came out, but didn't due to technological constraints. Keep in mind that the protocol combines cold exposure with vacuum to prevent the natural heat preservation mechanisms from kicking in. I'm not a bio person, so low certainty that that's the right model.

Please update if you try.

Expand full comment

I'd like to be able to take notes while taking walks.

A few years ago, I very briefly tried note-taking via smartphone voice recognition (IIRC Google Assistant and various third-party apps that seemed to use the same underlying API), but found it unimpressive. In particular, these features did poorly when I wanted to flexibly switch between speaking in English and in German.

Nowadays, is there any solution for taking audio notes that's genuinely impressive? I'm not so much asking for service recommendations, but rather for glowing endorsements, i.e.: "I've been using <fantastic solution> for <at least a month> to take notes. It parses my speech correctly with no noticeable delay and with hardly any errors, and it's significantly faster than typing on the smartphone while walking."

Expand full comment

Might be an accent thing, but: I use google assistant for dictation in English and Spanish, and it has an extremely good hit rate if I'm using common dictionary words.

Expand full comment

How does a prior get trapped and what can be done about the problem? I don't want to have to deal with people insisting that "the other side is up to no good" every damn time something happens.

I'm not sure how to approach the question, but intuitively it must have something to do with social dynamics. People organize into groups and communities with shared interests and share information among themselves. One of the particular phenomena that would actually impact people's beliefs is that some of the most popular information tends to get repeated very often in such groups, especially when social media functions such as "Like" buttons etc. exist. In particular, information of the form "A member of Group A did Bad Thing X to one of us/an innocent victim" has high memetic potential. The mechanism is: coincidental spaced repetition (to memorize many such instances) + the availability heuristic (to overestimate the frequency of Group A doing Bad Thing X based on how easy it is to remember such events).

I don't know exactly what to do about this, but I recommend staying away from highly partisan online spaces. The difference between how fast information spreads now vs. 25 years ago is so large that we may have crossed some kind of phase transition, where a rapidly increasing number of people hold these incredibly cynical worldviews, at least implicitly.

Does anyone know if the poli-sci literature on extremism suggests anything about this? I would like to know more, in particular about how one might mentally guard oneself against the kind of mechanism I explained above. I'm worried that just observing some of these communities would expose a person to the risk of implicitly adopting such beliefs.

Expand full comment

It seems like some people just... can't... consider the possibility that they are wrong. Maybe there is some circumstance when they would, but there's no known way to figure out what that circumstance might be. Scott has a charitable interpretation of this as a mental condition called "trapped priors", and maybe that's true, but I don't think it matters what we call it, as long as there is no known treatment. And people "suffering" from this are 100% sure they are healthy and everyone who disagrees is ill, so they don't want you to "treat" them. My dad told me that I and 60% of the population are brainwashed (cuz we aren't red tribe), but I don't know how he got to thinking that way.

I think I interpret the condition differently from Scott. Humans have conscious control over how they interpret evidence, and that control turns into habit by adulthood. Now, not only does almost no one get any epistemological training to avoid bad habits, but also bad habits are actively taught and preached (Dark Side Epistemology: https://www.lesswrong.com/posts/XTWkjCJScy2GFAgDt). Some people accept and live by Dark Side sermons, others don't.

Those who embrace the dark side end up with "self-sealing beliefs"... beliefs that protect themselves, kind of like those futuristic self-healing materials. One example is the circular reasoning of the conspiracy theorist: conspiracy C is true, therefore evidence against C is an illusion created by the conspirators to mislead us, and the popularity of this illusion means that C is an even bigger conspiracy than I thought! And of course anyone who thinks C is false is either a fool, a sheep, a rube - but he seems smart, so maybe he's in on the conspiracy?

But I think a more common thing is that a false belief is held up by a large network of, say, 100 weak arguments (often they will give you all of them in a convenient bullet list copied from Facebook). You, as a debate opponent, must necessarily speak in English, so you normally have to address arguments one by one. Even knocking down one argument is extremely hard because the person will have very high standards for evidence contrary to their belief, but extremely lax standards for supporting evidence. And even if somehow you convince the person they are wrong on that one issue, they've still got 99 other arguments in their favor.

I think there's more to it than that, but I don't know the details. I think that while it may *look* like they have a trapped prior about the extreme harms of vaccines or the extreme harmlessness of greenhouse gases, *actually* it's probably more an issue of All MSM Lying About Everything, And Science Is Hopelessly Corrupt and Must Not Be Trusted, and underneath that is a whole set of other beliefs. Unfortunately, everybody's arguing about vaccines and CO2, so no one ever comes close to talking about the real heart of the matter.

So I don't try to convince them. I just try to convince anyone overhearing the argument. But if I succeed, I'll probably never know, and anyway I'm probably arguing the wrong topic entirely. Hmm, now that I think about it, maybe arguing never has a probability of helping anyone that is high enough to justify the time spent? I mean, in politically charged places outside ACX. So yeah, just stay away from those cesspools.

Expand full comment

Oh, and I forgot (as I often do) that it's not just about beliefs with many people, it's about norms and emotions and signaling and loyalty to the tribe. I guess I'm biased by having internalized lessons like "don't lie" from the very same dad who now thinks I'm brainwashed. I never put goals like "fitting in" above the values I was taught, probably because of some congenital brain damage that broke my social sensibility. It's easy for me to forget, then, that accuracy and truth are *secondary, implicit* goals for most people.

Expand full comment

I think a large part of what Scott calls 'trapped priors' is indeed about loyalty and "whose side are you on?" - there was a good post on the subreddit about this, https://www.reddit.com/r/slatestarcodex/comments/p57fpe/not_trusting_science_is_because_people_are_trying/. As I summarized in my comment on that piece*, whatever people say to justify their self-sealing beliefs, they're really saying something like "I don't doubt your intelligence. What I doubt is whether you're working for me or *against* me.". If you believe someone is lying to you because they hate you, you won't listen to anything they say about how they're not lying to you and actually have your best interests at heart. That's the clearest example of a 'trapped prior' I've ever seen, and it's possible that all examples of trapped priors are actually examples of people refusing to listen to "the *Enemy*".

That's why I think the norm of intellectual charity, and assuming good faith on the part of your interlocutors, is so very valuable. It's like assuming innocence before guilt - if it was the other way around, then every trial would be a show trial, because you can't trust the words of a guilty person. Innocence can be transmuted into guilt, but never vice versa, practically speaking.

(*: Link to the comment: https://www.reddit.com/r/slatestarcodex/comments/p57fpe/not_trusting_science_is_because_people_are_trying/hc5cgsu?utm_source=share&utm_medium=web2x&context=3)

Expand full comment

> for me or *against* me.

Hmm, I have trouble really buying that take. Three weeks after this conversation (edited: end of September), my antivax uncle died from Covid, while his brother, my antivax father, continues with his firm belief that vaccines are more dangerous than Covid and that his children (who tried to convince him otherwise) are brainwashed liberals. Last summer he thought vaccines had killed over 3,000 Americans; recently it was 100,000.

Does my dad really think I'm "against" him?

I sent him a letter that was as compassionate/understanding as I could manage (i.e. not terribly, but I think better than my brother has done), which wasn't really about vaccines per se, but did mention them... along with a copy of the book "Scout Mindset". I don't think it helped... he called me to say that the author "overthinks things" and he's "heard most of that stuff before", after having sent me another antivax video by email.

Our conversations haven't gone well: https://twitter.com/DPiepgrass/status/1443632316207665164

Expand full comment

There are at least two things going on here. With respect to partisan trapped priors specifically, you have the phenomenon of people implicitly trusting their own more than others. There's nothing specifically political about this. The absolute worst example you'll ever find is probably sports fans. The responses to otherwise identical behaviors on the parts of players and coaches on other teams versus your own are night and day. That's just loyalty overriding logical consistency. To the extent that it may be more common now in politics than it was before, it is simply much easier than ever for people to become part of a partisan community. You don't need to actually join and work for the local party any more, or go to MADD meetings, the SDS, the KKK, whatever the heck it may have been. Thanks to the Internet, these communities are at your fingertips, so many more people are joining now.

But there is also the epistemic learned helplessness thing Scott has written about before. If no evidence at all is trustworthy, then nothing can possibly sway your posterior and your prior becomes trapped. We're inundated with information and much of it is not trustworthy. It might be explicit misinformation, propaganda, marketing, fraud. We know we're being constantly lied to and the liars are getting increasingly better at it and becoming more convincing, making it harder and harder to tell the difference between them and a trustworthy information source. Some of this is probably intentional. Foreign intelligence services really are flooding other countries' networks with purposeful misinformation. A lot of it isn't. The targeted ad industry is just trying to make money, and the fact that they're destroying the value of evidence in the process is barely an afterthought.
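
To make the "nothing can sway your posterior" point concrete, here is a toy sketch with made-up numbers. Tempering the likelihood ratio by a "trust" factor is just one crude way to model distrust of sources, not anything from Scott's post: once trust hits zero, the effective ratio is 1 and the prior never moves, no matter how much evidence piles up.

    def update_odds(prior_odds, likelihood_ratio, trust):
        # Odds-form Bayes update, with the likelihood ratio tempered by a
        # trust factor in [0, 1]: trust=1 is a full update, trust=0 treats
        # the evidence as worthless (effective ratio of 1).
        return prior_odds * likelihood_ratio ** trust

    # Toy numbers: prior odds of 1:99 against a claim, then 20 independent
    # pieces of evidence, each favoring the claim 10:1.
    for trust in (1.0, 0.1, 0.0):
        odds = 1 / 99
        for _ in range(20):
            odds = update_odds(odds, 10.0, trust)
        print(f"trust={trust}: posterior probability = {odds / (1 + odds):.4f}")
    # trust=1.0 -> 1.0000, trust=0.1 -> 0.5025, trust=0.0 -> 0.0100 (the prior, unmoved)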

As for how to guard against it, I have no idea what sort of strategy scales. This is unfortunately easy for me. I don't care all that much about people and I'm not part of any communities. I'm perfectly happy to spend most of my time with my wife and interact regularly with a few other people whose opinions on politics I don't solicit and don't care about. I understand that isn't a satisfying answer for most people. Most people want to be part of a community. I guess find some inherently non-political hobby. Rock climbing. Sewing. Community theater. Heck, become a sports fan. You'll become tribal and irrational and get your fill of feeling like you're a part of something bigger than yourself, but not in a way that directly maps to partisan politics.

Expand full comment

I agree with this. The more modern political and social struggles become like sports contests -- it's all about Our Side Must Win! Everyone Else Must Be Crushed! -- and the less like a group of fair-minded but different-minded individuals struggling to come to some good-faith consensus (to the extent that's realistic for our species), the more I just tune out.

I have to keep aware of these things to some extent, in self-defense, but once I have learned enough to avoid stepping on mines accidentally while talking to people, I go find other things to do. I generally find it completely not worth the time to discuss any of these hot issues at even the most basic level with anyone, and practice the art of saying completely noncomittal and conversation-ending things when people insist on asking.

I think it's a little like living during the Wars of Religion in the 17th century: you figure out how to blend in and keep your head down. I do try to withhold support from, or even punish, fanaticism and bad faith wherever and whenever I can, but my personal influence on that is of course infinitesimal and I hold no illusions it will matter.

One strange advantage, though: I find when you discipline yourself not to talk about these things, except in the blandest most noncommittal way, it becomes possible to listen with greater understanding and less stress. You can just take in what people say, even fanatical things, and think "how interesting -- I wonder how many other people feel like this, and what are the consequences?" without getting personally worked up about it. And something else that I notice while practising this Zen detachment is that the most bonkers stuff tends to be short-lived and self-limiting. People don't act on nutty ideas nearly as often as one might think (or fear) if you allow yourself to want to help or prevent them, and even when they do, reality has a way of cutting it short surprisingly quickly.

Expand full comment

Purely introspection based intuition, but I'd guard myself by keeping my identity small, as Paul Graham put it. Renouncing invitations into identities of virtue & victimhood, in particular.

They're often so tempting though, that I find it easier to just not engage with the medium.

Expand full comment

What do you think of the following claim:

a) "thinking", defined properly, means the act of resolving cognitive dissonance.

b) most people aren't thinking, and don't realize this. They might be 'feeling' or 'subvocalizing', but these aren't really thinking, any more than reading a book is thinking.

In more detail, 'thinking' requires a thinker to be holding two different ideas in mind, "feeling" that they are both correct, "feeling" that there is some contradiction between them, "feeling" that this contradiction is not real, but a result of the thinker's ignorance, and then searching for a resolution - that is, some new piece of information, a new belief, or story, which resolves the apparent contradiction. "Thinking" necessitates at least trying to reduce informational uncertainty - that is, thinking means producing new beliefs (or at least attempting to) which lower the net total dissonance between a person's existing beliefs.

I'm putting the word "feeling" in quotes in the second paragraph to say that this probably isn't the right word here, either. It's just the best one I can summon at the moment.

Basically, I get the impression that most cognitive processes are only vaguely lit up in most people's heads, and that most adults do a bunch of feeling, and expression of these feelings, and then they tell themselves "this is thinking", maybe because words are involved. There's a fundamental difference between attempting to resolve conflict between two things you already believe, and just feeling something and putting that feeling into words. I think the former is almost always a productive activity, whereas the latter is just another form of angry (or happy, or confused) shouting. It might be a form of _communication_ - but the key thing is that no new beliefs are actually being formed. I get the impression that most of the subvocalizations that occur consist of other people's words, being repeated in their minds when corresponding emotions are triggered by external stimuli.

Is there something written about this idea? I know Eliezer seems to have brought this up somewhere, when he noted that most of the words people exchange aren't really about beliefs (in the anticipation-constraining sense), but are much more like flags being waved.

Expand full comment

Not sure what to think of this, but probably Eliezer's related writing is somewhere within A Human's Guide To Words: https://www.lesswrong.com/s/SGB7Y5WERh4skwtnb

Expand full comment

"'thinking' requires a thinker to be holding two different ideas in mind, "feeling" that they are both correct, "feeling" that there is come contradiction between them, "feeling" that this contradiction is not real, but a result of the thinker's ignorance, and then searching for a resolution"

That's one part of thinking, but it isn't all of it. Confining "thinking" to this kind of "resolve a+b= c" exercise only means that when I'm trying to decide what to cook for tomorrow's dinner, then I'm "thinking", but when I'm reading an article and following up a chain of connections, I'm not "thinking".

"Will I cook the leftovers from today's meal, or will I try that new recipe for steak and mushroom pie I looked up?" is 'thinking' by this metric, while 'oh, so that simile refers to a cultural trope, which means that now I know the trope it changes my idea of what was meant!' isn't, by the strict application. I think I'll stick with "both are thinking, just different locations in the entire space covered by the concept".

Expand full comment

Really liked your comment!

Without trying to go in depth, there might be an element of courage involved, since from the moment we have a thought, it becomes part of our identity, and to then even consider that this thought might be fundamentally incorrect is an assault to our person as conceived of as the sum of our beliefs.

Expand full comment

What is supposed to be the consequence of changing the mapping of names to categorical sets of cognitive processes? Does shifting what you call thinking and what you call feeling change your behavior in some way?

Expand full comment

Good question! The main use of this distinction is to encourage myself to do more "actual thinking" and less "rumination," where I more or less bounce around between topics that upset me but don't attempt to resolve any cognitive dissonance.

Expand full comment

By that standard, the following activities would seem not to involve "thinking":

- Solving a sudoku puzzle

- Writing code (maybe the debugging process involves resolving cognitive dissonance, but probably not the initial writing)

- Solving a math equation

- Calculating the trajectory of a projectile

- Reading a novel and mentally modeling each character

- Scanning a poem

Your proposed definition of thinking is designed only for the specific case where we have two or more competing beliefs, intuitions, or feelings and need to resolve the contradiction. But people think in many situations where no such tension exists in the first place.

Expand full comment

Thank you! These are great examples. I definitely want another word here.

Expand full comment

First you should probably come up with another word than thinking, because saying other people don't think is confusing and insulting.

But even by your definition of thinking, almost everyone thinks. Say you believe your keys are on the kitchen table, and then you look at the kitchen table and there are no keys there. Then you have two contradictory beliefs: "My keys are on the kitchen table" and "There are no keys on the kitchen table". Most people would resolve this - they might realize they misremembered where they put the keys, for instance. So they would be thinking.

Maybe most people don't "think" about important stuff, like politics and religion, I dunno.

Expand full comment

Thanks for this example of practical “thinking.” Yes, the politics and religion angle is where I wanted to go, but clearly I should use another word here, or maybe even a phrase. Any ideas?

Expand full comment

Considering the statistical minutiae often discussed here, it is indeed amusing to see some good old anecdotal evidence and meaningless interpretation:

" (MicroCovid says that vaccinated people who attend an outdoor meetup with a known case have a 2% chance of getting sick, but since several people are reporting symptoms maybe it’s higher than that)."

Expand full comment

I don't see anecdotal evidence here. MicroCovid does calculations based on the parameters given, and cites the studies it uses as sources. It may not take enough information into account to be perfectly accurate, but it's far from anecdotal evidence.

However, there is a big mistake in its use here: the parameters Scott used assume that *everyone else* at the meetup has COVID (8 people in the calculation), rather than one other person. If one person is infected, the actual risk of a given other attendee catching it should be ~8 times lower; the expected number of attendees who catch it works out to ~0.02, i.e. roughly a 2% chance that anyone at all gets infected.

Expand full comment

That assumes independent risk. Given super-spreaders, it could easily be a 2% chance that everyone at the meet gets sick and a 98% chance that no one does, for any given meet, rather than an independent 2% chance per attendee. (The real answer will obviously be somewhere between the two extremes, but I suspect it is weighted towards super-spreader events.)

Expand full comment

Hmm? That seems like a heuristic approach to a very straightforward statistics question. If, say, there were 100 people, more than 9 getting sick would have a chance of less than 1 in 10,000 if the real probability is 0.02. That's really strong evidence that something about the meet-up makes the microcovid number wrong, even if we don't know what it is. I guess Scott could've shown the math, but it doesn't take all that much statistics intuition to get close enough. [caveat that I don't know how many actual people were involved, but 10 out of 100 is vaguely plausible and would be incredibly statistically significant]
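For anyone who wants to check that tail claim, here's a quick sketch in Python (the 100 attendees and the "more than 9 sick" cutoff are just the hypothetical numbers from this comment, not the actual meetup figures):

    import math

    def binom_tail(n, p, k_min):
        # P(X >= k_min) for X ~ Binomial(n, p), i.e. the chance of seeing at
        # least k_min infections if each attendee independently has risk p.
        return sum(math.comb(n, k) * (p ** k) * ((1 - p) ** (n - k))
                   for k in range(k_min, n + 1))

    # Hypothetical figures: 100 attendees, true per-person risk 2%,
    # and "more than 9" (i.e. at least 10) people getting sick.
    print(binom_tail(100, 0.02, 10))  # ~3e-05, comfortably under 1 in 10,000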

Expand full comment

I have an inherent issue with trying to make society wide statistical approaches apply to individuals in specific circumstances. For instance, if masks work X% of the time in aggregate, it probably means that masks work >>X if worn properly and <<X if not worn properly. Similarly, your individual chance of getting COVID at a particular social function will depend heavily on whether you yourself were in close proximity to someone who actually had infectious COVID. If you stay 20 feet away from all other people the whole time, or even just talk with the same small group of people, you are much less likely to get COVID than a person who mingles with the whole crowd. Saying that there's a 2% chance of getting COVID from an outdoor meetup seems silly when trying to combine the wide range of possible human interactions but then ignore that wide range when looking at the individuals involved.

Expand full comment

> I have an inherent issue with trying to make society wide statistical approaches apply to individuals in specific circumstances. For instance, if masks work X% of the time in aggregate, it probably means that masks work >>X if worn properly and <<X if not worn properly.

If you are not sure whether you wear your mask properly, and you have no reason to believe that you are more likely (or less likely) to wear it correctly than the average person, then there is indeed an X% chance that your mask will prevent an infection, given the information you have, and IMO it's appropriate to use that information.

Expand full comment

Where online do you go if you have a question on an arbitrary topic and want high-quality answers? Nowadays search engines have gotten so bad that I can't find any but the blandest results for any questions of interest; and while there are e.g. subreddits on arbitrary niches, your questions often get buried without any answers.

I'm particularly interested in getting high-quality answers to questions of health & illness, and would be willing to pay e.g. bounties.

Does anyone here have tips on such bounty services, or on other ways to ask questions on arbitrary topics and have a decent chance of getting high-quality answers?

Expand full comment

For medical stuff, $50 a month buys you a subscription to UpToDate, which Scott often refers to, and which I've heard other physicians refer to approvingly. Doesn't answer questions, but has a lot of good medical info. I subscribe off and on and try to stock up medical questions for a month when I've paid the $50.

Expand full comment

For health and medical stuff I go to Medscape first. They don't answer questions but you can access high quality research and standards of medical practice.

Expand full comment

Depending on how much money and/or time you're willing to put into it, you could hire a researcher. Some options:

1. Elizabeth at Aceso Under Glass is the darling researcher of the rationalist community. See https://acesounderglass.com/hire-me/

2. If you're part of an institution, you may be able to get your library/information services to research topics for you. Quality varies with organization and personnel, of course.

3. If your question is related to an academic discipline, you could just cold email random grad students to see if they're able to help.

4. There used to be a service called MetaMed that would research arbitrary questions in the medical literature. They may have gone under, but it's possible that there are similar services, perhaps for other fields.

Please update if you find other solutions. This is something I'm also curious about.

Expand full comment

Sabine Hossenfelder organizes this sort of thing for people who want to Talk To A Physicist: http://backreaction.blogspot.com/p/talk-to-physicist_27.html

Expand full comment

MetaMed has gone under.

Expand full comment

Pay for coaching from Renaissance Periodization. They have 1 hour consults. Check out their podcast and articles to see what they are all about.

Submit a question to Stronger by Science's Q&A thread on Reddit. Pray they choose to answer it.

Expand full comment

Stronger by Science is as good as it gets and exactly what I was going to recommend if the topic is health in terms of fitness and training. For illness, though, I have no idea where I'd go other than an actual doctor. Maybe one of those free nurse tip lines that insurance companies provide.

Expand full comment

D'oh, I read that as health and fitness, not health and illness.

Expand full comment

Have you gotten good results from them? What have you asked them about?

Expand full comment

I consume the content both groups put out, but haven't asked any questions. The content is good. I've paid for some products from both - workout plans and a diet app.

Expand full comment

Delta is a beast, gang. It has ripped through Louisiana, including waylaying a lot of vaccinated people and killing elderly ones. Get jabbed and stay safe out there.

Expand full comment

My latest post in my series about human herpesviruses covers the Epstein-Barr virus: https://denovo.substack.com/p/epstein-barr-virus-more-maladies

This is probably the worst human herpesvirus, not because mono is particularly bad, but because it gives people cancer. It can also rarely cause long-term fatigue, which is interesting in context of other viral fatigue syndromes such as long COVID.

Expand full comment

This is a timely article for me, since I've recently speculated that my wife may have some sort of recurrent viral issue, which might be Epstein-Barr. (She did have mono as a teenager.)

She doesn't have any of the classic recurrent Epstein-Barr symptoms, such as fatigue, but she predictably has cold-like symptoms (sore throat, sinus congestion) following stressful events, with an approximately three-day lag. Recently she had a very minor surgical procedure, and three days later, she had cold-like symptoms. We tested for COVID, of course, but it was negative. Then more recently she had a bad UTI, and three days later, she had cold-like symptoms again. Of course we tested for COVID again, and it was negative.

Back in the pre-COVID days, when we had a social life and would come into contact with colds, she would always get a second cold immediately following the first one. The symptoms from the first cold would subside, and three days later, she'd have cold-like symptoms again. I had always assumed she was re-infecting herself, but now I suspect that the "second cold" is some dormant virus, maybe Epstein-Barr, re-emerging.

Expand full comment

I don't know if this might be relevant (disclaimer: I have no medical or biological expertise), but I'm still sharing my perception of what happened to my wife (N = 1):

Three years ago she developed what was at the time diagnosed as rheumatoid arthritis. She was prescribed analgesics and MTX, but this didn't really improve things.

Finally her physician did some tests for Lyme borreliosis, which came out positive, and they started an antibiotic treatment that brought some minor improvements, but she still suffered heavily.

Her pharmacist referred her to another physician (he was basically retired and had turned in his approval for billing the German mandatory health system, so he was working only for his long-time patients on a strictly private-compensation basis).

He gave her some additional diagnostic tests, which she had to pay for out of her own pocket, and discovered that she also had toxoplasmosis (diagnostic marker about 200 times higher than the expected value).

Since toxoplasmosis is prevalent in more than 50% of the German population and most infected people are asymptomatic, it is not considered a morbidity by the German health system, so there is no treatment available.

She received a prescription for a combined treatment of both borreliosis and toxoplasmosis (which is not available in Germany; she went to Italy to get it).

She also changed her lifestyle: no stress, non-exhausting outdoor activities (bicycle, SUP, ...), healthy, mostly vegetarian food, lots of supplements (vitamins, hormones), and it worked like a charm (toxoplasmosis marker down to twice the expected level).

She is still supplementing and avoids exhausting activities, but she is able to live an enjoyable life.

The doctor told her he assumed that when she got infected with borreliosis, her immune system was so busy fighting Borrelia that it lost control of containing Toxoplasma, which then switched from the bradyzoite stage to the tachyzoite stage (the active form that really damages the host's immune and nervous systems).

So maybe it would be valuable for your wife to get additional diagnostics for other existing "sleeping" infections that become active when her immune system is busy fighting a new infection.

BTW, regarding long COVID: I think it would be interesting to examine existing "sleeping" infections that overwhelm the immune system while it is busy fighting COVID ...

Expand full comment

I have a question regarding the Corona Pandemic and its recent wave. Perhaps someone can shed some light on things I have wondered about.

Multiple western countries are entering the fourth wave, with the Delta variant increasing incidences despite 50%+ of the people being fully vaccinated.

Now, the clinical trials say that being vaccinated does not fully prevent the possibility of a COVID-19 infection, but it does significantly reduce the number of dangerous infections and infections that lead to hospitalisation and death.

What surprised me: Why do we see this in most western countries, but not the US?

Details: According to https://ourworldindata.org/covid-cases, we have an incidence of 511 in the UK now, the peak of the last wave was at 877, so we have about 58% of the cases of the worst peak of the last wave.

And according to https://ourworldindata.org/covid-hospitalizations#how-many-people-are-in-hospital-due-to-covid-19-at-a-given-time, we have 7,600 hospitalisations in the UK, down from 39k at the peak of the last wave, or 19% of the last peak. The hospitalisations do not keep up with the incidences.

Hospitalisations are on a time delay, but the recent wave started in June and was actually higher in July in the UK than it is now, so that should not be the reason. The widespread vaccinations probably reduce the severity in the UK, as expected.

France? Incidence 205, down from 376 a few weeks ago, worst of the last wave 675, so this wave reached 55% of the last regarding incidences. Hospitalisations 11k now, 31k at the last wave's peak, or 35%. Significantly lower.

Israel? Israel reached a new maximum with an incidence of 1,143 these days, their worst at the last peak was 981, so they are at 116% right now. Hospitalisations reached 1,400 a few days ago, the worst of the last wave was 2,387, so it's 58%. Same picture: hospitalisations do not keep up.

Same in Italy, which peaked its recent wave at 108, down from 384 in the last, or 28% of incidences. Hospitalisations are 4,600 now compared to 32,900 last wave, or 14%. Same in Germany or Spain.

But then I look at the US. The US right now has an incidence of 491, the peak of the last wave was 748, so it reached 65% of its last peak. Hospitalisations are 97k, the last peak was 133k, or 72%.

Why is the US the one western country where hospitalisations rise faster than incidences? It's as if, in the US, vaccinations made corona worse!

(And when we look at deaths, the picture is broadly similar: According to https://ourworldindata.org/covid-deaths, the UK is at 1.65 deaths per million people, their previous peak was 18.2, so they are at just 9% of their last peak, although their incidences are at 58%. France has 1.96 of 9.2, or 21%, while their incidences are at 55%. Germany is at 0.37 compared to 10.6, so just 3% of the last big wave of deaths, but its incidences are at 138 compared to 257, or 53%. Israel has a new maximum of incidences, but its deaths are at 2.96 and were at 7.3, so about 41% of their last peak. But the US has 4.66, the last peak was 10.27, so 45%, compared to 65% of incidences. That is way worse than the European countries I compared, or Israel. But when we look at hospitalisations, the difference is more extreme.)
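To make the comparison explicit, here is a small Python sketch that condenses the case and hospitalisation numbers quoted above into one index per country: (hospitalisations as a share of their previous peak) divided by (cases as a share of their previous peak). A value near or above 1 marks the anomaly I'm describing; the inputs are just the figures cited above, so treat the output as illustrative only.

    # (recent incidence, previous peak incidence,
    #  current hospitalisations, previous peak hospitalisations)
    data = {
        "UK":     (511, 877, 7_600, 39_000),
        "France": (376, 675, 11_000, 31_000),   # 376 = this wave's peak
        "Israel": (1_143, 981, 1_400, 2_387),
        "Italy":  (108, 384, 4_600, 32_900),
        "US":     (491, 748, 97_000, 133_000),
    }

    for country, (inc, inc_peak, hosp, hosp_peak) in data.items():
        inc_share, hosp_share = inc / inc_peak, hosp / hosp_peak
        print(f"{country}: cases at {inc_share:.0%} of peak, hospitalisations at "
              f"{hosp_share:.0%} of peak, ratio {hosp_share / inc_share:.2f}")
    # The US is the only country where the ratio comes out above 1.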

I thought of a couple of explanations, but none of them explain it to me. Given that the US, unlike Europe, does not have socialized healthcare, it seems unlikely that people in the US are simply more willing to go to the hospital when they feel a bit ill; I would expect the opposite.

The numbers of fully vaccinated people are lower in the US than in western Europe (US 52%, UK 63%, France 60%, Germany 61%, Italy 61%), but while I would understand that to affect the R0 value and the speed with which a wave grows, I would not expect it to so completely change the hospitalisation picture.

It's also not that the numbers in Europe grew so fast that the hospitalisations aren't keeping up; most of the countries mentioned have waves cresting already, unlike the US.

It's not the vaccine either: that could explain it if only the UK behaved differently, since it mainly used AstraZeneca, but the US uses BioNTech/Pfizer and Moderna, just like France, Italy and Germany.

So, what's up with that?

Expand full comment

This doesn't actually provide a comparison of (change in hospitalization rate vs. change in discovered infection rate) now that vaccinations are widely available against the same comparison from when they weren't, because the latter number isn't given. When you compare the last peak to the one before it, how much of a difference was there? Hospitalizations compared to infections may have been even worse then, though you may run into the issue that hospitalizations have a hard limit in the number of available beds.

On the question of why the US would have higher hospitalization rates than other countries, isn't the US just a less healthy country at baseline? On obesity rates for OECD members, we're not only #1, but over double the median and nearly 10 times the least obese country (Japan). We are relatively young, though, and being old seems to be the worst comorbidity.

I'm not sure what else to guess. I'm tempted to say the total number of hospitalized people is small enough in any given country that random factors overwhelm the attempt to find meaning, but that slope difference between the US and everyone else is one of the more drastic line plots I've ever seen. Looking at last November, though we had a higher total number, the slope was about the same for every country. This time around, everyone else is staying flatter than they did before while the US is skyrocketing. I can't see how to chalk this up to mitigating effects of the vaccines, though. The US isn't getting drastically less effective vaccines than Europe is. Something else is going on.

Nicholas Weininger's hypothesis sounds plausible, but I'd like to see some map showing exactly how much more segregated by class and health the US really is compared to other countries. I can believe we are given the history, but I'd like to see concrete evidence.

Expand full comment

To summarize the case AIUI (other commenters have hinted at parts of this):

-- Delta is more likely to cause hospitalization in unvaccinated people than prior variants and spreads faster in social groups of largely unvaccinated people.

-- COVID in general is more likely to cause hospitalizations in people with more underlying health conditions.

-- The US has, to a greater degree than other rich countries, distinct groups of largely-unvaccinated people that are relatively socially segregated from vaccinated people *and* relatively more likely to have underlying health conditions. This includes low-income and nonwhite populations with healthcare system access problems and/or system distrust in some places, as well as largely-white vaccine resistant populations driven by ideology. The states with the worst hospitalization rates tend to have more of both types than other places.

-- Our high hospitalization rates are caused by Delta spreading rapidly through those populations.

Expand full comment

Recent COVID cases in the US have been primarily among unvaccinated or not fully vaccinated individuals. Because of that, we should not expect to see a large change in the case fatality rate.

This KFF analysis (https://www.kff.org/policy-watch/covid-19-vaccine-breakthrough-cases-data-from-the-states/) that looked into July finds that in states that track this information, a very small percentage of COVID cases/hospitalizations/deaths are in fully vaccinated individuals. A related but more outdated analysis by the AP in May 2021 (https://apnews.com/article/coronavirus-pandemic-health-941fcf43d9731c76c16e7354f5d5e187) found that only 1% of hospitalizations/deaths were in fully vaccinated people.

This is actually a big difference between data we observe in the US and in many other countries. Other countries tend to have a larger fraction of their cases be in vaccinated people. I would hypothesize this is due to greater vaccination polarization in the US resulting in more clustering of people by vaccination status both by region and by social groups.

For example, the South is the most poorly vaccinated region in the US (https://www.mayoclinic.org/coronavirus-covid-19/vaccine-tracker), and the recent COVID wave in terms of both deaths and cases is mostly localized to the south (https://thezvi.wordpress.com/2021/09/02/covid-9-2-long-covid-analysis/). The vaccination rate differences between regions are even larger than they look in the mayoclinic chart - vaccination rates have been increasing more quickly in the south in recent weeks than the rest of the country, so the difference, say 2 weeks ago was larger.

Expand full comment

I keep track of the pandemic with a simple model to estimate infection rates: either via death numbers and the infection fatality rate, or via test numbers (absolute positive tests compensated for the rate of positive tests). You can find graphs for a number of countries here on my personal blog: https://hmbd.wordpress.com/2020/12/30/covid-19-infection-estimates/
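In rough Python, the two estimators look something like this (the IFR, the lag, and the exact positivity compensation are illustrative placeholders, not the precise values and formula behind the plots):

    # Sketch of the two infection estimators (illustrative parameters only).
    def infections_from_deaths(daily_deaths, ifr=0.005, lag_days=21):
        # Shift deaths back by the infection-to-death lag, then divide by the IFR.
        return [d / ifr for d in daily_deaths[lag_days:]]

    def infections_from_tests(daily_positives, positivity, baseline_positivity=0.01):
        # Scale positives up when a large share of tests comes back positive -
        # one simple way to compensate for limited testing.
        return [p * max(1.0, share / baseline_positivity)
                for p, share in zip(daily_positives, positivity)]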

Both models match very well until the point where mass vaccinations became available, since then, the death-based numbers have declined somewhat:

- small effects for US, Israel (0 .. 30% reduction)

- larger effects for UK, Germany, Japan, Netherlands, Norway (each more like 80% reduction)

Even the "very large reduction" countries have death rates way above the 98+% effectiveness estimated for the vaccines. How come?

For a few countries, we have estimates on hospitalization and death numbers split between vaccinated and unvaccinated. So as far as I can tell, this is going on:

- small effect: we have skewed data depending on testing strategy, e.g. not requiring tests for the vaccinated part of the population unless they have symptoms; data is noisy anyways (e.g. my plots do not account for risk / age distribution of infections; in most cases where infections(model 1) > infections(model 2), high infection rates in nursing homes are to blame)

- large effect: nonetheless, the pandemic is mostly raging in the non-vaccinated part of the population. Countries with a vaccination-hesitant subgroup (US: polarized; Israel: orthodox Jews?) are mostly on a non-vaccinated infection-death ratio (as the virus is mostly raging in the non-vaccinated groups). Countries with a high adoption rate for vaccinations plus a prioritization of vaccinations for risk groups show a drift of the infection-death ratio (see e.g. Denmark, Netherlands, Sweden, Switzerland).

Conclusion: We keep forgetting that not everyone is willing to vaccinate and that non-vaccinated people are not spread evenly through society. A large share of the cases, hospitalizations and deaths we see is in the non-vaccinated subgroup of each society, and there are still plenty of people for the virus to go through.

Expand full comment

Could it be that in the US there are more elderly people refusing the vaccine? If so the US would get more hospitalizations/deaths than other countries, even if those countries have the same rate of vaccinated people overall.

Expand full comment

Vaccinations rates by age group are available for France and Italy here: https://ourworldindata.org/covid-vaccinations#fully-vaccinated-by-age.

France: 60-69 = 62%, 70 - 79 = 68%, 80+ = 92%.

Italy: 60-69 = 73%, 70 - 79 = 85%, 80+ = 75%.

US data can be found here (3rd slide), but of course, they have different categories: https://covid.cdc.gov/covid-data-tracker/#vaccination-demographics-trends.

65 - 74 = 85%, 75+ = 79%.

So while the US and Italy share a top age group that lags behind the one before it, all of these countries have high vaccination rates for the eldest, which should make at least some difference compared to the waves before anyone was vaccinated.

Expand full comment

I think Italy is a bit of an outlier in that statistic; most other countries in Western Europe are at >90% for the age group 80+, e.g. France, Spain, Denmark, Germany. Spain is at 103%, don't ask me why. Perhaps this is because so many elderly in Italy got sick in the first wave, so Italy would look better in a "vaccinated or recovered" statistic?

But apart from Italy, I would guess that higher vaccination rates in the elderly is really the explanation. In countries like the US or Israel, there are subgroups in the population who don't get vaccinated for ideological reasons (orthodox Jews in Israel). Probably including the elderly. In most of Europe, there are also people who don't get vaccinated, but it's rather people who feel they are not at risk. So mostly younger people.

Expand full comment

Hm, I confused Italy and France (Harzerkatze, too, I think). France is the outlier, Italy has (in age 80+) 97% with at least one vaccination.

Expand full comment

I don't have a good (or any!) answer to your question but it is a good question nonetheless. I've been noticing the same thing for a few weeks and particularly the difference between the UK and the US.

The third wave in the UK has about an eighth of the death rate compared with the second, but in the US the death rate is getting on for half that of the second wave. The differences in vaccination rates aren't big enough to account for more than a tiny fraction of that so as you say "What's up with that"?

Expand full comment

How sure are you that the differences in vaccination rates aren't enough? Most deaths are in the older age brackets, where the difference in vaccination rates isn't so very small.

Let's do some super-crude back-of-envelope calculations. Let's say that people below 15 never get COVID-19 badly enough to appear in the statistics at all (this is very untrue during school terms, but it's holiday time right now); then let's divide the remaining population into 15-50 (50% of population, 1x risk), 50-70 (30% of population, 5x risk), 70+ (20% of population, 25x risk) where the "risk" figures are Pr(death|infected badly enough to get noticed). And let's suppose that vaccination makes you 4x less likely to get infected badly enough to get noticed, and 4x less likely to die conditional on that happening. I'm assuming equal risk of being infected badly enough to get noticed (without vaccination) across age groups; older people are more likely to have worse symptoms but I think also interacting less and so less likely to get infected at all. And let's suppose that our baseline young+unvaxed population has a 1/300 chance of death given infection sufficient to get noticed in the statistics. (This figure doesn't actually matter for the comparison between UK and US. It's just there to make a later calculation less confusing.) To be clear, all these numbers are basically made up; I hope they're in something like the right ballpark but claim no more than that.

Now, what are the vaccination rates? In the US it's something like 50%, 75%, 80% for the three groups. In the UK it's something like 60%, 90%, 95%.

So, let's suppose we have 10k people. Of these 5k, 3k, 2k are in our three age categories. In the US, that's 2500+2500, 2250+750, 1600+400 vaxed/unvaxed, which means infections proportional to 625+2500, 563+750, 400+400, which means deaths proportional to (156+2500, 703+3750, 2500+10000)/300, so deaths/infections proportional to 65/5238 or about 1.25%. (This actually turns out to be the right order of magnitude, which is more than this crude calculation deserves.)

In the UK we get 3000+2000, 2700+300, 1900+100 vaxed+unvaxed in our three age groups, so infections proportional to 750+2000, 675+300, 475+100, so deaths proportional to (188+2000, 844+1500, 2969+2500)/300, so deaths/infections proportional to 33/4300, or about 0.77%, so roughly a 1.6x reduction compared with the US. (The deaths/infections figures depend on that 1/300 guess above but the US-to-UK ratio doesn't. Of course all the numbers depend on a bunch of other equally crude guesses.)

That's certainly not the same as 1/8 versus 1/2. But it's surely "more than a tiny fraction".

(Final reminder that all the numbers above are basically made up and just meant to give a rough idea of how big the effect of vaccination rates might be.)
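For anyone who wants to fiddle with the guesses, here's the same back-of-envelope model in Python (all the parameters are the made-up ones from above; swap in your own):

    def crude_cfr(group_sizes, risks, vax_rates,
                  vax_infection_factor=4, vax_death_factor=4, base_ifr=1 / 300):
        # Deaths per "noticed" infection for a population split into age groups.
        infections = deaths = 0.0
        for size, risk, vax in zip(group_sizes, risks, vax_rates):
            vaxed, unvaxed = size * vax, size * (1 - vax)
            inf_v = vaxed / vax_infection_factor  # vaccinated: 4x fewer noticed infections
            inf_u = unvaxed
            infections += inf_v + inf_u
            deaths += (inf_v * risk / vax_death_factor + inf_u * risk) * base_ifr
        return deaths / infections

    groups = [5000, 3000, 2000]   # 15-50, 50-70, 70+ out of 10k people
    risks = [1, 5, 25]            # relative Pr(death | noticed infection)
    us = crude_cfr(groups, risks, [0.50, 0.75, 0.80])
    uk = crude_cfr(groups, risks, [0.60, 0.90, 0.95])
    print(f"US ~{us:.2%}, UK ~{uk:.2%}, ratio ~{us / uk:.1f}x")  # ~1.25%, ~0.78%, ~1.6x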

Expand full comment

Fair point - I'm very happy with your guesstimates; they seem good to me. And the result is surprising (to me) but convincing. My 1/8 and 1/2 were also very approximate, so your reasoning would explain quite a lot - as in a half or more - of the discrepancy.

Thanks for the thoughtful reply.

Expand full comment

I don't think the US looks that different from Greece, but I'm just eyeballing the graphs. I think it would be better to do a scatter plot of all Western countries on your two dimensions, to get an overview, instead of looking at individual countries. Or maybe breaking it down by US state would give some insights, who knows ...

Expand full comment

"Why is the US the one western country where hospitalisations rise faster than instances?"

Perhaps it has something to do with triage standards versus available beds?

According to Wikipedia, as of 2017, the US ranked third (behind Hong Kong and Australia) in hospital beds per thousand people.

So perhaps the capacity is there such that US doctors are able to admit people who, with the same symptoms/severity, would be sent home for lack of space in those other countries?

Expand full comment

From what I see, no country in Europe is near its hospital bed limit right now, so I would not expect patients to be turned away any more than in the US. And since in Europe you do not pay for your hospital stay yourself, fewer people will be willing to skip the hospital if they can.

Expand full comment

That might explain some of the difference in hospitalisation rates but not death rates.

Expand full comment

The explanation is likely to be in testing.

You aren't comparing deaths and cases, you are comparing deaths and discovered cases.

It is in theory possible for it not to rely on testing, if Delta is more deadly (boosting deaths per case) and the elderly/vulnerable are more likely to be vaxed (decreasing deaths per case), with the gap between vaccination rates of the elderly/vulnerable and everyone else differing between countries.

The last possible factor I can imagine is that comparing to the last peak creates some weirdness, but I don't have a clear model of how that weirdness would look.

Expand full comment

There is some evidence in favor of this: the US also seems to have the largest fraction of tests that are positive. This suggests that people are less likely to get tested if they are unsure whether they have COVID, and so fewer mild cases will be detected.

My own model of this: The fraction of the population that is "Definitely Not" getting the vaccine is larger in the US than in Europe. Nate Silver says it's 15% of the US and I would guess <5% in most European countries. [1] This group accounts for maybe half of the current wave - there are also some "Wait and See" unvaccinated people, and a small amount of transmission among vaccinated people. "Definitely Not" people are mostly dismissive of COVID in general, and so won't get tested unless they are hospitalized. This significantly decreases the fraction of cases that are confirmed by testing.

[1] https://fivethirtyeight.com/features/unvaccinated-america-in-5-charts/

Expand full comment

Thank you for your reply!

I thought about testing as a reason, too, but it does not really convince me.

According to https://ourworldindata.org/coronavirus-testing#how-many-tests-are-performed-each-day, the US hasn't greatly changed the number of tests it does: the peak was at 5.5 per 1,000, and now they are at 4, so 72% of the max.

The UK was at a max of 19.1 last wave and is now at 12, so 62% of their previous wave.

Germany peaked at 2.8 and is now at 1.5, or 53% of the last peak. Italy is at 4.0 compared to 5.4, so about 74% of the tests of the last wave, almost exactly the proportion of the US.

France actually tests more now than they did before, 11 compared to 7.3, or 150%. And Israel is at 15.6 compared to 12.6, so 123%.

But I see no consistent pattern where countries with a better ratio of incidences to hospitalisations have either increased or decreased testing.

Regarding elderly vs. others, I would not expect the ratios to be so different between countries, all countries started with vaccinating the elderly and then switched to access for all.

I have read the theory that it is because the US has so many overweight and thus so many at-risk people, but the difference in hospitalisations is too great for that to be the reason. Unless overweight people generally refuse to be vaccinated, having 50% of the people vaccinated should decrease hospitalisations compared to a time when they were not. The number of at-risk people would affect comparing hospitalisation rates between countries, but not hospitalisations in a country now versus before vaccinations.

Expand full comment

Hi Scott, I'm a long time lurker and first time poster. Now that I have a stable income I would like to subscribe, but I see that the only payment option is CC, which I don't have. Is there any way to subscribe via Paypal, iDeal, or something similar? Or plans to add them in the future? Thanks!

Expand full comment

PayPal offers you a virtual credit card number that should work

Expand full comment

Thanks, that is unfortunately not yet rolled out on my account, but I will keep an eye on it for the future.

Expand full comment

Since Substack allows gifting subscriptions, it might be possible to find a trusted third party that you pay with PayPal and that then buys a subscription for you.

Maybe Scott could endorse some people in various regions that are trusted to act as proxies in this way. The proxy should be fairly safe if they accept Paypal payments through the "Friends & Family" option (NOT the "Goods & Services" option), because these kinds of payments cannot be disputed or reversed. Similar for Bitcoin payments.

Expand full comment

A debit Visa card should work

Expand full comment

No idea if the payment mechanisms work this way, but could you buy a one-off pre-loaded CC and use that?

Expand full comment

Thanks, worth a try!

Expand full comment

PayPal itself is never going to be a thing on Substack. PayPal is part of the high-status ecosystem that shuns ideological untouchables and forces other companies to shun them by threatening to withhold its services. Substack is in blatant defiance of that ecosystem - the whole business model of Substack is "agree to host untouchables, and you can break into the otherwise-network-effected social media market, *because the untouchables have been banned from everywhere else*" - so PayPal et al. don't want anything to do with it.

Expand full comment

Not sure if you just missed the whole OnlyFans debacle, but Visa and MasterCard are very much part of the set of payment processors who withhold service from some content producers. They're not at all uniquely content-agnostic in comparison to PayPal.

Expand full comment

Adult services are a bit of a different beast with respect to visa/mastercard, as there are well funded organizations deliberately and specifically putting pressure on the payment processors to shut down pornography/sex-work. “Exodus Cry” is one such organization that gets brought up.

Expand full comment

You are a scholar! Well said.

Finally it makes sense to me why Substack will be adding support for censorship-resistant money:

https://www.prnewswire.com/news-releases/substack-is-now-accepting-bitcoin-payments-on-the-lightning-network-powered-by-bitcoin-payment-processor-opennode-301360039.html

Expand full comment

Can you use probabilistic reasoning and Nick Bostrom's observation selection theory (https://www.anthropic-principle.com/q=book/chapter_10/) to argue against supernatural Hells and Heavens? If all people really went to eternal or extremely long-lasting Hells or Heavens after death, then almost all conscious experiences would be had in those hellish or heavenly states of mind. Yet, what we observe are regular lives on Earth, so it's much more likely that the reference class from which our observations are taken contains mostly regular lives on Earth.

To put it more clearly, let's assume there's a bag full of 1,000,000 balls, and you don't know how many of the balls are red and how many are blue. You pick one ball at random, and it's blue. What is more likely: the bag contains mostly blue balls or the bag contains one blue ball and 999,999 red balls and you happened to pick the only blue ball? Because the latter is equivalent to us picking the rare regular Earth life observation from all observations that include eternal lives in Heaven and Hell. So eternal Hell and Heaven almost certainly aren't real.
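If it helps, here's the same ball comparison as a toy Bayes calculation in Python (the two hypotheses and the 50/50 prior are just illustrative choices):

    N = 1_000_000  # balls in the bag

    def p_mostly_blue_given_blue(prior=0.5, blue_if_mostly=999_999, blue_if_rare=1):
        # P("bag is mostly blue" | you drew a blue ball), by Bayes' rule.
        like_mostly = blue_if_mostly / N   # P(blue | mostly blue)
        like_rare = blue_if_rare / N       # P(blue | only one blue ball)
        return prior * like_mostly / (prior * like_mostly + (1 - prior) * like_rare)

    print(p_mostly_blue_given_blue())  # ~0.999999: drawing blue overwhelmingly favours "mostly blue"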

Expand full comment

The problem with everything based on the self-sampling assumption (eg the Doomsday hypothesis, the simulation hypothesis, the big alien theory, etc) is that I have yet to hear a good reason to believe that I am a random sample from any distribution. The only truly defensible reference class is myself, which has a population of 1.

Expand full comment

On the other hand, you could argue "all the swans I have ever seen, and that anyone else has ever seen, have been white. Hence, there is no such thing as black swans".

Except there are; people just hadn't gone to the places where black swans exist yet.

So suppose there are no eternal Heaven/Hell, how about non-eternal ones?

Expand full comment

To be fair, white swans are spread across many continents and black ones are unique to Australia - the ratios between "humans currently alive" and "all humans who have died in the past" are skewed the other way - it'd be like thinking all swans are black.

Expand full comment

Like the Doomsday Argument ( https://en.wikipedia.org/wiki/Doomsday_argument ), I think this is technically correct as far as it goes, but stupidly ignores the overwhelming majority of available evidence.

If "how many humans were born before me?" was literally the ONLY piece of evidence you could use to estimate the total number of humans that will ever exist, then guessing you are the median human is probably the best strategy you can use.

But given all the other evidence you can observe about how the universe works, this is an extremely stupid strategy. With all of your life experience, there are vastly better strategies you could use for estimating the number of future human births.

Similarly, if "am I in heaven/hell RIGHT NOW?" was literally the only piece of evidence you could use to guess the answer to the question "do heaven/hell exist?", then guessing "no" in the cases where you are not presently in heaven/hell would probably be about as good as you could do.

But if you are allowed to use all your observations from your entire life to answer that question, there are probably vastly better strategies available to you.

Expand full comment

Learning you are the nth person updates you about as far in the direction of a short-lived world as learning you exist updates you toward a long-lived world.

Let us say for simplicity that there are N possible people and that N is so large that we can ignore the probability of someone being born twice.

Then learning that person X (one of the N possible people) exists at some point in universe U makes you favour hypotheses where universe U has more people. To be exact, the hypothesis "universe U has M people" assigns probability ~M/N to the event "person X exists at some point in universe U".

Given that you exist in universe U, the probability that you are the K'th person depends on the number of people M: if M < K the probability is clearly 0, and if M >= K the probability is clearly 1/M (remember we are given that you exist at some point, and that you only exist once).

Thus the hypothesis "universe U has M people" assign probability 1/N to the proposition "Person X exists in univer U at some point and in particular is the K'th person to exist" for all M>K.

Therefore you shouldn't change your prior, other than discarding the hypotheses where fewer than K people live.
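A quick numeric check of that cancellation (N, the candidate values of M, and the birth rank K below are arbitrary illustrative choices):

    N = 1000  # possible people
    K = 10    # you learn you are the 10th person to exist

    def posterior(prior_over_M):
        # P(M | you exist and are the K'th person). The likelihood under
        # "M people exist" is (M/N) * (1/M) = 1/N whenever M >= K, else 0.
        unnorm = {M: p * (1 / N if M >= K else 0.0) for M, p in prior_over_M.items()}
        total = sum(unnorm.values())
        return {M: w / total for M, w in unnorm.items()}

    print(posterior({5: 0.25, 20: 0.25, 100: 0.25, 1000: 0.25}))
    # -> M=5 is ruled out; 20, 100 and 1000 keep equal weight, i.e. the prior
    #    is unchanged apart from discarding worlds with fewer than K people.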

Expand full comment

I think of all the replies I've gotten, yours is probably nearest to the mark. And good on you for noticing the similarities to the Doomsday Argument. Bostrom wrote in his book Anthropic Bias that the Doomsday Argument really is one piece of evidence, but you have to add it to all the prior knowledge you have about the world and adjust your conclusion appropriately. If we had already developed a stable, powerful superintelligence that would take care of the future of humanity and give humans technological immortality, then the Doomsday Argument wouldn't matter; our fate would be sealed anyway. As things stand, it matters somewhat, but no one knows exactly how much. Thank you for the reply.

Expand full comment

This might be easier to see if you replace "heaven/hell" with some other location, like "China". If you want to guess whether China exists, and the ONLY thing you are allowed to know is whether you are in China right now, then guessing "no" unless you are actually in China is probably the best available strategy.

But in real life, there are much better strategies you could use.

Expand full comment

This makes sense to me and is related to my other comment on the thread. For similar reasons, I think people are reborn/repeatable.

Expand full comment

I don't know how you could possibly come to any conclusion at all on this basis until all of your conscious experiences are for sure over. Your sampling event isn't complete yet. You have no idea right now whether it will end up being predominantly earthly experiences of the sort we have written records of or if it will end up being predominantly some sort of afterlife from which you can't communicate to the minority of others still having regular earthly experiences.

To use your own example, consider now that each ball is 0.00001% red and 99.99999% blue, and that only people who are currently looking at the red part of their ball have any means of communicating with each other. Then the fact that you are seeing red right now tells you nothing about how much of your ball is blue.

Expand full comment

This thread super-confuses me. How does one know what reference class to pick - observed moments vs. lives? Couldn't one go down even lower and consider individual qualia, or something like that?

Maybe one should have a look at *all* reference classes one is part of, and collect the evidence from each?

Expand full comment

Say a group of children were talking. A child says: if children become adults, most of our lives will be spent as adults. Then if you took a random moment from our lives, it would most likely be from when we are adults. Since this is a random moment from our lives and we are children, we will most likely never become adults. The children-becoming-adults hypothesis is most likely false.

(Not arguing that heaven and hell are real, by the way. If a church says you have to give us money or you'll burn forever, that's kind of an obvious scam. The main reason a lot of people don't see that it's an obvious scam is that they live in a community of people insisting it is not a scam.)

Expand full comment

No.

Both the classic hell/heaven hypothesis and the normal hypothesis say that 100% of humans born before 2021 are born on Earth. You being born on Earth doesn't discriminate between these hypotheses.

Hell/heaven hypotheses are quite unlikely due to Occam's Razor (or in some cases due to incoherence).

Expand full comment

And to clarify, by "Earth life" I meant the portion of our life we live on Earth since presumably Heaven and Hell would be lived in some other reality.

Expand full comment

It depends on whether the reference class contains whole lives or observer moments. I think it's more likely that this observer moment is chosen from all observer moments, in which case my argument stands because if eternal Hell and Heaven were real, approximately 100%, or 99.999...%, of all observer moments would be had in the afterlife since it would be eternal.

Expand full comment

But 100% of observer moments wondering if Hell and Heaven are real would take place on Earth because everyone in Hell/Heaven knows the answer (they also don't have to wonder whether Earth is real because they remember it, so there's no symmetrical question).

Expand full comment

I do experience other forms of observer moments though

Expand full comment
Comment deleted
Expand full comment

Unbounded suffering can be a "net positive", huh?

You sound like you're okay with being *personally* tortured for eternity, which makes me think you're not taking this hypothetical very seriously. You haven't even mentioned a threshold; it kind of sounds like you'd be satisfied if *everyone* went to hell.

Expand full comment
Comment deleted
Expand full comment

I guess this means you consider the suffering of "enemies" to be more valuable than your own well being. I hope this opinion is very unusual. I have no intellectual desire for my enemies to suffer, I just want their bad deeds to stop. Sometimes I get an emotional desire for revenge, but luckily, it fades with time.

How do you conclude that 60-80% of everyone deserves hell?

Expand full comment

(Re-reading, it sounds like 60-80% is sort of like a lower bound)

Expand full comment

I think if they'd mandated that everyone get vaccinated for Covid right at the beginning (circa January/February 2021), then we'd probably have a much higher vaccination rate for it than we do now. Some folks would obviously resist it, but I think most people's reaction would be "Oh, the government is requiring me to get the vaccine, it must be really important" and get it. Same reason people follow all kinds of public orders and such even if the actual enforcement side of that is pretty limited.

But saying, "We recommend you get the vaccine, but aren't requiring it" kind of sends the message that it's a personal choice above all, rather than a public health issue in the same way that kids have to get vaccinated before attending school.

On a lighter note, I saw the "Shang Chi" movie in theaters today. It amuses me to think of all the Ancient Secret Organizations in the MCU as being like that Simpsons joke about Mr Burns having every disease known to man, but they're all stuffed together and none of them can really get a foothold on him. In any case, fun movie, and it actually handles the Third Act stuff pretty good (not always a given with MCU movies).

So here's a weird thought. Do you think at some point, podcast advertisements will be able to take samples of your voice recorded when you use the voice assistant built in, and then emulate your own voice reading them to you? Or would that be too weird, or a violation of your IP rights?

Expand full comment

I think if Trump had signaled he would do so in the closing weeks of his administration, and then gotten behind the idea and pushed, it might have made a difference. The vaccines have become a political issue, at least for many, and if Trump had been talking up mandatory vaccines as soon as they're available as he was on his way out, that might not have happened.

Expand full comment

It certainly doesn't help that several prominent Democrats made it a point to be very skeptical of a "Trump" vaccine in the weeks and months leading up to the election. It was pure politics, and as soon as Biden was elected the narrative switched to pro-vaccine among Democrats. Trump himself was consistently pro-vaccine, considering it the culmination of a very good COVID emergency plan. I'm quite curious who, if anyone, would be the vaccine skeptics right now if Trump had won reelection. I suspect somewhat fewer Republicans would be skeptical and a similar number of Democrats would be skeptical instead, but I readily admit I have very little to back that assertion up.

Expand full comment

I'm pretty sure the left (what I saw of it) was in favor of getting vaccinated as soon as vaccines were available. They just didn't talk as though Trump deserved any credit for the vaccine.

Expand full comment

Here's Politifact with a group of quotes. They rated it "false" that Biden and Harris were concerned about the vaccines themselves, but that's not how I hear these quotes. They take it much further than "Trump didn't deserve the credit", too.

https://www.politifact.com/factchecks/2021/jul/23/tiktok-posts/biden-harris-doubted-trump-covid-19-vaccines-not-v/

I don't know how rank and file Democrats/Left felt about the vaccines, but their leaders weren't exactly inspiring great confidence in it.

Expand full comment

"If the public health professionals, if Dr. Fauci, if the doctors tell us that we should take it, I’ll be the first in line to take it. Absolutely. But if Donald Trump tells us that we should take it, I’m not taking it."

Okay, that's no way to unify the country, and she didn't say what she would do in case BOTH Trump AND the 'public health professionals' say to take a vaccine, which of course they both did ... but surely she's saying she trusts 'public health professionals' over Trump, and not saying she "distrusts COVID-19 vaccines".

Biden's statement does sound more negative on vaccines, but he was first in line to get one (Dec. 21, with Trump still president) and actions speak louder than words.

Expand full comment

Sure, he was in favor of vaccines after the election, which is part of the problem. Until November 3, Trump was trying very hard to get the vaccines out to the public and to claim credit for them (which he deserves, as much as a president deserves credit for a program like that). Also until Nov 3, Biden was negative on them. Then after the election suddenly Biden is saying that we should all get vaccines and is first in line, despite the fact that nothing at all changed about the vaccines or how they were made and approved.

You can claim that Trump was too bullish on getting the vaccines out early, but you can't do so and then try to sweep in and take credit for them once Trump loses the election. That smacks of opportunism and false concern.

Expand full comment

It's a fair (and interesting!) question. I think under a Trump presidency, there would still have been enough sources of information that Democrats trust pushing the vaccines that the left consensus would still have ended up strongly endorsing vaccination, though probably with a small but noticeable share of the anti-GMO crowd going anti-covid-vax. But of course there's no way to prove it.

Likewise, my theory that if Trump had been even more stridently pro-vaccine that it might have moved the needle may well not be true. It's possible that once he lost and a large chunk of the right decided the election had been stolen, no establishment information source was going to be trusted.

Expand full comment

My impression is that a lot of people don't like the sound of their own voice.

Expand full comment

I disagree on vaccine mandates. If you _are_ going to bring in vaccine mandates, you want to do it as late as possible in the game, not as early as possible, to take maximal advantage of social pressure.

You want to wait until everyone who is going to get it voluntarily has already got it. That includes both the super-keen folks who'll be front of the line, as well as a bunch of less keen folks who might be hesitant or lazy at first, but who will join in once they see that everyone else is getting it too.

Then, once you've got as many people through the door as possible by purely passive means, it's time to start laying out the carrots and sticks. You want to lay out the stick when you'll only have to apply it to 5% of the population, not to 40%.

Expand full comment

The technology for podcast advertisements to do that will certainly exist, but I'm not sure there's any advantage to it. It's somewhat implicit in the fact that you're listening to the podcast at all that you trust whoever is broadcasting in some way, so seemingly paid placements made by the hosts themselves are about as good as you're going to get. In many cases, I would expect you trust the podcast host more than yourself, which is why you're listening in the first place. If it's about pop culture, they know more about it than you do. If it's about sports, they know more about the sport than you do. And listening to your own voice is likely to creep out a lot of people and probably have the opposite effect. You know you didn't really say those things and do not hold some positive opinion to make a recommendation in favor of a product you've never actually used.

Expand full comment

I am very pro-vaccine, but making it mandatory for everyone (not just specific occupations or activities) seems a bit much. One big problem now appears to be that governments are conflating what should be three different things: what's approved, what's recommended and what's mandatory. We see this in media stories about concerns that approving booster shots will send the wrong signal about what counts as fully vaccinated.

With advertising using voice emulation, a bigger violation would be advertisers using your voice to pitch things to your friends. This isn't too different from what social media companies have already tried to do.

Expand full comment

The old Iron Curtain shibboleth is "Everything not Forbidden is Mandatory, and everything not Mandatory is Forbidden."

I think the govt. would have done 2X better if they had CHARGED MONEY FOR IT, or at least let private physicians and clinics have some supply for their patients. Things happen much faster when there is some money to be made and you (doctor or HMO) have an incentive to keep customer relationships.

Immediate screams come blaring from the Left about "BLACK AND BROWN PEOPLE $$$ BLAH BLAH." Fine. Designate 50-75% of the supply to be govt-controlled and administered in zero-cost settings. But remember that people may then value that product at exactly what they paid for it: ZERO.

I personally would have felt much better and more confident about vaccination if I could have paid somebody, some professional or company that would be around three months hence, say $200 for it. Because then I would have an implicit contract and some reasonable expectation of service delivery, and possibly somebody who was accountable if things went wrong.

I certainly would not have had to answer intrusive questions about my race and ethnicity just to be allowed to find out if I was eligible.

Regarding "messaging" about vaccines: Gosh, it is just too bad that the government shut down every other possible civic or community institution where people might talk to people they know and trust about things like vaccination, apparently a LIFE OR DEATH situiation. Work, Church, school, bars, barbershops, sporting events all shut down or severely restrictsd, so the only source of information is the government, or TV/radio news (utter fearmongering) or the Internet (largely the world's loudest a-holes, present company excepted.)

One final note, sorry this is so long - I have not heard a single peep from actual physicians about any of this, and I don't even know anybody who was vaccinated through their existing "PCP". It's like the "opioid crisis", where doctors have magically been whitewashed from the picture, as if they never had any responsibility to make judgements or give specific medical advice, or any responsibility for any outcomes. They are HEROES.

BR

Expand full comment

In Jan/Feb none of the vaccines were fully approved, and mandating vaccines that aren't fully approved seems like a bad idea, even for old/frail people, and especially for health care workers. It would have been reasonable for health care workers to strike over it.

Expand full comment

Well, that's technically correct, but the vaccines absolutely should have been approved back in 2020, and the FDA is responsible for many, many deaths.

Expand full comment

You can’t mandate something when there isn’t sufficient supply to meet that mandate.

Expand full comment

But you could be transparent with the public about this supply problem, and then you could mandate it in phases as your supply chain ramps up e.g. start by mandating that everyone over 70 gets vaccinated

Expand full comment

That's what I was thinking as well. I should have been clear in my original comment.

Expand full comment

Is an e-ink screen or paper better for the eyes than an LCD screen?

Asking, as I have some premature eye problems due to eye strain caused by heavy computer use.

And I am a programmer so I cannot just stop using it.

I am considering partially switching to paper or e-ink as part of a solution, but I have no idea whether that is likely to help.

Expand full comment

I had a Kindle (graphite model) with no backlight that I used for reading. It looked almost exactly like paper and ink. I read manga for several hours on it and the picture quality was excellent. I normally get eye strain from using computers, but not from reading books. The e-ink felt the same as reading a paperback book.

Expand full comment

I hope to hear some more about this.

I have been reading at night using a dimmed phone in night-mode, with only the text showing up in just-white-enough-that-I-don't-think-it's-grey, and it seems to be working, at least for short-term eyestrain.

Expand full comment

n=1 evidence: I've tried using greyscale on my macbook and android phone. It just renders the screen black and white. It has significantly reduced the strain on my eyes, and I think also improved the quality of my sleep.

Expand full comment

E-ink is incredible for comfort of reading. If you use it without a backlight it's literally programmable paper.

Note that e-ink is total garbage for anything requiring fast updates (scrolling, user interaction), and the readers themselves are excellent for prose but bad for technical reading / looking things up for reference. If you want to read lots of PDFs, definitely get an 8-inch, possibly a 10-inch one.

Expand full comment

Maybe you're straining to focus? If you wear glasses, you could get a special pair of "computer glasses" that are corrected for the exact distance between your eyes and the screen (which you can measure), so you have ideal vision at that distance. Single-vision glasses are usually a compromise between ideal correction up close and far away, and if you're at the bifocal stage, the problem with the usual "continuous" variation bifocals is that the exact correction you need lies in only a narrow strip of the lens, so you have to hold your head in exactly the right position for a long time, which is a strain.

Expand full comment

I've researched this. There's a small number of studies looking at it, but yes, e-ink does seem to strain the eyes less than an active screen (like LCD). Anecdotally, when I've used eReaders in the past I've felt less eye fatigue than when I've used an iPad, for instance.

Beyond this, though, a major contributor to eye strain is spending a long period of time at a fixed focal length. Sadly(?) there's currently no way around that other than varying the distance of the things you're looking at.

Expand full comment

I certainly find e-ink screens kinder for the eyes. I use my e-reader quite a lot, for many hours at a time. I don't have any non-anecdotal evidence though, and I don't really find LCD screens troubling either, so take my word with an appropriate amount of salt.

TL;DR: If you find paper easier on the eyes, I think e-ink is very much worth a try. But consider that your current LCD monitor might be going bad, and that modern screens might be better.

At the risk of explaining the obvious:

The light coming out of LCD screens can have both backlight-flicker and refresh rate-flicker, is polarized, backlit and often blue-shifted, all of which could reasonably be believed to cause eye-strain, especially in a dark room. Backlight flickering is probably the most noticeable, and it ironically gets worse if you turn the brightness down on most LCD displays, due to how PWM dimmers work (on screens with LED backlights). Older screens with fluorescent-tube backlights are generally terrible, though their flickering will be of a different kind.
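To put a rough number on the PWM point (the 240 Hz figure below is just an illustrative assumption, not a spec for any particular monitor): at a fixed PWM frequency, lower brightness means a lower duty cycle, so the backlight spends a larger fraction of every cycle completely dark.

```python
# Rough sketch: PWM dimming trades brightness for longer "off" gaps each cycle.
# The 240 Hz PWM frequency is an illustrative assumption, not a measured spec.

PWM_FREQUENCY_HZ = 240
period_ms = 1000 / PWM_FREQUENCY_HZ

for duty_cycle in (1.0, 0.5, 0.2):        # 100%, 50%, 20% brightness
    on_ms = period_ms * duty_cycle
    off_ms = period_ms - on_ms
    print(f"brightness {duty_cycle:4.0%}: on {on_ms:.2f} ms, "
          f"off {off_ms:.2f} ms per {period_ms:.2f} ms cycle")
```

At 100% duty cycle there is no off-gap at all, which is consistent with the observation further down the thread that most such displays flicker least at full brightness.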

As an LCD screen ages, the capacitors in the backlight power supply are often the first component to go bad. This can often make the flickering much worse before the screen goes black entirely. This is all to say that trying a new screen might be worthwhile. OLED displays don't have a backlight, though some still use different strobing effects for different reasons (often orders of magnitude faster than LCD backlights though), and many (all?) are polarized. There are also LCD screens available with non-PWM backlights (=non flickering) which might be worth trying.

E-ink displays are neither of those things. Instead they are essentially just ink behind glass, lit from the front like paper on a table. There is no polarization filter, no backlight (some have a built-in light for use in the dark only), no blueshift and no constant refreshing. Instead of constantly refreshing, the screen will passively hold the currently displayed content in the pixels indefinitely even if power is turned off (which it in fact is after each refresh). Then you only need to refresh the screen once the content changes, and only the pixels that changed. This greatly reduces power consumption, as long as the content is mostly static. Refreshes are very slow though, on the order of tenths of a second compared to only milliseconds for LCD displays, so moving or constantly changing content is out of the question (like a mouse pointer or scrolling text). They also suffer from a lot of ghosting, namely that pixels around the edges of changed areas get the wrong color, and stay the wrong color until individually addressed. This is solved in most applications by changing every pixel to black, white, black again, white again, and only then displaying the new content; or by displaying the new content, then inverting it, then inverting it back again. This resets the screen and removes any ghosting, but is annoying. My current e-reader does this full refresh every 10 pages or so, which is well before I notice any ghosting.
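A minimal sketch of the ghost-clearing refresh described above; the Display class is a hypothetical stand-in for illustration only, not any real e-reader's driver API, and the every-10-pages interval is just the behaviour described for the reader above.

```python
# Sketch of periodic full refreshes to clear e-ink ghosting.
# The Display class is a hypothetical stand-in, not a real driver API.

class Display:
    def fill(self, color):
        print(f"full-panel fill: {color}")

    def draw(self, page_image, partial):
        mode = "partial" if partial else "full"
        print(f"draw new page ({mode} refresh)")

PAGES_BETWEEN_FULL_REFRESH = 10   # assumed interval, as described above

def show_page(display, page_image, page_number):
    if page_number % PAGES_BETWEEN_FULL_REFRESH == 0:
        # Flash every pixel black, then white, a couple of times to reset
        # the pigment and wipe out accumulated ghosting.
        for _ in range(2):
            display.fill("black")
            display.fill("white")
    # Otherwise drive only the pixels that changed: fast, but ghosting builds up.
    display.draw(page_image, partial=True)

show_page(Display(), page_image="(page bitmap)", page_number=10)
```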

After all that though, they really are a lot like paper to read from.

There are several different E-ink versions, some better than others, though in this regard they are all the same. The nicer ones have very high effective resolutions, with a sort of hardware-antialiasing where the individual elements of the screen are several times smaller than the pixels, making the screen appear to have much higher resolution than it actually has (300 DPI +). Many have capacitive touch integrated. They can generally display 4, 8 or 16 levels of greyscale. Some have optional backlighting for reading in the dark (dimmable with flickering PWM).

I only have personal experience with e-ink from a few different e-readers. Because they are slow, low-power devices, the software they come with can be frustrating to use while searching for my next read, for example, but once I'm into a book I really like them. I don't know about other brands, but Kobo e-readers are Linux based, and thus quite hackable. There are also large, high resolution e-ink displays produced, though I don't know how you might get hold of one for use as a computer display. (Examples -- no affiliation, I have not tried their products: https://www.eink.com/product.html?type=productdetail&id=26 or http://www.dasungtech.com/)

Hope any of that helps!

Expand full comment

What sort of eye problems? And how long have you been working on the computer before these problems arose?

I'm not an expert but I doubt that paper or e-ink will make a difference if your eyes are physically deteriorating. It may be that using a bright screen in a dark room has a negative effect but if so, you should just make the room brighter and your screen darker.

Expand full comment

> What sort of eye problems?

One eye feels sore/dry. Also muscles around it (and/or on cheek).

No diagnosis so far, doctors that I visited so far disagree completely.

> And how long have you been working on the computer before these problems arose?

20 years; the problem is now continuous, which is what scares me.

The issue is not too problematic right now, I mostly worry that it is only the beginning.

Expand full comment

Use eye drops with hyaluronan liberally. I have dry eyes in the winter constantly, it's just something you have to live with.

You can try increasing air humidity as well.

Expand full comment

Okay, I thought you might have vision problems but eye strain should be easier to mitigate. Were any of the doctors you visited optometrists?

Expand full comment

Yes, and so far they had zero agreement

Expand full comment

Unfortunately, dimming most backlit displays will make them flicker more, due to how PWM dimming is implemented. Such displays will flicker the least at full brightness, so my recommendation would rather be to make both your room and screen bright.

Expand full comment

The quality of the PWM driver circuits is one of the big "invisible differentiators" between cost-effective products: you can cut a lot of components out of a product's bill-of-materials without changing any of the marketing bullet points by skimping in this area.

Expand full comment

Are we back to "Chronological" for open threads, or did Scott forget to set it to "New First" for the past two?

Expand full comment

“New first” is the substack default afaik

Expand full comment

I have been wondering how much impact a general fear of the "dawn of the nation/civilization" sort has on interest rates.

If I assume that in the next 50 years, in my country there will be a terrible war, some hyperinflation, some X-risk event, or something else that basically makes my possessions void, then this should set a lower limit on the interest rates that I am willing to accept for a loan. For 50 years, I should never accept interest rates below 1.4%. For 20 years, I should never accept less than 3.5%.

Does this have an effect, or is it beyond the time horizon of people? If interest rates are close to zero, then implicitly people assume that their bonds will still have value in 50-100 years. This may have been unreasonable in the 60s and 70s, with two world wars just a few decades earlier and a cold war looming. But it may be more in line with people's beliefs nowadays, reasonable or not. I have never heard this argument come up in any discussion of why interest rates are low, though. Any opinions?

Expand full comment

The current answer to your question is - basically not at all. The longest inflation linked government debt right now prices real yields of: -0.30% for 30yr US, -1.5% for 30yr Germany, -2.30% for 50yr UK. In general, long dated inflation is priced around 2% in the major developed countries. There are 100 year nominal bonds which trade in Austria and some very highly rated corporates like elite Universities and they trade something like 40 basis points cheap to the 30yr paper.

Lending rates on bonds are composed of the risk free real interest rate, the expected inflation rate, and a credit spread to compensate for the probability of default. For long dated government bonds, the expected inflation rate is observable from inflation linked bonds. The credit spread is kind of observable through credit default swaps, but these are mostly a fiction since the presumption is that if the government defaults then your bank counterparty will not be able to pay you off.
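A minimal sketch of that decomposition, using roughly the 30yr US numbers quoted above; the zero credit spread is an assumption for a "risk-free" sovereign, not an observed figure:

```python
# Nominal yield ≈ real risk-free yield + expected (breakeven) inflation + credit spread.
# Figures are illustrative, loosely based on the levels quoted above.

real_yield_30y      = -0.0030   # ~ -0.30% real yield, 30yr US inflation-linked
breakeven_inflation =  0.0200   # ~ 2% long-dated inflation pricing
credit_spread       =  0.0000   # assumed ~0 for a "risk-free" sovereign

implied_nominal_yield = real_yield_30y + breakeven_inflation + credit_spread
print(f"implied 30yr nominal yield ≈ {implied_nominal_yield:.2%}")   # ≈ 1.70%
```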

Pre 2008 real rates in all of these places were much higher, something like +2% as opposed to negative now. I'll claim that perceived credit risk has not improved in the last fifteen years so this is a genuine repricing of the long dated real risk free rate. The explanation for why this happened is partly supply and demand (central bank buying from QE and demand from pension funds for long dated liability management) and partly the secular trend of perceived lower neutral central bank rates (a long and controversial topic that can probably be summarized as inflation failing to reach target despite policy interest rates remaining low).

The theoretical issues you bring up are interesting and are more relevant in emerging markets than in developed markets. In general EM bonds will be dominated by credit risk where markets will definitely price default risk and, in general, when bad things happen interest rates will go much higher. Developed market bonds trade as more of a risk free instrument so when bad things happen interest rates there will go lower.

The academic idea of where long dated real risk free rates should trade is kind of fun to think about since the compounding effects can become so large. For a perpetual risk free bond it is generally claimed that it should trade at the expected long term real growth rate of the economy which would leave your purchasing power unchanged in terms of a share of total output. If you knew however that an asteroid would destroy the earth in exactly ten years then presumably there would be a huge shift in the time preference of consumption and real interest rates would go much higher, reaching infinity as the asteroid strike date approached. When you add in credit risk that should push long dated interest rates higher still. And yet, long dated real rates are quite negative.

Expand full comment

That's very insightful, thanks!

I wonder whether this just reflects a general belief that nothing bad will happen in the next 30/50/100 years, or whether the markets can't account for such events. It's the extreme end of the spectrum of "black swan events" that Nassim Taleb describes. But perhaps if black swans are too rare then they are just impossible (or impractical) to exploit.

If it's the general belief, I find it somewhat comforting that the markets believe we will still be OK in 50 years.

Expand full comment

I'd say the tightness of long dated credit spreads and the richness of equities is probably a more direct barometer of how sanguine markets are about long term risks. My intuition though would be to not take too much comfort in that as I consider markets to be pretty myopic. Occasionally they will price in longer dated themes (dwindling valuations of oil companies the last few years would be an example) but I'd say macro markets don't pay much attention to events outside a year or so. Take Taiwan now for example, where I think anyone would say geopolitical risks there are quite elevated relative to history but fx volatility remains very low. One should also take into account that many things which seem on the surface like a big deal, such as foreign wars, are pretty inconsequential for domestic financial markets.

Expand full comment

I'll take a risk and make a prediction with no past experience in "dawn of civilization" events. The nominal interest rate would rise while inflation would also rise. This means that you would need a lot of money to compensate for not using it today and the danger of losing it in the future. But the value of the money would decrease as the catastrophe approaches. The real rate (nominal - inflation) would probably be high as you need to compensate the lender for the delay and risk, and also because of the expected capital depreciation.

Expand full comment

>If I assume that in the next 50 years, in my country there will be a terrible war, some hyperinflation, some X-risk event, or something else that basically makes my possessions void, then this should set a lower limit on the interest rates that I am a willing to accept for a loan. For 50 years, I should never accept interest rates below 1.4%. For 20 years, I should never accept less than 3.5%.

Not seeing why "I give you $100 this year, you give me back $110 in 10 years" is forbidden by "all debts will be voided in 20 years". And for that matter, a usurous loan with 50% compounding interest is still a bad deal if the debtor simply lets it pile up with no repayments until it's voided.

Not sure why you refer to these sort of events as "dawn"s, either - aren't they more like dusks?

Expand full comment

Sorry, "dusk" instead of "dawn". Non-native speaker here.

Well, if all debts and all money are voided in 20 years, then money will be inherently worth less in 10 years than it is now. If the money becomes void in 20 years, then in 10 years people will be much less willing to give you a house in exchange for money. In other words, the price of the house (and of everything else) should be going up.

In the most extreme scenario, if the earth were going to end within a year, then we wouldn't cling to money. We would want to trade it for things that we can consume right now.

Expand full comment

>Well, if all debts and all money are voided in 20 years, then money will be inherently worth less in 10 years than it is now. If the money becomes void in 20 years, then in 10 years people will be much less willing to give you a house in exchange for money. In other words, the price of the house (and of everything else) should be going up.

The specifics of the doomy event matter quite a lot to the relative value of things. Debts are fairly likely to get the paperwork lost or ignored in most such events, but I could see paper money (as in, physical cash) surviving some of them. Real estate could potentially get its value voided as well, either via physical destruction (nuked/flooded by rising sea levels/etc.), or indirectly via the same sort of "lost the paperwork" issue (you can't carry an ordinary house around with you - ownership of land is a legal fiction reliant on societal record-keeping).

"The price of everything should be going up" is a prediction of inflation (which is already a thing), and while certainly hyperinflation will happen if everyone thinks it will, I'm not seeing where you're getting the "interest rates must be enough to double a loan before the catastrophe" numbers from.

Expand full comment

Also, from a boring legal-specifics point of view, a lot of debt has a 'force majeure' clause that kicks in in any sort of doom-like scenario anyway, so it is effectively unenforceable regardless of the other issues raised by, say, being nuked.

Expand full comment

I agree that there are different kinds of doom, and that they might wipe out different types of assets. Even in the world wars or more recently the war in Syria, not all people in the devastated countries lost everything. But a lot of people surely did. I don't think a lot of Germans maintained their wealth throughout both world wars and the hyperinflation in between.

Here is how I got the numbers: if there is a 50% chance that the value of a loan goes to zero, then in the other case your loan must double its value so that in expectation it breaks even. So if there is a 50% chance for the world to break after x years, then the interest rate on my asset should be enough to double its value in x years, conditional on a non-broken world. (In fact, it should double after compensating for inflation.)
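A minimal sketch checking that arithmetic, under the stated assumption of a 50% chance the asset becomes worthless within the horizon:

```python
# Break-even rate when there is a 50% chance the loan becomes worthless:
# the surviving branch must double in value, i.e. (1 + r)**years = 2,
# so r = 2**(1/years) - 1.

for years in (20, 50):
    r = 2 ** (1 / years) - 1
    print(f"{years} years: minimum acceptable rate ≈ {r:.2%}")

# Prints ≈ 3.53% for 20 years and ≈ 1.40% for 50 years, matching the
# figures quoted at the top of this thread (before adjusting for inflation).
```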

Expand full comment

Ah. The 50%-chance bit is a key part which you apparently left out of the original post.

That logic works for assets that are intended to be kept through a probabilistic doom event, yes. It doesn't hold for a certain doom event (a debt that can only be collected the day after the end of the world is of zero value regardless of its interest rate, as indeed your maths shows) or for assets that are intended to be redeemed prior to the doom event.

There's also the question of whether you're going to survive the doom event - monetary gains and losses are moot in scenarios in which you're dead. I've raised this (or rather, the similar mootness of Internet reputation in such scenarios) in regard to both Scott predictions (https://astralcodexten.substack.com/p/mantic-monday-predictions-for-2021/comments#comment-1834123) and Metaculus predictions (https://astralcodexten.substack.com/p/highlights-from-the-comments-on-acemoglu/comments#comment-2549629).

Expand full comment

Thanks, those are really very similar thoughts! Yes, monetary gains and losses are moot in that case. Except for regretting not having spent all that money when we had the chance.

Expand full comment

It seems like in some cases, it's obviously correct to reason using the anthropic principle, and other times it's way too powerful an argument and seems equivalent to throwing up your hands and saying "Because God willed it".

For example, why are we on earth and not in space? Space isn't conducive to life evolving. That seems like straightforward, uncontroversial anthropic reasoning.

Which events in evolutionary history are fantastically unlikely and critical to intelligent life? All of them. Every step in our evolution is very unlikely and we are in a universe / Everett branch where we win the lottery billions of times in a row. This seems way too powerful of an explanation and leaves basically no reason to look for a general mechanism like natural selection.

Can anyone point me to some good discussion / literature about how to think about under what conditions it's best to use anthropic reasoning? I've seen Nick Bostrom's book online but... it's really large and intimidating and I would be curious to know if it deals with this kind of question.

Expand full comment

Related to Carl's answer, we have absolutely no idea how many possible ways there are for intelligent life to arise. Given that we have been able to observe a fairly large chunk of space and see really only ourselves, and possibly some cetaceans or the like also on Earth, we know it's rare on a galactic scale, but we still have no idea how rare it may be conditioned on having a planet that experiences abiogenesis and is capable of supporting life for a few billion years. For all we know, there are four possible outcomes, one of which involves intelligent life after the period of time we've gone through. The fact that there are a combinatorially large number of paths to take and you only took one only tells you that specific path was unlikely, not that the outcome is unlikely. Consider some continuously valued dynamical system with only a few attractor states. It will eventually end up in one of them, even though there are literally infinitely many ways to get to any of them. Ending up in one specific state being unlikely is only the case if the system diverges under some boundary conditions.
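A toy illustration of the attractor point (the double-well system below is an arbitrary example, not a model of evolution): there are effectively infinitely many distinct trajectories, but essentially only two outcomes.

```python
# Toy dynamical system with two attractors: gradient descent on the
# double-well potential V(x) = (x**2 - 1)**2, whose minima sit at x = -1 and x = +1.
# Many different starting points and paths, but (almost) every run ends in
# one of just two states.

import random

def settle(x, dt=0.01, steps=5000):
    for _ in range(steps):
        x -= dt * 4 * x * (x**2 - 1)   # step along -dV/dx
    return round(x, 3)

random.seed(0)
outcomes = {settle(random.uniform(-2, 2)) for _ in range(1000)}
print(outcomes)   # expected: {-1.0, 1.0}
```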

Expand full comment

Yes that's true. If we were to observe that the galaxy is teeming with intelligent life that looks exactly like us, except they all have some number of fingers other than 5 on each hand, we would not say "aha! This confirms our belief that the evolution of intelligent life like ours is incredibly unlikely," although, technically, it would.

Expand full comment

Nobody knows whether our evolutionary history is unlikely or somewhat likely or even a certainty. The problem is that our ability to *imagine* an event in our history going some other way, and our never evolving, means approximately diddly squat. We can imagine all kinds of things that are completely impossible, and fail to imagine many things that actually exist, so our ability to imagine, or not imagine, tells us nothing at all about the probability of things.

What we would need to do is *observe* an evolutionary history that is different from ours, and then we could say "aha! At T + 150 million years, it's possible for evolution to go left instead of right, and then no humans" and if we saw enough of these histories we could start to assign a probability to those turns.

However, we have never done so. We have not observed *any* evolutionary history other than our own. We have never seen a planet, for example, on which life evolved but never advanced past algae mats, or coelenterates, or monkeys. We've seen a number of worlds on which life hasn't come about at all, but none on which life *has* come about but didn't take the same path. So the probability of our evolutionary history could be any number in the interval (0, 1].

Expand full comment

So one useful frame is to take an outside view.

In the first case you know a bunch about the universe; in particular, you know the probability of intelligent life appearing in space is small, and the probability of life appearing on earth is comparatively large. Now you learn that life exists somewhere. You place much more probability mass on it being on earth than in space.

In the other case you are trying to infer the probability of life occurring in a region from the fact that life occurred in that region in at least one Everett branch (or whatever framing you like). This only tells you the probability is non-zero; the evidence is consistent with every other probability, and with some assumptions about your Everett branches, the observation "there is intelligent life on earth in at least one Everett branch" is predicted equally well by any non-zero probability.
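A quick numerical sketch of that last point; the per-branch probabilities and the branch count below are arbitrary illustrative numbers:

```python
# P(life in at least one of N branches) = 1 - (1 - p)**N.
# For a huge N this is ~1 for any p > 0, so the observation
# "we exist in at least one branch" barely discriminates between values of p.

N = 10**12                          # arbitrarily large number of branches
for p in (1e-9, 1e-6, 1e-3):
    prob = 1 - (1 - p) ** N
    print(f"p = {p:g}: P(at least one branch with life) ≈ {prob:.6f}")
```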

The reason we think the probability is not very small is due to looking at the world and taking in evidence other than us existing.

Expand full comment

A contact hygiene failure could explain a cluster of cases at an outdoor meetup, and I just feel like handwashing and hand sanitizer aren't getting as much attention as they deserve. So PSA that your hands need to be clean before you handle food or touch your face, and a precautionary rule of thumb is to act as if they become unclean whenever they touch anything that isn't certainly clean. I've been adhering to this rather strictly since I was little.

Expand full comment

Tangentially, it also seems to me that undue reliance is being placed on being "fully vaccinated", when that does not prevent contracting or spreading COVID.[1] Yet, on the other hand, rapid "lateral flow" tests (free for me in Scotland still) seem to be overlooked as a better, immediate indicator of health.

That is, a more prudent and reliable criterion for attending social events than "fully vaxxed" seems to be that one present a negative lateral-flow test result from the past 24 hours. (I thought I had a link/source for that, but I'm failing to find one just now....)

[1]: https://www.theguardian.com/world/2021/aug/13/common-myths-about-covid-debunked (I know)

Expand full comment

Whoa, I had been on some UK e-mail lists and saw information about lateral flow tests being available for people, but had no idea that lateral flow tests gave you results in minutes! I believe that in the United States, there are few, if any, ways to get your hands on a test that will give you results in minutes (or at least, if they do exist, they are expensive).

I've been hearing discussion of rapid tests as a way out of the crisis ever since last summer, but thought it was mainly hypothetical. I guess the fact that they didn't solve issues in the UK suggests that at least the ideal solutions (require everyone to present a negative rapid test result from today before entering public spaces) either weren't being done, or were less effective than I would have expected.

Expand full comment

Basically within 20 (up to 30) minutes you have a very good sense as to whether you're negative or positive. The tests we (regularly) use look like the one pictured in this article: "How reliable are lateral flow COVID-19 tests?", *The Pharmaceutical Journal*, May 2021, Vol. 306, No. 7949. DOI: 10.1211/PJ.2021.1.83246

https://pharmaceutical-journal.com/article/feature/how-reliable-are-lateral-flow-covid-19-tests

Also, from 10 March 2021: "New analysis of lateral flow tests shows specificity of at least 99.9%", a GOV.UK release from the Department of Health and Social Care:

https://www.gov.uk/government/news/new-analysis-of-lateral-flow-tests-shows-specificity-of-at-least-999

FWIW!

Expand full comment

Two quick addenda by way of postscript:

(1) "when that does not prevent" : I was debating best wording here, and should have said: "does not fully prevent" or "does not guarantee prevention of" or the like.

(2) "[Lateral Flow tests] are not a panacea and must be used wisely as part of an armoury of other effective interventions." Source: da Costa, Joana Pinto, Henrique Barros, and John Middleton. "Lateral Flow Antigen Tests-not a panacea for freedom from the pandemic." Association of Schools of Public Health in the European Region (ASPHER) Working Paper (18 July 2021).

https://www.aspher.org/download/795/aspher_20210720_lateralflowtests.pdf

Expand full comment
Comment deleted
Expand full comment

[citation needed]. The CDC is still saying it can spread from contaminated surfaces and AFAIK almost all viruses (except some weird ones like HIV) commonly spread through surfaces.

Expand full comment

The CDC bungled Covid so abominably that I'm not sure why you're giving them much credence now. Aren't they (and the FDA?) the reason there was basically no way to get a Covid test for the first months of the pandemic in the USA? In which month did the CDC finally admit that masks were a good idea - July?

Expand full comment

Since this is "politics allowed" I'll admit I'm quite sad by how from a political perspective covid panned out. The two sides started as:

- covid-denial (against quarantine measures and border closure) vs underpowered covid carefulness (maybe we should close borders with China, 3 months after covid was raging in China)

- then they promptly switched to the more stable configuration of covid-alarmism and covid-denialism

My hope was that, once vaccines arrive, this will provide an "out" for both sides to reach a more moderate ground.

One side could "<mumble-mumble vaccine is so safe that even the close-to-nonexistent risk of covid isn't worth it>"

The other side could "<mumble-mumble yes the numbers look the same but mortality is down, and it's basically confounded with background mortality... and maybe it was before but not quite look there's a few pct point diff>"

And things would go to normal.

As it stands, the two sides seem to be polarizing more and more instead, with the covid alarmists now claiming the vaccine doesn't work, that new variants are worse, and that long covid will destroy the world.

The covid denialists picked this fucking hill to die upon in terms of 2x placebo-level adverse effects being grounds for not getting a vaccine, and refuse to wear masks even when it's sensible, and refuse to close down even the dumbest kind of gatherings (e.g. concerts, sport matches)

My one hope right now is that "get the vaccine, wear a mask in stores, go about your own business as usual" is becoming more common but just doesn't get promoted as a view, since it's not polarizing and rather boring. That seems to be the case, given that I see fewer people wearing masks outdoors in most places I go and more people getting vaccinated in spite of increasing fear-mongering.

Expand full comment

As part of the exhausted middle (or is it exhausted majority?), I'm sick of political everything. I've lost faith in almost all information sources. (Because they are mostly associated with one tribe or the other.) Could we move above the tribal level, and discuss how to reduce the tribalism?

Expand full comment

Please do not use the term "denialism" unless you are talking about Literally The Holocaust. No, not even if it's about something you think is as bad as the Holocaust.

First off, literally denying that COVID exists and is a disease substantially more dangerous than e.g. the flu is now a fringe position of no great importance. What I think you are referring to is a much larger and more important position that believes the mainstream consensus w/re COVID countermeasures (mask up, lock down, and socially distance the COVID away until we've forced everyone to double or triple or quadruple vaccinate themselves) is misguided, due to some combination of ineffectiveness of the proposed countermeasures, perceived exaggeration of the harm caused by COVID, perception of great harm caused by the proposed countermeasures, dangerous precedents set by making the proposed countermeasures mandatory, and lack of trust in the authority figures proposing and mandating the countermeasures.

Second, "denialism" is too often a cheap rhetorical trick aimed at short-circuiting rational thought and, by association with holocaust denialism, casting the target as Literally Nazis.

We can talk about the reasons why people dissent from the official line on COVID countermeasures, and maybe learn something. Or we can call them "denialists", and turn this into the sort of fight where nobody learns anything.

"Alarmism" isn't *quite* so poisoned a term as "denialism", but it might be better to pick something else there as well.

Expand full comment

Both terms I used ironically and both stances I exaggerated.

My main point is that the polarization is obvious in behavior (read: policy, vaccination numbers, people on the street)

I get that many people in these camps arrived at their conclusions "rationally", and from an outside view those conclusions might be as probable as mine, or more so.

But like, I'm just whining on the internet here, not dictating policy.

Expand full comment

I've also been disillusioned by what the pandemic has shown about American politics. There was a post on the subreddit (https://www.reddit.com/r/slatestarcodex/comments/m829uk/interesting/) that linked to https://slatestarcodex.com/2014/10/16/five-case-studies-on-politicization/, which was describing the situation with Ebola back in 2014:

"How did this happen? How did both major political tribes decide, within a month of the virus becoming widely known in the States, not only exactly what their position should be but what insults they should call the other tribe for not agreeing with their position? There are a lot of complicated and well-funded programs in West Africa to disseminate information about the symptoms of Ebola in West Africa, and all I can think of right now is that if the Africans could disseminate useful medical information half as quickly as Americans seem to have disseminated tribal-affiliation-related information, the epidemic would be over tomorrow.

Is it just random? A couple of Republicans were coincidentally the first people to support a quarantine, so other Republicans felt they had to stand by them, and then Democrats felt they had to oppose it, and then that spread to wider and wider circles? And if by chance a Democrats had proposed quarantine before a Republican, the situation would have reversed itself? Could be.

Much more interesting is the theory that the fear of disease is the root of all conservativism...

...

The proposition “a quarantine is the best way to deal with Ebola” seems to fit much better into the Red narrative than the Blue Narrative. It’s about foreigners being scary and dangerous, and a strong coordinated response being necessary to protect right-thinking Americans from them. When people like NBC and the New Yorker accuse quarantine opponents of being “racist”, that just makes the pieces fit in all the better.

The proposition “a quarantine is a bad way to deal with Ebola” seems to fit much better into the Blue narrative than the Red. It’s about extremely poor black foreigners dying, and white Americans rushing to throw them overboard to protect themselves out of ignorance of the science (which says Ebola can’t spread much in the First World), bigotry, xenophobia, and fear. The real solution is a coordinated response by lots of government agencies working in tandem with NGOs and local activists.

It would be really hard to switch these two positions around."

Scott was wrong, but he has no idea how right he was. It really is just a matter of partisan tribalism, to the death if necessary. No one actually has any consistent principles. Things were randomly reversed at the start of the Ebola outbreak, and that’s the only thing about public health politics that’s changed in the past 6 years. Actual ideology is only a fig leaf of justification for the real reason anything gets done in politics nowadays: hate.

(If you want to read more of my 'doom and gloom'-ing about this, see https://www.reddit.com/r/slatestarcodex/comments/oakstw/the_early_awareness_of_covid19_broke_me_and/h3ijqim?utm_source=share&utm_medium=web2x&context=3)

Expand full comment

What I have been finding disturbing about the situation, especially after I put a post on FB with back-of-the-envelope calculations on vaccine benefit implying that there were probably some people for whom it wasn't clear that getting vaccinated was worth it, is the tribal feel of the discussion. I got a long thread, with a significant number of people ignoring what I wrote, in particular that I am vaccinated and obviously should be, classifying me as "on the other side," and responding accordingly.

Expand full comment

Obviously there's lots of awful tribalism at play, but I wonder if you considered collective effects in your analysis or in the subsequent discussion. i.e. a child getting vaccinated may not be beneficial to the child *directly* but could be beneficial to society as a whole (though perhaps only if most/all children get vaccinated) and thus potentially be beneficial to children indirectly.

When schools reopened last year, it concerned me that the school board had apparently given no thought to the question "if someone tests positive in a class, how can we protect *elderly* people who live in the same household as children in that class". I bring this up because it is also the kind of question that I don't expect libertarians like yourself to pay much attention to. You tend to think about individuals in isolation, not about the network of interconnections that allows the disease spread and kill.

Expand full comment

Can you summarize the back-of-the-envelope calculations here? I'm curious.

Expand full comment

Nice compressed summary. I totally agree that it's quite sad how it panned out. I assume you are describing the situation in the USA. Ignore the rest of this comment if not 😉

It is important to note a couple of things here:

- The Davos crowd ranked the US as being no. 1 in terms of outbreak preparedness [1]

- Billions of taxpayer dollars were provided to the US military to come up with a plan for how to deal with a scenario like Covid [2]

IMO one cannot assign the root cause of the failed response to "sidedness"; ultimately, the responsibility for the failed response to covid lies with the government, including the military.

The implications of this failure of the government to execute well are underappreciated IMO, and actually have massive and far-reaching consequences for all our futures.

[1] https://www.statista.com/chart/19790/index-scores-by-level-of-preparation-to-respond-to-an-epidemic/

[2] https://www.defense.gov/Explore/News/Article/Article/1637439/new-biodefense-strategy-combats-man-made-natural-threats/

Expand full comment
Comment deleted
Expand full comment

Well, you didn't follow any of those procedures: the USA didn't have the PPE stockpile it was meant to have, the CDC bungled the tests, the government (can't recall which TLA) made it illegal for anyone else to make a working Covid test, the CDC and the WHO said masks don't work, etc. etc.

As for "stampeding into lockdowns after a week or two", that might actually have worked. In real life the lockdowns came months into the pandemic, by which point it was far too late to stop the spread.

Expand full comment

With respect to masks specifically, I firmly refuse to wear them unless absolutely required, in part because it has been made quite clear that if I give an inch, they (the alarmists) will take a mile. I'd be much more willing to mask up on specific occasions if I wasn't being brow-beaten over it the whole time.

Expand full comment

I agree with this and am more or less trying to do the same.

If you normalize it you get some pretty weird behavior.

Also, I find it disgusting that certain countries (e.g. Spain) mandate outdoor masks but indoor dining is completely fine... and people actually respect this. There are few sights dumber than hikers coming off the trail and taking their masks OFF to sit at a crowded bar.

As mentioned by others, since masks are now a point of contention, wearing them also sends a message of obedience to more totalitarian policies and a prolonging of restrictions, since it basically signals ideological alignment :(

Expand full comment

Ah, would that people could cease from tribal signaling in such times. Alas, "Same as it ever was, same as it ever was".

Expand full comment
Comment deleted
Expand full comment

They might just have forgotten. My kids do that sometimes. Maybe older but still young people do too?

Expand full comment

Yeah my mother in law is in town and she regularly forgets that she still has her mask on in the car when traveling with me. She is 70

Expand full comment

I'm 45, and while I cheerfully comply with masking recommendations, I can't imagine forgetting I have one on. I find them very unpleasant.

Expand full comment

I'm curious: under which circumstances do you see masks as absolutely required?

A couple of scenarios I'm interested in:

1) walking your dog in a park

2) outdoor stadium at full capacity

3) indoor stadium at full capacity

4) grocery store

5) long distance flight

6) small, busy nightclub with poor ventilation

Expand full comment

That is not what I mean. I mean, "will I be required by someone in charge of something to wear a mask". For example, the person running a seminar requires all attendees to wear a mask? I'll either not attend, or wear a mask if required to attend. Grocery store requires masks? Fine, I'll go to the one down the street. Doctor requires masks? All right, don't have much choice there.

This is a separate matter from "where do I think masks are worth wearing", which is again different from "where do I think it's worth requiring masks". These are too variable to explicate fully, other than that the main considerations involve vaccination status, ventilation, duration, (probably some others I can't think of, it's rather late at the moment).

Expand full comment

It sounds like you're optimizing for wearing a mask as infrequently as possible, which I must admit I found a bit strange given how effective they are.

Expand full comment

That is an example of why people get brow beaten, I think.

Your existence is sufficient to justify my wanting Singapore style public hygiene enforcement.

Then again, I'm fine with exiling people who spit gum onto the sidewalk into the outer darkness.

Expand full comment

You have either misunderstood what I meant, or you are willing to cut off your nose to spite your face. If your goal is to increase the proportion of people wearing masks in situations where they are of more than dubious benefit, then a constant dirge of self-righteous anger at those disinclined to wear masks is not your friend.

Expand full comment

Can anyone say much of anything about covid with much confidence? My impression is that the efforts of the press and the public health establishment to suppress disinformation have exacerbated the problem to the point of undermining their own credibility. I think I know better than to place much stock in the rumor mill which is the alternative, but that just means I can’t put much stock in anything at all. Scott has credibility for me, but his analysis didn’t make me think, oh now we understand where things are going and are taking only appropriate steps. It has become politicized, I guess I should just accept that as the new normal.

Expand full comment

I'm in a similar position. I don't really trust the official word, but I distrust the alternative.

Expand full comment

I think we can say with confidence that:

1) a novel virus was introduced to the world

2) more people died than usual, most likely due to this virus

3) the situation in many hospitals was incredibly dire with military style triaging of patients being required

4) our leaders and experts bungled the situation. Yes their attempts to suppress disinformation backfired and undermined their credibility, but IMO their own incompetence also undermined their credibility (remember the CDC's "masks don't work", "two weeks to flatten the curve", "vaccines are 100% safe" and Fauci's "no way did covid leak from a lab"?)

But that's about where certainties stop for me.

I don't know whether a failed response is better than no response.

I don't know whether we should have just let the virus run its course instead of creating 1 million new alcoholics in the UK.

I don't know what the future looks like given that covid is here to stay.

I don't know what my own risks are of actually getting covid.

Expand full comment

I feel your sentiment, but I'm not even sure about 3 & 4 on that list! In particular, for (4) I'm not even sure there exists a response that wouldn't be considered bungled by a large portion of the public, in which case I'm not sure we can really call it bungled (though it's hard to imagine that the CDC's and FDA's behavior in particular is anywhere even close to the tradeoff frontier).

Expand full comment

I'd like to believe this has nothing to do with people's opinions, and that there is an optimal response. Certainly the current "we just need to vaccinate children under 12 then everything will be fine" aka "nobody has an actual plan for how to get out of this mess" stalemate outcome we are in is suboptimal. We'll need the optimal plan and the ability to execute on it one day when the real superbug emerges. Covid was a dry-run, and we failed miserably.

Expand full comment

We're trading off different things, so it's not clear (and IMO not likely) that there's some single optimal response, but rather a frontier of optimal responses depending on your preferences and utility measures. Though I do think the CDC and FDA pretty much assured we weren't particularly close to that frontier.

Expand full comment

Two weeks of everyone quarantining/wearing masks even prior to the existence of vaccines would have been plenty to not just flatten the curve but essentially eliminate Covid in the US.

But large swaths of the population ignored the advice. Furthermore, the mandates operated locality by locality. It is as if we decided to quarantine in alphabetical order by last name.

As much as I would like to blame the Snowflake in Chief, I cannot think of a president in the last 40 years who could have motivated the kind of universal and coordinated response we needed.

Expand full comment

Re: 2 weeks, yes that might have worked, but as soon as the 2 weeks was over and things opened up again there would have been another spike, no? Kind of like what we are seeing now with the 4th wave.

Re: last 40 years, I think you're right, and that's super depressing. How will we handle The Real Deal one day? Some kind of trust in autonomous decision making by individuals? Seems unlikely to work.

Expand full comment

> With the covid alarmists now claiming the vaccine doesn't work

I haven't seen that. By and large the people who claim that vaccines have negligible effectiveness tend to also be people who claim COVID just isn't such a big deal in the first place. All COVID alarmists I can think of do think that COVID vaccines are at least as effective as e.g. flu vaccines even against Delta, just not sufficient on their own for resuming business as usual in 2019, and their main worries nowadays are about unvaccinated people or possible *future* variants.

Expand full comment

The way I interpreted OP's statement here was that "vaccines don't work" == "vaccines are only ~50% effective after ~5 months and used alone are insufficient for us to reach herd immunity"
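For what it's worth, a rough sketch of why ~50% effectiveness on its own can't reach herd immunity; the R0 value below is an illustrative assumption for a delta-like variant, and infection-acquired immunity is ignored:

```python
# Herd-immunity arithmetic: transmission dies out once the immune fraction
# exceeds 1 - 1/R0. With ~50%-effective vaccines and no other immunity,
# the immune fraction is roughly coverage * effectiveness.
# R0 = 6 is an illustrative assumption for a delta-like variant.

R0 = 6.0
threshold = 1 - 1 / R0              # ≈ 83%
effectiveness = 0.5

for coverage in (0.7, 0.9, 1.0):
    immune = coverage * effectiveness
    verdict = "above threshold" if immune >= threshold else "below threshold"
    print(f"coverage {coverage:.0%}: immune ≈ {immune:.0%} vs ≈ {threshold:.0%} -> {verdict}")
```

Even at 100% coverage the immune fraction tops out at ~50%, well short of the threshold under these assumptions, which is the "insufficient on their own" point.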

Expand full comment

The inevitable conclusion of this line of reasoning is that we will have to either 1) play it safe and maybe never go back to 2019's way of life or 2) say screw it and just go back at an arbitrary point (like "now").

I feel like the alarmists would be best served by trying to find a compromise situation instead of alienating anyone who disagrees to the point that they collectively say "screw it."

I'm fascinated looking back to March 2020 and seeing many millions of people (I didn't see millions, but I did see hundreds that represent many more) in jobs and industries that could neither shut down nor work in a safe manner. I remember seeing a construction crew doing road work that spring in the middle of the "full lockdown" going about as normal. There weren't any masks to be had back then, and their work was essential and required in person. Truck drivers, grocery store workers, distribution networks, factory workers, miners, really hundreds of professions all with no option but to keep going to work. The working class never had a choice to lock down, so they rode out the entire pandemic not too far from normal. Watching the professional class gripe about people not following the restrictions is surreal, as if most people ever had an option. Even more surreal is the total lack of understanding that Amazon wouldn't be able to deliver their food anymore if these people did. It's not like we could actually pay unemployment indefinitely and people magically still not starve to death when all the supplies ran out.

Expand full comment

Okay, that one I have heard from COVID alarmists (though only about delta and not about any earlier variants, which is becoming less and less relevant as delta becomes the dominant variant in more and more places)

Expand full comment

I haven't seen that either. My Facebook friends are taking Covid pretty seriously, and several of them are at a convention this weekend, which they feel is ok because of masks and vaccines.

Expand full comment

I do not think that calling the anti-lockdown/quarantine and anti-border closure side "covid-denialist" is a very fair characterization. That seems to imply denial of the existence of COVID, rather than a political and personal stance about the appropriate response. One could even think COVID-19 is a very serious disease without advocating for quarantine due to a belief in liberty or a limited role for government.

Expand full comment

The widespread, polarised version is currently pushing denialism (mostly weak denialism 'it's no worse than flu, more deaths and problems are being attributed to it than are actually caused by it' but some strong denialism 'viruses and contagion haven't been proven to exist').

There are still some people who oppose lockdown/quarantine on principled grounds even though they believe Covid both exists and is more dangerous than flu, but unfortunately they are generally drowned out by the outright denialists.

Expand full comment

"it's no worse than flu, more deaths and problems are being attributed to it than are actually caused by it" doesn't seem like "denialism" to me. Trivially speaking, the second one is almost certainly true. SOME deaths are probably attributed to it that is not caused by it.

I really haven't seen any public person say that the virus and contagious diseases haven't been proven to exist. That would be denialism in my view, but grouping the first in with the second seems unfair.

Expand full comment

I think she means "more deaths and problems are being *incorrectly* attributed to it than are actually caused by it" (I've heard several people assert that the vast majority of excess mortality in 2020 as compared to 2015-2019 was due to reaction to COVID and not to COVID itself), but even interpreting it literally, it's not *quite* trivial -- certain people have died of COVID before they got a chance to be officially diagnosed to it, and it's possible in principle that in certain places and times (e.g. Lombardy in March 2020) such people were *more* than the people who coincidentially died of something else after testing positive with COVID.

Expand full comment

A while back, the relevant authorities in Santa Clara County (aka Silicon Valley) announced that previous Covid death figures had included people who died for other reasons while testing positive for Covid. They revised the figures to exclude such deaths, lowering the estimate for total deaths by about 20%.

Expand full comment

Yes, all back-of-the-envelope estimates of "deaths attributed to COVID but not actually due to COVID" I've seen from people who seemed to have any idea what they were talking about were in the ballpark of 20%. Still, I've heard quite a few people claiming that *the vast majority* of deaths attributed to COVID were not actually due to COVID. I've even heard somebody claim that COVID was already widely circulating in Italy since mid-2019 without doing much damage, and that virtually all of the excess mortality in 2020 was due to disruptions to the healthcare system and wouldn't have happened if we hadn't known COVID was circulating.

Expand full comment

Sure, there are some overcounts. But claiming that they mean that covid is no more dangerous than flu, or nothing to worry about, is a classic motte and bailey - 'not all counts of deaths are accurate, some contain other causes of death' is the motte, 'therefore we shouldn't worry about the disease' is the bailey.

Expand full comment

Unfortunately I see more and more of the latter on FB - it's bad enough that Reuters has a fact check about it: https://www.reuters.com/article/uk-factcheck-harmful-viruses-idUSKBN23335V

Expand full comment
Comment deleted
Expand full comment

Not to address your main point, but it's interesting having gone through this with kids. The masks really don't seem to bother them. They have their favorites and in some cases mix and match by outfit, will put them on without grumbling or even without being told, and have been known to forget to take them off after we've been someplace, to the point of me noticing, when they're getting out of the car, that one to three of my offspring have been masked the entire ride home.

Expand full comment

Michael Huemer is a philosopher at the University of Colorado, Boulder. He recently published an interesting article entitled "Existence is Evidence of Immortality" in which he makes the case that persons are repeatable. [https://onlinelibrary.wiley.com/doi/epdf/10.1111/nous.12295] In other words, it is an argument for reincarnation. The TLDR is available on this blog post [https://fakenous.net/?p=128]:

Tl;dr:

1. Premise: There is a nonzero initial probability that persons are repeatable (can have multiple lives).

2. Also, the probability that you would be alive now given that persons are repeatable is nonzero.

3. Evidence: You are alive now.

4. Claim: The probability that you would be alive now, given that persons are unrepeatable, and that there is an infinite past, is zero. Rough explanation: there were infinite opportunities for you to exist in earlier centuries, which, if persons are unrepeatable, would have prevented you from existing now.

But you do exist now, so either the past is finite, or persons are repeatable. Bayesian calculation: Let H=[persons are repeatable], E=[You exist now]:

P(H|E) = P(H)*P(E|H) / [P(H)*P(E|H) + P(~H)*P(E|~H)]

= P(H)*P(E|H) / [P(H)*P(E|H) + P(~H)*0]

= P(H)*P(E|H) / [P(H)*P(E|H)]

= 1 (provided P(H), P(E|H) are nonzero)

If persons are repeatable, then they will repeat, given sufficient time. Conclusion: if the past is infinite, then persons are reincarnated infinitely many times.

Expand full comment

There's so much that is nonsensical about this, much of which has been explained below. But just to pile on: a specific clear error in Huemer's actual paper is that he thinks 'probability zero' means 'impossible'. To quote just one thing showing this (of many, since he absolutely relies on this) he says:

"If a theory predicts, with probability 1, that some event should not occur, and that event in fact is known to occur, then the theory is thereby conclusively refuted"

But, on the other hand, it's key to his argument that time is (or might be) infinite to the past. (And thus - and this is explicitly argued in his paper - the chance of _exactly_ you arising in any given century would have to be some strictly positive number, because if it were zero you couldn't exist at all.)

So much of his paper is, over and over in many guises, confusion about what 'possible' means in a context of an infinite sample space, trying incorrectly to formalize it with probability theory, and generating nonsense - covered in a cloud of other gibberish.
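The standard toy example of that gap, for anyone who hasn't seen it: draw a point uniformly at random from [0, 1]. Every particular value has probability exactly zero, yet some value is always realised, so on an infinite sample space "probability zero" cannot mean "impossible". A minimal sketch (ignoring that floating-point numbers are technically a finite set):

```python
# Probability zero is not the same as impossible on a continuous sample space:
# X ~ Uniform(0, 1) assigns probability 0 to every individual point, yet every
# draw lands on *some* point. (Floating-point discreteness is ignored here.)

import random

x = random.uniform(0, 1)
print(f"drew x = {x}: an event of probability 0 that nonetheless happened")
```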

Speaking of P, if P is "my subjective probability", I'd say P("this is meant seriously") is less than P("this is an attempt at Sokal-like hoax").

Expand full comment

"there were infinite opportunities for you to exist in earlier centuries, which, if persons are unrepeatable, would have prevented you from existing now."

That seems to rely on "no new people come into existence", e.g. if there is a fixed number, let us say 10 million persons, then if there are 50 million possible lives over time, you are more likely to have already been born and not exist now. But since you do exist now, you must have been reincarnated!

No, I'm not convinced. This is like saying "there were infinite opportunities for you to have been born a member of [opposite gender] so you as [current gender] shouldn't exist now".

Expand full comment

I find another conclusion that approximates this one more intuitive, and that is the immortality of the self as consciousness, but not personhood. The rough idea being that we have no conception of a non-existence, and to assume that after our personhood dies our existence and consciousness dies with it is just unfounded speculation. You've never had a non-experience, you've never not been aware, so to say that death is such is just a guess.

Expand full comment

What about before birth?

Expand full comment

Similarly, before you are born as the person you are now, you are alive as all people, animals, etc. It's the idea that we are all the exact same animating force of life (consciousness, awareness, being, what have you). The metaphor I heard once says we, as life, are like a river. We are the water, and in the river, little whirlpools can occasionally form. These whirlpools are not separate or different from the water, essentially they are just water, but they appear to be distinct entities within the water. After we die, the whirlpool "disbands" and carries on as water. It didn't go anywhere, and it didn't come from anywhere, it just formed itself from the fundamental essence of what we all are.

Expand full comment

I'm just pointing out that any incredulity that we may cease to exist after death should logically be matched by a similar incredulity that we did not exist before birth.

Also, I would say most of us feel we *do* have a regular, daily, experience of non-existence, which is sleep. So far as the conscious I can tell, I don't exist, or at least exist in some weird fragmented form, between when I close my eyes and when I open them every morning. Hence the common impression that death is like going to sleep except you never wake up again.

Expand full comment

This is exactly as persuasive as the mediaeval theologians' (and Kurt Gödel's!) ontological proofs of the existence of God: i.e. completely convincing to those who believed in the conclusion beforehand, and a load of bollocks to everybody else.

For me, the sticking point is not the finiteness or otherwise of past or future time; it is the notion that the idea of the persistence of personhood is a testable hypothesis.

Put another way: whether or not I am in some way the "repeat" of an earlier "person" is not something that can be determined by observation; therefore it is a non-question.

Expand full comment

Something doesn't have to be directly observed in order to not be a non-question. The argument above does not depend on having to observe repeat existences, only observe the current existence.

And Huemer convinced me despite previously not believing in reincarnation, so "a load of bollocks to everybody else" isn't true.

Expand full comment

Perhaps I should have said "completely convincing to those who want to believe in it".

Expand full comment

Just make sure you aren't in a bathtub. (Sorry, it's a reference to a science fiction short story I don't think many will recognize. Took me a while to even remember/find the name or the author, I think it was "A Lamp for Medusa" by William Tenn.)

Expand full comment

The premise is that the universe's progression is cyclical with little to no variation (which also means you can effectively travel into the past by going into the next cycle minus some years).

Expand full comment

“ Given unlimited time, every qualitative state that has ever occurred will occur again, infinitely many times.”

This seems incorrect. Why not an infinite number of qualitative states with no need for repetition? Or am I getting this wrong in some way?

Expand full comment

He's assuming a finite number of persons, or probably a finite number of person types, i.e. same looks, emotions, brains and therefore personality.

The math here might work for multiverses but the past was finite.

Expand full comment

This is also weird.

“ Also, the probability that you would be alive now given that persons are repeatable is nonzero.”

The chance that I am alive now is not related to repeatability at all. It depends on the previous sexual encounters of my ancestors. The further back in time you go, the less likely it would have seemed to them that I would be born in their future, but here I am.

Expand full comment

The past is finite.

Expand full comment

And even if the past were infinite, it's not necessarily true that life was possible at any time before the big bang, let alone the specific instances of life found on Earth today.

Expand full comment

That does seem to put a bit of a damper on the thought experiment.

Expand full comment

This paper is full of epicycles. The Bayes equation is a really circuitous way to make the above argument; you can do it with just modus tollens, since all the pieces are there:

1. If ~H, then ~E. ("P(E|~H) = 0")

2. Assert E. ("Evidence: you are alive now")

3. Therefore, H.

Actually, the problem is worse than that; the modus tollens above *works*, where the Bayesian argument *doesn't*. (No strike against Bayes, Huemer just set it up wrong.)

The problem is: P(E|H) = 0. If you *can* be a Boltzmann brain, then you *are* a Boltzmann brain, with probability 1. Boltzmann states are less complex than actual human beings and therefore infinitely more likely to fluctuate into existence via Poincaré cloning. Given all the evidence you have, in a universe allowing Poincaré clones, you should assume that you are not a human being. (Does being Boltzmanned count as being reincarnated? I read Huemer as saying "no": per section 4.3, bottom of p. 134, a plausible theory of persons can and likely does involve complex conjunctive and disjunctive sets of repeatable features.)

Expand full comment

Consider the set of all prime numbers. The first one is even. None of the subsequent numbers are even.

It is not the case that given an infinite future any instance that has happened will happen again.

But with regard to the probabilistic argument, it is not the case that an event has an equal chance of happening at all times. It could decrease over time*. An infinite sum of decreasing probabilities can easily sum to (much) less than 1. To make this mistake hundreds of years after calculus was invented is embarrassing.

*and there is good reason to think it does given the infinite expansion of the universe, and thermodynamics.
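
To make the footnote concrete, here is a toy calculation (the geometric decay rate is an arbitrary illustration, not a physical model): if the chance of a given configuration arising at opportunity n shrinks fast enough, the total probability over infinitely many opportunities can still be well under 1.

# Toy check: probabilities that shrink fast enough sum to less than 1
# even over infinitely many opportunities. p_n = (1/4)**n is an arbitrary choice.
total = sum(0.25 ** n for n in range(1, 200))
print(total)   # ~0.333..., i.e. the infinite series converges to 1/3, not 1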

Expand full comment

"Rough explanation: there were infinite opportunities for you to exist in earlier centuries, which, if persons are unrepeatable, would have prevented you from existing now."

Yeah, this does not make sense. We certainly have not iterated through all the possible combinations of human synapses and environmental experiences they might end up in since the human species came to be.

If one prefers to ignore the physical limitations and argues instead that during the lifetime of the universe there is a large chance that a brain experience identical to yours may form in infinite random quantum soup, then it is the same argument as for the Boltzmann brain, which makes the form of repeatability or "immortality" entailed by the argument quite boring compared to the usual meaning of "immortality".

Expand full comment

Another line of counter-argument. Does Huemer's argument prove too much?

1. Suppose you are in a desert and come across a line of rock piles. Each rock formation has one more rock than the previous one. The rock formation you first see has n rocks. The formation on one side of it has n-1 rocks and the one on the other side has n+1. The line goes on as far as you can see, even if you walk to investigate.

2. Despite the evidence of the pattern that you can see, there is a non-zero probability that over the horizon, there may be a repeating pattern of rocks, and the line of rocks could also be infinite in the direction where the piles get smaller.

3. Evidence: Probability of observing a pile of n rocks given infinite past and repeating rock formation patterns is non-zero.

4. Claim: Probability of observing a pile of n rocks given infinite past and non-repeating formation patterns is asymptotically zero.

5. Spot the parts where the argument does not make sense.

Expand full comment

*in infinite random quantum soup.

When do we get the edit button, again?

Expand full comment

There were infinite opportunities for an edit button to be included, so by this logic, we already have an edit button, we just don't realise it 😁

Expand full comment

Worth noting some implicit assumptions he needs for his argument to work:

Assumption 1: The set of possible persons is finite.

Assumption 2: He needs some restriction on how the likelihood of instantiation of different persons changes over time. (Assuming it is constant works and is likely the intention, but less restrictive assumptions would also work.)

Assumption 3: as many have already said, that infinitely many persons existed before you.

Expand full comment

It's pretty clear that the set of all possible persons is finite. Consider all possible placements of all the particles that make up a human body. The quantization of space means that there are not an infinite number of such arrangements. The number is very, very, very, very large, of course. But an argument that involves comparing a finite number to infinity does not care how big the finite number is -- most numbers, of course, are bigger.

Expand full comment

Agree to all these objections. But (a) pixelization is good enough for government work: if the difference between two alleged instances of me is smaller than the difference between me now and me after I have coffee, we’d be inclined to accept them as the same, (b) the argument from infinite time — which I haven’t actually read; I’m just assuming I know what it looks like from this discussion — obviously isn’t talking about the time since the Big Bang, which isn’t infinite, but rather assuming an infinite sequence of such histories, like expand/contract, or black hole cosmology, and (c) I’ll freely concede that I won’t consider a Jupiter brain to be me.

I don’t find the overall argument compelling (or rather, heh, I *doubt* that I would, which is why I didn’t follow the link to read it in its entirety), but I don’t think its problem is the finiteness of possible players but rather the infiniteness of the stage.

Expand full comment

this requires the further assumption that the physical size of possible persons is bounded, which isn't straightforward, e.g. Jupiter brains

Expand full comment

Sure, but it's also clear that there are more people alive now than at a particular moment in the past. So where are all these new people coming from, if the repeatable people have been born the second (or millionth) time round? And if this is their first life, then that explodes the "infinite chances to have already been born before".

Expand full comment

I don't think that's how the quantization of space works. It doesn't mean that space is made up of a bunch of pixels, and everything can be in one pixel or another. It's rather that interactions tend to have some blur when they depend on things below a certain scale.

Expand full comment

It also makes an assumption about some kind of "even-ness" of the past, that it is of the same kind as the present. For instance, if you accept an infinite past in the first place (which you shouldn't, because it's ridiculous), there's nothing that says it can't have been an infinite time of just an empty void until recently.

Expand full comment

An infinite past isn't ridiculous, it's just rather useless and undefined. It's not totally clear that the universe isn't cyclic, though that currently seems unlikely. It is rather clear that not much information travels between cycles.

FWIW, I, personally, am fond of the theory that big bangs happen once in a while, and don't necessarily destroy that which was there previously. This has almost no evidence in its favor, and there are some arguments against it, but I like it anyway. In this case the Big Bang isn't the start of time, it's only the start of the ability to measure time within its light cone. And there's no reason that all big bangs should be the same size, or otherwise identical. (Just how different the universes could be I haven't considered, as I obviously can't have worked out the math.)

But note that this theory is rather useless. It doesn't make any testable predictions. Still, within its bounds the concept of an infinite past has a reasonable meaning.

Then there's the "eternal inflation" models, which may not have an infinite past, but have an infinite future, which is sufficient for most purposes, and a past which, while not infinite, is arbitrarily long. So it may well be long enough for his argument to work for any particular finite number of repetitions with any particular degree of accuracy.

IOW, while I think his argument is garbage, the only thing about it that is implausible is that the repetitions of "you" have any meaning.

Expand full comment

Amazing what philosophers can dream up. Anyway, there are multiple obvious objections that occur to me:

(1) So far as we know, the species H. sapiens is no more than about 300,000 years old, which is well short of infinity. In general, evolution changes genomes slowly but surely over timescales of 100k to 1M years, so it is impossible for the same human to be repeated, at least physically, over timespans larger than that, because we're no longer talking about the same species and ipso facto identical individuals are impossible. I mean, unless we're willing to broaden the definition of "person" so far that we can believe it's possible for a "person" to have had a previous life as an ape, or T. rex, or algae colony.

(2) It has not been demonstrated that the cardinality of the set of previous times at which a person could have existed -- even if we assumed an infinite past and unchanging species -- exceeds the cardinality of the set of potential persons. That is, it's not obvious that given an infinite number of previous humans, it is inevitable that one of them must have been me. For all we know, the set of all possible human souls is infinitely bigger than the set of all points in time that have occurred so far, and the percentage of all possible humans that have already existed is essentially zero.

(3) Even if every possible human has already existed, it's not obvious that implies reincarnation in any meaningful way (i.e. with some continuity of memory). I can own a Ford Pinto when I'm 16 and when I'm 60, in principle, but no one would assume they must necessarily be the same car. A person exactly like me could have existed infinity/2 years ago, but that does not necessarily imply that person has any continuity with me (which one assumes is what reincarnation requires). Otherwise, "I" could be in two (or infinity) places at the same time by cloning myself, which seems logically and legally absurd.

(4) Saying there's a finite probability reincarnation exists is like saying there's a finite probability the transporter on the USS Enterprise draws more than 100 watts while operating. Since there is no precise definition of what reincarnation is, it's not meaningful to attach a probability to it. Probability only makes sense when we can unambiguously determine whether a thing has happened or not, and we cannot do that with reincarnation, because we don't know what measurement would show it has, or has not, definitely occurred.

Expand full comment

> there were infinite opportunities for you to exist in earlier centuries

Nope. A finite number of people existed before the 21st century. (And if you go like "but imagine infinity of *alternative* histories", then by the same logic, imagine infinity of alternative *presents*, so you have infinite opportunities to exist now, too.)

And given how fast the human population is growing, I would not be surprised to find out that if you take a random human who ever existed (starting with the first humans, ending right now), there is maybe a 30% chance that they are still alive at this very moment.

Expand full comment

>>And given how fast the human population is growing

I think you mean, "WAS GROWING". Fertility (births per woman) has been declining everywhere in the world for decades, and the total population has stopped increasing, and may well have already started to decline.

Sorry, it's a pet peeve. Any news or opinion article that starts with "In an era of skyrocketing human population..." is completely wrong. Like, Malaysian Airlines level off-course. And yet, many if not most people take it as a fundamental assumption, probably dimly recalling the Doom/Gloom scenarios of the early 70's, but never checking back to discover that projections like "16B people in year 2000" were absurd.

Expand full comment

According to Wikipedia, the human population has kept growing in recent years by approximately 1% a year: 7.38 billion in 2015; 7.46 billion in 2016; 7.55 billion in 2017; 7.63 billion in 2018; 7.71 billion in 2019; 7.80 billion in 2020.

Mathematically speaking, "population growing" means the *first* derivative is positive, "declining fertility" means the *second* derivative is negative. These can both be true at the same time.
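
As a quick sanity check, here is the year-over-year growth implied by those same figures (this just recomputes the numbers quoted above):

# World population in billions, as quoted above (Wikipedia figures).
pop = {2015: 7.38, 2016: 7.46, 2017: 7.55, 2018: 7.63, 2019: 7.71, 2020: 7.80}
years = sorted(pop)
for a, b in zip(years, years[1:]):
    print(f"{a}->{b}: {100 * (pop[b] - pop[a]) / pop[a]:.2f}%")
# Roughly 1.0-1.2% per year: still growing, even if the growth rate is easing.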

Expand full comment

The consensus estimate of the number of humans who have ever existed is around 0.1T, so the share alive right now is about a quarter of 30%.
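
Spelling out the arithmetic with the figures quoted elsewhere in this thread (about 7.8 billion alive today, about 100-106 billion ever born): 7.8 / 106 ≈ 0.074, so roughly 7-8% of everyone who has ever lived is alive now, which is indeed about a quarter of that 30% guess.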

Expand full comment

Do you reject the infinite past argument? If not, then there could have been human-like beings prior to the existence of present day humans.

Expand full comment

> Do you reject the infinite past argument?

According to current physics, if I understand it correctly, there is no past older than 13.8 billion years, and humans evolved relatively recently.

Expand full comment

Come off it, you don't get to invoke "human-like beings". Stick to the original claim, which is that human beings as we understand the term, which is "people like us", are repeatable, which seems to mean "you yourself personally as you are now will have another spin on the merry-go-round in a future life", and that this claim depends on some mathematical/statistical trickery around "given the number of people who have been born in the past throughout human history, there are an infinite number of chances for you to have been born in another time than the one you have been born in, hence it is logically untenable that you would only have been born in this particular era, so by the mechanism [Step 3: ?????? Step 4: Profit!] we can confidently state that you will be born again in a future life because of infinite number of human births still to come".

Now, 'do the dead outnumber the living?' seems to be a sticky question, and while this article alleges the original claim goes back to the 70s and is wrong in its conclusion (the alive are greater than all the dead), it recalculates the problem and says no, the dead outnumber the living:

https://www.scientificamerican.com/article/fact-or-fiction-living-outnumber-dead/

Depending, of course, on where you start your count: do you go back 50,000 years as in this article, do you start it at "recorded history" or the Stone Age or where? Anyhow, one solution is:

"According to the United Nations' Determinants and Consequences of Population Trends, the first Homo sapiens appeared around 50,000 years ago, though this figure is debatable. Little is known about this distant past and how many of us there might have been, but by the time of the agricultural revolution in the Middle East in 9000 B.C., Earth held an estimated five million people.

Between the rise of farming and the height of Roman rule, population growth was sluggish; at less than a tenth of a percent per year, it crawled to about 300 million by A.D. 1. Then the total fell as plagues wiped out large swathes of people. (The "black death" in the 14th century wiped out at least 75 million.) As a result, by 1650 the world population had only increased to about 500 million. By 1800, though, thanks to improved agriculture and sanitation, it doubled to more than one billion. And, in 2002 when Haub last made these calculations, the planet's population had exploded, reaching 6.2 billion.

To calculate how many people have ever lived, Haub followed a minimalist approach, beginning with two people in 50000 B.C.—his Adam and Eve. Then, using his historical growth rates and population benchmarks, he estimated that slightly over 106 billion people had ever been born. Of those, people alive today comprise only 6 percent, nowhere near 75 percent."

So okay, the number of people alive today are not greater than all the number of humans that have ever lived. But it's still a finite number, and we're not even counting "died as babies, stillborn, miscarried, aborted" in that number. If you never got a chance to live, can that be counted as a past life?

Expand full comment

So we are reincarnated from another planet?

Expand full comment

Do not open the door to Scientology.

Anyway, this reminds me of an SF story I read years ago, can't remember title or author, where it is noted that babies are being born who aren't responsive in a human way (I'm putting this badly); they are alive, but there is no personality or development or anything. This comes during a time when aging has been cured and people are no longer dying.

Turns out in the end, it's down to reincarnation: there are a fixed number of human souls, but the number of human births has now outstripped that number, so the new babies are being born without souls and hence they are not human beings, they're just living but mindless beings. So in order to get the cycle re-started, people have to start dying again. (This is presented as less of a problem since the old, who were too old for the magic cure, are happy to die and some people are virtuous enough that they will turn down very long lives in order to ensure parents have children who can grow up normal. I suppose, as well, that now reincarnation is proven true, dying is less of a fate to be feared as you know you'll get another life).

Expand full comment

I think that's either _The Eskimo Invasion_ or "The Vitanuls" by Brunner.

Expand full comment

You're correct! It's "The Vitanuls" by John Brunner, as described in this summary:

“The Vitanuls” (1967) proceeds on the premise that the current population explosion, plus the introduction of a longevity drug, has used up the entire pool of human souls and that babies are being born mindless. An Indian holy man, once a noted obstetrician, chooses death in order to free his soul for another."

I must have read it in one of those 80s SF anthologies.

Expand full comment

But there's no reason to think that you in specific will be reinstantiated. "You" is an indexical state. The handwaving about "you are alive" is just to distract from the fact that you have no reason to presume that the fact that you are alive holds any power over the future. Like, unless you thought you were literally an impossible arrangement of matter.

To be clear, "you" will reinstantiate, yes. So will every variant of you. This is just Boltzmann brains all over again.

Expand full comment

Feck it, my atoms are going to be recycled anyway, so the elements of my physical body will go on:

"Imperious Caesar, dead and turned to clay,

Might stop a hole to keep the wind away.

Oh, that that earth, which kept the world in awe,

Should patch a wall t' expel the winter’s flaw!"

Expand full comment

They've *already* been recycled, at least in part. Something like 60% of you is water, and very few of those water molecules are the same as they were years ago, most are recycled in and out of the environment on a scale of days to months. Not to mention all the molecules in your blood and skin were in something else, e.g. a cow's blood and skin, not more than a few months ago.

Expand full comment

Right. If we're arguing for repeatability leading to immortality, the infinite past just confuses things. As well as Boltzmann brains, there's:

- Quantum immortality: Schrodinger's cat always perceives itself as alive. This has the benefit of preserving continuity as we experience it.

- Infinite Universe or multiverse: Similar to a Boltzmann brain, but repeating the whole history of the world so there is continuity within the other system (though not with us).

- The simulation argument: Even if you happen to die before brain uploading is possible, we can imagine some Omega Point supercomputer that could stumble across a simulation of you by trial and error. If we believe that math has a platonic existence independent of hardware, we don't even need the supercomputer.

Expand full comment

I think there is a reason to think I'll be reinstantiated. I think that being alive now tells us something about the future. I don't think I'm an impossible arrangement of matter.

I suppose I don't understand your critique all that well. Which premise do you object to?

Expand full comment

So the psychic energy, or whatever the argument says is part of you, may be recycled into another body (let's hope a human one; you have no guarantee that you won't end up a tree or an animal, since there are infinite numbers of them past and present as well).

Do you imagine you will remember who you were in a past life? Do you remember any of your past lives, if you hold this to be true that you will be reinstantiated? We tend not to believe people who claim they were Cleopatra or Napoleon in a past life. Suppose this is true; it makes no meaningful difference, as the collection of memories and experiences and moments in space-time that make up you (body and brain) right now are not going to have continuity. Your physical elements will be mixed up with the other elements of the earth, and your spiritual elements will go into a new body, in a new time, of new parents, in a new place, possibly different sex etc., and your experiences in that body will be different because it will be a whole new life.

The 'you' that is repeated is like mixing up old clothes that are only fit to be turned into rags and making a patchwork quilt out of them. You won't know who you were or who you are, and if the idea of mere continuance of existence is satisfactory for you, well good for you.

Expand full comment

Consider the following: why are you the same person now as two seconds ago? There is no fundamental basis for identity other than continuity - all electrons can be thought of as the same, since fundamental particles lack individual identity, but because electrons' positions vary continuously it makes sense to talk about them being the same. I think the basis of that argument is that if you lose continuity you lose any basis for establishing a shared identity. Those repeated configurations would be as much you as a clone or a twin.

Expand full comment

I think we have to add in the fact that there is no way to establish continuity between two points in time for any subset of particles in a many-particle system, except in the trivial case that the subset has zero interaction with the full system, because there is no way to observe a trajectory for each particle in the subset that would connect two positions in space and time. If it were otherwise, particles would obey Boltzmann statistics instead of Fermi-Dirac or Bose-Einstein.

Expand full comment

It's not clear that electron positions vary continuously, though I think you need to assume the continuity of space-time to make his argument work, and it is clearly at least a rough approximation of the truth. (FWIW, I suspect that the continuity breaks down at around 10^-33 cm., and that time is similarly quantized, though I couldn't guess at what level.)

If my presumptions are true and there are a finite number of possible mass/energy distributions within the universe at any particular time, then two identical universes may be considered the same. I.e., there will be no possible test to distinguish between them. Whether one wishes to lay them out in a linear manner is an arbitrary choice. You do whatever you find makes the rules of transition between states easier.

Now if the universe is infinite (in the sense that the integers are infinite), allowing both space and time to be infinite, then any finite collection of mass/energy states within the universe that is arrived at anywhere within the universe will be exactly repeated, including its environment, a countably infinite number of times. (There are lots of presumptions here, which I don't give strong credence to.) But while you can do a mapping between those states, there is no other connection implied between them. If the identical environment included within the state is finite, then there will be an eventual divergence, but it wouldn't be reasonable to say that one came prior to the other, as there would be no causal connection, however indirect.

Someone said "this is just Boltzmann brains again", and that's almost right, though I think there are slight differences.

Expand full comment

Yes, I basically agree. I don't think eventual small deviations from continuity matter in this discussion since all that matters is that the positions of macroscopic objects look like they vary continuously to us so it makes sense for our brains to develop a sense of identity. I was merely trying to unpack the Boltzmann brain argument in relatively simple terms.

Expand full comment

Being alive does not tell you something about the prevalence of your pattern in the future beyond your death, except that it is greater than zero. The point is that existence is selection. If "you" will only be reinstantiated when your configuration randomly recurs, then this is no different than a method of uploading where you launch a new randomly configured neural network every day - *eventually* a network that thinks like you will occur. So will the version of you that has any different configuration of memories, preferences, loves, hates, etc. Inasmuch as existence is *contrasted with* nonexistence, I don't think that you can be said to "exist again" from this process, because nothing that makes you unique will be privileged.

Expand full comment

Abusing concepts that have neither a signifier nor an approximation of one (infinity) by mixing them with concepts that do point to real events (death) and concepts that do not have a signifier but can be combined with others in order to point to something "in the real world" (probability).

You can take this exact logic to prove that couches, tables, any animal on earth, and any configuration of a finite-sized bit of the universe is repeatable.

It's first-class bollocks.

Expand full comment

Why can't you mix concepts like you said?

And if this sort of logic did prove the repeatability of tables, why would that indicate the argument is wrong?

Expand full comment

Not sure where I got this link from - might have even been one of the previous open threads. Anyhow, I was curious what anyone who has tried to learn Mandarin (or any other of the Sinitic languages) thinks of this article: http://www.pinyin.info/readings/texts/moser.html

Expand full comment

It's a lot easier these days due to the Internet. Just watch videos on Ixigua on topics that interest you. Most of them are subtitled with Chinese characters. You can then type the characters (via Wubi/Cangjie or handwriting recognition) into Google Translate. The pronunciation is simple and straightforward (though requires a bit of ear training); see here for table:

https://www.yellowbridge.com/chinese/pinyin-combo.php

Learning the Korean alphabet seemed more confusing than this table (at least, for me).

The romanization system (outside Taiwan) has been standardized, but, due to the limited number of syllables in the language, learning a new romanization system isn't very hard.

It's still mostly true you can't cheat by using cognates, but that's the case for most non-Western languages.

Expand full comment

Also, if you want clarification on certain points you can't get through Google Translate, you can ask the ChineseLanguage Discord server.

Expand full comment

If anything, the fact that the characters have been maintained has made things easier for automatic translation.

Expand full comment

The best way to learn it is like children do - speaking and listening only for the first few years. The written form is too much cognitive load until you have a grasp of the sound.

Expand full comment

Maybe, but there's no reason to give up entirely on learning the characters. Especially if you are going to live in China. A few months of study is enough to be able to guess half the items on a restaurant menu, enough road signs to get by, and introduce yourself / have a basic conversation. Just don't expect to be reading any novels (or children's books).

Expand full comment

It's all true, but it comes across as a bit dated. Especially the bit about dictionaries - the author obviously learned Chinese in a time before smartphones. A free app can let you look up characters instantly by sound, by radicals or strokes, or by sketching it on the touchscreen.

And I believe that the number of Chinese people who have heard of Santa Claus and Rambo is rising rapidly.

Expand full comment

Having tried (and failed, abjectly) to learn both Hungarian and Mandarin to a useful level, I'd say Hungarian is harder to speak and Mandarin is harder to write.

Expand full comment

I have read that Hungarian is the most "alien" or "ugly" language for most native English speakers. (Non-Asian division) There was actual observation and counting done to establish this -- it seems that Hungarian has the highest incidence of letters and phonemes that are extremely rare in English, giving it the greatest "distance score" by their metrics.

In Los Angeles, I live in Little Armenia and work near Glendale, so I hear Armenian all the time. I volunteered at a food pantry for a year (court-ordered, I am a dangerous criminal....) and learned that Armenian has 39 (!) characters -- a somewhat Roman 26 and then thirteen extra that all seemed to be the sound of spitting or coughing (plosives and fricatives, the terms of art.)

My only achievement was: I learned how to say "You are welcome" from a kind old woman. I will translate the phrase phonetically as ['han-THREM], and I don't know what it really means; apparently it's an "old-fashioned" or "cute" way of saying it. Old women would smile, and men would nod or look mildly surprised.

I am ashamed to say I have lived in California for 25+ years and speak maybe 10 words of Spanish. That includes living in East LA for five of those years. While I feel bad that I don't know Spanish, when I remember that nobody I met in East LA had ever heard of the band Los Lobos (until I mentioned "La Bamba", then they had heard of that), I don't feel bad at all.

Expand full comment

I have studied Mandarin, and have also traveled fairly extensively in China. The link you provided is quite Euro/Anglo-centric. Mandarin is one of the easier Sinitic languages to learn, but it is hard for Europeans to learn for a few reasons, some of them noted (a bit over-the-top) in the link. On the other hand, it is often very hard for Chinese speakers to learn European languages, for other reasons.

1- Chinese grammar is really quite simple, with no conjugation or declensions, barely any role for (grammatical) gender, mostly just word order. Languages with complex and inconsistent rules like English and French are obviously harder.

2- Chinese are really not used to using phonetic alphabets... This is even more difficult when the spelling does not match the pronunciation (English, French again).

3- Chinese pronunciation lacks many sounds and groups of sounds that exist in European languages.

4- They can't use cognates

And also, just because they say it is the hardest language doesn't make it true. It is a bit self-congratulatory.

Expand full comment

Is 2 actually the case though? I'd guess most Chinese people under 40 type way more characters via pinyin-based IMEs than they write by hand.

Expand full comment

I think the question is about Chinese speakers starting to learn English/Other language from scratch.

I have seen a few Chinese use pinyin for input, but then when trying to remember an English word they would try to string Chinese characters together to approximate the sounds (Chinese slang has a few examples where this has become common, like Ba-Si used for Bus). I can only guess why they can connect the sounds to pinyin Romanization but not use the same letters in a foreign language.

The Chinese people I met that did use phonetic letters "properly" were already reasonably fluent in English, so they can't be counted as learning English.

Expand full comment

Pinyin isn't actually the most commonly used input method. That would be Wubi, a much faster method that relies on the shape of the characters: https://en.wikipedia.org/wiki/Wubi_method

Expand full comment

Pinyin is, indeed, the most commonly used input method.

Expand full comment

Why do you say Wubi is the most commonly used input method? I would have guessed pinyin is. The link you shared says "Wubi 86 is the most widely known and used shape-based Input Method" ... it's the most widely used "shape-based" method, not the most widely used method, period.

This link ( http://xahlee.info/kbd/chinese_input_methods.html ) says "Sound based methods are used by most Chinese, probably greater than 95% of all Chinese speakers...in China, the second most popular Chinese input method is Wubi." (although I'm not sure where they're getting that from or how trustworthy it is)

Hacking Chinese also says "Pinyin is of course the most commonly used system..."

https://www.hackingchinese.com/chinese-input-methods-a-guide-for-second-language-learners/

Expand full comment

You're right, I was relying on anecdote rather than data. Other anecdotes indicate the opposite is true and pinyin is more popular than wubi: https://www.quora.com/As-a-Chinese-literate-person-do-you-type-in-Pinyin-or-Wubi-What-generation-are-you

Expand full comment

I also would think that Mandarin is not the hardest. Speaking it is easier than Japanese, where word endings change with context.

In general, adults are taught languages through reading and writing. Quite different from children, who learn speech first.

Expand full comment

Every language has its pros and cons. Japanese has the multiple kanji reading thing, grammar conjugations, etc. which make it hard for English speakers.

On the other hand, it also has a number of advantages over Mandarin for English speakers.

1) Tones tones tones

2) kana means you don't have to learn the characters immediately. In fact, Japanese children themselves use kana.

3) Loanwords are written semi-phonetically in a distinctive alphabet

4) Even in kanji-heavy texts, the mix of kanji and kana makes it much easier to recognize word boundaries.

Expand full comment

I suspect the hardest language would be one of the language isolates spoken in almost total isolation, so that it has no real influence from other languages. That would be hard for speakers of just about every language. Japanese is the major language which, based on my not particularly in-depth knowledge, seems most like that: it's not quite a language isolate but has only two minor languages in its family, and whilst there is some Chinese and Korean influence and the inevitable modern English contribution, there haven't been any colonial or substrate languages for most of its history. So, among languages people are likely to learn, Japanese should logically be the most difficult.

Expand full comment

Seems reasonable. My wife and her father (who lives with us) are originally from China. This motivated me to learn Mandarin about 20 years ago, and I failed. I could say some things about relatives and food, and ask where the bathroom is, but had difficulty with tones and characters. Listening comprehension was also difficult, though that may be because the Beijing accent is taught, but most persons I hear speaking in person are not from around there.

I took a beginner class again last summer, and it is still a struggle. The internet and the iPhone have made things a bit more convenient for writing or looking things up. I'm using two apps and reviewing the book to try not to lose it all by the time I can take a follow-up class. And Google Translate can help a lot, though I don't want to depend on it.

Is that what you wanted to know?

Expand full comment

Somewhat. I'm actually planning to start learning Thai (my fiancée is Thai) and although Thai has an alphabet (well, technically an abugida), consensus seems to be that it is one of the harder languages for speakers of European languages to learn.

I can (if paying an inordinate amount of attention) hear tones, but this portion of the essay strikes me as frighteningly apposite should I ever progress to even an elementary acquisition of Thai:

Okay, that's very Anglo-centric, I know it. But I have to mention this problem because it's one of the most common complaints about learning Chinese, and it's one of the aspects of the language that westerners are notoriously bad at. Every person who tackles Chinese at first has a little trouble believing this aspect of the language. How is it possible that shùxué means "mathematics" while shūxuě means "blood transfusion", or that guòjiǎng means "you flatter me" while guǒjiàng means "fruit paste"?

By itself, this property of Chinese would be hard enough; it means that, for us non-native speakers, there is this extra, seemingly irrelevant aspect of the sound of a word that you must memorize along with the vowels and consonants. But where the real difficulty comes in is when you start to really use Chinese to express yourself. You suddenly find yourself straitjacketed -- when you say the sentence with the intonation that feels natural, the tones come out all wrong. For example, if you wish to say something like "Hey, that's my water glass you're drinking out of!", and you follow your intonational instincts -- that is, to put a distinct falling tone on the first character of the word for "my" -- you will have said a kind of gibberish that may or may not be understood.

Intonation and stress habits are incredibly ingrained and second-nature. With non-tonal languages you can basically import, mutatis mutandis, your habitual ways of emphasizing, negating, stressing, and questioning. The results may be somewhat non-native but usually understandable. Not so with Chinese, where your intonational contours must always obey the tonal constraints of the specific words you've chosen. Chinese speakers, of course, can express all of the intonational subtleties available in non-tonal languages -- it's just that they do it in a way that is somewhat alien to us speakers of non-tonal languages. When you first begin using your Chinese to talk about subjects that actually matter to you, you find that it feels somewhat like trying to have a passionate argument with your hands tied behind your back -- you are suddenly robbed of some vital expressive tools you hadn't even been aware of having.

Expand full comment

Regarding hearing tones: I "feel like" I don't hear tones very well, but I understand Chinese just fine. Once my hearing skills got to a certain level, I was just able to understand what was being said, even if I wasn't following along with each syllable in my head going "1st tone - 1st tone - 3rd tone - 4th tone - 3rd tone".

So I would say it's fine if you focus on "understanding what I hear" more than "distinguishing tones". If you can do the former it doesn't matter how well you do the latter.

Regarding using tones - The part you quote in your follow-up comment (that using tones gets in the way of expressing yourself with emotion), is true, to an extent, but I think you'll get past it with time and practice. I think they're playing it up a little bit for the sake of writing an interesting article. Yes, it's harder than Spanish/French, but it's doable.

Expand full comment

I think a lot of it is just inferring from context.

Expand full comment

Thank you Scott for announcing the infections without calling for other meetups to be cancelled or for them to be held with mandatory masking. We really do need this kind of sensibility these days.

Expand full comment
Comment removed
Expand full comment

I was particularly upset with another recent JRE clip on YouTube with one of the Weinsteins citing a veterinarian “vaccinologist” (or something) whose theory is now making the rounds amongst vaccine truthers, whereby the vaccines are (potentially) specifically responsible for the development of our more infectious covid variants. Weinstein notes that the theory doesn’t exaaactly add up all the way (yet…), but he still gives it credence based on the fact that the theory was recorded before the advent of the variants? That’s just… It’s a consciously irresponsible argument.

Please, anyone, tell me if I’m wrong here, but the basis for such an evolutionary “pressure” to be even possible comes from a study with chickens, investigating a non-sterilizing vaccine for a very deadly virus in livestock chickens. The vaccine decreased the rate of chicken deaths, which allowed the infected chickens to live longer and thus spread more viral particles, facilitating more opportunities for successful mutations to take hold. I get how that mechanism works, but…

There are numerous very important ways in which this arcane mechanism does not at all apply to covid in humans or the delta variant thereof, but the most striking implication…is that anti-vaccine advocates would effectively be saying that we’d have to sacrifice more PEOPLE to the vaccine in order to prevent the theoretical possibility that these people’s continued existence prolongs the reign of covid. Where is the end point here, after we consciously choose to cull X percentage of the human population?

Since the delta variant also has a virulence-enhancing mutation in the spike protein, which is the obvious target of all the vaccines, there also seems to be some very reckless conflation between a theoretical “Antibody Dependent Enhancement” (whereby positive vaccination status can actually increase your risk of death) and the speculative evolutionary-pressure theory, where there is some evidence that one vaccine (in expendable livestock) could have fostered new mutant variants of this chicken virus.

These two mechanisms seem like they should be anti-correlated to begin with (outside some more complicated mechanism i’m not thinking of), but even if they were positively correlated, wouldn’t that depend on all our vaccines actually increasing the individual/clinical virulence of any of the covid variants, which…they definitely do not…(?)

Am I missing anything here, guys, or are the (remnants of the) Intellectual Dark Web just totally bankrupt, intellectually and/or morally?

Expand full comment
Comment removed
Expand full comment

I didn’t take any position on non-FDA-approved medications for covid, but the question has never been whether covid vaccines are perfect/sterilizing. The question is whether non-sterilizing vaccines actually somehow exacerbate the pandemic, due to this very singular argument or any other. We do have very definitive data supporting the vaccines as much more efficacious (on the individual level) in preventing death than we do for any drug, FDA approved or otherwise. But though the Weinsteins are at once vaccine skeptics AND ivermectin prophets, I am only focusing on their specific position that, despite the data on efficacy at the individual level, there is still the risk that vaccines exacerbate the pandemic at the population level, perhaps by driving the evolution of new variants.

The way you formulated the argument is not exactly what they’re saying, but it is a popularly repeated point, and I don’t understand why, because by the same logic, ivermectin and corticosteroids and monoclonal antibodies all also “allow” the virus to keep spreading and mutating. Weinstein’s argument gets much more arcane, and at this point it’s the only legitimate argument left for a personal conviction that taking the vaccine is somehow a poor personal choice for any individual person, and this seems to me to be the reason WHY he’s focusing on that argument. It seems like neither the mechanism cited in the argument nor any clinical or population data reflect it, so I’m hoping to call him out as disingenuous for focusing on this argument, when presumably he has internalized that most/all anti-vaccine arguments (on the scientific level anyway, if not the political one) have been debunked, so now he’s actively grabbing at straws.

Expand full comment
Comment removed
Expand full comment

Huh, when I analyzed a bunch of anti-vax claims, that was the article I cited as the "weaksauce" argument against ivermectin. https://www.lesswrong.com/posts/7NoRcK6j2cfxjwFcr/covid-vaccine-safety-how-correct-are-these-allegations

Better arguments would include "vaccines work better than ivermectin", "even if ivermectin works, you can also take a vaccine for extra protection", "forget what you've been told, Covid actually IS sometimes fatal and you probably will get it eventually", "Even an mRNA vaccine is much safer than Covid: those VAERS deaths are probably not caused by vaccines, and even if they were, the vaccines are clearly still much safer than Covid, oh and the ovaries thing is a lie" and (for those few unvaxxed who care about others) "you'll reduce your chance of giving Covid to others, slow down viral evolution, and get us closer to herd immunity"

Expand full comment
Comment removed
Expand full comment

Unvaccinated people in that age range are simply rare.

NHS reports a vaccination rate ranging from about 86% for 40-44, up to nearly 100% for the 75-79 age bracket. With the numbers I'm seeing, even if the vaccine is 90% effective (and thanks to Delta I don't think it's that high), and even if the vaccinated don't engage in risk-compensation behaviors, we should *still* expect that the majority of cases in the 40 to 79 age range would be among the vaccinated.
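
A back-of-the-envelope version of that expectation (the coverage and efficacy here are hypothetical round numbers within the ranges mentioned above, not actual NHS data):

# Expected share of cases among the vaccinated in a high-coverage age bracket.
coverage = 0.95   # hypothetical: fraction vaccinated in an older bracket
efficacy = 0.90   # hypothetical: fraction of infections prevented by vaccination
vaxxed_weight = coverage * (1 - efficacy)   # relative infection mass, vaccinated
unvaxxed_weight = 1 - coverage              # relative infection mass, unvaccinated
print(vaxxed_weight / (vaxxed_weight + unvaxxed_weight))   # ~0.66: most cases among the vaccinated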

Expand full comment

(correction: it's 92.4% for 40-44 but 86% in London; source: 16 Sept spreadsheet at https://www.england.nhs.uk/statistics/statistical-work-areas/covid-19-vaccinations)

Expand full comment
Comment removed
Expand full comment

It occurs to me that counterfeit $20 bills are probably more expensive than $50 or $100. Might be interesting to see price plotted as a function of denomination.

Expand full comment

On the $20's you could do the litmus configuration to test for authenticity. "Midnight Run" reference ;)

Expand full comment

Well, I suppose if it’s really top notch counterfeit currency they are extra careful to line up the paper when they run it through their inkjet printer to print the back sides of the notes.

They probably have one of those fancy lever arm paper cutters too. You know the kind with embossed grid and English *and* metric rulers.

Expand full comment

The really fancy ones might use high-quality rag paper of the right weight instead of regular wood pulp printer paper.

The exact paper dollars are printed on isn't available to the public, as a security measure, but the private supplier (Crane Currency) is a subsidiary of a company that also sells high-end writing papers under the Crane's Crest and Crane Lettra brand. I've heard claims that Crane's Crest in particular feels similar enough to money to have a useful psychological effect in some contexts (e.g. printing paper resumes for job hunting, back when that was still a thing).

Expand full comment

Interesting info. Thanks.

Expand full comment

Looks like they changed their URL and format from last time

Expand full comment

wait is there no report button on this site?

Expand full comment

I'm more insulted by the level of morality these spambots attribute to us on here. I'm getting little love-notes from the herbal medicine spambot reminding me to get an STI test, and now this one is offering us the chance to acquire funny money off the dark web, doubtless so we can cheat and defraud when paying for goods!

Is this the reputation Rationalists have gained? 🤣

Expand full comment

I mean, you did say it.

Expand full comment

I don't know whether to be flattered or insulted that they assume my social life is so hectic, I need medical attention for STIs 😁

Expand full comment

;)

slow down slayer, leave some for the rest of us.

Expand full comment

Well I admit to being a fan of the IDW (intellectual dark web) And since this is dark web money, I feel almost compelled to get some... :^)

Expand full comment

I like to think spam is pretty random rather than carefully targeted. At most, that one is aimed at places that mention bitcoin.

Expand full comment
Comment deleted
Expand full comment

"Equal" quality of life? No, no way.

Even within the USA, residents would not have achieved an equal QoL in 20 years. Income disparity will continue - we're just entering a cycle of automation destroying categories of jobs. This phenomenon will ripple across the world and reach epidemic proportions, and policymakers would have just started grappling with the mass effects of job losses and ensuing societal dysfunction. So unless "equal QoL" means "equally bad QoL" somehow, I don't think equal quality of life everywhere (or mostwhere) will happen in 20 years.

As to confidence levels, I'd give my prediction about a 51% confidence.

There are both left-wing Marxist/protectionist and right-wing populist/nationalist strains of politics converging in a lot of places towards concepts like UBI/stipends, populist-regulatory impulses with an eye towards avoiding these problems, and attempts at reinvigorating manual industry as a bulwark against China and also as a mechanism for injecting jobs into the economy, and these are all trends that could conspire to falsify my prediction.

I think across the board we'd continue making proportional gains in real GDP everywhere.

https://www.imf.org/external/datamapper/NGDPDPC@WEO/ADVEC/OEMDC/CHN/USA/FRA

Expand full comment

That's interesting that you see UBI coming from either a left-wing marxist or a right-wing populist angle. I see UBI as being opposed to both of those, and coming from a technocratic neoliberal angle.

Expand full comment

Bryan Caplan is an anarchist right-winger who explicitly advocates for the welfare state to be more paternalist (as well as means-tested), coining the term "Ward Paternalism": https://www.econlib.org/from-ubi-to-anomia/

Expand full comment

Sorry, I meant quality of life in each of those places in 2040 compared to 2021, not compared to each other. I really wish we had an edit button...

Expand full comment

This seems very hard to answer. Of the factors you listed, none of them impact my quality of life, and I don't see how they impact median quality other than median income and affordability of basic goods and services. Too little of your life is spent at the moment of death for that to matter to the median experience. Same thing with the rarity of homicide and suicide. Those impact the margins, not the median.

My country is the USA, and the spread in median income between the USA and France right now is quite large, enough that it is unlikely to change in 20 years. The gap from France to China is ridiculously massive, to the point that it will almost certainly not change in 20 years.

These hardly fully quantify quality of life, but I don't know what else to do. So USA, France, then median developed world, then China, then median developing world.

I will without justification just drop my confidence 10% per decade, so 80%. If you asked me to rank 2120, I have no idea at all. I'm not sure that confidence is really evenly distributed. USA on top and median of the entire developing world on bottom seems much more certain than the placement of France compared to the median developed world.

Expand full comment

Sorry, I meant quality of life in each of those places in 2040 compared to 2021, not compared to each other. I really wish we had an edit button...

Expand full comment

> The gap from France to China is ridiculously massive, to the point that it will almost certainly not change in 20 years.

China's income per capita and general tech development continue to rise, and while other factors like demography or whatever may be problems, I don't see why the closing of that gap will stop soon.

Expand full comment