417 Comments

Back on the longevity thread, I proposed (without really knowing if or how it was possible) that we could just replace organs with young cells piecemeal.

Looks like it's working well for skin:

https://www.statnews.com/2021/12/08/last-gasp-gene-therapy-saved-syrian-refugee-clinical-trial-starting/

Expand full comment

Scott: Do you have a citation for the claim about Ritalin neurotoxicity that appears in this post? https://slatestarcodex.com/2017/12/28/adderall-risks-much-more-than-you-wanted-to-know/

You write: “Never mind, recent studies suggest Ritalin is just as likely to cause this problem.”

However, a study out this year seems to indicate no evidence of neurotoxicity from long-term Ritalin use at above-therapeutic doses in primates: https://www.sciencedirect.com/science/article/abs/pii/S0892036221000714?via%3Dihub

I’d like to compare this recent study to whatever sources you had in mind as evidence of Ritalin neurotoxicity.

(Bonus question: do the conclusions of this recent article look credible to you?)

Expand full comment

Does anyone know what happens to the barrels of tanks (armored vehicles, not containers) after they wear out? Can old barrels be restored and reused, or do they have to go to the junkyard?

Expand full comment

Let's say I get a positive covid test today. (I don't, but I want to pre-plan.)

What's the current best practice on what to take? Paxlovid isn't authorized yet, but what could I ask my doctor to prescribe off-label, or what could I take OTC, to give me the best outcomes?

Expand full comment

Did you find out? Great question.

Expand full comment

Pretty sure fluvoxamine's on the list.

Expand full comment

That looks like it's from January 2021. Is there anything more recent?

Expand full comment

The author's Twitter handle is @plain_fiction, which is telling, but do we have any physicists here who want to explain what a "warp bubble" is supposed to be, and why this thing featured on Slashdot is surely nonsense? Anyway, in addition to the usual prior against such things, Harold White and his team previously failed to notice that the physics-defying EM drive they tested didn't actually work, suggesting, I suppose, a lack of rigor on their part (or worse).

https://thedebrief.org/darpa-funded-researchers-accidentally-create-the-worlds-first-warp-bubble/

Expand full comment
founding

Yeah, this isn't the place I'd go for reliable information on "warp bubbles", but it is an interesting read.

Miguel Alcubierre's theoretical "warp drive" is fairly well regarded by the physics community, at least to the extent of not being obvious nonsense and worth talking about. The math is pretty hairy, and I've never tried to sit down and work my way through it, but:

If the math is right, general relativity theoretically allows for the creation of a "bubble" of warped space-time with a volume of normal space in the center, the space ahead of the bubble being compressed and the space behind the bubble being expanded - and this is a dynamic effect, so the bubble progressively "moves" forward despite not having a velocity. If this can be done in a stable and controllable manner, voila, space travel without pesky fuel requirements and speed limits. And maybe you can build a time machine as well, or maybe you'll blow yourself up or erase yourself from history if you do, because the theory and math of closed timelike loops gets really messy.
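
For reference, the line element Alcubierre wrote down has a deceptively simple form ($v_s$ is the bubble's apparent velocity, $r_s$ the distance from the bubble's center, and $f$ a smooth function equal to 1 inside the bubble and 0 far away):

$$ds^2 = -c^2\,dt^2 + \bigl(dx - v_s f(r_s)\,dt\bigr)^2 + dy^2 + dz^2$$

Inside the bubble, $f = 1$ and the ship rides along at $dx = v_s\,dt$ in locally flat space, feeling no acceleration; all the action is in the wall of the bubble where $f$ varies.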

BUT: to do this, you absolutely need for part of your warped space-time to be made out of a structure with a negative energy density, and negative energy densities don't seem to be a thing (no, antimatter doesn't count). So, nice theory, impossible practice.

BUT BUT, there is a quantum-mechanical phenomenon called the Casimir Effect which creates what appears to be a negative energy density in very small volumes by suppressing quantum vacuum fluctuations. Maybe we can build a "warp drive" that way?
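
(For scale, the textbook Casimir result for two ideal parallel conducting plates separated by a distance $a$ is an energy per unit plate area of

$$\frac{E}{A} = -\frac{\pi^2 \hbar c}{720\,a^3},$$

negative relative to the ordinary vacuum and only appreciable at sub-micron separations - hence the "very small volumes".)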

BUT BUT BUT, we don't know whether "negative energy density" in quantum mechanics means the same thing as "negative energy density" in general relativity. We'd need a theory of quantum gravity for that, and we haven't got one.

So now White thinks he's done the math and figured out how to build a microscopic testbed for this. As you note, he's not always 100% on this sort of thing, so it's quite possible he got the math wrong. And the necessary microscale engineering might be impractical as well. And if it *does* work, it's probably not going to scale very well.

Even so, a demonstration of an impractically microscopic warp drive would be theoretically valuable, so I hope more credible researchers communicating in more credible forums check White's math and figure out whether this is worth trying.

Expand full comment

If I'm reading that article correctly, they didn't create a physical object. Rather, a scientist described a nanoscale structure which, if built, should create a tiny but detectable warp bubble.

>“Specifically,” said White during the AIAA presentation, “a toy model consisting of a 1-micron diameter sphere centrally located in a 4-micron diameter cylinder was analyzed to show a three-dimensional Casimir energy density that correlates well with the Alcubierre warp metric requirements.”

>“This qualitative correlation,” he adds, “would suggest that chip-scale experiments might be explored to attempt to measure tiny signatures illustrative of the presence of the conjectured phenomenon: a real, albeit humble, warp bubble.”

So, no warp drive, but maybe an actual physical experiment we can run that might teach us something interesting about quantum physics.

And I'd say it's still a pretty big step forward. All the theories about FTL travel rely on things like "negative energy" or "exotic matter", stuff which isn't *impossible* according to the laws of physics but which we've never actually seen in real life. So if we can actually make a physical object with those impossible-sounding properties, that's a pretty big deal. Assuming the theory pans out.

Expand full comment

If someone speaks physicist, here's the actual paper they're reporting on: https://epjc.epj.org/articles/epjc/abs/2021/07/10052_2021_Article_9484/10052_2021_Article_9484.html

Small correction: It looks like there is an experimental part to the paper. They were studying some other nanoscale structure and got measurements that looked sort of warp-drive-ish, and used that to come up with another structure they want to test. Again, assuming I'm reading this correctly.

Expand full comment

There's an experimental part, but what looked warp-drive-ish was their calculation of what their proposed-to-actually-build structure would do, not their measurements of the actual structure (which as of the paper they seemingly haven't/hadn't built yet).

Expand full comment

For a start, is it reported somewhere reputable? Given that someone claims that NASA invented FTL (or suggests it), I am going to assume that it is a pile of blatant lies.

Note that "Related posts" has "Meet the Man Building an Anti-Gravity Device, and the Alien God That Inspired Him" and a link to itself. Not a really trustworthy site.

Not sure if it is parody or a really inept attempt at science popularization.

Expand full comment

To be clear, some of the facts mentioned there are true (NASA exists, for example), but I suspect that the more interesting ones are misleadingly twisted or blatant lies.

Expand full comment

DataSecretsLox user WeDoTheodicyInThisHovse has started a December 2021 Welcome thread for new users who wish to introduce themselves. Having met her, I think she's a nice person and a great entry point for those who are interested.

https://www.datasecretslox.com/index.php/topic,5274.msg190011.html

Expand full comment

Sure, but it is the other active posters you have to worry about. IIRC Theodicy is one of the posters on the "People I'd love to interact with if only I could do it somewhere besides DSL" list.

Anyone one standard deviation to the right of center would probably really enjoy the welcome thread, though.

Expand full comment

Well, perhaps they do. I think it's worth noting all the libertarians that enjoy it there as well. As for me, I see nothing intrinsic to someone on the left that would keep them from spending time on DSL and thinking they got something worthwhile from it.

Expand full comment

I'm curious how people interpret the behavior of a business that's sending me daily emails, usually with one limited-time offer after another, including reminders of the amount of time left on the latest special.

Note that IIRC, this was not their practice until perhaps last year, and I've been a happy customer of theirs for at least a decade, and subscribed to their mailing list. Again IIRC, emails were often announcements of new products, and specials weren't continuous.

I'm especially interested in any insights from people working retail, particularly a smallish online business which also has a few storefronts. Products are physical, and consumable - but slowly, in case that matters.

Expand full comment

Well, they've finally got the response from me anyone who knew me would have predicted; I've unsubscribed from their mailing list.

I'll still buy their products; they are my go-to place for herbs and spices. And I hope the constant specials are not a sign that they are in financial trouble, desperate for more business, as I'd rather not go back to getting stale overpriced herbs and spices from my local grocery store.

But I need to stop waking up to more emails in my personal mailbox than fit on a single screen.

Expand full comment

Is it a small business? Maybe they hired a marketing person? A big place? I blame consultants.

Expand full comment

What newsletters do you have a paid subscription for, and why?

Expand full comment

None currently, unless you count my local newspaper - which I subscribed to mostly to keep my housemate from risking covid buying it retail, at a point pre-vaccination when we were buying groceries only once a month, and soon started getting even those delivered.

I'm frustrated with the substack model. I want to interact with bloggers, and substack's commenting facilities are substandard; LiveJournal was better than this, 20 years ago.

As I see it, the problem substack solves is "how to get certain well known journalists more money, while taking a rake-off for the founders". It does little or nothing for the less well known, and I strongly suspect the sum of the individual subscriptions is commonly way more than they are in fact worth. I don't want to contribute to yet another winner-take-all / star system payment model.

Mostly, though, I'm frugal, and especially disinclined to take on recurring costs, such as subscriptions of any kind.

At any rate, if I do take on one more subscription, I'll probably go with the New York Times, in spite of excessive politicization and what appear to me to be declining standards. Even at its prices, it's more cost-effective than substack. Though when it comes to political posturing, I can and do get more than I want, for free, as email and paper mail, in spite of aggressively unsubscribing whenever possible.

Expand full comment

So far only ACX and Freddie deBoer. The former for the excellent community and deep-dives into topics I don't have the skills to research for myself. The latter because he has repeatedly put into words the ideas that I have felt but couldn't articulate.

I'm open to subscribing to more, but none has the combination of monthly post output and content I care about in the same way as ACX and FdB. There's one more that I'm on the fence about, but I haven't found myself reading it enough lately.

Expand full comment

I've been a regular reader of a few newsletters. I tried ranking the ones I read regularly by the value I get, and this is where I ended up:

- yourlocalepidemiologist

- astralcodexten

- bariweiss

- https://stratechery.com/

- persuasion

- razib

- greenwald

- www.theinsight.org

- hardcoresoftware.learningbyshipping.com

- fx

- glennloury

I think some of these newsletters have potential value that could be unlocked by paying for a subscription, but I have a hard time convincing myself to do that. At the end of the day, I've chosen to get a paid subscription for my top newsletter; when something else replaces it and becomes number one for me, I'll get a paid subscription to that.

I wish Substack had a bundle-subscription option like Netflix so I could pay for more than one newsletter (versus the current model, which costs N × ~$5 per month).

Expand full comment

Yeah, a bundle subscription would be great for those of us trying to stick to a budget 🙁

Expand full comment

I think there are a large number of content-adjacent newsletters that would do well financially from a joint Substack. There are Substack tools for joint blogs, although you can't bundle multiple separate blogs. So you'd give someone Contributor access or whatever permission, and then you and your buddy or buddies could post content as a group. The cost-benefit would probably be complex when you also run an individual Substack, though.

Expand full comment

I’d probably spend around $500 on Substack alone if I paid for all that I read. Compare that to Apple News+, which is $120/year for family access to a host of traditional newspapers and magazines. Newsletters are shockingly expensive right now unless one’s interest is very narrow and limited to 1 or 2 subscriptions.

Expand full comment

Thanks @Gunflint! I guess I wasn't clear - by 'newsletters' I really meant Substack style newsletters, and not traditional newspaper/magazine media. That said, it's certainly valuable to hear about the relative tradeoff here!

Expand full comment

Way back at the beginning of the pandemic, people were publishing lots of articles and videos explaining simple models of how disease spreads, showing for example what happens in the Susceptible-Infectious-Recovered (SIR) model in which a population is uniformly mixed and randomly spreading a virus around, producing a neat little exponential curve that turns into sort of a bell shape.
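
For concreteness, the whole toy model is just three coupled ODEs; here's a minimal sketch (beta, gamma, and the initial conditions are invented for illustration, giving R0 = beta/gamma = 3):

```python
import numpy as np
from scipy.integrate import odeint

# SIR: a uniformly mixed population tracked as fractions, S + I + R = 1.
# beta = transmission rate, gamma = recovery rate, R0 = beta / gamma.
def sir(y, t, beta, gamma):
    S, I, R = y
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

t = np.linspace(0, 180, 500)                       # days
S, I, R = odeint(sir, [0.999, 0.001, 0.0], t,
                 args=(0.3, 0.1)).T                # beta=0.3, gamma=0.1
print(f"peak infectious fraction: {I.max():.1%}")  # the "bell shape" peak
```

The I(t) curve rises roughly exponentially, peaks, and decays - the bell-ish shape from all those explainers.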

I always assumed that these toy models were just for explanatory purposes, not for any serious epidemiological research. The following article is interesting because it explores a non-simple model of disease spread, but it also makes a shocking claim that models "like" SIR are "the most commonly used".

https://cspicenter.org/blog/waronscience/have-we-been-thinking-about-the-pandemic-wrong-the-effect-of-population-structure-on-transmission/

Surely the average epidemiological researcher is not that stupid? So, is this article being misleading/disingenuous, or are trivial models like SIR/SEIR *actually* widely and directly used by epidemiologists to make predictions, make recommendations, and evaluate policy effects?

Aside from that one stunning allegation, the model presented in the article is interesting (but, as the author notes, not necessarily accurate) because it produces hard-to-explain patterns vaguely similar to those we see in the real pandemic. Basically, the conclusion this type of model suggests is that the course of the pandemic in a region *cannot* be predicted in detail without a lot of good data we don't have about how specific people are physically interconnected with other people in the real world. A corollary: attempting to evaluate the effects of public policy based on things like "cases before and after the policy was put in place" can give more noise than signal.

Arguably the most interesting result from this model is that a variant can come to dominate in a population *without* being more infectious/contagious, which has caused me to wonder if delta/omicron are less infectious/contagious than they appear to be. But remember, there are *thousands* of catalogued variants and it's very rare that one of them "takes over the world", as delta did, which suggests that delta really is more infectious.

Expand full comment

Thanks for that link! I'm bookmarking it.

I like the fact that Lemoine's model can explain the phenomenon that I've been noticing — which is a clear downward trend in case numbers as certain variants become more prevalent. For instance, Alpha didn't come to dominate either the US or the UK's viral landscape until after the 20-21 winter surges. In fact, despite the popular delusion among epidemiologists that Alpha kicked off case surges in various countries, I can't find it dominating any surges until after the peak had passed. Likewise, we see Delta only coming to dominate the viral landscape in Brazil after the Gamma surge. (OTOH, unlike Alpha, Delta definitely created surges all over the world outside South America, PLUS it came to dominate the viral landscape worldwide.)

Expand full comment

The best reason for using a mind-numbingly simple model is if you don't have enough data to fit a better one. We all know that an agents-in-a-network based model would be more realistic, but it's got a zillion extra parameters and you don't necessarily have enough data to fit them sensibly.
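
For contrast with the ODE sketch upthread, here's roughly what the agents-on-a-network version looks like in miniature. Every number below is invented for illustration - which is exactly the problem: each knob (degree, rewiring, per-contact transmission, recovery) would need real data to pin down.

```python
import random
import networkx as nx

# Infection can only cross actual edges of a contact graph, instead of
# assuming uniform mixing.
G = nx.watts_strogatz_graph(n=10_000, k=10, p=0.05)   # small-world contacts
state = {v: "S" for v in G}
state[0] = "I"                                        # one seed infection

p_transmit, p_recover = 0.05, 0.1
for day in range(180):
    infected = [v for v, s in state.items() if s == "I"]
    for v in infected:
        for u in G.neighbors(v):
            if state[u] == "S" and random.random() < p_transmit:
                state[u] = "I"                        # spread along edges only
        if random.random() < p_recover:
            state[v] = "R"

print(sum(s != "S" for s in state.values()), "agents ever infected")
```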

Expand full comment
founding

OK, but why don't we have enough data? This isn't a COVID-specific thing; this is something the epidemiological community has needed since the 1980s at least. If you're trying to model an epidemic, whether HIV or COVID or anything in between, you need data on how people actually interact with one another - not spherical disease vectors randomly colliding in zero-dimensional space, but actual human behavior patterns. And it is data that would have been easy to get, at least to first order, through surveys and the like; possibly something that could have been piggybacked onto existing surveys like the GSS. And then validated by comparing model to reality in e.g. annual flu outbreaks.

So why, after forty years, are epidemiologists still working with zeroth-order models?

Expand full comment

Just a reminder, I did not accept the premise. Wondering if someone would want to confirm or (better yet) deny.

Expand full comment

First off, how accurate are your R0 values? That's the most heavily weighted variable in most models, and the hardest one to determine. I'm still seeing wide-ranging R0 values for SARS-CoV-2. In the early stages of the pandemic, I saw values as low as 1.5 and as high as 5.0, but it was population- and nation-dependent. B.1 barely touched SE Asian and African populations, but it ripped through European populations. Some populations seemed to have a higher natural immunity against B.1, and there were some cool studies correlating HLA frequencies and IFRs. But all those became moot when B.1.617.2 (Delta Classic) hit the scene. Previously "resistant" populations were getting hit hard by Delta, but the R0 for Delta was still showing up as between 2.5 and 5 depending on the country (and depending on the research team performing the "calculation" — R0 = wild-ass guess?). And worse yet, I never saw *any* epidemiologists publish their reasoning for how they calculated their R0 values. Nowadays the generally accepted R0 for Delta is 3.0. But is that accurate?

Anyway, right off the bat you've got an uncertain variable underpinning your model. You might say — well, run your model with a range of R0 values and see which fits the actual data best. But it's pretty clear that the reproduction value of COVID-19 changes over the course of an outbreak. For instance, with an R0 of 3.0 the virus should keep spreading until roughly 70% of a population has been infected. But none of the surges have infected more than 10% of the population before burning out. Why? One of the theories goes that once the density of people with convalescent immunity gets beyond a certain level, an R0 of 3.0 can't keep propagating. Which has raised all sorts of questions about the connectedness of different groups of people. For example, how do you calculate, say, the connectedness between Hasidic Jews and Blacks in NYC? Or, at an even more basic level, the differing connectedness between family members, friends, coworkers, and fellow community members?
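
For what it's worth, the "roughly 70%" is the herd-immunity threshold 1 - 1/R0 of the homogeneous model, the point where spread turns over; the same simple model actually predicts overshoot well past that before burning out. A quick check (purely textbook math, R0 = 3 as quoted above):

```python
import math

R0 = 3.0  # the commonly quoted Delta value discussed above

hit = 1 - 1 / R0  # herd-immunity threshold: each case now infects < 1 other
z = 0.5
for _ in range(100):           # fixed-point iteration on the SIR
    z = 1 - math.exp(-R0 * z)  # final-size equation z = 1 - exp(-R0 * z)

print(f"threshold: {hit:.0%}, final attack fraction: {z:.0%}")  # 67%, ~94%
```

Either way, the gap between that and the observed ~10% per surge is the puzzle the connectedness questions are trying to answer.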

An alternative theory is that once a surge starts happening, people start taking more precautions (masks, social distancing, etc.). But how do you quantify that variable? Oy, my brain hurts!

Anyway, modelers seem to be very invested in their models. But none of the modeling predictions that the CDC publishes every month have been accurate for more than 2 weeks out. That makes sense to me. We can't predict the weather more than 2 weeks out, why should we be able to predict the course of an epidemic?

Expand full comment

Blegging for a recommendation for a Silicon Valley GP that's rationalist-adjacent(-ish?). I don't expect a full-send, but anyone even marginally closer to the community or that others get along with would be super appreciated.

Expand full comment

My therapist recommended that I try to meet women at Asperger's/HFA support groups. Has anyone here done this? No offense, but I feel like this would be a good place to ask.

Expand full comment

There seem to be a lot of programmers here. And it's often said that there are a lot of autistic people here. Why doesn't someone program something like Bumble, a dating app where only women send messages, specifically for autistic women who are open to dating autistic men?

Because isn't something called a support group meant to be a low-stress place for attendees?

Expand full comment

Both my girlfriend and I are autists. We didn't try to find autistic partners on purpose; we just happened to have similar interests. Actually, we didn't even know that we were on the spectrum until we started dating. It was funny, really. At one moment she started investigating whether she was autistic, found lots of evidence in favour, and I was like: "Pff, this doesn't have to mean anything. I mean, I fit like 80% of this as well, so what, does it make me autistic now?" Took me some time to consider the possibility that the answer was actually "Yes".

We have a pretty good and deep relationship. We explicitly talk about our feelings, do not need to mask, and in general have short inferential distances and understand each other really well. For her it's the best relationship she has ever had. For me it's at least top 2: more challenging but also more fulfilling. I think it has something to do with the fact that I'm higher-functioning than she is, and nearly every one of her previous relationships was a mess, which left her with a couple of traumas.

Expand full comment

This could make an interesting essay, if you wanted to write more. It reminds me of a New York Times story that I read a decade ago (https://www.nytimes.com/2011/12/26/us/navigating-love-and-autism.html).

Expand full comment

Wouldn't those places be very dense with males already?

Expand full comment

I would expect so, yes.

Expand full comment

I would not recommend this; I think people overvalue similarity in relationships. Two Asperger's types will frequently butt heads, in my experience, and you don't exactly want someone who shares your weaknesses in a relationship. Instead I would recommend pursuing women who grew up with autistic siblings. I say this because a while ago I looked back at the set of women I had gotten along well with and noticed that an absolutely statistically improbable number of them had autistic siblings. I suspect that growing up around someone like that helps them not be so put off by certain behaviors (in women unfamiliar with neurodivergent types, you can easily trigger anomaly detection filters and just get thrown in the "creepy" bucket).

Expand full comment

Love the concept of "Anomaly detection filters"!

Expand full comment

Given the heritability of autism, isn't there a big overlap between "is autistic" and "has autistic siblings" anyway? This sounds to me like the best partner would be someone who is on the autistic spectrum, but has little or no visible problems.

I am not opposing your advice, though. People who are on the spectrum but have little or no problems will probably not join a support group. But you could find them by meeting the people in the support group... and then trying to meet their siblings.

Expand full comment

Oh boy. This sounds like it could go horribly wrong; then again, a lot of my paternal family are 'on the spectrum' yet they managed to get married, so it could work!

I think there might be trouble communicating romantic interest with autistic women, but what do I know; it may well be that many of them are open to finding love and partnership.

Expand full comment

> I think there might be trouble communicating romantic interest with autistic women

I guess a proper pickup autist might start like this: "Hello! I am romantically interested in you. If you are not interested in me, that's okay, no pressure. However, if you decide that you would like to explore this topic further, here is my card, send me an e-mail, and we can have a dinner together."

(Sorry, my game is a little rusty.)

Expand full comment

hey baby, do you publish your API?

Expand full comment

"Pickup autist"

I see what you did there!

I'm kind of on the spectrum myself, and back in the days of yore when anyone might have been interested in approaching me, you really would have needed to go with the "here is a written sign saying I want to ask you out" approach, as subtle hints and non-verbal signals would go right over my head.

(Not that I would ever have said 'yes', but part of ignoring any possible approaches was simply not twigging that they were possible approaches).

Expand full comment

In my RSS reader, whenever I scroll past an item, it's faded out to mark it as read. Does anyone know of a similar Chrome extension for Twitter (ideally the read tweets would be synced between computers)?

Expand full comment

The best workaround so far is to use Tweetdeck, in which you can clear the column containing your timeline after you've read it.

Expand full comment

Fun fact: the Spanish flu - which was a novel virus at the time - never went away. Modern seasonal flus are direct descendants of it. What happened is that more transmissible strains were less fatal. Could be good news for Omicron.

Expand full comment

My understanding is that influenzas descended from the Spanish Flu were the dominant seasonal flu strains for several decades after the 1918-1920 pandemic, but they largely died off in general circulation in the 1950s. The pattern being that pandemic flus tend to be followed by endemic seasonal flus descended from the original pandemic strain.

The 1957 H2N2 pandemic was the dominant seasonal flu until the 1968 H3N2 pandemic. The latter's descendants are still in widespread seasonal circulation (or were, until the Covid lockdowns cancelled the last couple flu seasons), but were joined by a different H1N1 strain (same general variety as Spanish Flu, but not necessarily directly descended from it) from the 1977 flu pandemic, and then by yet another H1N1 strain from the 2009 flu pandemic.

There were also two families of Influenza B among the seasonal flus, although I don't think either of them has been traced back to any known flu pandemic; instead, they seem to have been endemic seasonal viruses since time immemorial (in the colloquial sense, not the specific legal sense of having been demonstrably so since the accession of Richard the Lionheart as King of England). One of these two families, the Yamagata lineage, looks like it may have died out due to the Covid lockdowns.

The pattern of Influenza A pandemics developing into less-deadly seasonal/endemic flu strains is a partially hopeful and partially dreadful sign for Covid. Hopeful because less deadly is obviously a better outcome from our perspective than more deadly, if we must have an endemic virus. Dreadful because "less deadly" isn't the same as "harmless" or even "mostly harmless". Tens of thousands of people in the US alone die of seasonal influenza each year, which isn't an ideal circumstance.

And it's only a weak signal for what to expect from Covid, since while SARS-COV-2 and Influenza A are both viruses that cause upper respiratory infections in humans and can cause deadly pandemics, they're not at all closely related viruses (different phyla: SARS-COV-2 is a positive-sense RNA virus, while Influenza A is a negative-sense RNA virus), so we can only infer so much from one's evolution as an endemic human virus about what to expect from the other. Endemic/seasonal Covid might be a moderately deadly threat like seasonal flu, picking off tens of thousands of people, mostly from among the old, sick, and immunocompromised. Or it might truly become mostly harmless, like the other endemic human coronaviruses that are among the causes of the common cold. Or it might remain deadly at some significant fraction of the deadliness of current Covid strains among the subset of the population that's vulnerable to infection.

Expand full comment

Great comment 👍

Expand full comment

My understanding is that flu reproduces sexually. The H and N antigens are modular. When a new strain is introduced from pigs or birds, I think what happens is that it keeps most of the virus specialized for humans, but swaps in new antigens, which humans haven't seen for a while.

Expand full comment
founding

That's horrifying/fascinating!

Expand full comment

I didn't know about the recombination part, so thank you for pointing that out.

After digging a bit, I see that Influenza A has eight RNA strands that seem to work kinda like mini-chromosomes. Two of them contain the genes for the HA and NA proteins respectively (variants of which give the H and N numbers by which the strains are traditionally categorized). The strands do indeed mix and match when the same host has concurrent infections by different strains.

From what I gather, three of the eight strands differ radically between common seasonal Influenza A varieties (the HA and NA strands, plus one more) while the other five are much more closely related. The 1918 pandemic strain's genome, at least the parts of it that we've recovered, seems to be near (but not quite at) the base of the phylogenetic tree both for the HA and NA strands relative to modern seasonal H1N1 and for at least one of the other strands common to all modern major seasonal Influenza A varieties.

The standard interpretation seems to be that the five common strands and the H1 and N1 strands are descendants of a recent ancestor of the 1918 pandemic strain. The 2009 H1N1 strain is believed to have come from a descendant of the 1918 strain that had become endemic in North American domestic pigs during the original pandemic, and the 1977 strain is suspected to have come from a stored sample of 1950s-era seasonal H1N1 that got loose while being studied in a Soviet lab.

Expand full comment

What do you mean by reproducing sexually? Viruses reproduce by tricking cells into producing more viruses.

Expand full comment

https://www.cdc.gov/flu/about/viruses/change.htm

If the same host is infected with two different strains of influenza at once, you can end up with genes from each one, and potentially get HA/NA genes that weren't circulating in humans before and so nobody has any immunity to them.

Expand full comment

Yeah, I shouldn't have used the word reproduction. Sex is never for reproduction, but only for exchanging genetic material. Some species have mechanisms to force sex before reproduction. Maybe other species shouldn't be said to reproduce sexually, but that really draws the line in the wrong place.

Bacteria have a lot of adaptations for sex, but viruses only exchange genetic material incidentally, when two viruses infect the same cell at the same time. But some viruses have adaptations to make the mixture more useful. In particular, flu has a modular genome, making it easy to swap one H for another, and separately the N, while the host-specific adaptations stay separate.
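
A toy sketch of that mix-and-match, in case it helps (the eight segment names are the real influenza A segments; treating reassortment as a uniform coin flip per segment is a gross simplification):

```python
import random

# Toy model of influenza reassortment: the genome is 8 separate RNA
# segments, so a cell co-infected by two strains can package any mix of
# segments into progeny virions.
SEGMENTS = ["PB2", "PB1", "PA", "HA", "NP", "NA", "M", "NS"]

human_strain = {seg: f"{seg}-human" for seg in SEGMENTS}
avian_strain = {seg: f"{seg}-avian" for seg in SEGMENTS}

def reassort(a, b):
    """Each progeny segment is drawn from one parent or the other."""
    return {seg: random.choice([a[seg], b[seg]]) for seg in SEGMENTS}

progeny = reassort(human_strain, avian_strain)
print(progeny)
# A pandemic-style outcome: mostly human-adapted internal segments, but a
# novel HA (and/or NA) that the population has no immunity to.
```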

Expand full comment
author

I had a long talk with a biologist about this, and I think it's somewhat wrong.

Flu viruses are constantly mutating. They can't not mutate. Their genetic code is unstable. Only the evolutionary imperative to be good at what they do (infect people) keeps them even mildly similar to each other.

The Spanish flu was very, very good at being a flu virus. But when it infected everyone in 1918, everyone got immunity to it. So continuing to be the Spanish flu was no longer an option. It mutated into other flu strains, including the ones we have today. So it's true that all modern flus are descended from the Spanish flu.

But that doesn't mean they're "better than" the Spanish flu from a flu-fitness perspective. It's entirely possible that if we dug up the old Spanish flu from some laboratory and set it loose, it would give us another 1918-level pandemic event, much more severely than any existing flu, because it really is just an incredibly successful and effective flu virus, and nobody has immunity to it anymore.

Existing flu strains can't mutate "back" into the Spanish flu because it's too small a target. It took evolution however many thousands of years to invent Spanish flu, and it will take it another thousand years to reinvent it, even though it still has some of the pieces left over.

Expand full comment

Not to argue against you or a biologist who knows more about viruses than either of us...

But one factor that did not exist for most of the thousands of years pre-Spanish-Flu was Industrial-Age warfare. (The first recorded case was a cook at a US Army training facility in Kansas. Not sure whether he was Patient Zero or not. The spread of infection from that point depended on the size of the Army training camp, and on large numbers of people traveling via railroad/steamship to many other places in the United States and the rest of the world. Cases were seen on the East Coast within a week or two, and cases were seen near the Western Front in Europe within eight weeks.)

There may have been earlier versions of various viruses which could have done a Spanish-Flu style outbreak, but weren't able to. After all, if the nations of the world are not moving large armies across the globe using Industrial-Age transportation tools, the outbreak might be local and take many months to move across a single nation.

It's very hard to tell what various Influenza viruses were capable of before the Industrial Age transportation networks allowed carriers to circle the globe within a few weeks.

Expand full comment

That shakoist Substack post linked was especially good. I am not sure he explicitly pointed it out, but it seems like the central limit theorem strongly argues against the methodology used by the ivmmeta.com people. If all of the studies they include were equally valid, then their estimates should nicely form a normal distribution about the actual value of ivermectin's effectiveness. That they do not ought to invalidate the methodology of... pretending that they do.
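
A minimal simulation of the point (all numbers invented; `true_effect = 0` just means "no benefit"): if the included studies were all honest estimates of one true effect, their z-scores should come out roughly standard normal, and a meta-analysis can check that before pooling.

```python
import numpy as np

rng = np.random.default_rng(0)

true_effect = 0.0                          # assumed true effect size
n = rng.integers(50, 2000, size=200)       # 200 studies of varying size
se = 1.0 / np.sqrt(n)                      # bigger study -> smaller std error
estimates = rng.normal(true_effect, se)    # CLT: sampling noise is ~normal

z = (estimates - true_effect) / se
print(f"mean z = {z.mean():+.2f}, sd z = {z.std():.2f}")  # ~0 and ~1
# Heavy tails or lopsidedness here (cf. a funnel plot) mean the studies are
# *not* interchangeable samples of one effect, so naively averaging them
# isn't justified.
```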

Expand full comment

Question for the resident psychiatrist: how does fluvoxamine look side-effect-wise?

(Background: triple-vaccinated but zero antibodies due to immunosuppression; contemplating it as a preventive measure; prior experience with SSRIs and SNRIs almost a decade ago.)

Expand full comment
author

It's a pretty standard SSRI and my section on SSRI side effects at https://lorienpsych.com/2020/10/25/ssris/#5_What_side_effects_might_I_get_on_an_SSRI should apply.

Expand full comment

Thanks. It just does not seem to be used all that much, so I was assuming there was some reason for that.

Expand full comment

I just learned that large passenger planes have their own radars that can be switched between weather mode and terrain mapping mode (this explains where the pilots get their information from when they announce over the intercom that turbulence is expected in X minutes). I assume the pilot flips a switch to toggle between modes.

I found videos showing what the radar screen looks like in weather mode, but nothing for terrain mode. Can anyone find one for me?

Also, how often do commercial pilots use radar terrain mode, and under what circumstances do they generally do it (e.g. - when lost, when approaching for landing)?

Do the radar computers come pre-loaded with detailed topographical maps of the whole planet, allowing them to automatically deduce where the plane is based on what the radar sees in terrain mode, or are pilots left to look at the screen and figure it out by comparing the image to paper maps in the cockpit?

Expand full comment

I assume it is used to avoid mountains.

I can't imagine how a large passenger plane could get lost, and for landings the amount of ground-clutter reflection might make it more trouble than it's worth, given that radar altimeters plus orientation instruments exist (and glide path beams, now that I think of it).

Expand full comment

Why would they have to do any of that when GPS exists?

Expand full comment

GPS gives you position; it does not give you mountain heights.

Expand full comment

I would be interested to see which passenger flight routes pass over mountains at a height where that would be a concern.

Expand full comment

Guess before reading further: what percentage of all commercial airplane crashes between 1993 and 2002 were caused by a fully controlled airplane crashing because the pilots failed to notice they were flying into the ground or water - crashes that would have been completely avoided, as the plane was steerable?

What was the percentage between 2008 and 2017?

1) When you land you want to end up just at ground level - not higher, not lower - and airports have various ground levels

1a) In normal operation the plane is low only during landing/take-off, but not all operation is normal

2) Some airports are near mountains. The following are extreme cases:

- Lukla Airport (LUA)

- Courchevel Airport (CVF)

- Toncontin International Airport (TGU)

See https://en.wikipedia.org/wiki/Controlled_flight_into_terrain in general - "the second-highest fatal accident category after Loss of Control Inflight (LOCI)."

"CFIT was identified as a cause of 25% of USAF Class A mishaps between 1993 and 2002."

Expand full comment

And between 2008 and 2017 "CFITs accounted for six percent of all commercial aircraft accidents"

Expand full comment

It absolutely is a concern at takeoff and landing near mountains, also depending on where ATC directs you at those times. Such airports exist.

Expand full comment

Presumably your navigation system knows where the mountains are, so if it knows where you are it can tell you what mountains are nearby.

Expand full comment

I doubt GPS is fast and precise enough to pilot an airplane in rough conditions or for an emergency landing. And I'm not sure GPS coverage over the polar regions or large oceans is top-notch either.

Expand full comment
founding

GPS is fast and precise enough to avoid mountains, at least mountains that haven't moved since the last time the terrain database was updated. They are also precise enough for instrument landings in adverse weather. Source: own one, use it for avoiding mountains and landing in adverse weather. GPS also works over oceans and polar regions, not that it matters for this purpose due to the lack of mountains to dodge or runways to land on.

Nobody uses on-board radar to directly land airplanes, in emergency or otherwise. The military does use it for terrain avoidance; I haven't heard of civilian aircraft doing that but it's not absurd and might be a reasonable backup for GPS in remote mountainous areas.

Expand full comment

Is the worldwide terrain database good enough to rely on just GPS? I assumed that in some areas, or during an emergency, on-board radar may be more likely to be used.

Expand full comment

You can download 30x30m terrain data for most of the world right now.
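
In principle that's all a GPS-based terrain warning needs; here's a hypothetical sketch (the `dem` array, tile layout, and margin are all made up for illustration - real TAWS/EGPWS implementations are far more involved):

```python
# Hypothetical terrain-clearance check against a ~30 m elevation grid.
# Assumes `dem[row][col]` holds elevations (m) for a tile whose NW corner
# is (lat0, lon0), with square cells of `cell` degrees (1/1200 deg ~ 30 m).
def ground_elevation(dem, lat0, lon0, lat, lon, cell=1/1200):
    row = int((lat0 - lat) / cell)   # latitude decreases going south
    col = int((lon - lon0) / cell)   # longitude increases going east
    return dem[row][col]

def clearance_ok(dem, lat0, lon0, lat, lon, altitude_m, margin_m=300):
    """True if GPS altitude clears the terrain by the safety margin."""
    return altitude_m - ground_elevation(dem, lat0, lon0, lat, lon) >= margin_m
```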

Expand full comment

SRTM? Is it really good enough? Even for maps it is often lacking, due to voids, gaps, flattened peaks, and so on.

Expand full comment

GPS coverage over the polar region and large oceans is top notch; it's better than GPS coverage anywhere else, because you have respectively more satellites and less interference. GPS also works much better in airplanes than, for example, in cars, because there's less crap above you to block it. Its precision is a few meters, which is plenty fine for an emergency landing if you have some way to tell how far away the ground is, such as a radar or visibility over 10 meters.

Expand full comment

Given that the US government made GPS freely available for use worldwide after a Korean Air passenger flight was shot down over USSR prohibited airspace in 1983, precisely to prevent such events in the future, I'm pretty sure it's fast and precise enough to do all of those things.

Expand full comment

[Accidentally navigating hundreds of kilometers off course] and [making an emergency landing] are not similar events. GPS prevents the former well but doesn't help that much with the latter.

Expand full comment

Correction: the US government *would later* make it freely available; it was still in development then.

Expand full comment

Also in existence are ground-based navaids (VORTAC/DME, NDB). Some vehicles (unsure if any civilian) can also do star fixes in broad daylight.

Expand full comment

High school senior here, planning to attend college. A few wonderings I'd love to have some perspective on:

1) Is a Computer Science degree as good as people say it is? I've tentatively decided on CS, but I've heard alternately that there's either a crippling shortage of CS professionals in the US, or that you can spend months on the job search and the interview process can be a new circle of hell. I'm also specifically interested in cybersecurity; how does that compare with CS?

(To be quite honest, I'm also worried that if I go into CS I'll never get a girlfriend. Sue me, I'm a teen.)

2) What should I direct my effort toward in college? I've heard that college isn't useful for the education, but for the signalling power a degree holds. So rather than academics, what should I funnel my effort toward? "Enjoying my youth"? Building ties with competent like-minded peers in preparation for the future? Building work experience with internships? Working on personal projects?

3) How can I up my conscientiousness/build self-discipline? My test scores are always excellent, but my GPA is abysmal in comparison. Part of that is being hit by depression and later a behavioural addiction to reading, but I'm also just lazy and weak-willed. 'Executive dysfunction' was suggested back when I had a therapist.

4) General college/job/life advice? I can't see myself sticking with one career for the rest of my life; I'd rather switch it up every few years. I'd also like to travel, and am thinking of attending a year of college abroad, perhaps in Germany. These may just be juvenile sentiments that will pass, however, and in the case that I'm romanticizing something that sucks in actuality I'd like to know.

Expand full comment

Can confirm that buggering off to Germany in the middle of a degree is a good idea. Though for me it was before Brexit, so I had the opportunity to work there.

Expand full comment

I'm responding primarily to question 1. For context, I'm a software engineer on the cusp of retirement, and in the prosperous/high status end of the income distribution, though not at the very top.

1) A computer science degree from a good tech college, or a very well regarded generalist school, will almost certainly get you employed after college, with decent or better pay. It will also put you on a path with potential for very good pay indeed, either as a senior high end technical person, or on the management track.

A coding course from an also-ran outfit will probably keep you employed, but the first job will be harder, and you won't be a candidate for the high end.

Be aware that there's a huge range of salaries in this area. The funnel for the best US jobs takes most of its input from MIT, Stanford, etc. Waterloo if you are Canadian. This funnel starts with internships during college, which are vitally important. Also, the best career paths now generally require a masters degree.

Essentially all the male software engineers I know are married; most of the females are too. A good salary can be a potent aphrodisiac, at least for those seeking a long term relationship.

Some of them wind up repeatedly divorced, due to putting their careers ahead of their relationships. Be prepared to work long hours, especially if you are ambitious.

I know nothing specific about cybersecurity.

2) I learned a lot in college, and am glad I paid attention to my courses, but I'd probably have earned more over the course of my life if I'd paid more attention to building long term relationships with people outside of my field. Contacts turn out to be vitally important.

Internships, and contacts made on them, will among other things quite likely get you your first job in CS.

4) I suspect CS will be a decreasingly good career as time passes. I certainly liked it better 20 or 30 years ago, but part of that may be having been promoted to a level where I have to deal with a lot more things I consider BS, rather than the code, which I love.

It's probably good that you see yourself as moving to new things later in your life.

I'm a big fan of learning languages. If your German is remotely good enough to manage, go to Germany and make it a whole lot better. But if you do, don't hang out mostly in an American enclave, speaking English, even though that will be very tempting.

Expand full comment

> Be aware that there's a huge range of salaries in this area. The funnel for the best US jobs takes most of its input from MIT, Stanford, etc. Waterloo if you are Canadian. This funnel starts with internships during college, which are vitally important. Also, the best career paths now generally require a masters degree.

FWIW, I got a FAANG job right out of college with just a CS bachelors at a state school. Internships are important though.

Expand full comment

Try out Advent of Code (https://adventofcode.com/) and see if you like it. Married CS person here; it seems to work out as long as, you know, you talk to girls sometimes and don't just stay in front of the computer.

If you're really concerned, make sure not to skip out on the gym. Strength training can be fun too. Or dance, as someone else said.

Cybersec is cool if you can get into it, and if you are interested in it, don't wait till third year for the school to teach it to you. CS is a really really wide field, and school will try and build a foundation and then show you some stuff (e.g. one graphics class, one database class, one security class, one UI class, one AI class), but none of the classes will really go as deep as you can go on your own if you put the work into it. And honestly, the lectures aren't always better than youtube tutorials and trying stuff yourself.

However, it is important to understand enough about how computers work: what memory is, why this algorithm takes O(n) time while that one takes O(n^2). And the curriculum should make sure you understand all that by the time you graduate. A standard concrete illustration of that distinction is below.
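
Two ways to check a list for duplicates (a textbook example, not from any particular curriculum):

```python
def has_duplicate_quadratic(xs):
    # O(n^2): compares every pair of elements.
    for i in range(len(xs)):
        for j in range(i + 1, len(xs)):
            if xs[i] == xs[j]:
                return True
    return False

def has_duplicate_linear(xs):
    # O(n): one pass, remembering what we've seen in a hash set.
    seen = set()
    for x in xs:
        if x in seen:
            return True
        seen.add(x)
    return False
```

Same answer either way, but on a million-element list the first one does ~half a trillion comparisons while the second does a million lookups.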

I think my Math and Chinese language classes were "harder" in the sense that I don't think I could have learned the stuff from them from the internet / textbook. So sometimes you might decide to just study CS stuff for fun and take the classes which are harder to learn on your own. (For me, I also failed to learn graphics and OpenGL on my own, so I gladly took the class.)

Re:3

One piece of advice I heard from someone about motivation and such goes as follows.

"Even if I don't feel like it, I always at least get ready at the gym and tie up my shoes. If after that I still don't want to do it, I have my own permission to switch right back. But often, once I am fully prepared to do it, it doesn't seem so bad, and I can go and complete my workout"

I think that approach works quite well: you make some time and say "I will at least get everything set up here, maybe set up my editor, turn off the internet (download the docs ahead of time), and set a timer for 15 minutes".

If after all that you still don't get any work done, well, go for a walk or something and try again in a bit. But often I have trouble with putting down the distractions and less with picking up the work.

Expand full comment

You're getting lots of great replies already, but I'll add a few thoughts:

1) CS is good, and learning to program IMHO is one of the most valuable skills you can have. Basically every career involves computers already, and it's safe to say computers will be used even more in the future.

2) Do all that stuff you mentioned! Include a mix of potentially useful stuff (e.g. internships) and just-for-fun stuff (e.g. anime club). Some stuff will be awesome, some won't work out, and college is the time to experiment and find out.

3) Learning about yourself is just as important as academics, if not more. I'd highlight mental health as being something particularly important to work on, since nothing else really matters if your brain isn't in good shape. Campuses usually have mental health resources available, don't be shy to try them. That said, for self-help you can also check out https://www.lesswrong.com/posts/JgBBuDf5uZHmpEMDs/how-you-can-gain-self-control-without-self-control.

4) It's hard to give much general advice because everyone has different values; there are a lot of perfectly valid approaches to living life! I'd say first, take advantage of what college offers you. Go to office hours, go to career fairs and that sort of stuff, socialize, take elective classes that sound cool even if it's way out of your usual field, and try some clubs/activities. Second, keep an open mind about careers. It's better to think of college as a chance to collect information on potential careers, rather than training for a career. A lot of people end up happy in a career very different from what they could have imagined during college.

Expand full comment

Married to a software engineer. We have a nice life. It helps that he’s good at what he does, and he’s good at what he does because he likes it. I can tell when he’s been writing code because he has a spring in his step.

As for girlfriends, my standard advice is to hang out around the library science program. It’s 95% female, and I know a lot of librarian-programmer pairs. I think it has something to do with CS people liking communication to be specific, and librarians being able to rise to the occasion.

Expand full comment

Strike a balance between doing things you enjoy and that are useful to you (e.g., making and saving money, maintaining good health, finding and keeping friends you truly like). Ideally, often these two areas overlap, to people's points about figuring out whether you enjoy CS. Not everything needs to be enjoyable, and not everything needs to be useful.

Just because some people don't learn a lot in college doesn't mean you shouldn't. Try to learn a lot, whether it's the material itself or the meta-skills (how to learn hard things, how to write and communicate well, etc.) This is also a good way to build up your discipline, along with an exercise habit.

Find people you really like and keep them in your life. Old friends who have known you for decades (when you're older) are among the best things in life, so find those people now. That may mean you need to meet a lot of people in college. Don't assume the first people you meet are the ones you'll click with the most. In both college and grad school, I felt a little out of place for a year or more and then found great friends that I've stayed in touch with ever since. Try out different activities and groups and see what and who sticks.

Travel, a year abroad, and whatever enthusiasms you have now are definitely worth pursuing. You only get one life, as far as we know, so make sure you have some fun stories to tell when you're older. Even many experiences that suck in the moment will seem interesting and enjoyable as memories. So don't try to be *too* mature and practical. There's a place for that, but make sure to really live life and do some weird things.

You definitely don't need to stick with one career if you're reasonably smart, likable, and a good communicator. In fact, having multiple skill sets that overlap in interesting ways often makes you more valuable on the job market.

Expand full comment

> there's either a crippling shortage of CS professionals in the US, or you can spend months on the jobsearch and the interview process could be a new circle of hell.

Both could be true at the same time.

> I've tentatively decided on CS

Have you tried going through a programming tutorial?

If you can do this then it is a strong hint that it is a good field for you.

If not - can you do this with a tutor? If not, then it is a strong hint that it is not a good field.

Have you programmed for fun? (another strong hint).

> Building ties with competent like-minded peers in preparation for the future? Building work experience with internships? Working on personal projects?

Good ideas. Note the existence of programs such as Google Summer of Code or the GitHub Student Developer Pack. Look for more like that.

I would add "join interesting open source projects"

> 3

Please let me know if you have good hints here.

Expand full comment

EDIT: obviously, also enjoy life - as long as it doesn't involve life-destroying narcotics or harming others. Explore, travel, dance, etc.

Expand full comment

If you're good at it, CS is the best thing ever. It's the closest real life equivalent to being a Wizard developing magic spells. You should probably start learning to program before you get to college so you can get a feel for it and decide whether it is something you want to pursue or not.

Expand full comment

As for "getting a feel for it" and "being good at it", don't be alarmed if it's hard and you can't figure things out. It's hard for everyone and it's always hard to figure things out, but especially when you start; nobody is born good at it.

Expand full comment

1) I don't know if you enjoy computer programming. CS will open the door to making lots of money, but if you intrinsically enjoy programming, most of the process will be at least somewhat enjoyable; if you hate programming, it will be torture. Cybersecurity seems like a specialization that you can choose later. (But I am just guessing here.)

To find a girlfriend, learn to *dance*. It is a great balance to the sedentary CS lifestyle, and not only will you meet lots of girls, you are allowed (and socially expected) to touch them and do something that seems like a vertical simulation of sex (somewhat exaggerating here, but not too much; it depends on the specific dance). So you essentially skip the first few steps of dating.

2) Yeah, it sucks that the part of your life when you have the most free time is also the part when you least know how to use it well. Try a bit of everything, I guess. Explore. At this phase of life, learning is more valuable than doing (you will have the following decades to profit from what you are learning now)... with the exception that sometimes doing things is also a source of learning. Just don't spend *all* your time doing *one* thing; that is, if you e.g. start an internship, don't let it consume all your time.

3) https://lorienpsych.com/2021/06/05/depression/

4) Learn German e.g. here: https://deutsch.info/ and get good at skills that are useful across a large range of professions, such as communication and math. If you know German and programming, you can easily find a well-paying job in Europe. (English and programming is already quite enough, but if you add German to the mix, it is even better.)

Expand full comment

Seconding the dance recommendation--swing dance clubs are incredible for building social confidence and spending a lot of time interacting closely with the opposite sex.

Expand full comment

Pay is generally a lot lower in Germany than the US though.

Expand full comment

Speaking very generally...

In USA:

- huge salaries;

- more business opportunities.

In Europe:

- free-ish education;

- free-ish healthcare;

- actually 8-hour workdays;

- more vacations;

- more maternal/parental leave.

So the optimal strategy is to be born in Europe, study in Europe, move to USA and make money until you burn out, return to Europe, retire early in your 30s, start a family.

If you are already a teenager in the USA and feel adventurous... I am not sure whether it would make sense economically to learn German, move to Europe and study there, then return home. As a foreigner, you would probably have to pay something for the school. Also, travel and accommodation are not free. But given the astronomical costs of some American universities, is there a chance that it might still be cheaper?

Expand full comment

Travel and accommodation are not free, but college education is free or costs little in a lot of European countries, including Germany (except Baden-Württemberg and parts of Sachsen, where it costs ~3000€ per year for non-EU students).

Accommodation costs vary enormously, depending on the city. If you don't have excessive tuition fees like in the US or UK, this is by far the most expensive part. But you have accommodation costs regardless of whether you study in the US or abroad.

Expand full comment

Do you enjoy problem solving, and do you not enjoy people? Then major in CS like me, because every other job is either dirty (I would prefer to sit in an office), about people, and/or involves way less problem solving.

It's not the most fun thing in the world, but I actually kind of get a dopamine hit doing leetcode (and DSA type problems are one of my least-liked areas of CS). That doesn't happen for me when I'm memorizing biochemical pathways. I was going to go to medical school but I realized that it would be hellish, because I hate memorizing factoids. I like understanding complex concepts that you can't just google and I like solving problems. The existence of programming is pretty much a miracle with regards to trying to find a problem solving job.

And to comment on some other comments nay-saying CS. Yes, CS is not perfect. Your job might feel like a job. That's sadly normal. But what's the alternative? Other forms of engineering? Most of them end up writing code. I was doing premed + an engineering major, and I realized that medicine is very boring, and that engineering means you will probably end up coding, and if you don't, your job will be no more interesting than programming, and will probably be dirty, i.e. in a plant or a construction site or something gross.

So yes, there are some people below basically talking about how corporate meetings and making webshit and Javashit is boring. I agree. The thing is that there are no alternatives I can think of that CS doesn't provide a pathway to. For me the alternatives would be self-employment, teaching/research, or some sort of employment that is low on meetings and high on interesting work.

I can't think of a field that qualifies for the last thing more than CS. There is a lot of webshit being produced these days. I don't find it very exciting to write corporate applications in high level languages compared to doing harder, more cutting edge stuff. But the only better option outside of CS I can think of would be researching another STEM subject. Guess what? If you want to be a professor, you can be a CS professor. It's probably easier to get a CS professorship or private research position because the field is expanding more than the old sciences.

I can't think of a job that I would find more interesting than coding other than other STEM research, and since CS gives me the opportunity to do such research in the future anyway, it's pretty much a no-brainer for me to do CS.

As for degree difficulty: if you are good at math, CS is the easiest STEM degree available, period. Have you programmed before? If not, try building a calculator in Python or something (see the sketch below). If you make As in math and find Python kind of fun, you will find CS super easy. If you are not good at math, however, a memorization-based "STEM" degree might be easier. I think there is only one of these, depending on your definition of STEM. Maybe two. Biology has the absolute least math, then Chemistry. For Chemistry, you will take a truncated version of chemical physics. CS probably comes after Chemistry in terms of the math involved. I would say if you aren't good at math, CS is probably the third easiest STEM degree after Biology and Chemistry.
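For anyone wondering what the "calculator in Python" starter exercise might look like, here is one minimal sketch; this is just an illustration, and any design that parses input and does arithmetic counts:

```python
# A tiny four-function calculator: evaluates strings like "3 * 4".
OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
}

def calculate(expression: str) -> float:
    # Expects "NUMBER OPERATOR NUMBER", e.g. "10 / 4".
    left, op, right = expression.split()
    return OPS[op](float(left), float(right))

print(calculate("3 * 4"))   # 12.0
print(calculate("10 / 4"))  # 2.5
```

If a weekend spent extending something like this (error handling, more operators, a loop reading user input) sounds fun rather than tedious, that's a decent signal about the major.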

And I don't think it's necessary to compare to humanities degrees. Frankly, those are pointless and you shouldn't have to waste your time with them. Although in the job market right now they serve as a ticket to random office jobs that need a certain IQ level but not any skills. You could easily do them in 8th grade, but you need to get a business degree or whatever to show that you can read and write at at least a slightly above average level. If you're really smart, though, you need to be majoring in STEM, and if you don't want to be a professor, it needs to be engineering or CS, or maybe math or statistics.

>2) What should I direct my effort toward in college? I've heard that college isn't useful for the education, but for the signalling power a degree holds. So rather than academics, what should I funnel my effort toward? "Enjoying my youth"? Building ties with competent like-minded peers in preparation for the future? Building work experience with internships? Working on personal projects?

Maintain a high GPA (aim for a 4.0 and if that fails aim for Summa Cum Laude). Education, for most majors, is a hollow signal. For CS, you will actually learn a lot of useful things. For STEM in general, if you get a job in your field, you will actually learn useful things. Business/humanities degrees are hollow signals unless you become a professor of those things.

Education being a signal means your GPA matters a lot. Education being about skill means GPA matters less. In my experience, your GPA matters an amount proportional to how much of a hollow signal your degree is. The floor, however, is still really high. So GPA still matters a lot for CS and engineering.

I have this rich uncle who's a dentist; he told me the number one thing that matters in college is your GPA. So far my experience has proven him correct. You can recover from a low GPA to an extent proportional to how useful your degree actually is, but it's still a recovery. Don't make yourself have to recover.

As for what to do besides your GPA: research and internships. It's that simple. Nothing else will go on your resume. This isn't high school; you're not applying to college; you don't put bowling club on your resume. Do it if you like it, but keep in mind it's all about research and internships when it comes to building your resume. For CS, you also have a portfolio you can build from your basement, called GitHub. Isn't that cool? You should put that on your resume too.

>I'm also specifically interested in cybersecurity; how does that compare with CS?

AFAIK there are no "cybersecurity" bachelor's degrees, and if there are, you probably don't want one. I go to a good public school and they don't have one; the cybersecurity classes are CS electives. I am taking them because I also like cybersecurity. However, the field is somewhat hard to break into. You want to get a CS degree with cybersecurity classes on the side, and then potentially follow up with a master's if you feel like you need it. The reason why is that the jobs are very rare and you box yourself in with a "cybersecurity" degree.

>I've heard alternatively that there's either a crippling shortage of CS professionals in the US, or you can spend months on the jobsearch and the interview process could be a new circle of hell.

Same here. I think it comes down to competency. The job market looks good statistically speaking. Sometimes I hear people who start coding and a year later they're making $100k doing it full time, sometimes I hear about people who hardly get interviews and then bomb them when they do. I've seen resumes people post and there is a definite correlation with competence. https://old.reddit.com/r/cscareerquestions/comments/3e55c8/interviewers_can_people_really_not_pass_fizz_buzz/ctbowp4/

For whatever reason, maybe because there's no licensing, maybe because they hear CS is friendly to people without CS degrees, there are a lot of people who literally cannot do the most basic tasks, and who have no capacity to learn on their own, who apply for jobs. I think if you're above average it will be the most ripe, 1360s-esque job market you will ever see. Signals like GPA and portfolio will help you get interviews, and competence will help you pass them. People who can't fizzbuzz, with no experience, a 2.8 GPA, and the wrong skillset, apply to 200 FAANG jobs, flunk the interviews, and then whine that they can't find a job.
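For calibration, here is all that the FizzBuzz screen asks for; this is the stock exercise in Python, not anything specific to that linked thread:

```python
# FizzBuzz: for 1..100, print "Fizz" for multiples of 3, "Buzz" for
# multiples of 5, "FizzBuzz" for multiples of both, else the number.
for n in range(1, 101):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```

That a ten-line loop filters out a meaningful fraction of applicants is the whole point being made above.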

Expand full comment

> making webshit and Javashit is boring

It's not like there are no other jobs. They may pay less or something, but they definitely exist.

Expand full comment

1) Plenty of cybersecurity professionals have a CS degree; think of it more as a specialization than a separate career path. Regarding the job market, there are lots of jobs available, but lots of people looking for jobs suck. I wouldn't hire at least half of my graduating CS class for any coding job. Additionally, the interview process at most companies is tuned to try and make the false positive rate low at the expense of the false negative rate, and it's not difficult to fall into a "would be good at the job, but is rejected at some stage in every interview process due to some random signal" trap. This is where having personal connections really helps.

If you want to meet women, choose a woman-dominated area that you're interested in, and minor in it. Or join clubs that aren't CS related. Also, you'd be surprised how much better the gender ratios in Electrical Engineering and Math are than in CS; not 50%, but in my experience noticeably better.

2) College gives you a great opportunity to educate yourself; choose the classes that interest you, and that will provide useful skills in your job, and throw yourself into them. Additionally, if you do decide to go into CS, the more experience you have programming, the better. Do hackathons, personal projects, try to contribute to open source projects. Get as much time coding as you can. I imagine there are similar skills in other disciplines, though I don't really know. Beyond that, try to make friends who will work in the same industry as you! It's way easier to get a job somewhere if you have a friend on the inside who can chat with the hiring manager.

3) I also struggled with this. One of my major issues, which may or may not apply to you, is that I would sometimes not turn in half-finished homework—get comfortable turning in things halfway done. 50% is way better than 0%. (If you're having difficulty with this, try writing "Sorry, didn't have time to finish this" at the top of the homework. Not sure why that helped me, but it did.)

4) This is much easier to do if you go make 100k in CS while spending 20k for a few years after college.

Expand full comment

Along the lines of turning in halfway-done homework: I am an IC (chip) designer, and I did this type of thing even on tests, often in physics and occasionally in math (calc, etc.). I would hit a point where I lacked an equation and write, "Not sure what this equation is, but if I knew it, this is what I would do next..." You can get a lot of points for that. Even in math, showing you know the process is far better than an incorrect guess or just unfinished work. ("I don't know how to differentiate this, but if I did, I would, and then I would set it to zero to find the max/min of the original function.")

Expand full comment

1) Yes, it is an excellent choice for job prospects. Cybersecurity is fine, but you have 2-3 more years before you make that decision. No need to worry about it now.

For finding a girlfriend: yes, definitely, CS has very few female students. Obviously, there are other ways to find a girlfriend, but it is also true that you will spend a lot of time with your fellow students. If you are really concerned about it, you *could* consider starting with a related subject and switching to CS later. For example, math has a much more balanced gender ratio than CS. But this only makes sense if you already have a high interest in math. I would only recommend this if the male/female ratio in CS really troubles you a lot.

2) I agree with the others. Take your studies seriously. You will learn a lot of useful and a lot of (for you) useless stuff, but it's hard to predict in advance what will become useful for you.

3) Building social ties helps. Build a study group, solve exercises together with others. If you spend time with them, it will be much harder to get distracted by other things. This doesn't solve everything, so *also* try out good ideas from other people.

4) I would very much recommend spending a year abroad. It's not necessarily the fastest way to start earning money, but most people who did this wouldn't consider it as wasted time. (Except for the last year. So sad for exchange students to have online teaching and distance rules. :-( ) Germany is a nice choice, it has good universities, and (almost) no tuition fees. I think there are many good choices, and few ways to screw this decision up too badly.

Expand full comment

Programming jobs are great, but a CS degree is neither necessary nor sufficient to get one. It's one of the fields where the signaling power of a degree is the smallest. Software is eating the world so any job turns into a programming job sooner or later.

Security is a desperate waste of time: companies and countries that don't have security will be replaced by companies and countries that do, but currently the decision-makers in most companies can't tell security from a hole in the ground, so they buy security products based on the opinions of shill operations like the Gartner Group. This works about as well as getting a company to profitability by buying new accounting software chosen by people who don't know anything about accounting.

Don't worry about the girlfriend thing. Being an adult, in both the literal sense and the metaphorical sense, is a much bigger factor here than choice of career, unless your alternatives to CS include becoming a famous actor, playing professional sports, or some other avenue to being a celebrity. Traveling helps too. You will be amazed how much more attractive you are to women at 25 than you are at 18.

Expand full comment

3) is almost certainly the most critical thing to get right. No matter what you end up doing, it'll matter hugely. My advice is to listen to the Huberman Lab podcast, particularly the episodes on sleep, dopamine and addiction. They've helped me get a handle on very similar issues after nearly 3 decades of struggling with them.

Expand full comment

I was also pulled in a million different directions as an eighteen-year-old and ultimately decided on CS. There are going to be a lot of people in these comments who say that CS is good, so I'll provide a counterpoint and say that I wish I hadn't done it.

The big question you'll have to answer fairly quickly is "Do I enjoy commercial software programming?"

None of your classes are going to give you the answer to this; the only way to really tell is to get an internship that lets you simulate the experience of being a software engineer (i.e. spending about fifty hours a week doing some combination of writing code and sitting in meetings discussing code).

I hated it, and I realized that I hated it during my first internship, but unfortunately my first internship wasn't until the summer between junior and senior year, so I was kind of stuck.

I thought, "Oh well, no big deal; I can just freelance when I need the money and travel the world, la dee da."

It's much harder than it sounds. Skillsets get stale very quickly in software, so it's hard to jump out and jump back in. Also, you're SO MUCH MORE VALUABLE to a company if they "own your time," so to speak, i.e. if they can call on you whenever they want without paying you extra money, i.e. if you're a salaried employee. If you are a competent programmer with a degree from a name-brand school, you'll probably have people lining up to hire you as a salaried employee but still face difficulty finding decent-paying freelance work.

I also thought, "Oh well, with my new CS skillset I can just found a startup and be my own boss, la dee da!"

And yeah, I really did try it! And then the startup didn't work out (happens to about 75% of them), and I was broke and I needed a job, so I ended up back at an office desk for a few unhappy years before I finally got out for good.

What I'm saying is this: If "sitting in an office and writing code with a bunch of people doing the same thing" sounds very bad to you, I would seriously question CS as a career choice, because you run the risk of just kinda washing out like I did. If the collaborative coding project paradigm actually sounds okay to you, then it could be a great fit. There were a ton of people at my company who liked (or at least didn't mind) that paradigm, and for them it was great: They got paid well and were reasonably happy. I just hated every minute of it and had to spend about five years doing it before I finally realized I absolutely could not anymore.

So just...make sure being a software engineer is actually something you'd be okay with, lol.

Other Things You Ought To Know About Computer Science:

1. Unless you are a person who is super mathematically and logically inclined, you're going to have to work a lot harder in college than you would while majoring in, say, English. I had some great experiences in college, of course, but could most commonly be found in the basement of the CS building running on very little sleep and on hour 12 of trying to get some assignment to compile. This was not the experience for my friends who were humanities or social sciences majors.

2. CS might fuck up your GPA, which will make it difficult to get into a good grad school. You'd think they'd care and take into account that you had a fairly difficult major, but they don't, really; it's kind of a pure numbers game (at least for the professional schools).

3. You mention never getting a girlfriend. I mean...CS isn't going to single-handedly stop you from getting a girlfriend, but it does mean that there will naturally be fewer girls floating around in your social circles. It's not a totally dumb thing to consider, although if you're reasonably attractive/social it's not going to matter very much (although I have heard horror stories about the dating scene from my male friends who live in Silicon Valley, lol).

In retrospect, since I ended up going back to grad school anyway, I would probably have just majored in something I enjoyed (something that involved a lot of reading instead of a lot of coding), gotten a high GPA, spent a couple years figuring out exactly what I wanted to do, and then done grad school (it's easy enough to change your trajectory with a masters or postbacc program).

Your description of yourself does not exactly scream "happy as a CS major/software engineer," so I would tread carefully.

Expand full comment

The scarcity/glut issue of programmers is this: there are a large number of people with some form of programming credential (like a CS degree) who can't program themselves out of a paper bag. The interview process may be stupid, but so are a huge number of applicants.

A great way to get a leg up on your job application is to do some software development work in college and post it on github or whatever. Find a tool you need, work on Open Source stuff, whatever. Show that you can produce something that works.

Expand full comment

Happy and sad fact on corona: by the end of this year, 12.5 billion vaccination doses for corona will have been produced. Monthly production is 1.5 billion.

Given that there are 5.8 billion people of age 15+ on earth, that's easily enough to give every adult two shots, with almost a billion doses left for children, booster shots, etc. (Currently about 250 million booster shots have been given world-wide, so there is still margin.) By end of March, we could give three shots to all adults on earth.
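Spelling out the arithmetic behind those figures, a quick check using only the numbers quoted above:

```python
# Back-of-the-envelope check of the dose figures quoted above.
doses_by_year_end = 12.5e9
monthly_production = 1.5e9
adults_15_plus = 5.8e9

spare_after_two_shots = doses_by_year_end - 2 * adults_15_plus
print(spare_after_two_shots / 1e9)  # ~0.9 billion -> "almost a billion" doses spare

# Three more months of production lands roughly at three shots per adult:
print((doses_by_year_end + 3 * monthly_production) / 1e9)  # ~17.0 billion available
print(3 * adults_15_plus / 1e9)                            # ~17.4 billion for 3 shots each
```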

It would be enough even if every single adult took the shots, and we know that some people decline the offer to get vaccinated.

Meanwhile, Africa has given out only 18 doses of vaccine per 100 people. This is not a malfunction of vaccine production, it's a malfunction of distribution.

I am moderately optimistic that this will change soonish, and that the excess doses will finally reach Africa. I wish I could be sure about it.

Source: https://www.ifpma.org/resource-centre/as-covid-19-vaccine-output-estimated-to-reach-over-12-billion-by-year-end-and-24-billion-by-mid-2022-innovative-vaccine-manufacturers-renew-commitment-to-support-g20-efforts-to-address-remaining-barr/

Expand full comment

Update: the largest vaccine manufacturer, the Indian Serum Institute, has announced (or threatened) to cut vaccine production (AstraZeneca) by half because they don't find buyers. Apparently they are sitting on a stockpile of half a billion doses that nobody wants.

https://www.bbc.com/news/world-asia-india-59574878

Expand full comment

Yesterday I learned at Noah's substack that vaccine manufacturers had stopped maxxing out their factories a while ago. :/

Expand full comment

I am always in favor of more support to poor countries, and outraged by the lack of funding for things like malaria and general economic support. This holds for the vaccine issue, but it's worth adding that sub-Saharan Africa remains among the least-hit regions in the world by Covid, probably because of low urbanization rates (https://ourworldindata.org/explorers/coronavirus-data-explorer?tab=map&facet=none&Metric=Confirmed+deaths&Interval=Cumulative&Relative+to+Population=true&Align+outbreaks=false&country=USA~AUS~ITA~CAN~DEU~GBR~FRA). I believe this remains true when accounting for under-reporting, but don't have a source on hand for this. The fact that the same pattern is seen not only in countries with questionable statistics and health systems (say, Tanzania and Zimbabwe) but also those with robust systems and some levels of transparency (Rwanda, Ghana, Botswana) suggests to me that this isn't just an artifact of misreporting. And this fits a more general worldwide pattern of urbanization predicting infection rates (with the striking exception of China and East Asia more generally).

I think even with this considered, Africa is being under-vaccinated, but the problem is not quite as stark as the raw numbers might suggest.

Expand full comment

> a more general worldwide pattern of urbanization predicting infection rates

Do you have a source on that? When I look at US states, the top ten states for total infection rates are North Dakota, Alaska, Wyoming, South Dakota, Tennessee, Utah, Rhode Island, Montana, Kentucky, South Carolina.

Rhode Island and Utah are in the top ten most urban states, but of the other eight states on that list, four are in the ten *least* urban states, and the other four are also in the bottom half of urbanization.

(All time infection rate from here: https://www.nytimes.com/interactive/2021/us/covid-cases.html

Urbanization from here: https://en.wikipedia.org/wiki/Urbanization_in_the_United_States )

Expand full comment

I am also a bit doubtful about the urbanization hypothesis. Looking at the list of excess mortalities here, https://www.economist.com/graphic-detail/coronavirus-excess-deaths-tracker, there are a lot of countries in the top part which sound rather rural, e.g. Bulgaria, Peru, Albania, Kazakhstan, Romania.

South Africa is also in the top part. There are very few African countries in the list, though. But I think Africa was hit really hard by the last wave in July/August, much harder than other regions were hit by third/fourth waves.

Does someone know about India? It might be a good test. India was hit hard in general, but were rural Indian states hit less hard than urban ones?

Expand full comment

There's definitely been a lot of discussion of how boosters for us are getting in the way of first doses for others. But it has sounded to me for a while like most countries are not dose constrained, but are instead either logistics constrained or demand constrained. I'm still hesitant to state that this is a fact, because I just haven't seen any systematic discussion of it, but if anyone has such information that would be helpful.

The link you post suggests that in a few months, the world will have enough doses to give everyone two or three doses, which supports what I'm feeling right now. But I'm a bit skeptical, because this comes in the form of a press release from the International Federation of Pharmaceutical Manufacturers, telling everyone "look, there's no need to weaken our intellectual property protections". It seems to me that even if there are lots of countries that are dose-constrained, the manufacturers have an incentive to say they've produced enough for everyone, and to keep selling extra doses to rich countries that are letting them expire while locals refuse them, rather than doing the difficult thing of selling to poor countries.

Expand full comment

That's a good catch, thanks for pointing this out! I still find the numbers plausible, it's pretty consistent with the prognoses from spring and from summer (which came from an independent analyst company, as far as I could tell). But yes, the source is not neutral.

Also, the shortage is very specifically African. There doesn't seem to be a super-severe shortage in India, or Latin America, or Indonesia, or Mongolia, going by vaccination rates.

Except that I just noticed Haiti, which has ~2 doses per 100 people, compared to 125 doses in neighbouring Dominican Republic. Sad.

Expand full comment

I would want to dig a bit further into data on some poorer countries in other regions, like Paraguay and Laos. But I am convinced by the broader point - it's mainly vaccination capacity that is missing, not doses.

Expand full comment

If we had the same degree of malfunction of distribution, but 10 times as much production, Africa would have 180 doses of vaccine per 100 people. Even more importantly, if we'd been able to ramp up vaccine production in March 02020, there wouldn't be a pandemic.

Expand full comment

> If we had the same degree of malfunction of distribution, but 10 times as much production, Africa would have 180 doses of vaccine per 100 people.

Why? If the bottleneck is distribution (or refusal to take them), then how much is available would not influence things at all.

Obviously, it is neither a pure logistics issue nor a pure production issue, but assuming that all logistics issues would be fixed just by increasing production seems clearly wrong.

Expand full comment

I'm not assuming that all logistics issues would be fixed just by increasing production. I stated my assumption up front: "If we had the same degree of malfunction of distribution". That is, I'm reasoning about what would happen if the vaccines continued to be distributed with the same degree of inequality. You're thinking in terms of "bottlenecks", but to keep the same bottlenecks (in Africa but, I suppose, not elsewhere) rather than the same degree of inequality, you would have to vastly increase the degree of inequality.

Which should we believe would happen? My belief is that this is specifically an issue of *inequitable access to vaccines*, not insufficient shipping capacity full stop. That's because each vaccine dose weighs about 10 grams, requires about another 10 grams of dry ice (in the case of mRNA vaccines) and a smaller amount of styrofoam to reach its destination, so 18 doses of vaccine per 100 people amounts to about 360 grams per 100 people. Over the last year.

I know there are parts of Africa that have very poor logistics indeed, but there's absolutely no way Africa as a whole is only capable of shipping 360 grams per 100 people per year. That would mean not only that everyone who wasn't a subsistence farmer would starve to death (and I remind you that Africa contains some of the world's largest cities), but also that delivering a 20-kilogram washing machine from the factory would max out the yearly shipping capacity of 6000 people. And that's just ridiculous.
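The washing-machine comparison falls straight out of the weights given above; here is the arithmetic made explicit:

```python
# Weight-based sanity check of the shipping argument above.
grams_per_dose = 10 + 10              # vaccine plus dry ice; styrofoam ignored
doses_per_100_people = 18

grams_per_100_people = doses_per_100_people * grams_per_dose
print(grams_per_100_people)           # 360 g per 100 people per year

washing_machine_grams = 20_000
people_years = washing_machine_grams / (grams_per_100_people / 100)
print(round(people_years))            # ~5556, i.e. roughly 6000 people's yearly total
```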

If you're interested in the issues of healthcare supplies delivery in Africa, Project Last Mile's yearly report looks good: https://www.projectlastmile.com/wp-content/uploads/2021/04/PLM_AnnualReport210330bs.pdf

Expand full comment

1) Have we got vaccines without strict refrigeration requirements? And how long would 10 g of dry ice last?

2) I am thinking rather of problems such as "the local government is incapable of organizing something like transport and coordinated vaccination" or "this road is impassable for the next 6 months", etc.

> My belief is that this is specifically an issue of *inequitable access to vaccines*, not insufficient shipping capacity full stop.

I agree that cheaper vaccines would help.

> delivering a 20-kilogram washing machine from the factory would max out the yearly shipping capacity of 6000 people

Delivering 2000 10 g packages to 2000 different people is typically much harder than delivering a single 20 kg package to a single destination.

Expand full comment

#1: Yes, we do. 10 grams of dry ice will last an arbitrarily long time with enough insulation. If you don't have enough insulation you might need a few times that.

#2: Yes, there are a few isolated places where such problems are insuperable.

I agree that delivering a large number of packages is more difficult, but come *on*. We're talking about an orders-of-magnitude smaller logistics problem than the ones people solve every day for soft drinks and hot dogs. Yes, in Africa too.

Expand full comment

Why wouldn't there be a pandemic?

Expand full comment

Because widespread vaccination in the affected areas in March and April 02020 would have prevented it from spreading to the rest of the world.

Expand full comment

That might have worked in December 2019. By February and March 2020, it was already circulating world-wide in dozens of countries. Most countries (including the US, as far as I recall) just hadn't started to test for it properly.

Even if we had had enough vaccines in January 2020, I don't see how distribution could have been fast enough to stop the pandemic. In retrospect, the first cases probably reached Europe in early January or even late December in several countries, unnoticed by surveillance.

E.g., here: https://link.springer.com/article/10.1007/s10260-021-00568-4

Expand full comment

Hmm, I guess I was wrong about that. Earlier widespread vaccination might have been able to keep the case numbers from going massive like they did, but it couldn't have prevented the virus from spreading around the world.

Expand full comment

I think the theory is that we could've locked down enough to stay ahead of cases.

Expand full comment

On the plus side, Africa is a young continent. Low vaccination probably won't be as damaging there.

Expand full comment

> Meanwhile, Africa has given out only 18 doses of vaccine per 100 people. This is not a malfunction of vaccine production, it's a malfunction of distribution.

That is nothing new, the same applies to starvation deaths. Logistics are often harder than production.

Expand full comment

At least when you offer a starving man a bread roll he always eats it. Not so with vaccines.

Expand full comment

You sit in a chair across from Morpheus; he holds forward two clenched fists. He speaks your name, and opens them. "In my right hand, I hold the red pill," he says, your face perfectly framed in his smoked glasses. "In my left hand I hold the blue pill. Choose carefully."

You attempt to stand up. You have no idea who this Morpheus person is, or what these pills do. You are manifestly not in the Matrix movie. This is something else, but you have no idea what.

Meaty hands push you down into your chair. Morpheus' gun toting henchmen hold guns to your temple.

"you misunderstand me." Morpheus says. "you must choose, take the red pill, or the blue pill."

Knowing absolutely nothing about either pill except their color, which do you take and why?

Expand full comment

I must be one of three people in the world who never saw any of the Matrix movies, so I have no idea what either colour pill is meant to do.

If I have to pick one, I'll pick blue - blue for the sea, blue for the sky, blue blue my world is blue, blue is my favourite colour, this is not a blue bottle:

https://www.histoiresdeparfums.com/pages/collection-this-is-not-a-blue-bottle

Expand full comment

From having seen the movies a few times, I have no recollection of which pill is which. But in the years since the movie came out, "getting red-pilled" has become a meme on the internet for people "learning the truth" about how the world is corrupt (usually in the way that dooms nice guys like the one in question to never have a girlfriend, unless he learns his pick-up artist skills). So I figure that blue was the one that leaves you in The Matrix and red was the one that broke him out of the machine to learn about the deception he had been in.

Expand full comment

Red, to own the libs.

Expand full comment

If you are not certain that this situation bears no resemblance whatsoever to its inspiration, assign weights to the pills according to their meaning in the movie and take whichever one you like first. Otherwise, choose at random.

Expand full comment

Call their bluff

Expand full comment

Red, because: 1) I like that colour better. 2) It's Christmastime and red is a Christmas-y-er colour than blue. 3) That one SSC post: https://slatestarcodex.com/2015/06/02/and-i-show-you-how-deep-the-rabbit-hole-goes/

Expand full comment

I loved that story, but needed about two minutes to figure out that Blue, Green and Grey can all create neg-entropy (technically speaking, Orange, Yellow and Black probably can do it too, but theirs may be harder to harvest), so I would assume that King William at least should be able to figure the solution out without needing to chase a wild goose like the Turin Shroud...

Expand full comment

Am I the only person who's noticed hydroxyzine is an atypical antipsychotic?

Hydroxyzine is generally described as an antihistamine that, for reasons nobody can explain, just happens to be remarkably effective for treating anxiety, even though other antihistamines aren't.

If you look into it though, hydroxyzine is a ligand of the D2 receptor [1] and the 5-HT2 receptor family (paper didn't report subtype but presumably this includes the 5-HT2A receptor) [1] [2] with potency not much lower than for the H1 receptor. This strongly suggests it's an atypical antipsychotic (especially since doses used in anxiety are higher than used for allergy). It can also cause tardive dyskinesia [3], just like other atypical antipsychotics (other than possibly clozapine). More to the point, though, it being an atypical antipsychotic entirely explains why it treats anxiety, but other antihistamines like diphenhydramine don't.

This isn't the first time an atypical antipsychotic has been misidentified: trimipramine, originally discovered as an antidepressant, also has affinity for the D2 and 5-HT2A receptors, and one trial actually found it effective for the treatment of schizophrenia, even going as far as discharging patients on a maintenance dose of trimipramine [4].

To my knowledge, nobody has tried this with hydroxyzine; I can't find any literature mentioning an antipsychotic effect of hydroxyzine at all. Even the article above on hydroxyzine-induced tardive dyskinesia [3] doesn't mention an antidopaminergic effect of hydroxyzine, instead speculating that the mechanism by which it causes tardive dyskinesia is somehow related to its antihistamine effects.

This really seems to call into question the common use of hydroxyzine monotherapy for anxiety. Antipsychotics generally aren't used as monotherapy for anxiety (outside of bipolar disorder, though then you're not just treating anxiety), not because they aren't effective, but because you can generally use a lower dose of antipsychotic when you combine with an SSRI or another anxiolytic. I think the assumption is that hydroxyzine doesn't have the same risks as commonly prescribed atypical antipsychotics, so it's safer to use as monotherapy. But this seems to question whether it is, in fact, safer.

I also wonder whether I'm missing something. It seems really unlikely that everyone in psychiatry just totally missed hydroxyzine being an atypical antipsychotic, despite it having been used for many years and the signals being relatively obvious. People noticed for trimipramine, so they should notice for hydroxyzine, too, if it is in fact an antipsychotic, right? I have a hard time believing I'm the only one to notice this, and somehow everyone else is blind to it. So I wonder if I'm wrong. I wonder if there actually is something that makes hydroxyzine different from atypical antipsychotics.

(I also wonder if other antihistamines, like promethazine, might also have weak antipsychotic effects. It would make sense - perhaps others have noted it - though I haven't really looked into it.)

[1] https://www.jacionline.org/article/S0091-6749(05)80248-9/pdf Snowman, A. M., & Snyder, S. H. (1990). Cetirizine: actions on neurotransmitter receptors. Journal of allergy and clinical immunology, 86(6), 1025-1028.

[2] https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.873.3218&rep=rep1&type=pdf Haraguchi, K., Ito, K., Kotaki, H., Sawada, Y., & Iga, T. (1997). Prediction of drug-induced catalepsy based on dopamine D1, D2, and muscarinic acetylcholine receptor occupancies. Drug metabolism and disposition, 25(6), 675-684.

[3] https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5472076/ Cornett, E. M., Novitch, M., Kaye, A. D., Kata, V., & Kaye, A. M. (2017). Medication-induced tardive dyskinesia: a review and update. Ochsner Journal, 17(2), 162-174.

[4] https://www.thieme-connect.com/products/ejournals/abstract/10.1055/s-2007-1014510 Eikmeier, G., Muszynski, K., Berger, M., & Gastpar, M. (1990). High-dose trimipramine in acute schizophrenia. Pharmacopsychiatry, 23(05), 212-214.

Expand full comment

I'm not familiar with this drug, but I've been told specifically that imipramine is an antihistamine (by a pharmacist advising that OTC antihistamines might exacerbate an adverse reaction), and also by an allergist that Seroquel (quetiapine) masks the standard allergen tests.

Do people assign any meaning to psych drugs being antihistamines? I have never heard anyone say (from psychiatrists to GPs to allergists) "you know what, a general model of mental illness might be that it's an allergy to common mold or fungi".

Expand full comment

I've always assumed that there's no real benefit to antihistamines for mental illness other than sleep (or making someone more sedated and thus less troublesome for people around them, though this isn't usually a good reason to give someone a drug). I've always assumed it's just because the H1 receptor (which is the target of antihistamines) is structurally similar to various other CNS receptors (like serotonin/adrenergic/dopamine/muscarinic receptors) and so it's hard (especially before modern drug discovery processes) to make a drug that affects some of those receptors without also affecting the others.

(Some people also like more sedating antidepressants or antipsychotics because they make them more relaxed/less tense/less "on edge", but my guess is that that's through other mechanisms rather than an antihistamine effect - since pure antihistamines don't seem to help much with anxiety.)

As to whether allergy might be involved in mental illness, I'm not aware of any research on that specifically, but I think there's generally agreement that the immune system is intimately involved in mental illness, though nobody really knows quite how.

Expand full comment
author

I didn't know this! Thanks!

Expand full comment

I'd be curious of any more information on this....

Expand full comment

Is there any good empirical evidence that people who go on examine.com and pick out 20 supplements will have worse health than in the counterfactual where the same people didn't take anything?

It seems to me that observational studies will suffer from the sick-user-effect and inevitably fail at fully controlling for that.

So...

Would anyone volunteer to flip a coin, and if it lands tails, go buy 20 supplements of your choice and take them for a year? And either way the flip goes, report your results on an online survey?

Here's a list of supplements I'd consider reasonable to take for that challenge. I expect the expected utility is nonnegative for each of these: Spirulina, Chlorella, broccoli, magnesium, zinc, vitamin d, glucosamine, chondroitin, cod liver oil, microdose lithium, green tea, caffeine, B12, K2, Kefir, garlic, carnitine, cocoa, blueberry, krill oil. That's 20 things that one could take all at once and probably on average not be any worse off except in the wallet.

Maybe we could even avoid the need for a large sample size by just having a betting market on whether the average outcome in the experimental group will be better than the average outcome in the control group. (although this will bias us against considering rare-but-very-bad outcomes) Maybe we could even avoid running the experiment most of the time by rolling a D20 to decide (1: do the experiment and settle bets accordingly. 2-19: all bets are void) and use this 95% savings to fund a much longer and higher quality study. I think Lloyd's of London would write an insurance policy that pays fair odds minus 1% if that D20 comes up 1. Then you have ~20x more money to run a much better study when you actually need to run it. This is sort of like Robin Hanson's idea of double or nothing lawsuits: https://www.overcomingbias.com/2007/10/double-or-nothi.html
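To make the D20 savings concrete, here's a toy expected-value check. It assumes the fee structure described above (fair odds minus 1%); obviously not an actual Lloyd's quote:

```python
import random

# Toy check of the D20 scheme: pay the study budget as an insurance premium;
# the insurer pays out at fair odds minus 1% in the 1-in-20 worlds where
# the die comes up 1 and the experiment actually runs.
budget = 1.0
p_run = 1 / 20
payout = budget / p_run * (1 - 0.01)
print(payout)  # 19.8 -> ~20x the original budget when the study runs

# Long-run cost to the insurer per policy sold (should converge to ~0.99):
trials = 100_000
total_paid = sum(payout for _ in range(trials) if random.randint(1, 20) == 1)
print(total_paid / trials)
```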

Expand full comment

What benefit does this give you over the existing studies on the individual supplements?

Expand full comment

Reducing uncertainty about interactions or the adding up of side effects too small to show up on the studies of the individual supplements.

Expand full comment

Note that "homeopathic" does not necessarily mean what you think it means. Homeopathic means diluted, and you may think this means diluted to the point of containing no active ingredient - but this is not the case.

For example, Cold-Eeze is a brand of homeopathic cold lozenge, which, among other things, has diluted zinc in it:

https://askinglot.com/how-much-zinc-is-in-cold-eeze-lozenges

Now, this site is talking about how the amount of zinc is unreliable, and the lozenges don't always contain an effective amount of zinc - but if you think homeopathic means that it doesn't contain any effective ingredient, the fact that it ever contains an effective amount of zinc should come as a surprise.

https://dailymed.nlm.nih.gov/dailymed/fda/fdaDrugXsl.cfm?setid=a25b6a72-93b3-487a-9fb9-d33453a448ee has some information about the content of this "homeopathic" remedy. 1x means it has been diluted once, in a 1/10 ratio; this is described as "low potency" in homeopathic communities, but is still considered a homeopathic remedy. But clearly it still contains zinc.

So, well, of course there will be studies showing that homeopathic remedies work; any remedy with 90% inactive ingredients could be described as homeopathic. And even the use of milk sugar as a diluting agent is pretty common: https://www.accessdata.fda.gov/drugsatfda_docs/label/2007/010187s069,018029s040,021284s011lbl.pdf
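For reference, the potency notation maps directly onto arithmetic; a small sketch using the standard homeopathic conventions, nothing specific to Cold-Eeze:

```python
# Fraction of the original ingredient remaining after homeopathic dilution.
# "nX" = n successive 1:10 dilutions; "nC" = n successive 1:100 dilutions.
def remaining_fraction(n: int, scale: str) -> float:
    factor = 10 if scale.upper() == "X" else 100
    return factor ** -n

print(remaining_fraction(1, "X"))   # 0.1   -> a 1X remedy keeps 10% of the ingredient
print(remaining_fraction(6, "X"))   # 1e-06
print(remaining_fraction(30, "C"))  # 1e-60 -> effectively no molecules left
```

Which is why a 1X product can still deliver a pharmacologically real dose of zinc, while a 30C one cannot.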

Expand full comment

It is my understanding that cold lozenges are marketed as a homeopathic remedy to get around medical regulations. It is very difficult and expensive to develop a medicine and get it FDA approved so you can legally call it a 'medicine', whereas it is cheap and easy to take something that probably does have real, scientific medical value and market it as a homeopathic remedy because you only need to meet supplement / food safety regulations.

Expand full comment

A scarier example is a homeopathic product for infants that caused seizures because it allegedly contained deadly nightshade: https://www.statnews.com/2017/04/13/homeopathy-tablets-recall/ What I find frightening is how extremely toothless the FDA seems to be.

Expand full comment

If it contains any significant amount of zinc, I don't think it should be considered homeopathic. I've seen studies indicating that there is a particular critical dosage of zinc which is most effective, and going either significantly above or significantly below that dose produces worse results. Now, diets are going to contain SOME amount of zinc, and I'm not sure it's not cumulative. So a really low dose of zinc over time may be optimal...for reasons that have nothing to do with homeopathy.

FWIW, when I went looking for an effective starch- and sugar-free cough syrup, I ended up with something labelled homeopathic. But I think that the effective medicine was one of the fillers, which soothed my throat. It *was* a good cough syrup, but I doubt that it was because of the homeopathic ingredients. It's always necessary to remember that most things aren't just pure elements or compounds, and the effect may not be due to the advertised reason. I often select cough drops based on some of the ingredients labelled "inactive"; they seem to work better. So homeopathic medicines can be good medicines (at least for symptomatic treatment) for reasons other than whatever the homeopathic ingredient is.

Expand full comment

re: Pascalian Medicine, I was thinking about it a little more.

I think the general idea of "take everything that probably doesn't hurt you all the time" probably sucks as an approach.

But as a diagnostic tool.... maybe not.

I recently saw an interesting case report about a (very rare) genetic condition that broke an amino acid metabolism pathway. It turned out that a supplement that would normally be a waste of money for 99.999-something percent of the human population halted painful nerve damage. I'm not naming the amino acid, because this is the sort of forum where someone will latch on to it and drop a pile of money on pills that are not going to help them.

There's a long tail of people with rare conditions, or common but hard to diagnose conditions. Often it can be incredibly hard to zero in on the cause of their condition or what might help.

If you could monitor someone's general health and wellbeing over a long-ish period and they actually took their pills reliably, you might be able to perform something like a binary search.

Start with a large Pascalian pile of various supplements and generally harmless things, and get them to record how they're feeling each day in something like an app. Watch out for them getting suddenly worse, because that's also a possibility with the Pascalian approach; lots of small risks add up. But if they do experience a significant improvement, you might be able to narrow it down to a particular thing over a few iterations, as in the sketch below.
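A toy sketch of that elimination idea, assuming you had a reliable "did this subset help?" signal. That is a big assumption, since each trial is really weeks of noisy symptom tracking, and the oracle below is purely hypothetical:

```python
# Toy binary-search elimination over a pile of supplements, given an oracle
# that reports whether a subset of the pile contains the helpful item.
def narrow_down(pile, helps):
    """Home in on the single helpful item in O(log n) trials."""
    while len(pile) > 1:
        mid = len(pile) // 2
        first_half = pile[:mid]
        pile = first_half if helps(first_half) else pile[mid:]
    return pile[0]

pile = ["zinc", "magnesium", "B12", "K2", "carnitine", "lithium", "garlic", "cocoa"]
# Hypothetical oracle; in reality, each call is a multi-week trial period.
print(narrow_down(pile, lambda subset: "carnitine" in subset))  # -> carnitine
```

Eight candidates would take three trial periods this way instead of eight, which matters when each period is weeks long.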

Expand full comment

I guess this is the idea of multivitamins for everyone?

Expand full comment

Probably more specific: not just vitamins, but activated and inactivated versions of each one; then change what's given every few weeks to see if there's any correlation with the person's condition, until you've zeroed in on a few candidates, if anything correlates at all.

Expand full comment

What's that about curcumin only being safe if sourced correctly? Is the spice I buy in the supermarket likely to be safe?

Expand full comment

I'm under the impression that in the US it's relatively safe if you buy it from a known brand in a grocery store, the high-lead varieties are largely from unbranded suppliers/unmarked jars imported from abroad, and US customs is pretty good these days about heavy metal testing in spices since they know what to look for. A workaround if you're still concerned may be to buy whole root turmeric and grind it yourself.

Expand full comment

Turmeric in particular is occasionally cut with lead salts (lead chromate IIRC) -- they're bright yellow (which in uncut turmeric denotes quality) and heavy (which, since sold by weight, is worth doing). And the source of most curcumin is turmeric.

Expand full comment

Well, now I've learned something I did not know! As turmeric replaced saffron because saffron was so expensive, now it looks like unscrupulous turmeric merchants are replacing pure turmeric with lead salts!

Expand full comment

Oh, looks like I was confusing curcumin and turmeric. The stuff I put in my food sometimes is actually ground turmeric. I'm in the Netherlands, and it's from a well-known brand, so it's probably safe enough.

Expand full comment

Scott's post about 'families' is now like a month old, but I thought it was funny that the first black chess grandmaster has a sister who is a world champion in boxing and a brother who is a world champion in kickboxing: https://en.wikipedia.org/wiki/Maurice_Ashley

Expand full comment
author

That is actually really interesting, thanks!

Expand full comment

Thanks for linking David's analysis. I think he is very good at figuring out whether a problem has actually been solved yet. I have to ask: what does the community think of meta-rationality? It seems like the obvious next step to me. Don't just use a system; use all of them! Create them! Rate them! Don't win systematically, win consistently!

Expand full comment
author

I think it's kind of buzzwordy. Obviously you shouldn't be a robot who fanatically sticks to one method of analysis and can't think outside the box, but I'm not sure this is a huge problem or that the existing idea of "rationality" doesn't cover not doing that.

Expand full comment

Why are companies in knowledge industries that worked mostly in-person before the pandemic so keen for people to return to the office in person now things are mostly re-opening?

My model of large companies is that they are generally fairly inefficient but rarely self-destructively stupid, but (in my industry at least) the requirement to return to the office 3 or 4 days a week (compared to 4 days a week pre-pandemic) does appear to be an example of a self-destructively stupid decision. My entire team and I are searching for new jobs, and if we all leave in the next couple of months, the company will be unable to operate until they replace us. Nor will this come as a surprise to the company: a staff satisfaction survey sent round a few months ago indicated that about 97% of the company were 'not at all excited' about returning to the office, and a significant minority would be 'extremely' unlikely to recommend working for the company to a friend if we returned to the office (which I take as a proxy for flight risk).

I've asked this question on reddit and not really received very satisfactory answers. There seems to be a sense that 'middle management want to control workers so their uselessness is not revealed' or that 'companies don't want to waste the expensive investment in their offices', but neither of these seems a very coherent explanation to me; they rely on a level of coordination between middle / senior management and renters / landlords respectively which does not really seem credible, and treat senior management of Fortune 500 companies as blundering idiots being run rings around by middle management, which is not my experience of them.

Does anyone on ACX have any more plausible explanations of why companies seem to be following this self-destructive path?

Expand full comment

Are they keen?

Post-COVID, my office has like 4 people on-site out of 40 desks. It's nice and comfy, really.

Expand full comment

I don't know about the decisions of companies, but I can tell you *I* want to work in the office and not at home. My reason is simple: I have a firm work/life policy. Despite being in academia, where I can work pretty much where and when I please (apart from actually being in the classroom), I do 100% of my work in my office. When I am off the clock, I am completely off the work clock. If I brought work home—papers to grade, grants to review, articles to read, whatever—I would never be truly off the clock. I would always feel anxiety that I'm not getting more done. This way, I put in an 8-5 day and go be with my family.

Expand full comment

A lot of people are being required to spend about 50% of their time in the office, which doesn't make much sense. You go into the office, and then you videoconference with the people that didn't today, just as if you were home. I don't want to go in a conference room even if I'm meeting with people that are present, because we have to wear masks. It is nice to see people in person, but the illogic and inconsistency grates on me. People are declaring a state of emergency, saying we have to rush out and get a booster shot, but we have to pack into elevators and offices because reasons. If I was completely free to choose, I guess I would not go in, because even if it has some good points, I'd feel *stupid* if I caught something unnecessarily. It's not about risk per se, but about having an excuse for it. I see no excuse.

Expand full comment

I am a senior manager of a medium sized firm. Our executive team unanimously agrees that the downsides of WFH are sufficiently large over the long-run to justify risking a mass-exodus of staff or a significant hit to employee morale if it means returning to the office. One major factor here is how the incentives created by WFH create a steady creep of inefficiencies into everyday processes.

Broadly speaking, workers and managers are attempting to optimize different things: workers are attempting to optimize individual performance, whereas managers are attempting to optimize for group performance, and these two strategies are often at odds.

Workers want to optimize for completing their assigned work. At home, this is easy; you simply perform your tasks as quickly as you can, and log off for the day. This causes workers to self-report higher levels of efficiency and life satisfaction when working from home, but fails to incentivize workers to complete their tasks in a manner that is beneficial to the rest of their team.

Take Deborah the Secretary. Deborah wants to finish her new daily data-entry work quickly so she can spend time with her kids, so she throws together a quick Excel sheet and distributes it to her 10 colleagues. Her 10 colleagues notice that the data actually needs to be formatted differently, but it's a 5-minute fix, so they each just quickly fix it themselves when they receive the file each day. After all, teaching Deborah how to fix it would be a 60-minute meeting, and everyone just wants to get their work done for the day.

In a traditional office environment this would have been noticed and fixed almost immediately; there are several ways of stamping out this type of inefficiency. Closer managerial oversight, closer collaboration between colleagues, and the fact that staff are required to put in a predetermined amount of time each day (finishing work quickly is useless) provide a set of incentives and social pressures that encourage workers to optimize for team performance rather than individual performance.

This is only one of several factors, but other commenters have touched on some others. My intuition is that it's extremely difficult to prevent this from happening at a large firm, and relatively easy at a small one.

Expand full comment

Something seems off about assuming that removing incentives to finish work quickly is simply and obviously a gain for groups. When it comes to upper management, it is conventionally taken for granted that allowing people to internalize some of the gains they produce for the organization is beneficial. A sharp division between people who get carrots and people who get sticks may play into the self-image of the former group, but is there clear evidence it is the optimum? I've wondered if the ability of people to internalize productivity gains more in government jobs compensates for some of the other inefficiencies compared to the private sector. Which is better, someone that gives lip service to a big (and doomed) automation project that everybody suspects will lead to layoffs, or someone that does their utmost to eliminate drudgery because they, and others, will temporarily reap the benefits, and are assured of transfer elsewhere when they become redundant?

Expand full comment

I am pretty torn on this.

In middle management roles, onboarding new people becomes much more difficult with WFH. Some training happens in official courses or docs, but a lot (most?) of it is offloaded into spontaneous office conversations that are really hard to replicate online. A new hire has a million questions, some of which feel too small to drop into a Slack channel when you're the only one asking, and some of which feel too big when you don't yet know enough to ask.

In WFH, the team lead can easily become responsible for almost every moment of growth for every employee, from skills to process to culture/mission. It becomes a lot, and I'm not even sure one person can reliably teach all of that effectively without relying on a team.

Beyond development, mgmt is responsible for everyone's wellbeing. I.e., is anything frustrating an employee that I can help fix? Is someone approaching burnout? Is someone spinning because they are not matched to the right task? It's hard to ensure you're tracking all that stuff and asking the right questions if you don't get to see someone's face and body language, the same way you pick up on things with friends.

That said... I strongly suspect strategies and processes designed to foster spontaneous learning across different comms channels could mitigate a lot of the lost advantages from on site work. That's not just having the technology, but investing in cultural norms about how to use those channels in a casual way. Most firms do not have the patience to invent new cultural practices from scratch, so they're defaulting to what they know -- just put people in a room together.

And when we talk about the inarticulable institutional gains, it's really hard to separate those from institutional inertia. And that really complicates the way this is discussed with the workforce. Especially when that workforce doesn't generally see the organization making a bunch of other decisions to delay gratification on productivity in order to do more institution building.

Regardless of which strategy an organization takes, any discussion of the strategy should be pretty open about acknowledging the detailed tradeoffs and unknowns, and show a humility and willingness to continue experimenting to find the best balance.

Expand full comment

Speaking as Deborah the Secretary...

I agree that things like you say happen. But I'm generally the one correcting them, and it often happens that it's from higher-ups that the screwed-up formatting comes. Because I'm in clerical/administrative support role, I'm *supposed* to know this stuff and get it right.

The frustration of drawing up a nice, working Excel spreadsheet then distributing it to people who don't know how to use it is real, as is the "feck it, send it back to me and I'll correct it" because yes, trying to teach them how to use it *would* take up more time and energy than anyone has.

But in every job I've ever worked, there has been slack time and then busy times. I've gone from twiddling my thumbs and asking around the entire office "does anyone have anything they need done?" to wishing I had a time-turner because there are sixteen different things needing to be done at once.

In the office, as you say, finishing work quickly is useless because there is the obligation to put in the predetermined time. This means that work gets drawn out to cover the slack time, or you find make-work to do, or what most people probably do - surf the Internet and take care of personal tasks.

Working from home means I can always find something else to do, because instead of sitting at the desk pretending to be doing something, I have a home to run. It also means if I need to work longer hours at home one day to cover something, I can do this - because I have slack time to cover what I need to do otherwise.

Expand full comment

I'd love to talk with you about our various different experiences, perhaps somewhere offline. Would you be interested?

Maybe even produce something kind of like an adversarial collaboration, about what the same things look like from the very different viewpoints of senior manager and senior knowledge worker.

I'm not sure how to arrange that, using this venue, without doxing one or both of us. If you are interested, I can check. Also, if you happen to be on DSL, I'm there too, with the same alias, and it definitely does support private messaging.

Expand full comment

In a previous life, I was very interested in facilitating adversarial collaborations (coincidentally, I'm the other half of the 2019 collaboration Scott mentions in the main post). If both you and Smough were interested you could email me at frankiehenshaw at gmail dot com and I could act as the go-between so that nothing you want to keep private is posted on the open internet.

However I'm also finding both of your contributions to be exceptionally insightful, so if you wanted to just keep hashing it out in public I'm sure nobody would be any the poorer for it!

Expand full comment

Workers optimize for doing fuck all and still appearing productive. WFH makes this massively easier. This is a feature, not a bug. See Graeber's *Bullshit Jobs*.

Expand full comment

Great for the bullshit jobs. Less great for the remaining real jobs.

Expand full comment

This seems to me to fit with the organizational neurosis/myth that produced open offices. There's this magic sauce called "collaboration" that only occurs if peons are constantly interrupted and unable to concentrate. They absolutely won't work together or share information if they are offered a chance to get their own tasks done efficiently. Management needs to force them into a noise pit for the good of the company.

In most places, managers - except perhaps very low-level ones - don't have this problem. They need private offices to do effective work. But not grunts.

My theory has been that it's about showing status, and making sure the peons know that they are at the bottom of the totem pole, even if the company could not survive without certain specific individuals among them, whereas middle managers are a dime a dozen.

Demonstrating one's status, and keeping potential rivals down, is evolutionarily important. Executives are, on average, especially competitive/into status; that's why they put up with the shit needed to achieve the status they have. Lower-level managers are probably less so, on average, being less successful, but they still have too much of it, again on average, for the good of their subordinates.

In modern times, they generally can't just be honest about it, even with themselves, so they rationalize themselves a whole batch of excuses. But basically, they are just more interested in demonstrating their own status than in their company's ongoing success.

Expand full comment
founding

Yeah, open offices are not the way to do collaboration right. Maybe there are industries (advertising?) where Good Ideas are everything and implementation a mere detail, but in the realm of 1% inspiration, 99% perspiration, no.

It does not follow that putting everybody in an iso-pod, whether at home or at the office, is also the right answer. When it's inspiration time, you really want people talking to one another. And when someone reaches the limit of immediate perspiration, you want it to be as easy as possible for them to ask for and get help from a colleague.

Right now, even my most junior staff have private offices - there have been times when it was two to an office for the Level 1 staff, which works almost as well. But having colleagues in an office across the bay, being able to see that their door is open or closed and knowing that the open door means Ask Me Anything, that's worth a lot. Sitting around a table for the weekly staff meeting, sometimes going to lunch together, ditto. Losing that for the past year and a half has cost us.

Expand full comment

You are absolutely right. Apple used to give people their own office - relatively small, but everybody got the same-sized office, including Steve Jobs - and that was its most productive era. Now with the new shiny space there are no private offices. It's not about cheapness, as that place was expensive. Let's see what happens. Anecdotally I would say Apple's best days are behind it, but people have said that before.

Expand full comment

Not to be too critical of my immediate boss, but she tends to micro-manage - as in firing off emails every ten minutes - and this is *not* conducive to "incentives and social pressures that encourage workers to optimize for team performance rather than individual performance", because the "closer managerial oversight" means I have to drop what I am currently working on and go hunt up the answer to what she wants, often for a query she could easily answer herself.

(She's a good boss but she tends to be anxious about stuff, and that leads to the micro-managing).

This means that I am interrupted frequently, and as you know, once you lose your train of thought it's hard to get it back. Interruptions to the flow of work don't make it go more smoothly or efficiently. Having said all that, I also once had a boss who was laid-back to the point of being a horizontal line and who was frequently absent from his office as he was involved in plenty of organisations and committees. In that case, I often *wished* he paid closer attention to what I was doing, as sometimes it would be "I can't go ahead with this task as it needs him to sign off on it, and he's not here".

There needs to be a balance between constantly looking over your staff's shoulders and never being there to check or guide.

Expand full comment

Note: there's a whole pile of research pretty conclusively demonstrating that open offices impair productivity, especially for software developers, which I brought to the attention of the top executive to whom I had access, when it was decided that everyone in software development would be moved from cubes to an open plan.

He kept insisting we needed to "try it" to see if it worked, in spite of all this evidence. I was disinvited from the team that was planning the move, whereupon the teammate who was invited instead brought his own collection of studies - there were so many, we didn't even need to overlap.

I found myself a new job, somewhat distressing my immediate superior.

Expand full comment

Do you have some quick links to the research about open offices? I'm getting forced into an open office and while it's a done deal at this point, I want to at least make /some/ noise.

Expand full comment

Here's what I happen to have bookmarked, mostly from back when this was all going on. These are generally not research, but popular news articles, and I haven't attempted to distinguish those that reference research from those that don't. Also, they aren't all about productivity.

I'd have included the article titles, but I don't think substack supports html a elements.

http://www.theatlantic.com/health/archive/2014/06/an-office-for-introverts/372130

http://blog.cultureiq.com/quick-guide-to-pros-cons-open-office-floor-plan

http://www.nytimes.com/2012/05/20/science/when-buzz-at-your-cubicle-is-too-loud-for-work.html?pagewanted=all

http://www.washingtonpost.com/posteverything/wp/2014/12/30/google-got-it-wrong-the-open-office-trend-is-destroying-the-workplace/

http://blog.officedesigns.com/work-space/is-the-open-office-trend-destroying-the-workplace/

http://www.fastcompany.com/3019758/dialed/offices-for-all-why-open-office-layouts-are-bad-for-employees-bosses-and-productivity

http://www.forbes.com/sites/neilhowe/2015/03/31/open-offices-back-in-vogue-thanks-to-millennials/2/

http://blog.idonethis.com/reconsidering-the-startup-open-floor-plan-office/

http://www.theage.com.au/comment/silicon-valley-got-it-wrong-the-openplan-office-trend-is-destroying-the-workplace-20150420-1molwh.html

http://www.bloomberg.com/bw/articles/2014-07-10/steelcase-susan-cain-design-introverts-office-spaces#p1

http://www.icben.org/2008/PDFs/Hongisto_et_al.pdf

http://www.newyorker.com/business/currency/the-open-office-trap

http://www.theatlantic.com/magazine/archive/2014/04/the-optimal-office/358640/

http://www.nytimes.com/2012/01/15/opinion/sunday/the-rise-of-the-new-groupthink.html?_r=0

https://hbr.org/2014/01/to-raise-productivity-let-more-employees-work-from-home

http://smallbusiness.chron.com/advantages-disadvantages-openplan-office-space-80288.html

http://fortune.com/2015/03/18/pros-and-cons-open-office-floorplan/

https://hbr.org/2011/07/who-moved-my-cube/ar/1

http://www.forbes.com/sites/susanadams/2013/05/17/why-the-open-office-fails-and-a-solution/

http://www.forbes.com/sites/jmaureenhenderson/2014/12/16/why-the-open-concept-office-trend-needs-to-die/

http://www.forbes.com/sites/jmaureenhenderson/2014/12/16/why-the-open-concept-office-trend-needs-to-die/2/

http://www.sciencedirect.com/science/article/pii/S0272494413000340

Expand full comment

Thank you, I'll take a look.

Expand full comment
Comment deleted
Expand full comment

What I don't understand is why *I* have to do what works better for you? I mean, if you're the boss, and you prefer that I do bad work so that you can feel properly superior, you get what you pay for, along with a horrific upward reference from me to anyone who asks me what it was/is like to work for you.

And I'll happily call you a selfish status seeking monkey, if not something less suitable for polite company.

But if you aren't the boss, what's wrong with you doing what works for you, and me doing what works for me? It's certainly not especially selfish of you to want everyone, including yourself, to have working conditions that work well for them.

Is it that you figure that management will impose the same rules for all, and you merely want/hope they pick the rules that suit you best? I.e., anyone who wants different working conditions for themselves is necessarily your opponent, and will class you with the managers who impose your preferred conditions on them?

Sadly, I agree that this is probably what management will do; in any company that isn't tiny, the executives tend to regard software developers as interchangeable engineering units, to all be treated in precisely the same way, except perhaps for differences in status. But I can still distinguish between those making the bad decisions, and those doing their best to cope with the constraints they experience.

Expand full comment

Well, if I were a manager with these motives, you would be incorrect to claim I'm status-seeking, since those aren't my motives. But as I already said, I'm not.

My dream job involves a lot of flexibility, but the entire team being in the office 75% of the time. If you're at home, I have a higher emotional barrier to overcome to reach out to you.

Your dream job is incompatible with that. That's fine. But you should consider that part of why people disagree with you about how people should work is reasonable personal differences, and not the hyper-cynical status-seeking monkey picture you're painting.

Expand full comment

In some ways, that would be even worse. I'd be in the office because you enjoy that. You would feel better able to reach out to me emotionally. I would feel alienated and angry.

Of course here I'm making the assumption that if you want everyone in the office most of the time, you also want them visible to you - no individual or shared offices, and probably not even cubes.

I'm perfectly happy working at the office, and prefer the work-life separation, if only I'm provided with an office where I can actually work effectively.

Emotional connections are bidirectional, and I'd be, at best, politely suppressing my real experience of the workplace. And I imagine I wouldn't be the only person who considered the office, or the commute, or similar, as the worst part of their job.

I'm not sure there could be any real emotional connection between you and anyone who shared my preferences, unless there was some reason other than the boss' preference, that the staff found credible, for the need to be present.

Expand full comment

Why are you assuming I want to work in the same company as you? I'm perfectly fine with us having irreconcilable preferences because I can and did search for a job that fit my preferences (and presumably one you would have rejected.)

I'm slightly less happy with my coworkers who are trying to change the company I found to work in a way that's worse for me. But fundamentally there's no reason why you and I shouldn't both be satisfied by working in different environments with different rules.

Expand full comment

It's worth adding that unless your team is tiny, you almost certainly have colleagues that prefer to work at the office, whether to enhance work-life separation, or because they don't have space at home for a decent desk, or all kinds of other personal reasons. You don't need me to be there to notice if you are slacking off - which I probably won't do anyway, unless your method of slacking happens to be noisy or otherwise interfere with my concentration.

Expand full comment

Worth adding that I have noise-cancelling headphones I can put on if I need to block out noise, so the "noise pit" aspect has no effect on me. And I can pull them off for low-stakes entry into water-cooler-type spontaneous conversations.

Expand full comment

My perspective is that an existing workforce can switch to WFH easily, and the individual workers will be generally happy and productive, provided the tasks they need to complete are conducive to that kind of work. The existing workforce can probably continue working from home for a pretty lengthy amount of time (I would say 3-5 years, for a good working group) before the individual employees can see that the system is breaking down; management, though, started seeing the problems after 6-12 months, and by now they are becoming overwhelming.

There are two main types of problems, neither of which exist to any significant extent on day one of WFH, but both of which become increasingly more challenging over time.

1) Turnover and new hires. Training new employees, especially getting them acclimated to the culture of the business, is extremely hard remotely. If the new employee has problems with attendance, productivity, relationships with other employees, etc., then it is much harder for managers to correct or even identify the issues. Training them on how to actually do the job can also be very hard, especially for entry level work and younger employees. An established employee working with a seasoned group of coworkers may not even notice what's going on with new hires, but management will be seeing it as soon as new employees get hired.

2) Existing employees who run into problems. Sometimes this is a medical condition, personal life issues, or downright shady behavior like working for another company when they should be working for your company. It's hard to see what's going on when the employees are all remote, and an individual can make up a wide variety of reasons why their projects aren't getting done. Depending on the type of work, they can even fake doing work at all for a while, until things get really bad and the managers are now looking at a real problem of how to fix the mess. Even a single case of an employee blowing off months of work because they could get away with it working from home will send a shock through management that pushes everyone back to the office. Some of this is hard to prevent, especially if the employee has legitimate reasons to be missing some work, such as a medical condition, a death in the family, or whatever else the company may try to give them some slack with.

I don't know how long it would take most companies to start seeing a third problem, but social cohesion is also a big deal and easily overlooked when a company first switches to WFH. Work friendships are important to the operation of a company, and help with retention, productivity, hiring referrals, sharing new ideas, and a whole host of softer benefits.

I watch the literature on this type of thing, and about six months in there was a lot of buzz about permanent WFH. Then, about six months later, reality started setting in and companies realized that most workforces can't continually work from home.

Expand full comment

In spite of my highly negative comment above, I've seen these things. My employer likes to update (i.e. break) their IT infrastructure regularly, and it's much harder to fix it or find a workaround when you can't simply go to a coworker's desk and ask to use their still functional machine to e.g. look up the number for the help desk, let alone pick up your spiffy bricked laptop and physically take it to IT.

And trying to get one's feet under the desk remotely is harder, and harder still in a company that doesn't know how to organize a remote team, and doesn't especially want to learn.

OTOH, I could blow off work at least as effectively at the office as at home. If I'm not at my desk, that would be because I'm working in a conference room to try to get some peace and quiet to concentrate. If I'm not in one of the rooms near my desk, it's because interruptions follow me when folks can see me, so I'm e.g. hiding in a conference room on some other floor, among people I don't work with directly.

Management expects we'll complete our work however long it takes, usually via a second shift at home after being unable to concentrate at the office all day. Unsurprisingly people don't want to do that, so they spend a lot of time hiding.

I should also note that management really cannot measure my productivity, inside or outside the office. I am, sadly, a past master at doing what will be rewarded, whether or not it's the best choice for the job I'm supposed to be doing - but productivity isn't even part of the recipe for looking good. Sometimes it directly contradicts what I need to do to look good - e.g. visibility (more meetings) is usually better than spending the same amount of time doing useful work.

Expand full comment

Thanks for this - exactly the sort of insight I was hoping for!

It was my understanding that there were some industries where remote working has been the norm for some time (that is, was the norm pre-pandemic). How have these industries avoided the collapse you describe above given they have been operating for >3-5 years? Or are there certain industries which can avoid the collapse, and these industries have mostly already taken the gains from allowing fully remote work?

Expand full comment

As others have stated, there are better approaches than the one forced on many companies in March of 2020. It helps a lot to have measurable outputs on a regular basis, good communication tools, a very strong understanding of what is expected of employees (think detailed job descriptions and standard training), and, where possible, some kind of self-regulating iterative process built right into the job. For that last item, think of commissioned sales - a field that has been unusually remote for many years. If you pay your employee a low salary, but a commission that could earn them 3-10X+ that salary, then you aren't out too much if they underproduce for a while. A sales person who doesn't get any sales for X months/years (depending on industry) doesn't hurt the company too much, because they also earned very little money, and they are automatically incentivized to push for sales through the promise of high commissions.

Most companies that went suddenly remote out of necessity had very few to none of those things, and it's very difficult to implement them once you're already remote. They're honestly hard to implement anyway. The best bet would be at large companies who were naturally decentralized already, who had to develop systems that worked across multiple locations in different states and countries. Giant companies, especially in tech heavy fields that can expect digital know-how from most employees, may be able to stay remote indefinitely. Most companies are going to hit some hard walls, if they haven't already.

Expand full comment

There are better ways to run distributed teams, which the more mature all-remote organizations use. Most companies have just tacked on Teams or Zoom for "collaboration" and kept using the same processes as when they were in-person (so still lots of email and more meetings). There are definitely software companies that are more advanced at fully-remote work. I think a lot of the stories I've read or heard about those have to do with using better work management tools - especially task boards like Trello or Jira, since you can update your work status there and a task card is tied to the work product that you've checked in. Although software development may be especially conducive to using those kinds of tools, I think any organization would get substantial benefits from task boards because they put all the information about a task in one place that isn't trapped in a personal folder or inbox, and they also reduce the constant noise of status discussions via email.
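
To make the "one place" idea concrete, here's a minimal sketch in Python (names and fields are mine, purely illustrative - this is not Jira's or Trello's actual data model) of a task card that bundles status, discussion, and a link to the checked-in work product:

    # Minimal sketch of "everything about a task in one place".
    # All names are illustrative; not any particular tool's data model.
    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class TaskCard:
        title: str
        assignee: str
        status: str = "todo"  # e.g. todo / in-progress / done
        comments: List[str] = field(default_factory=list)
        artifact_url: Optional[str] = None  # link to the checked-in work product

        def move(self, new_status: str) -> None:
            # Updating the card replaces an ad-hoc email status check.
            self.status = new_status

    card = TaskCard("Draft the security review", assignee="engineer-1")
    card.comments.append("First draft ready, see linked doc.")
    card.artifact_url = "https://repo.example/docs/security-review-v1"
    card.move("in-progress")
    print(card)  # anyone can read the status here instead of emailing the engineer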

Expand full comment

Wait, issue trackers / task boards are not standard tools to coordinate work?

Expand full comment

I can't tell if that's sarcasm or not, so I'll treat it as genuine surprise. Even the organizations I've worked with that have ServiceNow trackers or Jira or some other task board don't use it to actually coordinate work. All the coordination is done via email, instant messaging, or in meetings, and then putting something on the Jira board is an afterthought. If you're lucky there is a shared folder somewhere on the cloud drive or Sharepoint where the important work product is stored, but it's not linked in any way to a tracker. I mean, why check the current status or work check-ins on Jira when I can just email the engineer for a status check and a copy of the latest document draft... and while I've got him on the phone, ask him for some Tier 1 help that anyone at the help desk could have figured out - but it was just a 10-minute task, so no need to add it to the tracker, like the other 15 tasks someone has called him about today that also didn't get added to the tracker to be assigned to an appropriate resource.

Expand full comment

Yes, I was serious and thanks for a reply.

(in programming, even the most dysfunctional projects I have seen that were not using version control still had a usable issue tracker)

Expand full comment

*sigh*

Sometimes that's because the bug/task tracking system is designed only from the POV of those managing the work, and is useless for coordinating the work. IIRC, Jira's products are in that category.

In one particularly notable example, I resorted to putting tracking updates and detailed technical information for each bug I was working on into a wiki, and simply posting the URL for the specific bug to the useless bug tracking system. (My manager called the new bug tracking system "the worst denial of service attack" ever committed against the project.)

Sometimes it's because managers prefer email, and simply can't be bothered checking the bug/work tracking system, so engineers stop updating it. But if the bug tracking system is any good, the engineers are using it to coordinate with each other. (It's only the never ending status updates, rolled up into useless meetings, that move out of the bug tracking system.)

With small tasks - especially IT help - it's often because the tracking system is ponderous and hard to use. Worst case, it's unusable for someone experiencing many/most of the problems for which they'd require help from IT.

If the task takes 10 minutes, and adding it to the tracker etc. takes 5, and furthermore adds several hours of latency, no one's going to use that tracker for that class of task. If forced to use it, because they can't get help any other way, they'll tend to wind up contemptuous of the team tasked with providing the help, or at least its management chain.

Expand full comment

Mark Seemann's take (https://blog.ploeh.dk/2020/03/16/conways-law-latency-versus-throughput/) at the beginning of all this really clarified a lot of things for me, and accords with your hypothesis. As an N=1 attempted replication, I've been trying to get my team to shift toward an asynchronous distributed approach, but my manager is a very inside-the-box thinker.

Expand full comment

Thanks, the ideas in that article aren't new to me but I like how he puts them together - and I somehow have never run into "Conway's Law" as an expression before. I don't know if this concept would appeal to your manager, but one of the major benefits of moving out of a constant email-asynchronous process into something like a kanban board is that it at least gives you the opportunity to protect the resources that are the bottlenecks in your production. If nobody can just email "Bob the Expert" directly whenever they have a question - no matter how many other people could have answered it - then projects start to get a lot less chaotic because Bob focuses on the things that Bob is the only person in the group capable of doing. But the manager has to be willing to force compliance with that process. (An overview of this in Goldratt's Theory of Constraints is https://social-biz.org/2013/12/27/goldratt-the-theory-of-constraints/ )
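
To illustrate the bottleneck-protection point with a toy simulation (entirely my own sketch, not from Goldratt - the task mix and hour costs are invented), compare how many expert-only tasks Bob finishes when every request reaches him directly versus when the board routes trivial ones elsewhere:

    # Toy illustration (mine, not Goldratt's) of protecting a bottleneck resource.
    # Bob is the only person who can do "expert" tasks; anyone can handle "trivial" ones.
    import random

    random.seed(0)
    tasks = ["expert" if random.random() < 0.3 else "trivial" for _ in range(200)]
    EXPERT_COST, TRIVIAL_COST = 4.0, 1.0  # invented hours per task
    HOURS = 160.0  # one month of Bob's time

    def expert_tasks_done(route_trivial_away: bool) -> int:
        hours_left, done = HOURS, 0
        for kind in tasks:
            if kind == "trivial" and route_trivial_away:
                continue  # the board sends it to the help desk instead of Bob
            cost = EXPERT_COST if kind == "expert" else TRIVIAL_COST
            if hours_left < cost:
                break  # Bob is out of hours this month
            hours_left -= cost
            if kind == "expert":
                done += 1
        return done

    print("Bob fields every request:", expert_tasks_done(False))
    print("Bob's queue is protected:", expert_tasks_done(True))

With these invented numbers, the protected Bob always completes at least as many expert tasks as the interrupted one, even though the interrupted Bob is "busy" the whole time - which is the bottleneck point in miniature.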

Expand full comment

I know this is in the thread about the work-from-home aspects of the pandemic, but it seems very relevant to the other thread about the where-is-the-vaccine-shortage parts of the pandemic.

Expand full comment

> How have these industries avoided the collapse you describe above given they have been operating for >3-5 years?

Well, the stereotype of programmers as a bunch of socially dysfunctional autistic introverts has some basis in reality. It is likely that some groups of people were already experiencing the negative effects of WFH even when kept in the office.

And judging the effectiveness of programmers is famously hard to do anyway.

https://www.folklore.org/StoryView.py?story=Negative_2000_Lines_Of_Code.txt

In early 1982, the Lisa software team was trying to buckle down for the big push to ship the software within the next six months. Some of the managers decided that it would be a good idea to track the progress of each individual engineer in terms of the amount of code that they wrote from week to week. They devised a form that each engineer was required to submit every Friday, which included a field for the number of lines of code that were written that week.

Bill Atkinson, the author of Quickdraw and the main user interface designer, who was by far the most important Lisa implementor, thought that lines of code was a silly measure of software productivity. He thought his goal was to write as small and fast a program as possible, and that the lines of code metric only encouraged writing sloppy, bloated, broken code.

He recently was working on optimizing Quickdraw's region calculation machinery, and had completely rewritten the region engine using a simpler, more general algorithm which, after some tweaking, made region operations almost six times faster. As a by-product, the rewrite also saved around 2,000 lines of code.

He was just putting the finishing touches on the optimization when it was time to fill out the management form for the first time. When he got to the lines of code part, he thought about it for a second, and then wrote in the number: -2000.

I'm not sure how the managers reacted to that, but I do know that after a couple more weeks, they stopped asking Bill to fill out the form, and he gladly complied.

https://imgs.xkcd.com/comics/compiling.png

Expand full comment

This! Some decades ago, one of my colleagues was fired for a combination of ineptitude and a propensity to self-destructive office politics; I inherited his code.

He'd been the junior team member, with the easiest part of the project, but his code nonetheless had about as many bugs as all the rest of the project combined.

It was full of routines that were all very similar to each other, and should have been a single routine, with an extra parameter controlling the small differences. Whenever he fixed a bug in one of them, he'd fix some but not all of the other similar routines. I instituted a policy that whenever I touched a routine like this, I combined the whole family of routines into one. Pretty soon the code was about 20% shorter, and no longer starring in the bug statistics. It also had all the missing features my late colleague hadn't had time to add.

Then I had to deal with my immediate manager, who was addicted to measuring productivity and complexity based on lines of code.

Expand full comment

Alternative take: programmers are so needed and so hard to hire that even if WfH is 50% less effective, a remote job is still justifiable in at least some cases.

Expand full comment

N of 1, but I’ve been on both sides of the issue. When the pandemic started I was an associate at a law firm. Most of my day was spent doing work assigned to me by a partner. Working at home was ideal. Fast forward to the end of last year, my boss left, taking most of the work with him, and most of my team was let go, but I remained. Suddenly I had not enough work, was helping build up another practice group, and was promoted to partner to top it off. Now I feel much more comfortable in the office where I can have spontaneous interactions with colleagues that might help build up the practice group. To some extent just showing up at the office reminds people I’m there, so I’m on their mind if something comes up that I can help them with. I think I also focus better in the office. I have two small kids at home, and while I have a quiet home office, I find myself succumbing to distraction more often at home.

Expand full comment

This matches my theory: working from home is a benefit for more established workers, but it leads to a serious loss of institutional knowledge at the company.

Expand full comment

> 'companies don't want to waste the expensive investment in their offices' but neither of these seem very coherent explanations to me; they seem to rely on a level of coordination between middle / senior management and renters / landlords respectively which does not really seem credible

It is entirely possible that management that handled offices does not want to lose face and become less useful and influential.

This does not require them to care about landlords at all.

> they are generally fairly inefficient but rarely self-destructively stupid

My model is that they are fairly bad at dealing with new things and paradigm shifts like this one.

> staff satisfaction survey sent round a few months ago indicated that about 97% of the company were 'not at all excited' about returning to the office and a significant minority would be 'extremely' unlikely to recommend working for the company to a friend if we returned to the office

I read it as "majority will grumble, significant minority will consider quitting, some small part of them will quit".

It is not indicating "everyone will quit".

> they seem to rely on a level of coordination between middle / senior management

Maybe people in senior management (with much nicer offices) cannot really imagine anyone preferring to work from home? (this is an approximation)

Also, maybe your team is an outlier and working from home is not actually better overall for the company?

Expand full comment

> Maybe people in senior management (with much nicer offices) cannot really imagine anyone preferring to work from home?

Maybe the causality is not "has a nice office -> likes to spend time at work", but rather "likes to spend time at work -> chance to become a senior manager (and get a nice office)".

In other words, you either prefer to be at home with your family and/or hobbies, or you prefer to be in the office. Only the latter have a chance to become senior managers.

Expand full comment

This effect definitely exists!

So people who like working in the office are more likely to become senior managers, get better offices, and be even more confused about why someone would prefer to work from home.

Expand full comment

To be clear, that was trying to explain how it may look from the company's perspective.

From my point of view, if the commute takes over 90 minutes each day and is unpaid time, then it is ridiculous and I would not accept it (or I would at least take that time into account when weighing income and workload across available jobs).

Expand full comment

I didn’t want to clutter up the top-level post, but here are some explanations I think might be partially plausible:

1. In some way, working face to face earns the company more money. This seems like the most likely reason companies are insisting on it, but I can’t make sense of it myself – we, along with many other companies in my industry, posted our biggest ever growth / profit forecasts in 2020, and speaking personally my productivity is far higher at home for a variety of reasons. Furthermore, companies are prepared to take a loss of a small amount of profit for good causes – for example we have a Diversity and Inclusion initiative which takes up probably an hour of every single employee’s time every week – so in theory the company was prepared to eat a 2.5% loss of productivity for doing the right thing (a more cynical take is that the company spent 2.5% of its headcount budget on a PR / staff retention stunt, but this would also be true of WFH so I don’t see the purpose in arguing about it). Therefore, it isn’t like the company have crunched the numbers and are ruthlessly optimising for an additional 0.001% growth from office working that I can’t see because I’m too junior – the gains would have to be pretty significant to overcome the pain of annoying your staff, and I just can’t see gains that significant. (NB Some companies, like Big 4 Investment Banks / MBB consultancies / FAANG tech companies, make a point of giving their new starters hellish working conditions as a kind of signal that their employees are the best. It makes sense they’re requiring a return to work as part of this, although I note not all of them are)

2. Senior management are making a mistake about how nice the office is to work in. By the time you’re at the level of deciding the global WFH policy of a Fortune 500, you’ve experienced at least a decade of people ‘kissing your ass’ in the office. You probably work in a nice office with a door (rather than a cube farm), have a very high degree of flexibility over your schedule if you need to conduct personal errands on company time and everyone you interact with is polite and deferential to you. By mistakenly assuming this is what ‘the office’ is like (rather than correctly understanding that it is the office experience only for you and your peer group) you could convince yourself that lower-level employees are simply mistaken about their experience and you are guiding them into making the right choices.

3. Similarly, ACX / reddit readers could be making a mistake about how nice the office is to work in. The big advantage of working in an office is face-to-face interaction. This is particularly important at two points in your career: at the very start, when you are being ‘socialised’ into being a worker, and towards the late-middle, where you start playing a game of c-suite shuffle and your political acumen is particularly important. The start of a career has a double burden of WFH because you likely don’t have the equity to purchase a residence with a dedicated working space, so you end up doing spine-destroying things like working from the sofa. Therefore ACX / reddit readers (who are mostly mid-career) are in a prime position to enjoy a few years at home, before greatly regretting their choices in a few years when they start trying to break into upper management. Or alternatively, companies might reasonably feel their talent pipeline / top talent are worth nurturing at the expense of their mid-level talent, and rationally conclude that they could take the hit of mid-level talent resigning in order to have a top-quality pipeline.

4. Senior management are making a mistake about how *bad* home is to work in. If you are old enough to be deciding the WFH policy of a Fortune 500, you’re probably a ‘boomer’. Perhaps the tech is scary and unpleasant compared to the way you like to work? If you are dealing with significant and seriously confidential business decisions, perhaps the lingering thought that you might have accidentally hit ‘reply all’, or that you said something on a Zoom call where someone out of the loop was lurking with their camera off, might be enough to make the office seem appealing? I have a more radical thought which I don’t know if I can fully defend: one of the big generational shifts I’ve noticed between the boomer and younger generations is a complete death of the ‘wife bad’ genre of comedy (meaning comedy like at the link here: https://imgur.com/r/ComedyCemetery/SmVzA4o). I think a charitable way of looking at this is that the cultural expectation for a boomer was to marry early to someone culturally appropriate and then hope it all worked out, because it would be social suicide to get divorced, and many boomers are trapped in relationships that younger dyads would never have entered, or would have divorced out of if they were the result of mistaken young love. This is probably doubly true of those who become senior in big companies – part of being senior is following social scripts. This seems quite an uncharitable explanation, but perhaps there is something true which can be salvaged from a pretty poor set of observations?

5. Perhaps ACX / reddit readers are making a mistake about how bad the home is to work in? For example, extending the example above; perhaps the ‘wife bad’ humour comes not from social expectations to marry people you don’t really love, but in fact spending 40+ years with one person just makes you hate them. Senior management understand this and are trying to protect mid-level staff from themselves, like a parent prevents a child from eating too much cake or something. This doesn’t just seem uncharitable but also clearly false; some companies don’t care about causing a breakdown of your marriage RIGHT NOW due to long hours and consuming work culture, so why would they care about causing a breakdown in your marriage in 40 years in a way that doesn’t impact productivity?

6. Finally - perhaps companies just don’t believe people will actually resign, when push comes to shove? Then reddit-type explanations (‘it is a conspiracy to protect useless middle management roles’) might start to be plausible: if it is (almost) costless to demand the return to the workplace but it has significant benefits accruing to a specific interest group, then there is a large pressure to do it and no pressure opposing it, so it sort of happens by default.

Expand full comment

I think you're right about senior managers. They love being in the office, where everyone is deferential, says good morning, how was your weekend, they go home early whenever they need to run an errand, they buy the office donuts and tell themselves, "truly, I am a man of the people."

Suddenly when everyone is wfh, it's not fun anymore. What's the point of being a top dog if no one is around?

And senior managers are the ones making the decisions about whether WFH should continue or not.

Expand full comment

> we, along with many other companies in my industry, posted our biggest ever growth / profit forecasts in 2020, and speaking personally my productivity is far higher at home for a variety of reasons.

Maybe there is long-term damage to productivity/team cohesion.

And the profit changes were likely unrelated to WfH anyway, caused by other effects; even with WfH making things 30% worse, the profits would still be there.

Expand full comment

Could you expand a bit more on your first point? I agree there could well be a long-term effect of WFH that we haven't seen yet. But it seems that this surely has a symmetric upside - perhaps one long term effect of WFH is that it has a huge productivity upside as we can now recruit from all over the country rather than from the much smaller pool of people who live within commuting distance of the office (just the first example that came to mind). That is, in a situation where we don't know what we don't know, it seems odd to bias towards the pre-pandemic status quo that the workforce mostly does not want rather than the current status quo that the workforce mostly does want.

Not to put words in your mouth, but are you maybe making the argument that senior management have the skills and experience to see this long-term damage coming down the pipeline, but more mid-tier employees like me don't have the depth of experience to see the problems brewing? I guess I could believe that, but again I really struggle to believe that companies would accept short-term team decimation now in order to avoid the harm of worse team cohesion in the future - if this were generally true, companies would work harder at ensuring it was almost always the financially best decision to stay in one place rather than job-hop, and this isn't an obvious feature of the current corporate world.

But don't want to appear dismissive - I really appreciate this thought, because it is one of the very few explanations that is consistent with both a return-to-office policy and non-insane senior management.

Agree completely with your second point too btw - the profitability was because everyone *else* was staying home, so they were extremely easy to sell our product to! I suppose I include it only to illustrate that WFH didn't seem to tank productivity by any easily available metric.

Expand full comment

*Not to put words in your mouth, but are you maybe making the argument that senior management have the skills and experience to see this long-term damage coming down the pipeline, but more mid-tier employees like me don't have the depth of experience to see the problems brewing?*

I would pose it differently--it's not skills and experience, but perspective. If you have a team of 4, and hired one new person who integrated well--but the CFO has 400 people in her organization, and 20 (rather than the usual 10) are obviously struggling, and of 50 new hires only 25 are doing as well as expected--it's not her greater skill that enables her to see the problem.

Expand full comment

> Could you expand a bit more on your first point?

I suspect that some negative effects may take longer to manifest.

Especially for new people - video-only contact is not a real replacement for an actual meeting, and onboarding may be harder.

I guess it depends on what benefits come from direct meetings and are lost when limited to videoconferences, emails, and chat. I am not really sure here.

And I suspect that high-ranking management may value easy communication, team cohesion, managing etc. much more highly, as it is the entire point of their job.

And they may underestimate the importance of, and the effect on, the coding/design/whatever the actual job is.

> Not to put words in your mouth, but are you maybe making the argument that senior management have the skills and experience to see this long-term damage coming down the pipeline, but more mid-tier employees like me don't have the depth of experience to see the problems brewing?

Partially that, partially a result of overvaluing direct contact, partially blind luck.

Expand full comment

I think there is a conceptual mistake: "the" level of how bad/good work at home is does not exist. The answer depends on the person you ask, and it's extremely polarized. I am working at a university, and personally I am a bit more comfortable with home office. But almost all colleagues in my group have a very strong preference for working in the office. They are *craving* every single day that they can work in the office. A lot of people have been bending or outright breaking rules that restrict their office time (we are currently allowed 2-3 days in office per week). So, just the opposite of you and your group.

For many companies, this makes it factually difficult to make decisions that don't repel a substantial part of employees. So I would be very careful about generalizing from your company to others. But if I take your description at face value, then your company is simply making a mistake.

My best guess for how this goes wrong is that the decision-makers are on the "I love the office" side, and they don't manage to imagine that other people feel otherwise. (This is actually quite hard if you feel strongly about something. If you are democrat/republican, then think of all the people who believe that voting for Trump/Hillary was a good idea.) So even when they see a poll saying people like home office, they don't actually believe it, or at least they don't internalize it.

Expand full comment

That definitely applies. Previously the topic was mostly buried and inactive, and now that WfH is viable there are both people who want it and people who want to avoid it.

Forcing one solution on everyone will make some group unhappy.

Expand full comment

This is really interesting, thank you. Yes, I bet it is really sector-specific and therefore it probably shouldn't surprise me that it is also quite seniority-specific within a sector.

To your specific point though, could your employer not solve this by offering fully hybrid working ("Come into the office if you want, work from home if you want")? That solves at least the problem of a quiet environment and a good workplace, although I suppose one key point is that it does *not* solve the problem of being lonely unless lots of people all come back to the office at the same time. Therefore a decision to offer hybrid working in a company which is mostly 'prefer-home' workers harms the 'prefer-office' workers for the benefit of the 'prefer-home' workers.

This is - exactly as you suggest - probably an impossible decision for companies so maybe that's why they're defaulting to heuristics like the democrat / republican example you give?

Super interesting food for thought anyway, thanks again!

Expand full comment

Thanks for the kind words!

Your solution would indeed work if there were not some country-specific side constraints. There has been a rule in my country that normal use of offices is allowed if there are at least 10 square meters of office space per employee. For our institute that means we only have 60-70% of the office capacity that we would need for everyone coming in all the time. The solution was that every group was assigned 2 days of "office time", plus some capacity that can be used flexibly (which we could assign at organization levels of 50-100 people, and we mostly used it to make the office-loving people moderately happy; demand for office space was still higher than what was available).

So our university actually did a pretty good job here. But it's not easy.

Related: at the beginning of the decision phase, when the 60% office space was to be distributed, my direct boss had individual zoom calls with all ~15 of her employees, just to find out what solutions we prefer (how many days in office etc). She said afterwards that her initial guesses of our preferences had been pretty bad. That was really awesome, but obviously, not every boss makes such an effort.

Expand full comment

Add-on: my top-three reasons why people like it better in office than at home:

- They don't have a quiet environment at home.

- They have no good workplace at home, e.g., no comfortable office chair and desk, no reliable internet, ...

- They feel lonely at home. (This might be especially important for universities where the peer group often overlaps more strongly with the group of colleagues. Many PhD students have just moved to the city a few years ago, and have few contacts outside of university. Often they don't have a partner or kids.)

Expand full comment

Great news that there is going to be another book review contest. The quality of the last one was awesome!

Expand full comment

"The Outside View argument here is completely right, and is a great illustration of the limitations of Bayesian reasoning" - It's a shame that Bayesian reasoning can't take outside views into account.

Expand full comment

In re homeopathy: Are they making their own homeopathic remedies, or using commercial products? There have been some scandals where homeopathic remedies turned out to contain real drugs.

If they're using commercial products, the products should be lab tested.

Expand full comment

I will be sourcing the remedy (and probably the placebo as well) from Hahnemann Labs in California, an FDA controlled laboratory. There have definitely been issues with this from other vendors, and this is a source that the Homeopaths I've talked to all trust.

Expand full comment

I have often wondered about the claim of homeopathy that extremely weak solutions of the active ingredient are more potent. I wonder what they are using to do the dilution.

Presumably water, yes, but what kind of water? If it's not perfectly pure water, then at some stage of dilution the water may have a higher concentration of whatever you are diluting than the homeopathic medicine.

https://link.medium.com/LCESjdPjLlb
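
A back-of-envelope sketch of that worry, under two assumptions of mine: each homeopathic "C" potency step is a 1:100 dilution, and even very clean water carries stray impurities somewhere around parts per trillion:

    # Back-of-envelope, with assumed numbers: one "C" potency step = 1:100 dilution,
    # and the diluent itself carries ~1 part per trillion of stray impurities.
    impurity = 1e-12  # assumed impurity mass fraction of the water
    remedy = 1.0      # start from undiluted mother tincture

    for c in range(1, 31):
        remedy /= 100  # one C step
        if remedy < impurity:
            print(f"After {c}C the nominal remedy ({remedy:.0e}) is already below")
            print(f"the water's own impurity level ({impurity:.0e}).")
            break
    # A typical 30C potency is nominally 1e-60: some 48 orders of magnitude
    # below even 1-ppt impurities in the water used for dilution.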

Expand full comment

Matt Yglesias brought up that the US is actually a pretty rich country in GDP per capita. I thought maybe that was because we have some of the richest people, the biggest companies, and a very large financial sector - but even looking at the poorest states in the US, we're still pretty rich in comparison. Sweden is about the same as Georgia (#28 in the US ranking), Germany similar to Arizona (#39), Canada comparable to Utah (#42), the UK down with Idaho (#47), France with Arkansas (#49), and Italy is poorer than any of the 50 US states.

It was interesting looking at the regional differences within those countries, though.

1. UK basically has a rich region around London, Oxford, and Edinburgh that is comparable to the middle-ranking US states, and a few other (mostly urban) areas that are comparable to the lower-ranking US states - with everywhere else much poorer.

2. France is similar in that regard. The Ile de France (centered on Paris) has a per capita GDP comparable to the richer US states, and everywhere else except the Rhone (barely) has a lower GDP per capita than even the poorest US state.

3. Germany really shows the East-West divide, although there's also a really prominent north-south divide. The southern German states - Bavaria, Hesse, Baden-Wurttemberg - are also noticeably higher in GDP per capita than the north-western states, and comparable to the mid-to-mid-low states such as Nevada (#29). Whereas the north-western states are similar in GDP per capita to the bottom ten states in the US, and eastern Germany is (unsurprisingly) noticeably poorer still and poorer than the poorest US state in GDP per capita.

Expand full comment

Adjust for hours worked and European countries' GDP per capita gets relatively similar. Time off is a kind of leisure product, but it doesn't appear in the GDP stats.

Expand full comment

You have to factor in forced 1-2 car ownership per household, healthcare and educational costs that are higher, and generally less vacation.

I would be surprised if the US did much better for the median person if you adjust for these factors.

Expand full comment

I think GDP per capita is only part of the story. It’s how much $ you make. The other part is what you buy with it. Matt Y makes a good argument that Europe represses consumption in favor of “social investment” - fewer McMansions and luxury cars, more healthcare and education. My argument would be that Europeans (on average) work less, make less $, and make up for their relative “financial” deficiency by spending it wisely on things that actually make life enjoyable. With a big caveat that this does not apply to the upper middle class and the rich in the US, who get to buy privately the same or higher quality of services that Europeans provide for everyone publicly, and who enjoy the much easier path to wealth that the US provides.

Expand full comment

As Erusian pointed out, European investments in these social goods are definitely accounted for in GDP per capita. Furthermore, the US spends an enormous sum on healthcare and education, topping the charts per capita, so this doesn't hold either.

At the end of the day the anti-American propaganda has been extremely effective at influencing Americans' views of ourselves.

Expand full comment

European investments in social goods are definitely accounted for in GDP per capita. But the point is that US consumption of more purely positional goods (like bigger cars and bigger houses that are farther from work, where people spend a lot more time in their big car commuting rather than in a building at either end) shows up even more in the GDP per capita. If everyone in the US buys a $30,000 car every ten years and drives it into the ground, while everyone in Europe gets an extra $1,000 in health care every year, then the US has $2,000 extra GDP per capita ($30,000 over ten years is $3,000 a year, versus the $1,000 of care), but without an actual benefit.

Expand full comment

Education and healthcare are also largely positional goods.

Expand full comment

Europe does not spend more per capita on healthcare than the US. I don't understand the point you're making.

https://en.wikipedia.org/wiki/List_of_countries_by_total_health_expenditure_per_capita

Is your argument that the US, despite spending nearly 50% more per capita than the next country, should defer some personal consumption and spend more on healthcare?

Expand full comment

I'm not trying to make a specific point about what some countries should or shouldn't do. I'm just trying to clarify what someone's claim is about how GDP accounting could conceivably be misleading, if for instance Europeans solve the problem of getting around in a cheap way rather than having $30,000 per person of spending, and could thus have higher quality lives with lower spending.

Expand full comment

You do realize that government spending on public services contributes to GDP, right? If Europe spent less on consumption and more on government services that would still show up in GDP. You're right that shorter hours would lead to a lower GDP though Europe is also mostly lower productivity than the US. Though there are a few places that fit that model: Denmark has lower productivity overall but about the same productivity per hour. But they're unusual. Most of Europe has both lower hours and lower productivity.

Expand full comment

A lot of the effect comes from the US having a younger population. Americans also work more hours per worker (i.e. after adjusting for employment: https://ourworldindata.org/grapher/annual-working-hours-per-worker?yScale=log&country=GBR~DEU~USA~FRA~SWE )

(Also note that while Sweden is rich, it is poorer than Norway and Denmark on the GDP per capita measure.)

Expand full comment

Here I would also have to back up and wonder what is going on. North-west Germany has more industry than the south, with a major port city in Hamburg and the industrial Rhine region. It’s clearly more developed than Nevada, which has cities based on gambling and not much else. And London, a major financial centre, is only equal to mid-tier US states?

Expand full comment

People in Nevada buy a lot of cars, and a lot of money is spent on road widening and re-paving. People in Vienna walk to work, and don't have much GDP spent on their transportation, and thus can spend more on the things that matter while still having lower GDP per capita. At least, that is the claim.

Expand full comment

It's a big city, which has been affected by the relative decline of the UK over the past 150 years (US per capita GDP was already significantly above that of the UK by 1914) and the effective closure of its port following containerisation and the move to larger ships that can't get up the Thames. Housing for the poor being provided on a local basis makes poor people moving to other cheaper areas of the country less likely, and there is a high proportion of free state-owned housing compared to other areas of the country. I don't think it's intrinsically unlikely that its GDP would be that low.

Expand full comment

Some contextual information is that the US Gini coefficient (a standard statistical measure of inequality) is much higher than for those countries. https://en.m.wikipedia.org/wiki/Gini_coefficient#/media/File%3AGINI_index_World_Bank_up_to_2018.svg So the median individual may not be benefiting much from that higher GDP.

Expand full comment

One thing with Gini is that you're comparing apples to oranges a bit - Gini is calculated using incomes, while here we're talking GDP per capita. They're SUPPOSED to be basically the same thing, but they're not once you get into the complexities of where government spending vs. public goods paid for with taxes are accounted for, and such.

Another thing that I found when trying to figure out the Gini coefficient is the importance of ensuring you're looking at post-tax-and-transfer numbers. With those included, the US is still more unequal than the European countries, but it comes out looking a lot better, as outlined by the paragraph below from your link. A lot of charts, though, including the line chart in your link, use the pre-tax/transfer numbers.

The paragraph:

"For OECD countries over the 2008–2009 period, the Gini coefficient (pre-taxes and transfers) for a total population ranged between 0.34 and 0.53, with South Korea the lowest and Italy the highest. The Gini coefficient (after-taxes and transfers) for a total population ranged between 0.25 and 0.48, with Denmark the lowest and Mexico the highest. For the United States, the country with the largest population of the OECD countries, the pre-tax Gini index was 0.49, and the after-tax Gini index was 0.38, in 2008–2009. The OECD averages for total populations in OECD countries was 0.46 for the pre-tax income Gini index and 0.31 for the after-tax income Gini index.[6][33] Taxes and social spending that were in place in 2008–2009 period in OECD countries significantly lowered effective income inequality, and in general, "European countries—especially Nordic and Continental welfare states—achieve lower levels of income inequality than other countries."[34]"

Expand full comment

The US doesn't have a lower HDI than Europe. It tends to hover around spot 5-15. It's also a somewhat subjective measure at any rate. As for the Gini, even if you compensate for it the median American comes out ahead in wealth terms. The average/median American is just wealthier than the average/median European even in the wealthiest parts of Europe, even after compensating for what they have to spend on stuff like health insurance. Now, they might be less secure (in the sense of insulated from financial shocks). But that's a different metric.

Expand full comment

Isn't HDI just a linear combination of life expectancy, GDP per capita, and average female adult literacy? There's nothing subjective about it, any more than there's anything subjective about GDP per capita, or life expectancy, or literacy.

Expand full comment

It's GDP per capita, expected years of schooling, and life expectancy. Plus inequality sometimes. Which is pretty objective except in that you can argue about the specific choices. I was thinking of the old version (prior to 2010 according to Wikipedia) which included stuff like PPP adjustments weighted by goods and various slicing and dicing of what enrollment counted. There was more room for subjectivity on what counted etc back then.

Expand full comment

Human development index is an interesting, if more subjective, measure and puts western European countries at the top https://en.m.wikipedia.org/wiki/List_of_countries_by_Human_Development_Index

Expand full comment

In what way is HDI "subjective"?

Expand full comment

Well, the link Randomstringofcharacters provided includes a summary of common criticisms of HDI. I'd start there.

HDI is 3 measures: life expectancy at birth, mean years of schooling, and GNI.

It then takes a geometric mean of the results of its calculations of these 3 figures.

Life expectancy ranges from 1 for 85 years down to 0 for 20. So if your life expectancy is 80 instead of 85, you score .923 here. By baselining at 20 you get a disproportionate impact for each year of difference (e.g. 80 is 94% of 85, but you're only credited 92%).

On the GNI front, using natural logs in the formula flips the dynamic. 75,000 is the standard, so a country with 70,000 returns an income index of 99% despite only having an income that is 93% of the standard.

Is this an objective way to measure human development, or subjectively rating one thing above another? To me it's subjective.

And that's just on the math front. Points on gender inequality matter too.
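
To make those two distortions concrete, here's the arithmetic in Python, using the goalposts as I understand them (treat the education piece especially as an approximation):

    import math

    # Sketch of the post-2010 HDI arithmetic (goalposts as I understand them).
    def hdi(life_exp, mean_school, expected_school, gni_per_capita):
        health = (life_exp - 20) / (85 - 20)
        education = (mean_school / 15 + expected_school / 18) / 2
        income = math.log(gni_per_capita / 100) / math.log(75_000 / 100)
        return (health * education * income) ** (1 / 3)  # geometric mean

    # The two effects described above:
    print((80 - 20) / (85 - 20))                            # 0.923: losing 5 of 85 years costs ~7.7%
    print(math.log(70_000 / 100) / math.log(75_000 / 100))  # 0.990: losing $5,000 of $75,000 costs ~1%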

Expand full comment

Yeah, many people who don't travel or who only meet the kind of people who can afford to travel (who are often elites in their country) can get a hugely distorted view. Doubly so if they only know the international city at the center of the country which is often hugely different from the norm.

Expand full comment

I think anybody who has travelled France and Georgia, as I have, would see the former as much richer. Maybe that's just the infrastructure, but I don't think so.

Expand full comment

I think this is difficult. One of the big contrasts between the US and Europe is that in the US, public spaces are designed cheaply in order to support big and lavish private spaces, while in Europe, private spaces are small and efficient in order to enable more usable and beautiful public spaces. A traveler usually only sees public spaces, and thus would naturally be biased towards thinking the European society is richer, because its public spaces are nicer.

Expand full comment

If you've been to the Atlanta region, including an hour's drive or more away from Atlanta, you would see a thriving Georgia and would not be at all surprised to hear they are rich by international standards. Sure, there are very rural parts with much lower incomes, but New York state has the same effects in very similar ways, and nobody seems to think New York is a poor state.

Expand full comment

What parts? Because I've been in some pretty poor parts of France and some pretty nice parts of Georgia.

Expand full comment

Edinburgh is the other end of the country to London and Oxford, so I guess we’re talking about multiple rich regions?

Expand full comment

Not really - the South East, including London, contains nearly all of the reasonably well-off areas, and then there are a few other cities that do okay, most notably Edinburgh - but it's only got a population of half a million. The UK economy is very London-centric.

Expand full comment

Yeah that’s my picture too. I guess I was just making the world’s most minor contribution to a discussion by pointing out that Edinburgh’s not near London.

Expand full comment

As the outdated saying goes, GDP is a measure that goes down when a man marries his secretary. Comparisons across states and big enough time gaps are not useful.

Expand full comment

Your point is also illustrated by this story of two economists growing the GDP:

https://www.reddit.com/r/YangForPresidentHQ/comments/cgeewu/joke_re_gdp_two_economists_are_walking_down_the/

Expand full comment

The quote is "a man marries his housekeeper" and it's not literally true. It's a metaphor about unpaid women's labor. In reality if a man marries his housekeeper then the money he was using to pay his housekeeper remains in the household and is instead consumed or invested by the couple. However, the saying points to the fact the work done by a woman instead frees up her husband's money. It's the Marxist-Feminist idea that men exploit women's labor.

Expand full comment

It's literally true. A housewife has no income as far as GDP is concerned. Someone earning a salary for that same work does have income.

Expand full comment

Right. And the income is coming from where, exactly? Does the employer magically poof the money out of thin air? If he doesn't spend it on a housekeeper does he burn it in his fireplace? No, he spends it on something else or saves it and it enters GDP calculations that way.

Expand full comment

Does he spend it on anything different than the housekeeper was going to spend it on? Especially if he's married to the housekeeper?

Expand full comment

This gets into inter-household dynamics. The argument is that, no, it doesn't because the housekeeper gets to spend it entirely on her own while a married couple must negotiate.

Expand full comment

I earn money from my job. That money is my income and therefore counts as part of GDP. I spend it on a housekeeper. It's now her income and counts toward GDP again. She spends it on groceries. It's now the store's income and counts toward GDP again. And so on and so on. If I marry my housekeeper, my financial support of her does not count as her income, so the money skips a step; it counts toward GDP one less time than it otherwise would have.
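
Here's that chain as a toy tally in Python - this is just the argument above put into code (income counted each time it's earned, intra-household transfers skipped), not a claim about how statisticians actually compute GDP:

    # Toy model: each market payment is someone's income and gets counted;
    # a transfer inside a household is not income and gets skipped.
    def counted_income(flows):
        return sum(amount for amount, is_market in flows if is_market)

    salary, wage = 100_000, 30_000
    unmarried = [
        (salary, True),  # my salary
        (wage, True),    # the housekeeper's wage: counted again
        (wage, True),    # her grocery spending becomes the store's income
    ]
    married = [
        (salary, True),
        (wage, False),   # support for my spouse: a transfer, not income
        (wage, True),    # the groceries still get bought
    ]
    print(counted_income(unmarried) - counted_income(married))  # the skipped step: 30,000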

Expand full comment

This depends on whether it causes a real decrease in the velocity of money. If it does, you're right. But if it doesn't, then you're wrong. If the spending rate of the household doesn't go down and the velocity otherwise stays constant, then it should have no effect. At least if I understand the macroeconomics correctly. I might not.

Expand full comment

But, but, but...the US must be better than France!!! \s

Expand full comment

I am happy to announce that, after 6 years of insanity, Stanley Kubrick has finally gotten his infobox back on Wikipedia (https://en.wikipedia.org/wiki/Stanley_Kubrick). I commented on the topic in previous open threads, so I figured I'd share the good news here.

Expand full comment

I've learned never to underestimate Wikipedia talk pages' capacity for (mercifully well hidden) drama, but... why Kubrick?! Can you link an explanation of the controversy?

Incidentally I was relieved to realize that Kubrick has been dead for the last 22 years, rather than insane for the last 6 as I initially assumed.

Expand full comment

Infoboxes have always been controversial on Wikipedia. On content debates, the arguments have to be based in "what reliable sources say". For a matter of display style, there is little to go on other than what the participants like.

A few of the most anti-infobox editors wrote Kubrick's article, and were (until now) successful in keeping an infobox off that page. Every discussion ended with "no consensus", which defaults to the status quo, which was no infobox. Since the last discussion, two of those editors have left the project. So the latest discussion managed to find consensus to treat this article the same as nearly every other biography.

Expand full comment

Here’s an example of a "request for comment" which seems to be held annually on the talk page: https://en.wikipedia.org/wiki/Talk:Stanley_Kubrick/Archive_12

Expand full comment

I looked at the talk page briefly. The only reason for infobox was "most people have infoboxes", and the only reason against the infobox was "it is not necessary for all people to have infoboxes". This is possibly the most stupid Wikipedia debate I have ever seen.

Expand full comment

I don't think that's an accurate summary. There were many reasons argued for having an infobox aside from following a consistent standard: navigability, accessibility, search-engine optimization, and improved aesthetics to name a few. Infoboxes are useful generally for these reasons. Of course, "most [comparably significant] people have infoboxes" is itself another good reason on a project like Wikipedia where consistent standards are valuable. There's also the benefit of ending the stupid debate, which would only continue if the page kept conspicuously having no infobox.

Expand full comment

It was a complete no-brainer and literally the only reason it persisted like this was because the editors behind his particular page happened to be virulently anti-infobox and well-connected to important Wiki admins. In previous discussions they fought hard to have their way, but it seems they've since left Wikipedia, possibly due to harassment over this issue and others. Good riddance.

Expand full comment

For the record, I agree with you. The crazy side was the one saying "our only reason for opposing infoboxes is that fact that we can".

Okay, they also had the argument that "if people can conveniently find all the information they need in the infobox, then they will not need to read the article itself, which would be a pity". From my perspective: first, it is not true, as some people prefer text written in paragraphs; second, if someone prefers the infobox, hey, more power to them.

Like, what the fuck are you people optimizing for? It is like modern journalism, where the goal seems to be writing as long an article as possible, in a way that prevents the reader from easily finding the answer to the question asked in the title. Is the goal to provide information, or to make your readers suffer?

Expand full comment

See, cyberbullying does work

Expand full comment

Arbitration report

Motions – hyphens and dashes dispute

https://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2011-05-16/Arbitration_report

Expand full comment

And to be clear: it is not that Wikipedia has unusually dumb discussions; the difference is that the discussions are almost entirely public and archived.

Expand full comment

I have convinced my mother-in-law that she needs a stupendous amount of illumination in her home, to help with seasonal depression. I'm not having much luck finding what I've got in mind, and would appreciate any suggestions.

A couple of these on the ceiling https://www.stagelightingstore.com/sundog/844519-ovad-stages-sundog would be reasonable, but she doesn't need the various stage-related features. Just the illumination.

She'd also like some short lamps she could put in corners that would produce some spotlight-esque beams aimed at the ceiling.

Expand full comment

I recently set up a shedloads-of-lumens system in my living room and home office (Edinburgh, 56 degrees North), and the setup I went with was

- A "festoon", i.e. a long string of lightbulb sockets that plugs into the wall. Here's the model I used, though the website is based in the UK and may not be ideal for you: https://www.lighting-direct.co.uk/22m-weatherproof-festoon-lighting-20-black-bulb-holders.html

- Enough 13.5W "daylight" (over 6000K) LED bulbs to fill the sockets in the festoon (just get whichever are cheapest on Amazon, but look for CRI values over 80: I used https://smile.amazon.co.uk/dp/B08BRJGH5J/ref=pe_27063361_487055811_TE_dp_1)

- A box of 1.25" cup hooks

Put the bulbs in the festoon. Starting at an electrical socket, go around the room with the festoon putting a cup hook in the wall next to every bulb (you'll want to drill small pilot holes). As you place each hook, hang the next bulb off it. Put the hooks at the top of the wall just below the ceiling - the idea is to get bright light from up high and all around you, mimicking the effect of being outside in daylight. When you've finished, plug the festoon in and marvel at how bright the room has become. The whole thing cost me about £130 (~170USD) per room, and has made both rooms noticeably more uplifting. I have previously used high-wattage corn bulbs in standard ceiling light fittings, and the festoon approach is better - instead of one glaringly bright light leaving purple streaks across your vision every time you turn round, the festoon makes the whole room light.

Photo here: https://www.dropbox.com/s/4cb0cjy1o0hk4ip/2021-11-27%2019.14.58.jpg?dl=0. This is basically the setup used in Sandkühler et al: https://www.medrxiv.org/content/10.1101/2021.10.29.21265530v1.full-text (they used two festoons per room), but I *strongly advise* hanging the lights up higher than shown in the photo of their setup - you want the light to be coming from the "sky".

Each of those bulbs is roughly equivalent in lighting output to a 100W incandescent. If you can find brighter bulbs with diffused output, go for it - the festoon's rated up to 25W/bulb - but I don't advise bare corn bulbs, which are unpleasantly dazzling to look at and need some kind of lampshade.

If you're a bit more comfortable with electronics, I'd look into LED strip lights, which can be chained together and don't suffer from the problem that some of the light is shining straight into the wall. You can get wall-mountable aluminium heatsink/diffuser strips like https://www.ebay.co.uk/itm/224525523391?hash=item3446c399bf:g:ShIAAOSw-59g5jb~, which you could run around the top of the walls to get a similar but more even effect than mine (or mount the strips pointing upwards into the ceiling - white paint is apparently 90% reflective). One strip is currently about 500 lm/m, so you'd need three of them running in parallel to get the same amount of lumens as the festoon setup. There are limits to how many strips you can chain together in series (varies by type) and the power supply requirements and cost/lumen calculations were starting to get too much like a logic puzzle, so I went with something simple.
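
For anyone adapting this, here's the lumen arithmetic behind that "three strips" figure in Python; the bulb output is my rough estimate for a 13.5W daylight LED, so check the packaging of whatever you buy:

    # Rough lumen budget: festoon vs. LED strip. All figures are estimates.
    bulbs = 20
    lumens_per_bulb = 1_500                    # ~13.5W LED, roughly a 100W-incandescent equivalent
    festoon_lumens = bulbs * lumens_per_bulb   # 30,000 lm

    strip_lumens_per_metre = 500
    room_perimeter_m = 20                      # e.g. a ~5m x 5m room
    parallel_strips = festoon_lumens / (strip_lumens_per_metre * room_perimeter_m)
    print(festoon_lumens, parallel_strips)     # 30,000 lm -> 3 parallel runs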

Expand full comment

The best lights I've ever seen were in a Zara Home store. Sadly, the staff did not know what kind of lights they were. I was crazy enough to ask but not crazy enough to bring a ladder so I could look at them myself. I passed the shop on my way home from work and frequently stopped to hang in there for a few minutes. Their lighting was just this perfect combination of warm and intensely bright that drew me in and made me feel noticeably better on dark winter days. It definitely worked for the store; I acquired a lot of Zara coasters and tea towels simply justifying my near-daily presence from November to March. The next time I walk into a retail store with lights like this, I'll try to figure out the specs on whatever commercial lighting they're using.

Expand full comment

Worth trawling through the Lighting tag on LessWrong: https://www.lesswrong.com/tag/lighting?sortedBy=new

Also, here is a suggestion of using LED bars designed for SUVs, with a 2021 update of 'stadium lights':

https://meaningness.com/sad-light-led-lux

Expand full comment
founding

The best solutions I have used are corn lights (they need an adapter but then fit in standard fixtures) or a simple 5-way bulb splitter with five 120-watt bulbs plugged in.

Expand full comment

30 years ago I bought an inexpensive standard overhead-type fluorescent fixture that held four four-foot fluorescent bulbs. It had some wires running out the back. Then I bought four expensive four-foot fluorescent bulbs that were claimed to mimic sunlight. I added a 110 V plug to the wires and propped it up in a corner of my office.

It definitely helped with SAD in Virginia in late fall and January. Around February 1 each year I would notice my spirits lift and I would breathe a sigh of relief.

I am retired now and living in the Dominican Republic, where the length of the days does not change dramatically. I don't notice much effect when the seasons change.

Expand full comment

Here's an accordion link: https://youtu.be/_E-yOI-BbLs

Expand full comment

Color me briefly entertained and moderately confused.

Expand full comment

She might be an experimental musician having fun, or perhaps this is a lifestyle brand's home appliance TV commercial from another dimension?

Expand full comment

Regarding the Pascalian Medicine problem, I think an approach that better includes uncertainty in the calculation could solve this in a formal way. Something similar to this paper that dismisses the Fermi Paradox: https://twitter.com/juliagalef/status/1465818521351323652

Expand full comment

That paper was interesting but it oversteps in claiming to dissolve the Fermi Paradox. The obvious issue is that we are inevitably going to narrow those uncertainties, and if they narrow to higher values of the Drake equation constants then the Paradox comes right back.

Note in particular that substantial interstellar panspermia* would set fl to effectively 1 (i.e. if a planet can host life, it will), and that would break the argument in half (specifically, using the lowest edges of their ranges for every other parameter it would require L < 10,000,000 for N < 1 in our galaxy, i.e. technological civilisations inherently self-destruct and humanity is doomed, i.e. literally the proposition Sandberg/Drexler/Ord are trying to refute).
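
To show the shape of that arithmetic, a sketch in Python - I must stress that every factor below is a placeholder chosen to reproduce the L < 10,000,000 figure, not the actual lower edges from the paper:

    # Drake equation: N = R* fp ne fl fi fc L. With f_l = 1 (panspermia) and
    # placeholder lower-bound guesses for the rest (NOT the paper's values):
    R_star, f_p, n_e = 1.0, 0.1, 0.01   # star formation/yr, planet fraction, habitable worlds/system
    f_l, f_i, f_c = 1.0, 0.01, 0.01     # life (forced to 1), intelligence, communication

    # N = 1 when L equals the reciprocal of the product of the other factors:
    L_max = 1 / (R_star * f_p * n_e * f_l * f_i * f_c)
    print(f"N < 1 requires L < {L_max:,.0f} years")  # 10,000,000 with these placeholders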

*And on that note, see this paper https://arxiv.org/abs/1304.3381 - if its regression is anywhere remotely close to correct then interstellar panspermia must exist, because abiogenesis in Sol System cannot predate Sol's existence. That paper's authors do claim that their hypothesis resolves the FP due to intelligence being new, but I think they're full of shit on that point since a 0.5% headstart on a 10-billion-year race gives 50 million years i.e. enough time to colonise the Milky Way; civilisation may be new, but it has to be *really* new to explain the Fermi observation by itself and I don't think it's anywhere near proven that we're slammed that hard against the fastest evolution of intelligence possible.

Expand full comment

Are you familiar with the variant of cosmological inflation that posits fractally spawning universes? The idea is that every universe formation spawns thousands more instantly, so that the number of universes increases exponentially at a spectacularly rapid rate. Some people have proposed that this almost assures we are the first (if not only) intelligent life in our universe. This would be the case because the time between the evolution of the first and second intelligent species is long enough that dozens of orders of magnitude of additional universes would have formed, giving that many more opportunities for us to evolve as the first intelligent life.

A very odd conjecture... (Had to fix a typo)

Expand full comment

But I think the whole point is that the Paradox does not currently exist: our uncertainty, when accounted for properly, is consistent with our experience of no other civilizations. If in the future we narrow the uncertainty in a way that makes the Paradox relevant, that's a different matter. Also, our current observation gives us a hint about which side of the uncertainty we're more likely to end up on (no civilizations other than us).

Expand full comment

From the paper:

> Linear regression of genetic complexity on a log scale extrapolated back to just one base pair suggests the time of the origin of life 9.7 billion years ago.

From what I can tell, the paper hinges on the applicability of a regression running six orders of magnitude outside its sample data. It certainly was... bold... of the authors to superimpose their origin of life prediction over a timeline highlighting cosmic inflation, but this isn't exactly extraordinary evidence. To put it mildly.
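
To be clear about what the paper is mechanically doing, here's the extrapolation in Python with entirely made-up data points in the same spirit (these are NOT the paper's actual values):

    import numpy as np

    # Made-up illustrative points: (Gyr before present, log10 of functional genome size in bp).
    age = np.array([3.5, 2.0, 1.0, 0.5, 0.1])
    log_bp = np.array([5.5, 6.5, 7.0, 7.5, 8.3])

    slope, intercept = np.polyfit(age, log_bp, 1)
    # Extrapolate back to a single base pair, i.e. log10(bp) = 0:
    origin_gyr = -intercept / slope
    print(f"implied origin of life: {origin_gyr:.1f} Gyr ago")  # ~10.7 with these fake points

The fitted data span about three orders of magnitude of complexity; the intercept sits five or six more beyond them, which is the whole problem.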

Expand full comment

Oh, I'm aware of the enormous guesswork involved there. But note my words: "if its regression is anywhere remotely close to correct". They claim prokaryotes took 6 billion years to evolve, which would necessitate panspermia. The thing is, though, that if prokaryotes took only 1.2 billion years to evolve from the initial abiogenesis event (i.e. if they overestimated by 400%), well... that still necessitates panspermia, because we have bacterial fossils from 3.5 billion years ago and 4.7 billion years ago is presolar. They have a lot of room to be wrong numerically without being wrong descriptively.

Expand full comment

If someone tells you that they are healthily thirty-seven feet tall and therefore existing knowledge of human anatomy is bunk, "they have a lot of room to be wrong numerically without being wrong descriptively" is certainly true, yet not terribly useful. I do not think Sharov's approach is fundamentally sound given the data he's working with, and give its conclusions little weight.

Backing up a bit, I think this might be the main crux:

> Note in particular that substantial interstellar panspermia* would set fl to effectively 1 (i.e. if a planet can host life, it will) and that would break the argument in half (specifically, using the lowest edges of their ranges for every other parameter it would require L < 10,000,000 for N < 1 in our galaxy

The Sandberg paper works with fairly extreme uncertainty in the distribution of fₗ, so it takes a pretty conservative route of applying a log-normal distribution over 200 orders of magnitude with a standard deviation of 50 orders and a median of 1 abiogenesis event per habitable planet's lifetime. Per the discussion section, the end result isn't much changed even if the median is shifted up to the point where the median is one abiogenesis event per planet *per year*.
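
A quick simulation shows why the spread, not the median, does the work. This is a heavily simplified sketch: I'm collapsing every other Drake factor into a single fixed 10^9 habitable planets, which the paper does not do:

    import numpy as np

    rng = np.random.default_rng(0)
    draws = 1_000_000

    # f_l in log10-space: median 10^0, standard deviation of 50 orders of magnitude.
    log10_fl = rng.normal(loc=0.0, scale=50.0, size=draws)
    log10_fl = np.minimum(log10_fl, 0.0)  # a probability can't exceed 1

    log10_N = 9 + log10_fl                # fixed 1e9 habitable planets (big simplification)
    print((log10_N < 0).mean())           # P(empty galaxy) ~ 0.43 despite the median-1 f_l

Shifting the median up by several orders of magnitude barely moves that probability, because nine orders is only a small fraction of a 50-order standard deviation.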

The uncertainty really is coming from the distribution, not the expected value. Assuming fₗ =1 isn't merely saying that life is common, it's saying that all worlds experience a massive, constant shower of extraterrestrial organic material. I don't think I live in that universe.

Expand full comment

>Per the discussion section, the end result isn't much changed even if the median is shifted up to the point where the median is one abiogenesis event per planet *per year*.

Of course it isn't. That's a shift of 0.2 SD in their ludicrously-wide distribution, so that distribution still cashes out to like 30% "abiogenesis is the Great Filter". The paper is a whole lot of figure-hiding clothing on "if you assume 30-40% that (abiogenesis is ludicrously unlikely and there's no panspermia), then you get 30-40% that we're alone without having to invoke any other Filter". No shit, Sherlock.

>Assuming fₗ =1 isn't merely saying that life is common, it's saying that all worlds experience a massive, constant shower of extraterrestrial organic material.

No, it isn't. It's saying that a planet that can host life gets at least one life-bearing meteorite within, let's say, 2 billion years. Remember that ne already weeded out the planets that can't host life; fl is simply the probability that life actually happens on a virgin world.

Note also that it doesn't have to be what you'd customarily think of as "organic material" that falls. Earthly sediments and rocks contain bacteria kilometres deep, and it doesn't exactly take a lot of bacteria transferred to LUCA a planet.

Expand full comment

> No, it isn't. It's saying that a planet that can host life gets at least one life-bearing meteorite within, let's say, 2 billion years.

I'm confused. Ignoring the Fermi implications for a second, you're presenting arguments that panspermia is common as shown by the emergence of life on Earth more quickly than can be explained by abiogenesis, but then suggesting that panspermia would typically be *slower* than what Earth-abiogenesis theories would predict? Is that right?

Do you have any observations that would distinguish this model of panspermia from conventional Earth-abiogenesis narratives? (How many meteors has Earth shed during its life-bearing period?) Because it looks like this is trying to buy an artificial level of certainty without much in the way of differential evidence.

Returning to Fermi, I'm not even sure what that nets you, if you're also objecting to the root abiogenesis calculation - panspermia can't get you from 0 to 1, after all. Is the entire panspermia tangent a distraction?

Expand full comment

The founders of Toronto media platform 6ixBuzz were recently covered in Toronto Life and reacted very negatively to having their real names published. See paragraph ~23 in the grafs above the second capital W heading for context: https://torontolife.com/city/the-secret-life-of-6ixbuzz/ ("Over the years, some traditional...")

The platform and the founders are associated with many things I don't like, and appear to be in a different class than me. Other than that, are there important differences between this and the NYT-SSC beef? If I think Scott's anonymity should be maintained in the Times, should I be equally against this decision from Toronto Life?

Expand full comment

There's a fundamental, pretty much diametrical conflict between the rights to privacy and to free speech. I am extremely leery of using legal force to support privacy rights within the public sphere, as it opens easy paths to abuse - feel free to dance naked in your own house with plenty of safeguards against paparazzi, but something has gone wrong if one is sanctioned for quoting a speech given in a public forum.

Norms can be significantly stricter than legal requirements, but it's not clear that anyone has a consistent set of guidelines here. The classic definition of doxxing involves a full set of contact details - address, phone number, work contact, etc. - but with easy access to the internet, a full set isn't needed for malicious action. But mileage *will* vary, since some people are easy to identify from a surname, while other times an exact spelling including nickname will still give you half a dozen results in the same company.

It would be nice to be able to just run with "if someone asks to be anonymous, let them be" but there's a reason why anonymous sourcing is so frustrating - people have incentives to put information out into the world without it being traced back to them, and quite a few of those incentives are nefarious. A reliably pro-social investigative journalism corps would be a blessing... but again, incentives exist and some of them are Moloch.

There's more to be said regarding whether or not privacy rights can ever be retroactive, imposing an obligation for an audience to forget details or associations between current and past work or identities. But that's an even more complicated can of worms and I don't think there's even consensus on the preliminaries.

Expand full comment

The reporter asked them for an interview, they would only do it on condition of anonymity, and the reporter refused, did the piece solo, and put their names out there.

I think this is poor behaviour. Reading the article, I'm no wiser afterwards than I was before. The reporter seems to be trying to make an argument that this website? Instagram? is making money out of stoking social unrest or something. I don't see that they proved their case, and if they want to talk about anti-Somalian racism in Canada, that would be a bigger question than "social media monetisation".

There's strong hinting towards sinister motives, but nothing solid proven that the two guys in question have any opinions about anything other than what you'd expect from city-dwelling millennials.

They're not criminals, they're not white supremacists, I hardly think they're racists or any other Bad Persons. So the big scoop here is "we found out the real names of these guys", and if those real names were available elsewhere, then I think the article does nothing in the public interest. I think the reporter and news outlet behaved badly.

Expand full comment

I keep trying to copy the relevant sentence in the article, but something very strange happens - I suspect it's a DRM method.

The reporter found their real names because they had registered 6ixbuzz under their real names in the 'searchable federal business registry'.

That makes this different imo. If Scott had registered SlateStarCodex LLC under the name Siskind as shown in the public federal employment identification db, and that was how the NYC learned his name in the first place... I would feel a lot less sympathetic to him. Anonymity only has the value you put on it.

Expand full comment

I think the claim Cade Metz gave in his New York Times article is that Scott's real last name was totally findable in various ways.

Expand full comment

I found it by accident. Scott posted it (or rather an email address that consisted of "scott.siskind@something") in squid314 (though I didn't keep track of exactly which post) and linked squid314 from SSC in several dozen places. So yes, it was definitely findable on random browsing through SSC and linked-from-SSC stuff.

Expand full comment

That said, he did eventually delete squid314, so after that one did also have to know to plug the dead links from SSC (which weren't removed) into the Wayback Machine. Still no need for anything except wanting to read Scott posts.

Expand full comment

Err, NYT.

Seriously, the lack of an edit feature after all this time is starting to make me lose faith in substack.

Expand full comment
author

I agree that this is similar to what happened to me and that this reporter is a bad person.

Expand full comment

Interesting. Personally I don't support a general principle of "anonymous / pseudonymous media figures' identities shouldn't be publicized" or anything like that. What angered me about the NYT's behavior was 1) the hypocrisy of denying Scott a courtesy it had extended to others, 2) reacting to his resistance by turning its story into a hit piece, and especially 3) willfully misrepresenting him and the community to fit its narrative.

WRT 2 and 3 I may be suffering from some Gell-Mann amnesia: I've never heard of 6ixBuzz before and have no idea whether this reporting on them is fair. It certainly does seem to be a hit piece, maybe merited and maybe not. It's interesting to read a sentence like:

> When Esagholian heard that his name would be used, he turned menacing, calling the writer incessantly—dozens of times in the span of 15 minutes—and when he didn’t get the answer he wanted, Esagholian said it would be a problem if people found out where the reporter and his family slept at night.

That is, shall we say, more forceful than Scott's reaction. But the sentence is constructed to imply a threat of violence when the stated facts are equally consistent with "Esagholian freaked because anonymity was valuable to him and he'd expected the writer to respect it, pleaded with the writer to reverse course and finally appealed to the Golden Rule to try to make him understand why Esagholian didn't want to be doxxed."

So I definitely take Toronto Life's reporting here with a grain of salt; it's obvious to me that it's telling one particular side of the story, partly due to not liking 6ixBuzz's behavior here. Toronto Life's side of the story could still be correct on balance but the article is only weak evidence of that-- much weaker than it would be if I trusted their reporting to be neutral.

Taking as given that it's a hit piece, I don't think I object over and above that to Toronto Life using Esagholian's name. I might feel differently if there were clear ways in which it constituted a danger to him, or if Toronto Life had a precedent / policy of respecting anonymity the way the NYT did.

Expand full comment

I think a lot of people just didn't see the NYTimes article about Scott as a "hit piece". It had some implications that some people would view as negative, yes. But I think that's standard for an article about a public figure.

Expand full comment

I mean, the bit about Charles Murray was pretty obvious dark arts when you contrast the obvious implication of *technically true* thing they said with the clear meaning of the (unquoted) passage in its context.

Expand full comment

The homeopathy study also acts as an excellent probe of Scott's reputation among his readers - are you willing to take random pills mailed to you by a complete stranger, based solely off of the fact that Scott trusts that person enough to put the study on his blog?

Expand full comment

The problem with homeopathy studies is that they are all done by people with skin in the homeopathy-works game. Nobody outside is doing research because... well, because it's absurd. The people inside have large motivations to hide any bad results. It is a bit like trusting studies from the tobacco companies saying that smoking is good for you.

I would not participate in the study for exactly that reason. Every study has some chance of showing an (invalid) positive effect. I am not interested in helping to defraud the general public. (No offense intended by this....)

Expand full comment

Great point. Clicked on that shit, thinking yeah let's do it, til I got to the bit about the rando pills, then nope. While I would have no fear of taking pills Scott handed out for a study, I'm not confident of his ability to sniff out malevolence on Medium.

Expand full comment

The homeopathic proving study isn't an adversarial study, is it? Has it received any input from a homeopath?

Expand full comment

No; yes. I will continue to refine the exact procedure based on ongoing discussions, and will likely reach out to U.S. homeopathy schools to pass this along to interested students, to get the sample size up and have it not be made up almost exclusively of skeptics.

Expand full comment

I manage clinical research at a small pharma company, and have worked on a few drugs that went on to receive FDA approval. I'd be interested in helping ensure the project is of the highest quality, based on GCP and other industry standards (though not in any affiliation with my company). Let me know if you want to chat.

Expand full comment
Comment deleted
Expand full comment

Done. Feel free to delete your comment if you don't want the email address to be exposed to people who skim that kind of thing.

Expand full comment

Re homeopathy: Most health care professionals using homeopathic medicine often believe that greater dilutions are more potent than less dilute doses. Whether or not this is true, I think there is reason to suspect that the most concentrated homeopathic medicines actually may be having physiological effects due to the presence of the original substance in the medicine.

What led me to this belief was my exposure to some work being done in low-dose immunology. Low Dose Immunotherapy (LDI) is a treatment for increasing immune “tolerance” of an overactive immune system. Allergy and autoimmunity represent an alteration or overactivation of appropriate immune tolerance. LDI retrains the immune system for specific antigens, thereby decreasing overactive immune response and decreasing symptoms.

This type of immunotherapy was discovered in Great Britain in the 1970s and called “Enzyme Potentiated Desensitization” (EPD). The technique utilized very small concentrations of antigens along with an enzyme, beta glucuronidase, which helps educate the T cells involved in the immune response. This treatment was brought to the US, but in the early 1990s the FDA stopped the importation of EPD. At this point, Dr. Shrader reproduced the mixtures of EPD and called them LDA. LDA originally used antigens causing certain allergies, and the technique was later expanded by Dr. Ty Vincent to treat various autoimmune conditions using a variety of different antigens, called LDI. The claim for the effectiveness of these extremely diluted antigen doses seems to be that they stimulate the so-called T regulatory lymphocytes, which are known to suppress unwanted “hyperactive” immune reactions. More information is available elsewhere on the Web.

LDI is by no means a mainstream approach. However, it is having some demonstrated positive effects, and I think it’s worth looking into.

The reason I mention it here is because I believe that the most dilute doses used in LDI overlap with the least diluted doses used in homeopathy. So I wonder if the effects of high concentration doses of homeopathic medicines are being mediated by the immune system.

Expand full comment

I recently got prescribed some "homeopathic" drops by my doctor that aren't very dilute at all - most of the herbs are present at around 1 part in 1000. I definitely think there's a fair bit of "homeopathy" that is in practice low-dosage herbal medicine, even if the homeopaths think that diluting it a few dozen more times would somehow make it far more potent.

Expand full comment

> Most health care professionals using homeopathic medicine often believe that greater dilutions are more potent than less dilute doses.

Just to check: you're saying homeopaths generally believe that *less* of the original substance means *greater* effects? If I understand you correctly, the LDI hypothesis would tend to *refute* that belief if confirmed, so I'm not sure why you bring it up in connection with LDI.

Expand full comment

Yes, that is what homeopaths actually believe. In particular, they believe that pure shaken water that contains no active substance whatsoever will have potent effects ("memory of water").

Checking the homeopathy Wikipedia page should confirm this.

Yes, it is one of the reasons why homeopathy is in conflict with basic physics.

Expand full comment

That is, indeed, the primary claim of homeopathy.

Expand full comment

A few weeks ago, I piggy-backed on comments here forecasting the future of Covid19-related mask-wearing to raise a different issue: Why hadn't mask-wearing previously been common in indoor public spaces in areas with high TB rates? (Recent TB transmission research, summarized here, was on my mind: https://theconversation.com/new-study-shows-that-normal-breathing-is-a-major-spreader-of-tb-170656).

Now I'm wondering: In comparison with the reasoning behind mask-wearing for Covid19 in indoor public spaces, are there *any* other diseases where (at a high rate of prevalence) the case for mask-wearing is, or at least temporarily has been, similarly strong?

As far as I know, widespread mask-wearing before Covid19 was in response to exceptionally deadly flu epidemics (e.g. 1918-1920), flu season in general (a few countries such as Japan), or air pollution.

I have probably failed to use proper terminology to phrase this, so I hope my question is comprehensible. Let me know if it isn't.

Expand full comment

I think I just accidentally posted a duplicate comment, then deleted that comment and both copies got deleted. But I was just expressing amazement at this article saying that up to 60% of TB cases may be asymptomatic and may result in spread - I had thought this was a novel feature of covid, but I keep discovering that many other diseases have this same feature!

Expand full comment

People have known about asymptomatic TB for a long time. In my childhood in Soviet Latvia we had annual TB testing for all children. Those with positive tests were considered infected and were provided treatment.

Expand full comment

I think masks were first widely used in the 1910 Manchurian plague epidemic, which was pneumonic rather than bubonic. (https://en.wikipedia.org/wiki/Manchurian_plague)

Though the 17th century "plague doctor" image with the big beak full of herbs suggests that there might have been a role given to masks then (at least for medical specialists).

Expand full comment

> air pollution

That is interesting, as at least in some cases people were wearing masks as a political statement and in an attempt to influence legal decisions related to smog.

Expand full comment

I like this idea. Knowing how the benefits of mask-wearing for covid compare to the benefits it would have had for other illnesses would provide a sort of yardstick. I presently feel like I'm lacking one. I keep pretty up to date on covid findings, but just do not have much of an intuitive personal sense of how dangerous various things are, how protective various protective measures are.

Expand full comment

Periodic reminder I keep wanting someone to do a double-blind [1] test of mask wearing in controlled scenarios, starting with the flu.

[1] You can do double-blind by giving people masks that do nothing.

Expand full comment

https://www.pnas.org/content/118/49/e2110117118

Not quite what you want but still useful.

Expand full comment

If you give people masks that do nothing, they won't interfere with breathing and the blind will fail. How about instead, we run the test group first and then just rub active Covid-19 on the inside of the same percentage of masks in the control group as failed the test group?

Expand full comment

If you don't know how effective masks are, how do you determine which masks do nothing?

Expand full comment

The size of the holes in them. You can make masks that are little better than screen windows.

Expand full comment

I share this frustration. A personal permanent ambient sense of confusion is the unexpected legacy of this pandemic, here

Expand full comment

How freaked out is the rationalist community by the new self-replicating xenobots? Can somebody who understands the tech better than I assuage my fears about this being weaponized, or worse? https://www.npr.org/2021/12/01/1060027395/robots-xenobots-living-self-replicating-copy

Expand full comment

https://arstechnica.com/science/2021/11/mobile-clusters-of-cells-can-help-assemble-a-mini-version-of-themselves/

"Interesting research, but no, we don’t have living, reproducing robots

Don't believe the hype—this isn't reproduction or replication."

Expand full comment

As a self-replicating blob myself, I'll tell you it's harder to take over the world than you'd imagine.

Expand full comment

Okay, as soon as I read this and found it was biology, I shrugged. They're putting stem cells into a medium and trying to get them to do things; the cells then do things. Quelle surprise!

If they were messing around with virulent organisms, I'd be concerned they might whip up a batch of plague, but as it is, I think it's more that they're finding out "Oooh, we didn't know stem cells could do that" rather than BIOLOGICAL ROBOTS TAKE OVER WORLD.

Expand full comment

Not freaked out at all. They only "replicate" when provided with extra Xenopus stem cells under controlled conditions.

Expand full comment

I'm not seeing how this could be weaponised. Some bunches of frog stem cells tend to assemble other frog stem cells into similar bunches. If you are not a laboratory full of frog stem cells, I don't see any way this could be a problem.

Life 2.0 is an existential risk, but this isn't Life 2.0 and doesn't seem to be a step in that direction.

Expand full comment
Comment removed
Expand full comment

I'm not sure I understand what you're saying. Are you saying Life 1.0 is not a step toward Life 2.0, that it is such a step, or something else?

In the strictest causal sense I'd say that Life 1.0 is a step toward Life 2.0 insofar as we're the ones who'll potentially develop 2.0 and we are made of 1.0, so if there had been no Life 1.0 there would never be Life 2.0. But I don't think the structure of Life 1.0 is an immediate template for a dangerous 2.0.

Expand full comment

I'm skeptical of pretty much everything in this article.

As I understand it, here's what is described:

- Scientists observe unmodified clumps of frog stem cells in a petri dish

- The clumps move around in a pretty trivial / purposeless way (spinning around)

- The movement tends to create more clumps which have similar movement properties

- ??? (speculative computer simulation of the clumps)

- Profit!

To me this is a big nothing-burger. The organization is totally trivial and the self-replication property is, as far as I can tell, complete coincidence (or rather it requires a suspiciously convenient set of initial conditions). The organisms in question are no more "robots" than an ice crystal is.

Expand full comment

I'm not very freaked out given that they can only do one very simple thing, they depend on an external resource, and there's no reliable mechanism for heredity currently.

I do think it's funny that by portmanteauing the genus Xenopus with "robot," they managed to make an extremely dystopian-sounding name (maybe intentionally, to grab attention).

Expand full comment

4: Tangential, but if you are making a Google form you do not have to set an autofill for a linear scale (here, 'how serious are you about participating?'). This form defaults to 10, maximally serious, which makes the result useless: the most serious participants and those so unserious that they didn't read or adjust the question both end up as a 10. I entered a 9 in case there is an attempt to salvage, as I am fairly serious about being willing to participate.

Expand full comment
founding

I didn't get prefilled. Either they fixed it or ymmv

Expand full comment

Thinking about the Pascalian Medicine idea, my first thought was that if we're assuming all supplements have some tiny inexplicable chance of curing our covid, then we should also assume all supplements have about the same inexplicable chance of making our covid worse. Or making us age faster. Or making us age slower.

If your answer to that is "well there are at least some unreliable studies showing an effect" - well, I am sure I could run ten studies proving vitamin D makes covid _worse_. So it mostly feels like you're rolling a ten-thousand-sided die for each random med you take, and each die has one "makes your covid slightly better!" side and one "makes your covid slightly worse!" side. And maybe a few other sides like "makes you really sleepy" or "slightly increases bone density". And each of these dice costs like twenty bucks to roll.

And this does match my intuition: if you're in a situation where you think it's pretty likely you'll die of Covid _anyway_, yeah, cram whatever into your mouth. If all your dice point to nothing, or more covid, or some other weird negative outcome - well, whatever, you'll be dead anyway. And if you do live, it's very unlikely it was because of all the vitamin D you took, but hey, maybe you can make some money selling everyone your miracle combo.
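
The dice metaphor cashes out to a trivial expected-value calculation - every probability and dollar figure below is invented to match the metaphor, not estimated from anything:

    # Toy EV model of stacking long-shot supplements.
    p_help = p_hurt = 1 / 10_000              # one good side, one bad side per die
    value_help, value_hurt = 50_000, -50_000  # invented $ values of slightly better/worse covid
    cost_per_roll = 20

    def expected_net(n_supplements):
        ev_health = n_supplements * (p_help * value_help + p_hurt * value_hurt)
        return ev_health - n_supplements * cost_per_roll

    print(expected_net(1), expected_net(100))  # -20, -2000: symmetric dice just cost money

The whole game is whether the "better" side is genuinely likelier than the "worse" side, or whether you're close enough to death that the downside stops mattering.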

Expand full comment

These were my thoughts about it too, and I was surprised that nobody made this exact argument in the original comments thread.

Expand full comment

I think Scott made that first point originally! But he said that in his experience the risks aren't asymmetric—it is in fact not the case that drugs are equally likely to help you or hurt you. I defer to his clinical experience because he has it and I don't. :P

And that's enough to power the "might as well try something on a long-shot" argument. And the question is, then, why not try a hundred separate long shots. My answer to that is, basically, the costs aren't linear and possible-benefits basically are.

Expand full comment

In addition to the above:

Whenever my doctor prescribes me some pills, he always mentions how he's going to start with the smallest possible dose, ramp up from there, and then ramp down and maybe cancel the pills altogether if there's sufficient improvement. He does this not only to avoid any potential side effects, but also to reduce the load on my liver/kidneys -- because prolonged exposure to *any* medication tends to damage them in aggregate. This could be another long-term downside to gobbling up any nominally harmless drug that you can get your hands on.

Expand full comment

> because prolonged exposure to *any* medication tends to damage them in aggregate.

Maybe I'm being too pedantic but how do our kidneys distinguish a molecule from a drug against a molecule from a supplement, or vitamin D from a pill against vitamin D from a food source?

I'm not just being a jerk, because I see my pill box in front of me showing my daily pills and wondering if I'm doing something wrong.

Expand full comment

Drugs are pretty much defined by having a dramatic biological effect. If something doesn't do much but maybe sometimes activates a receptor, it's an unregulated supplement or ingredient #1003 in your favorite plant.

Expand full comment

> how do our kidneys distinguish a molecule from a drug against a molecule from a supplement...

They don't. They just process molecules, but the problem is that blasting them with too many molecules at once causes damage over time. It will happen whether you consume a lot of alcohol, or a lot of salt, or vitamin D, or whatever. This doesn't happen so much if you consume a lot of e.g. broccoli, because most of the stuff in broccoli will be metabolized fairly readily (or will pass through your digestive system, like fiber). But it does happen if you consume a large volume of simple molecules, because your body just can't handle that much all at once.

Expand full comment

The homeopathy study says that the thing you're supposed to take is chemically equivalent to "milk sugar". Are the pills involved in the study vegan? If so, I will sign up for the study; if not, I hereby mildly complain about not being able to participate.

Expand full comment

A different direction of concern about milk sugar: aren't many (most?) people lactose-intolerant to a greater or lesser extent? Is lactose actually the standard for a medically-inert substance, such that all things referred to as "sugar pills" or "sugar coating" are made of lactose (as opposed to sucrose, fructose, etc.)?

Expand full comment

Complaint received, apologies! But the pill forms are not vegan. The way any homeopathic remedy is prepared is that you take the original substance (sea salt, a flower, a mosquito, etc.), and then repeatedly dilute it in alcohol or water to get the potency you want (higher dilution is stronger; the most commonly used is 30C, 1 part per 10^60, so statistically not a single atom is left). You can then either use this liquid as the remedy or, for ease of delivery, add milk sugar and evaporate it into pill form. It is much cheaper for me to buy pills in bulk and mail those than to ship containers of liquid, so unfortunately I cannot open the study to vegan participants. Thanks for asking.
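
(If anyone wants to check the "not a single atom" arithmetic, here it is in Python, generously assuming you start from about a mole of the original substance:)

    # 30C = thirty successive 1:100 dilutions, a factor of 100**30 = 1e60.
    AVOGADRO = 6.022e23
    starting_molecules = AVOGADRO         # assume ~1 mole of the original substance
    expected_remaining = starting_molecules / 100.0**30
    print(expected_remaining)             # ~6e-37 molecules per dose
    print(1 / expected_remaining)         # ~1.7e36 doses needed to expect a single molecule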

Expand full comment

Why do you expect that running this study will be useful?

If it detects that homeopathy works, it would not be worth starting to treat homeopathy more seriously - the chance of a misdesigned study or an anomaly is higher than the chance that this kind of magic works.

If it detects that homeopathy does not work - who would be influenced?

Or is it some attempt to test the procedure itself?

(also, sorry, but I am not signing up - I will not take random pills sent via mail)

Expand full comment

Thanks for clarifying! I don't know anything about making pills. I hope the study gets tons of participants and settles things once and for all.

Expand full comment
founding

Do you have any reassurances for people worried about taking pills from a stranger on the internet?

Expand full comment

I'm hoping Scott's endorsement and the community's approval of the adversarial collaboration I wrote are sufficient. Also, I will send extra doses to each participant so they can, in theory, have a chemical analysis done to confirm it's milk sugar.

Expand full comment

Did you say a mosquito? A fucking *mosquito*?

Expand full comment

Have you never read the great Avicenna's Canon of Medicine? Err. Obviously that's an unreasonable question. But it was basically an 11th-century prescribing guide for medical practitioners: a list of remedies and the maladies they treated. It was used by basically every non-christian doctor from the new world to Hindoostan from about 1200 until germ theory. It tended to not be *completely* made-up bullshit, maybe only 90% bullshit. It was one of the reasons non-christian doctors, particularly jewish and islamic but also hindoo, were preferred by all, even christians.

And it is CRAZY. Diluted centrifuged mosquitos wouldn't even stand out.

As homeopathy was invented before biology was much advanced, and especially as homeopaths today tend to reject 'modern' medicine and think the past was great, I'm not surprised at all.

Expand full comment

The past looks so nasty and random from here . . .

Expand full comment

Hello,

I am still a lowly medical student, and I am working on a medical education program. I'm actually coding it myself along with a few others (from Sweden and Latvia).

I am looking for someone to help me publish articles/research on my software, effectively documenting how it works and what it's for.

P.S. If you work in medical education and you'd like to take a look, let's talk by email:

abk93@cornell.edu

Expand full comment

The Philadelphia ACX Meetup is holding a Solstice Celebration on Tuesday, December 21 from 6pm-10pm. Request to join our Google Group for complete details: https://groups.google.com/u/1/g/ACXPhiladelphia

Expand full comment
Comment deleted
Expand full comment
Comment deleted
Expand full comment

Life requires no larger meaning. Do what brings you happiness; nothing else matters.

Expand full comment

Unfortunately, many people find happiness only in the context of a larger meaning.

Expand full comment

My goal is that the state of affairs in X million years be better for the fact that I lived. Defining "better" and predicting what will result in "better" are, unfortunately, difficult.

Expand full comment

I grew up religious and came to lose that faith. This left me with a general psychological sense of leaning over a pit of possible meaninglessness.

Recently this has changed. I realize that I really don't know. Instead of viewing everything as possibly meaningless, I see it as possibly meaningful. It's a Pascal's-wager sort of attitude. I may only have one life, and I cannot possibly know that it is meaningless; if it were, I presume that I could not know that. It's possible that I am a blip of predetermined material matter, but it is entirely impossible that such an entity could be aware that this was its condition. I can't know that life has no meaning, and as a result I'm more oriented to act as if it may.

With this reduction of uncertainty I feel far more at peace with the prospect of no longer existing. I'm more willing to engage in the everyday things I like, with a sense that whatever the case, I find interest and meaning in them.

On a larger scale, I hope to formulate and promote ideas that provide market-friendly alternatives to laws like high minimum wages or rent control. I could promote the idea of a comparative study of the costs and benefits of rent control versus a suite of ideas like a tax credit for renting out a space in your house, or relaxing zoning laws to, say, allow commercial buildings to have residential top floors. I feel this is my potentially high-value action that could improve others' lives.

Expand full comment

Nothing in the description you provided in any way references god or religion. Your study methodology seems flawed: you should ask for everyone's framework, then afterwards ask the same group where they stand on religion :)

Really though, your framework does not seem like a religious framework? Why do you think ours will seem atheistic?

Expand full comment

I don't do the top-down "meaning" thing.

I do things that feel good, or seem locally useful. Sometimes I generalize a bit, but it is not my goal to generalize as much as possible... at some point things get too abstract and the probability of talking bullshit approaches 1.

Among the things you mentioned: mentoring CEOs could be fun (assuming they are interested in anything I want to tell them), soup kitchen volunteering would be fun for a moment and then it would probably get boring, accumulating wealth can make life easier and richer, being healthy is definitely good, helping children is fun and potentially will become even more fun in the future. This all (plus many other things not mentioned here) is enough to keep me moving.

Expand full comment

When life is good, it needs no explanation.

Expand full comment

I believe there is no objective "meaning" per se. Yet life is still interesting, often good or fun; love and friendship are real (even if they're probably artefacts of evolution expressed in brain chemistry); and you can lead a good and rewarding life by helping others, or at least by not causing unnecessary harm. All in all, it is very interesting and worth going through if you are not on the wrong end of the spectrum of the human condition.

Expand full comment

I think that knowledge of deep time and evolution leads to the conclusion that life is more or less a glorious, futile rebellion against entropy. Life, once begun, needs no reason to continue other than that it does so. And it will do so, to the bitter end. The last crustal bacterium, as it is cooked to death by our sun going red giant, will still be trying to accumulate enough resources to divide.

As a child of futile cosmic rebellion, then: I don't need meaning to do anything, any more than a bacterium does. And when I do decide that I have reasons for doing whatever I do, I try to be generally pro-social and creative. Which is probably more a function of being a relatively well-socialised pack animal than any deep statement of philosophy, and that's fine.

Expand full comment
Comment deleted
Expand full comment

I think the bacterium (and indeed all of us) is entitled to think whatever it wants to think. So, if looking for meaning gives you joy, then please do so. But, if meaning eludes you, don't feel burdened by the need to find it. You never needed it to breathe.

{Note: I wrote up a whole screed here reiterating my original point, but I suspect it just reinvents Camus in dorkier language}

Expand full comment

I have had multiple religious experiences throughout my life. But the most powerful was when I understood on a gut level what it really means that God does not exist.

It means that we are the only source of good and evil in the known universe. All the holiness of heaven and all the damnation of hell are actually us. Have always been us. There is no one else. No grander plan than what we will implement. It is up to us alone to let the universe illuminate in the light of reason... or not.

I cannot imagine religion providing a sense of purpose even close to this. The whole pretense of religion being a source of meaning for our lives seems utterly ridiculous. Nothing can impose meaning on me but the burning passion of my own humanity.

Expand full comment

My 'meaning' is derived from an intuition that it's possible to increase or decrease the units of 'good' in the world by my actions. So I save a bee from death or make a tune that someone likes and infer that my action had meaning. This is naïve but helps me make sense of the question.

Expand full comment

I'm not entirely sure what people mean when they talk about a life having meaning.

I suspect they are thinking in terms of the opposite of a sense of futility. It's unpleasant to feel like you don't matter to anyone, and also won't have any significant impact on the world.

But I don't really know. I don't seem to have a meaning-shaped hole in my life, so I don't do anything to fill it.

Maybe that's because my life already has meaning to me, and I just don't recognize it ;-) But on the other hand, maybe not.

Expand full comment

>I don't seem to have a meaning-shaped hole in my life, so I don't do anything to fill it.

I don't either, but I kinda get why people might. From an evo-psych perspective, it seems straightforward that feeling a need to be valued by your tribe would be beneficial for your genes' chances of propagating.

Expand full comment
Comment deleted
Expand full comment

> If I had to fight for food and the security of my tribe while simultaneously competing with my fellow tribesmen for positions of power

If you couldn't contribute to that food or security, or win any prestige in your tribe, you'd feel pretty unvalued, though. I don't think that's all that high on the pyramid.

Expand full comment

Think of it more as 'usefulness' than 'meaning'. If you're just one of ten thousand subjects of the Great King in the Big City, whether you live or die isn't very important to anyone other than your family. If you're one of fifty people living in the settlement, then maybe you don't mean much, either, if you're not a craftsman or great hunter. But if you can provide value to the life of the community, your chances of survival are greater: people will help you, you will be looked after. So even an ordinary farmer or villager, if they fit into a role in the village society, can thrive in their circumstances and can count on support in times of famine, disease or war. Hence the search for meaning: "I am important, I count, because even though I am just nobody very significant...."

Expand full comment

I would say that if your life has "meaning," you regularly employ a policy to decide which of the many choices you are presented with to focus on, and which choices to make, in order to have whatever lasting impact you are aiming for. It answers the question, "why should I do something rather than nothing?"

If you're Christian, you aim to get into heaven and maybe even get as many other people into heaven as possible, so actions that affect that outcome are "meaningful" to you, and you use that calculus to guide your actions.

If you're a nihilist on the other hand, you don't think any choice you make can make the world better or worse in the long run - at least not in any way you consider important. Thus you lack a deep motivation to do anything. This is not a good state to be in, regardless of whether one thinks it is reasonable or not.

Meaningness.com helped clarify my thoughts on this a great deal.

Expand full comment

>Thus you lack a deep motivation to do anything.

That's assuming that you're not selfish at all, which is an odd assumption to make.

Expand full comment

I would include caring about the well-being of others as something that makes you not a nihilist.

Of course you could think nihilism is interesting as a philosophical idea but still live a normal, non-selfish human life. But then I'd argue you're not taking the idea of nihilism seriously. Which is very much the norm, since the consequences of taking nihilism seriously are so repugnant to a well-adjusted person and probably end in suicide.

Expand full comment

I'm sorry, I misread the parent comment as saying "assuming that you *are* selfish" (as a nihilist). I would also count a hedonist with only selfish desires (maybe a pleonasm; let's say "who discounts the value of altruism more than is socially acceptable") as not a nihilist. In my view, nihilism is basically the philosophical post-rationalization for clinical depression. That's why I said it is likely to lead to suicide, although most suicidal folks probably wouldn't explicitly call themselves nihilists.

Expand full comment

I'd say that it's more of a spectrum, with a bimodal distribution. The "well-adjusted" person you're thinking of is a central member of the first mode; a successful psychopathic CEO or politician, ditto of the second. Even if somebody doesn't quite qualify as either, I doubt that it's a common cause of suicide. In my model, the vast majority of suicides are caused by mental illness, and a small minority by extreme suffering or the expectation thereof. It just doesn't seem plausible that philosophy or aesthetics could be the main source of that. As a rationalization of underlying issues, sure.

Expand full comment
Comment deleted
Expand full comment

Right, I agree, but maybe I worded it poorly. By "deep motivation" I'm not including this kind of intrinsic drive, but rather just the post-rationalization that helps one feel that one's actions have a coherent purpose. You would still eat, but you might think afterwards "what's the point of it all, why don't I just starve myself?" and you wouldn't have a satisfying answer.

Expand full comment

'The opposite of a sense of futility' is a helpful frame. I appreciate that, thanks.

Expand full comment

How do you define meaning? You seem to be using it to mean "things that I morally should do".

Expand full comment

I think our lives are meaningless, but feel very moved by people's accounts of their transits through the void. I work to make my own observations acute and unpretentious, yet yummy.

Expand full comment

I don't have such a specific goal yet; I'm still figuring it out as I go. I think your goal is interestingly apolitical though. As opposed to, say, aiming to increase the role of democracy in determining how societies evolve, it sounds like you aren't taking a side as to who should be in control, except that *someone* (or thing) should be rather than the whims of chance?

Expand full comment
Comment deleted
Expand full comment

Ah I see, I was reading "the lot of humanity" as relating to major societal changes rather than individual life trajectories. For the latter I think it does make more sense to be apolitical. Thanks for the clarification.

Expand full comment
Comment deleted
Expand full comment

Ah, that must be exhausting.

Expand full comment

Are you sure this doesn't make you an existentialist, if you believe the people involved in an action have the power of meaning-making?

Expand full comment

I wondered this. Hell is, after all, other people.

Expand full comment

Does this imply that one discrete statement, for example, can have 3 or more meanings? This is making me want to read Wittgenstein more closely.

Expand full comment