286 Comments

> intelligence, height, schizophrenia, etc - are necessarily massively polygenic, because one side of them is better for fitness than the other.

I'm curious which side of height you think is better. Being extremely tall or short is clearly bad for fitness (see the health issues of giants and dwarves). Being close to average seems optimal.

Expand full comment

I'm tall and I almost died of cancer at 38. Height appears to raise cancer risks.

My feeling is that women are too prejudiced in favor of tall men, too heightist. That bias made more sense when height was heavily controlled by nurture, so being tall was a sign that a man had relatives with resources when he was a child to help him attain his height. And having affluent relatives is a good thing in a potential husband. But now that height is mostly controlled by nature, it's dumb for women to be so heightist.

But criticizing women's prejudices is not a high priority in our culture, so almost nobody complains about heightism.

Expand full comment

Maybe humans just like round numbers. In which case I predict that by the year 3000, we will have diverged into 2 species:

Americans, who are all exactly 6 feet tall.

Everyone else, who are all exactly 2 meters tall.

Americans will consider the rest of the world freakishly tall, while the rest of the world will consider Americans grotesquely short.

Expand full comment

Maybe! (comical shrug)

Expand full comment

Gregory Cochran has some blog posts about how he thinks height is also indicative of lower mutational load, not just a good childhood environment. It doesn't seem fair to criticize preferences that are probably innate either.

Expand full comment

Male height might be like the peacock's tail?

Expand full comment

Don't be too harsh on women having silly preferences for men, these are mostly inborn just like male preferences for women (nice curves, narrow waists and whatnot). If we told men to change their preferences towards wider waists - because, let's say, there are so many women with wider waists and quite few with narrow ones, and the narrow-waist types are actually worse at enduring famines - do you think men would really be able to change their preferences?

Also, women don't really prefer very tall men, just a little taller than themselves. Short women are quite fine with moderately short men. The dating-site phenomenon of ladies filtering out shorter men is probably some problem with the sites, e.g. this is the main thing they can filter, if they could they'd filter something more relevant.

Expand full comment
Feb 8·edited Feb 8

> If we told men to change their preferences towards wider waists - because, let's say, there are so many women with wider waists and quite few with narrow ones, and the narrow-waist types are actually worse at enduring famines - do you think men would really be able to change their preferences?

This is an interesting question as to waist circumference specifically, because fatness is well attested as being historically attractive.

For just one fun example, modern Chinese women, like most women, are intensely concerned with being thin. The word for this is 瘦, and it is highly desirable. The opposite is 胖, fat, which is bad.

But if you look at the characters themselves, 胖 uses a "body" radical (ok, technically "meat", but it's the normal one for body parts), while 瘦 uses the overtly negative "disease" radical.

Expand full comment

Hm, I'm not sure about the "fat was always beautiful" history. Nowadays worldwide polls indicate that men preferring lower-normal BMIs is almost a human universal, with the poorest and hungriest hunter-gatherers picking the BMI=23 figures, and all others around 20-21. Of course, the hungriest of them preferring wider women does hint at a slight cultural or circumstantial effect. It just doesn't go very far into higher BMIs.

If you say fatness is well attested to have been historically attractive, which evidence do you mean? (Well, the hieroglyphics are some evidence, but not too strong.) As far as I understand, men generally favor both soft round curves and thin waists. Back in our constantly hungry history most women were skinny and somewhat lacking in the curves department; therefore curves perhaps got glorified and elevated when describing a beauty? While now most have good curves but lack the thin waist, so the latter gets elevated in descriptions. Does that make sense?

Expand full comment

IIUC, historically men prefer women in their teens, who are still growing, so they won't be too fat (on the average). But I also understand (contradictorily) that in many cultures men prefer women who have successfully been pregnant...which would increase the odds of "fat". I suspect that there's a multi-factor preference going on here, with different factors not always in agreement. Certainly in some cultures an extremely young wife is a sign of status. (I'm referring back to an article I read by an anthropologist multiple decades ago that claimed that among the Australian aborigines the chief was likely to have a wife as young as 8 years old. [What "wife" means in this context, however, I'm not sure.])

Expand full comment
Feb 8·edited Feb 8

South African women wear what appear to be padded inner tubes around their waist to prevent any appearance of narrowness. (Source: watching Family Feud South Africa.) It's not universal by any means, but it appears to be a conventional way to dress yourself up.

Wikipedia showcases this painting in its article on the Judgment of Paris. ( https://en.wikipedia.org/wiki/File:The_Judgement_of_Paris.jpg ) It's from 1599 and you might notice that Hera (the one in the middle) has been drawn with rolls of fat bulging from her waist.

Expand full comment

I've thought about that era of European paintings, with those well-fed women. They look like real women, right? My guess is that these were painted from actual live human models. And actual live women don't generally have perfect figures from a male point of view (they never had much selection pressure for that, when you think of it). When men try to paint perfect women off the top of their heads, they come up with quite different figures, like those in computer games and anime. Those have extremely narrow waists coupled with nice round hips. I'm not a man myself so it's only a guess, but it kind of looks like this is what they actually prefer? There are also plenty of ancient figurines of women with the same kind of unnaturally narrow waists from different places in the world.

On the other hand, every now and then there are strange beauty fads here and there, like the soft, baggy jawlines deemed necessary for European women during a century or so, or the narrow-hips fad of the 20th-21st centuries. Culture interfering with inborn biases?

Expand full comment

>this is the main thing they can filter, if they could they'd filter something more relevant.

They could have, say, filtered on alcohol or tobacco usage. But they don't want that.

Expand full comment

This is interesting. Alcohol and tobacco don't matter if one's looking for a short-term mate, maybe that's the reason. For a long-term relationship I'd predict many would filter out tobacco (and very strange if they don't) but certainly not alcohol, it's kinda nice to have a glass of booze together.

Expand full comment

Cancer, like schizophrenia, doesn't always present by traditional mating age; someone could be 25 and already have two kids, so it would be too late to avoid mating with someone with the gene(s). On the other hand, being taller than average has some social advantages.

Plus, people didn't know that cancer or schizophrenia weren't caused by too much black bile or demons centuries ago.

Expand full comment

"But criticizing women's prejudices is not a high priority in our culture, so almost nobody complains about heightism."

Brutal

Expand full comment

It makes no sense to criticize anyone’s sex preferences. They are not based on deliberate considerations of offspring’s reproductive success. We may as well tell men to stop being more attracted to younger women because older ones are more mature and our life spans are now long enough, family planning technology good enough, that age at first childbirth matters less.

Expand full comment

Our society spends a huge amount of effort criticizing some people's sex preferences, such as, recently, older men who like younger women. Criticizing female heightism might or might not be effective, but it's striking that virtually nobody does it in the mainstream media, while constantly denouncing various male -isms.

Expand full comment

Yeah we can imagine scenarios favoring both short height (lower caloric consumption, less strain on the joints, faster recovery from injuries) and tall height (stronger, better at fighting, possible sexual selection effects for males). I wouldn't say it's definitely tilted one way or the other.

The modal human is not very tall by Western standards. In India, the average male height is 5'5. In China, it's 5'6. That's more than a third of the world's population right there.

Expand full comment

I've read that soldiers fall into three categories-- big (strong, but need more food and more room), fast (but not especially strong), and enduring (what it says on the label).

Expand full comment
author

Thanks for questioning that.

My original answer was going to be that although being very tall seems bad, marginal increases in height seem good. But then there would have to be some point at which that stopped being true, and the height that evolution gave us seems as likely to be that point as anything else!

I originally thought that height must be good because 1. it helps with hunting and stuff 2. at least in men it seems to raise sexual attractiveness 3. healthier people with better nutrition are taller. But it's possible that either cancer risk or difficulty getting enough nutrition counterbalance 1 and 2, and 3 is just circumventing an evolutionarily precise mechanism.

Overall I would guess height improves fitness today, but that it might not have in evolutionary times when there was more food insecurity. I've taken that out and replaced it with "strength" as an example.

Expand full comment
Feb 8·edited Feb 8

Strength still requires more calories.

One way we can see that strength is not always selected for is that men are stronger than women. If strength were always an advantage, women would also be selected to be strong.

Intelligence also requires calories.

Expand full comment

I can see that a human brain burns more calories than the brain of a frog; and humans are smarter than frogs.

But do smarter humans burn more calories thinking than their less gifted peers?

Expand full comment

That seems like a near certainty; even if doing the same amount of thinking costs them fewer calories, thinking is more valuable when they do it, so they're going to do more of it.

Expand full comment

People don't shut off their brain, people are always thinking.

Expand full comment

In the first place, this is false.

In the second place, the amount of thinking people are currently doing is not constant.

Expand full comment
Feb 8·edited Feb 8

Chess masters (allegedly) burn a shocking amount of calories during tournaments. Even if they require the same amount of energy at rest, smarter brains may use more energy during peak performance.

Expand full comment

We would need to exclude the possibility that chess masters just burn more calories because of what the adrenaline does to their bodies, before we blame it all on the brain.

Expand full comment

I had done some reading about it and that was the conclusion: stress is what burns calories in chess masters.

The brain's baseline energy consumption is unusually high by animal-world standards, but thinking harder does not burn more calories.

A fun fact is that the greatest chess masters do quite a lot of physical training. Turns out sports are a good way to train your organism for stress, and chess tournaments are very stressful.

Expand full comment

I read once long ago that researchers were surprised to find the opposite, that in solving problems like math, very intelligent people used less energy. I treat such studies with high skepticism and do not know if it has been replicated.

However, having taught math and discovered to my astonishment that most people think about it radically differently than I do (which would explain the percentile), I find this consistent with my experience.

As it happens, my mother is schizophrenic, which has been a lifetime of difficulty. She is completely mathematically incompetent.

Expand full comment

Don't be too skeptical. People used to solving problems are more apt to recognize a new problem as just a reframe of one they already know how to solve. In which case it's quite likely that they could solve the problem with less effort. ... You say you're a math teacher, so consider the problem of "factor (x -1)^3 = 0". You could probably solve it trivially, but your students...(well, I don't know what level of math you teach, they may not be able to do it after an hour's work).

Expand full comment

Completely off-topic but I felt a burning need to reply: "factor (x-1)^3" is a trick question, it's already factored! (x-1)*(x-1)*(x-1). You probably meant "factor (x^3-1)", which is indeed easy when you've done it a million times but hard if you don't know the 'difference of cubes' trick.

Expand full comment

Your example is an interesting one and I will add some thoughts about it to my Rehchoortahn reply after breakfast. (I am not a math teacher but taught for a while, unexpectedly, an illuminating step outside my comfort zone alone in a room with a computer.)

I am not skeptical due to the conclusion, which was apparently not clear. I am skeptical of non-replicated research in general, especially with current rampant careerism, and of any studies on topics hard to unambiguously define and quantify in particular. Skepticism does not mean I prefer to doubt the conclusion. I don't know if the conclusion is true or not. Very few studies are as convincing with one experiment as was the double slit experiment.

Most people learn by rote and that is not the bad thing it is often disdainfully considered to be by those who don't. Rote learning makes it possible for many people to effectively apply concepts they could not have originated themselves, permitting powerful cultural transmission. Most people have astonishing powers of language, pattern recognition and memory.

Personally, I am not good at math because I sometimes recognize that a new problem is a reframe of one I already know how to solve, at all. That would fall into the category of "how most people think of math" IMO.

What do the symbols describe? What does it mean? What does it model? Extremely important, what is arbitrary and what is essential?

How does this connect to everything else I know? Why is this approach a useful way to think about the scenario? What possible implications could it have for other situations? Is there a better way to model it? How does it relate to historic approaches and how do those constrain representation of the scenario? How is it applied and what do the authors want me to conclude? How can I identify assumptions based on those intended conclusions and step outside of those assumptions?

This seems like a lot of work as I write it down yet I memorize almost nothing and rederive on the fly most of the time. I have a lot of trouble talking about math because I bypass language mentally while doing math, which speeds it up dramatically. I usually don't recall the names for what I am doing. That made it extremely challenging (and interesting) to teach math. One can't just point and say, "Notice this, then it will be self-evident!" (Feedback was that I was a good math teacher in spite of those limitations.)

Expand full comment

One complaint about Common Core math in the U.S. is that parents struggle to help kids with homework because the kids are taught multiplication differently than how the parents were taught.

My initial reaction is surprise. Because if the parents had more than a superficial, purely algorithmic understanding of multiplication, then the common core methods shouldn't have been a big deal.

Also, how many proofs are there of just the Pythagorean Theorem? At least 40, right?

Expand full comment

I'd imagine people who are bad at math have more stress response when you ask them to do it: muscular tension, gritted jaw, elevated heart rate, etc.

I'd expect the effect of that to be way bigger than any change in brain function.

Expand full comment

>that in solving problems like math, very intelligent people used less energy.

There is more and better evidence that good runners use less energy to run a certain distance.

Now, if you put smart vs smart in math battle and compared them against dumb vs dumb, that'd be more interesting.

Expand full comment

As I recall the answer is mostly no when looking at 'how much energy is burned while doing some problem set'. Smarter people were actually slightly more efficient. But brain size is only weakly correlated with intelligence in humans, and I'm sure that in terms of passive energy consumption there's a reasonably linear relationship between that and brain size.

Expand full comment

There will presumably be some genetic variation in how efficiently the muscles work, such that those who are higher in this trait are stronger without requiring more energy, plus a much larger variation in strength in a more resource-dependent way. The argument still applies to the former variants, which will mostly become fixed in the population, reducing the variation, while the latter will be subject to stabilising selection. The genes affecting efficiency will not all be completely fixed for the same reason as the schizophrenia genes, that minor harmful mutations are continuously arising.

This point is considerably stronger for intelligence, where it is more intuitively plausible that such efficiency variation could exist, particularly since human brains have changed a lot relatively recently, while muscles have been subject to much the same selective pressure for hundreds of millions of years.

Expand full comment

Male height is probably like the peacock's tail?

Expand full comment

As a female, though I find men at my eye level attractive, too, I do tend to swivel toward male height. It signals fitness in physical competition against other men. Height corresponds with relative reach, throw and somewhat with mass. It is hardly the main factor, though, easily overridden.

I agree that because it is quantifiable, it may assume a disproportionate importance in dating sites.

Expand full comment

Humans however are downright much weaker than the other great apes. We are clearly strongly selected for being less strong.

Expand full comment
Comment deleted (Feb 8)
Expand full comment

Throwing seems a bit niche, kind of a human specialty with no real competitors. A better category would be spitting, at least then we'd have some real competition.

Long-distance running seems good. There are a few species that specialize in that. It actually is kind of amazing that humans are at or near the top.

Expand full comment

Og throw rock, hit & kill rabbit; eat rabbit good.

Expand full comment

This is probably true. I suspect that the trade-off is weaker muscles vs. longer life.

Expand full comment

iirc the main trade-off is flexibility.

Expand full comment

I would think reduced calorie consumption and the associated resistance to starvation were the main benefit. More generally, there doesn't have to be any specific *benefit* gained from less muscle (beyond reduced resource consumption) so long as the need to have them is reduced or eliminated. Whole organs and appendages get deleted by evolution when they're not useful any more, for no reason beyond 'this uses calories and protein'.

Expand full comment

I don't think anyone claims that being weaker is not a disadvantage. So it would automatically be selected against unless there were a compensating advantage. And I don't think that it's reduced calorie consumption. Orangutans frequently undergo seasons where their preferred foods aren't available, and they didn't evolve weaker muscles.

Expand full comment
Feb 8·edited Feb 8

Also, violence advantage.

Downside (and what might have kept things in check and explain geographic variations): calorie and protein demand.

It seems optimal to have genes for a taller height and then have malnourishment check it if it takes place.

Expand full comment
Feb 8·edited Feb 8

> Overall I would guess height improves fitness today, but that it might not have in evolutionary times when there was more food insecurity. I've taken that out and replaced it with "strength" as an example.

As I point out in a root-level comment, what you just said about height applies just as strongly to any trait where there is significant existing variation. (Such as strength and intelligence.) If one side was better than the other, the variation in the population would disappear.

Expand full comment
Feb 8·edited Feb 8

I thought it was established that many sexual selection characteristics (thick lips in women, big foreheads and height in men) were indicators of high sex hormones during adolescence. And that high sex hormones increased susceptibility to disease and caloric needs. Thus the actual signaling was of an innate robustness to disease (since despite high sex hormones, they had in fact survived) and either robustness to caloric deficit or success at acquiring food.

Expand full comment

I think this would have no analogy to schizophrenia, but point 2 offers a stabilising (regressing to the mean) mechanism. Supposing that height (its higher percentiles) reduces sexual attractiveness in females, and that some of the genetic effects on height are sex-agnostic, then an extreme height polygenic score, while increasing the fitness of sons, would decrease the fitness of daughters.

Expand full comment

Height has disadvantages. First, if you're big, you need to eat more. Big strong guys didn't survive as long in the Gulag as shrimps. Second, if you're big in body but you aren't big in heart size, your heart is going to have to work harder. Third, if tall men have tall daughters, that might hurt the daughters' marital marketability.

Expand full comment

Other cons:

Cube-square law (strength), as sketched below: large objects have less strength per unit of volume. This is why ants, spiders, etc. are "10x" stronger than humans.

Cube-square law (heat): large animals have less surface area per unit of volume, and therefore conserve heat better. This is more important for aquatic life. I guess this sort of dovetails with the "caloric intake" that others mention, but it's worth mentioning anyway.
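A minimal restatement of the scaling argument, assuming simple geometric similarity (an idealization, not a claim about actual human anatomy):

```latex
\text{strength} \propto \text{cross-sectional area} \propto L^{2}, \qquad
\text{mass} \propto \text{volume} \propto L^{3}
\;\Rightarrow\;
\frac{\text{strength}}{\text{mass}} \propto \frac{1}{L}, \qquad
\frac{\text{surface area}}{\text{volume}} \propto \frac{1}{L}
```

So, all else equal, a larger body has less strength per unit of weight and sheds heat more slowly.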

Expand full comment

Just in terms of reproductive success in the current landscape, taller men and shorter women have more children (Europe and US, not sure about elsewhere).

In the very long run, this would be the kind of thing that should lead to increased dimorphism - but that is a hard target for evolution to operate on, since the great majority of variation in height is due to genetic variants that do more or less the same thing in both sexes. So instead we're probably close to equilibrium.

Expand full comment

The existence of noticeable differences in height by racial ancestry strongly suggests that whether or not height is beneficial is environment dependent (that's compatible with height being attractive everywhere via a peacock tail type effect...ohh they can support that huge calorie load...sexy).

Height has substantial effects on heat dissipation, so the Inuit and sub-Saharan Africans likely face different pressures.

Expand full comment
Feb 8·edited Feb 8

Being tall might have some health risks, but you will also be stronger (important in a violent society) and, most important _by far_, tall men have an attractiveness and workplace advantage. So definitely tall is better for males.

I don't believe average is ideal for men - taller than average (within reason) is surely better. 6' and up to some inches taller?

Expand full comment

I think that's a false assertion. Most boxers and wrestlers aren't especially tall. It's basketball players who are tall. That's an argument not for tall but rather for sturdy. But sturdy body types don't do as well in hot climates.

Simple answers are going to be wrong. A lot is cultural, and a lot is environmental (not counting health), and another lot is health. And they aren't all pulling in the same direction.

However, within any small, genetically nearly homogeneous group, being tall will be an indication that you did well growing up in the current environment. There are a range of reasons why this might be true, but they all indicate that you're probably a good mate. (An exception might be for things like acromegaly.)

Expand full comment
Feb 8·edited Feb 8

Boxers and wrestlers are weight-classed already, so that has an inherent tradeoff (you can't be so tall you don't have weight left for muscles). Superheavyweights are _not_ short (and height correlates with reach, which is super important). Offensive linemen have an average height of 6'5".

Expand full comment
Feb 8·edited Feb 8

The five latest UFC heavyweight champions were 6'4", 6'4", 5'11", 6'4", 6'4". And note that even the heavyweight division is weight-classed (265 lbs max), which could potentially limit some even taller guys.

Expand full comment

You do know that virtually all combat sports have strictly controlled weight limits, right? And since taller people at the same BMI will be heavier than shorter people, we should expect height to scale with weight, which it does: https://themmaguru.com/ufc-fighter-height/
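Spelling out the scaling claim here, using nothing beyond the definition of BMI:

```latex
\mathrm{BMI} = \frac{\text{mass}}{\text{height}^{2}}
\quad\Rightarrow\quad
\text{mass} = \mathrm{BMI} \times \text{height}^{2}
```

so at a fixed BMI, weight grows with the square of height.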

And there's usually nothing stopping smaller athletes fighting in higher weight classes, meaning that shorter fighters could bulk up and take on taller, leaner fighters. Most non-HW fighters have to dehydrate themselves to make their weight class, and so considering smaller fighters are usually more technically skilled and they wouldn't have to go through a dehydration before a fight, we should expect this to happen a lot to the point that shorter champions moving up a weight class and becoming champion should be common, but it's not. In boxing especially, being taller (at least to a point) is a huge advantage in terms of both usually having a significant reach advantage, and the fact that you punch down and your opponent has to punch upwards.

Heavyweight fighters ARE significantly taller than average, and even then there's still usually weight limits that likely keep the tallest fighters from being as competitive.

Also, NFL, MLB and NHL players, and male tennis champions are all around four inches taller than the American male average, so it's not just an advantage in basketball.

Expand full comment

Brains are metabolically expensive. An obvious reason for chimpanzees to be less intelligent than humans. Koalas & sloths are relatively dumb because they are so optimized for saving metabolic energy on their low-nutrition diets.

Expand full comment

All else being equal, being taller means being stronger, and having a reach advantage. You do run into square-cube problems, which is why we aren't 4 meters tall.

Expand full comment

The big thing that's missing from this piece is the adaptive benefit of diversity and variance.

If everyone in your tribe is tall, being the first short person born might make you better suited for some niche in the community/environment that's not being exploited. Or it might just give you a huge relative advantage the one year out of 20 when famine hits.

So it's likely that things like the 'best height' are hugely contingent and contextual, which is a lot of why we have variance over those traits and maintain a polygenic store of diversity-generators for them.

Expand full comment

A simple model of short and tall: in good times, the tall guys do better because they can hunt more and get the women; in bad times, the tall guys all starve to death and the women have to settle for shorties. Thus, the proportions of each wax and wane.

Expand full comment

You're right that neither extreme is optimal, and height variants of large effect size are selected against because either way tends to be bad. (Big rare shortness genes seem to cause bad things, while big rare tallness genes seem to generally point to growth dysregulation in particular eg https://www.biorxiv.org/content/10.1101/2023.02.10.528019.full https://www.medrxiv.org/content/10.1101/2021.12.13.21267756.full .)

But there's antagonistic pleiotropy: being taller than average is very good for male fitness, but very bad for female fitness so it operates differently by sex. (https://gwern.net/doc/genetics/selection/natural/human/dysgenics/2021-song.pdf) Since you can't choose to have exclusively female or male children, that means that tall males give back their fitness gains in their female children and vice-versa. So there's population-level stability.

The optimum also differs by environment: pygmies are genetically short (https://gwern.net/doc/genetics/selection/natural/human/2019-lopez.pdf https://gwern.net/doc/genetics/selection/natural/human/2023-fan.pdf), and Peruvians might have a shortness variant being selected for (https://www.science.org/content/article/study-short-peruvians-reveals-new-gene-major-impact-height); or for a more extreme example, consider Homo floresiensis and the 'island effect'.

Because it's environment-dependent, you can't even say whether height is being selected for or against: in European samples, in the very long term (thousands of years) there appears to have been selection for increased height (health? warfare eg. https://www.biorxiv.org/content/10.1101/690545.full https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8995853/ https://www.biorxiv.org/content/10.1101/2022.09.22.509027.full / population structure https://www.biorxiv.org/content/10.1101/2023.10.04.560881.full ?), but if you look at the most recent fitness estimates in American & UK populations, it looks like the female disadvantage more than offsets the male advantage and there's selection for decreased height there (possibly because shorter women do worse on the labor market due to the standard pro-height discrimination, and so are more likely to have kids early & more kids in general)?
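A minimal sketch of the antagonistic-pleiotropy point above, using a toy one-locus haploid model with made-up selection coefficients (illustrative only, not taken from the cited papers):

```python
# Toy model: a "tall" allele gets +5% fitness in males and -5% in females.
# Offspring draw half their genes from each sex, so the post-selection
# allele frequencies of the two sexes are averaged each generation.

def next_freq(p, s_male, s_female):
    p_m = p * (1 + s_male) / (p * (1 + s_male) + (1 - p))      # after selection in males
    p_f = p * (1 + s_female) / (p * (1 + s_female) + (1 - p))  # after selection in females
    return (p_m + p_f) / 2

p_antagonistic = p_one_sided = 0.5
for _ in range(200):
    p_antagonistic = next_freq(p_antagonistic, 0.05, -0.05)
    p_one_sided = next_freq(p_one_sided, 0.05, 0.0)

print(round(p_antagonistic, 2), round(p_one_sided, 2))
# ~0.44 vs ~0.99: the sexually antagonistic allele barely moves in 200
# generations, while a one-sided advantage of the same size nearly fixes.
```

Sons' gains get paid back by daughters, so the allele hovers near an equilibrium instead of sweeping.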

But it's a large mutational target because 'height' is just the sum of all your parts (similar to temperature being the sum of all the atoms' motions in a volume), which are themselves the outcome of many processes (eg. childhood nutrition and infection, metabolic efficiency), and there are many ways to affect all those. So you can have a very large number of variants affecting it slightly without being purged quickly because they tend to push, ever so slightly, away from the optimum average height.

Expand full comment

But issues related to height generally don't crop up until after typical reproductive years so they're not going to be selected against.

Expand full comment

Being healthy beyond reproductive years must have some benefit (helping to raise grandkids etc) otherwise wouldn't women drop dead shortly after menopause, like salmon dying after spawning?

Expand full comment

Being taller makes you physically more powerful all else equal, which was probably ancestrally important. There is a known societal correlation between height and status that has probably been present throughout history.

Expand full comment

You say that with height one side is better for fitness than the other. Not really; it depends on the environment. Tall people need more food, but are better at fighting. In an environment with little food and little violence, shortness is likely selected for. In the opposite environment, tallness is likely selected for. (This is simplified; height affects other things than just food and fighting.)

Expand full comment

Short king detected

👑

🩳

Expand full comment

The sexual preference for height is way too strong for it not to be way better for fitness. Saying this as a bitter, despairing, lonely short king.

Expand full comment

Except that this is only true for men, and it's the opposite for women, and most of the genetic variants operate similarly in both sexes. Thus the equilibrium.

Expand full comment

Women have an almost universal preference for height, making height strongly fitness increasing.

Expand full comment

Scott! Have you heard of the microbiome? It's a big fucking soup of highly variable biochemistry that is only very loosely under genetic control. And interestingly enough, all the schizophrenia risk genes worth talking about are MHC genes! Does that not tell you something?

Like, let's talk about the actual biology of the disease rather than speculating from a theoretical/statistical perspective!

Why is the abundance of Ruminococcus gnavus 10,000x higher in the guts of schizophrenics than in healthy controls? Why is that figure nearly 1,000,000x in treatment-resistant schizophrenics?

Expand full comment

What are some of the hypotheses for the abundance of Rum. gnavus in schizophrenics?

Expand full comment

In ascending order of interesting-ness:

1. Diet effect, wherein schizophrenics mostly eat shitty (low-fiber, high-salt) foods that encourage the growth of gnavus over other bacteria [EDIT: they checked, the diets cluster with healthy controls' so this is out]

2. Direct medication side effect, e.g. it eats clozapine or something

3. Indirect medication side-effect, e.g. clozapine causes constipation which somehow creates conditions favorable for the growth of gnavus.

4. Inflammatory capsular polysaccharide drives chronic inflammation (well known feature of scz), which induces IDO/TDO, enzymes that convert tryptophan to kynurenine rather than serotonin. Kynurenine is the precursor to kynurenic acid—an NMDA antagonist elevated in scz that sits at the core of the best current theories about the origin of the disease's "positive" symptoms.

5. Gnavus's uniquely efficient & "promiscuous" tryptophan decarboxylase enzyme turns tryptophan & phenylalanine into tryptamine & phenethylamine in the gut. Colonic absorption lets these molecules circumvent first-pass metabolism that usually keeps them from being psychoactive when eaten. Elevated phenethylamine is a known feature in blood of scz patients, and there are reported derangements of tryptamine levels.

6. Gnavus is known to possess a lyase enzyme, which lets it break down and scavenge/sequester the conditionally essential nutrient queuine. In animals, queuine deficiency leads to impairment of the aromatic amino acid hydroxylases via oxidation of their cofactor tetrahydrobiopterin, or BH4. These enzymes are responsible for conversion of tryptophan to 5-HTP, tyrosine to L-DOPA, and phenylalanine to tyrosine. Impairing them would produce a syndrome that looks a lot like the "negative" symptoms of schizophrenia, and possibly the positive ones too, because of the same so-called "kynurenine shunt" discussed in (4).

The BH4:BH2 ratio has been shown to be off in schizophrenia. BH4 is also the cofactor for nitric oxide synthase, which is responsible for vasodilation; this might be part of why schizophrenics have a much higher risk of cardiovascular disease. NOS is also the "sharp edge" of the immune system, so derangement of this system could create a "vicious cycle" where the body can't muster the usual forces to control the microbiome.

Some combination of any of these may be true for the variety of schizophrenia subtypes, and there are other bacteria with some of these functions, so I doubt gnavus is a sole or even major culprit in every case...but any of the latter explanations is more satisfying to me than "it's probably genetic, and maybe autoimmune?"

Expand full comment

This is very interesting stuff you're mentioning, but it looks like the evidence is still super tentative?

Expand full comment

Yeah, this is bleeding-edge stuff. The microbiome angle, anyway; things like the elevated phenethylamine and kynurenic acid have been topics of discussion since the '70s or earlier.

Even the association between GI dysfunction and mental illness has been known about since ancient times—but it's only in the last ten years or so that molecular biology and gene sequencing have gotten cheap enough that you could just have a hundred people shit into an Illumina to see what's going on down there.

But again, all of these are *known, concrete biological mechanisms* which could plausibly explain various features of the disease in a direct way. Held to a similar standard, calling the evidence around most schizophrenia risk alleles "tentative" would be generous.

Expand full comment

I'd been scoffing* at Chris Palmer and his promotion of keto diets as a means of treating schizophrenia. Now I feel somewhat humbled.

* https://twitter.com/TeaGeeGeePea/status/1632572607055683584

Expand full comment

I feel you. When we sequenced the human genome, we thought we could finally read straight from the book of life.

More and more, it's looking like that was just the index.

Expand full comment

It's weird how much emphasis has been placed historically on the gut (having guts, trusting one's gut, etc.), but somehow we're shocked that the gut has a very real say in our well-being. Which is, of course, very dumb; you would think the thing responsible for consumption and conversion to energy (basically, life itself) would obviously be considered super important (you are what you eat!). But shrinks obviously don't like the bio angle, for which they'd just be plumbers; instead it has to be some quasi-philosophical investigation (about as reliable a guide to life as other types of philosophy).

Expand full comment

Right on, dude. The idea that there's something to be done about it is terrifying to a certain kind of person when they're not equipped to do that thing, or at least not easily. The genetics/microbiome dichotomy is sort of a reflection of the same shrink/plumber dichotomy you mentioned; one is rooted in just thinking a lot about the disease and trying to learn to cope with it; the other is about doing something to fix it.

Lot of "motivated reasoning" from Scott here justifying why there's no reason to explore the possibility of a cure.

Expand full comment

Do you have any cites for that last paragraph?

Expand full comment

C'mon, this is just not true. While the MHC region is the top hit in the latest GWAS, there are several reasons I disagree with this framing.

1. The MHC peak is not there in Africans.

2. Rare variant associations are not in the MHC.

3. Overall, most implicated genes relate to synaptic function and neuronal biology, not immune function.

4. SNP-based heritability estimates typically remove the MHC region - all our molecular heritability estimates are based on genotypes without MHC.

Expand full comment

Do you know WHY most analyses remove the MHC region?

It's because the effect sizes of every other risk allele are so absolutely dwarfed, so brutally CORNCOBBED by the relative size of the MHC signal that it fucks the scale and nothing else looks significant!

It's like doing a study on geothermal activity in the US and leaving out Hawaii! If you want to talk about the likelihood of seeing a volcanic eruption in Washington vs. Wyoming vs. Oregon, then sure, you've got to leave Hawaii out of your analysis. But if you're interested in actually studying an active volcano, you don't need to do any statistical analysis; you just need to buy one of those mylar suits and pack your bags, because it's very obvious that there's only one place with much of anything going on. This is what I meant when I said "the only risk genes worth talking about are MHC genes".

The fact that you're talking about "the latest GWAS" as if this hasn't been the status quo for decades tells me you're googling as you go and don't actually have expertise here.

Next!

Expand full comment

Alright, here we go. Another person just making shit up on the internet. Trubetskoy et al, 2022 is the largest GWAS to date of schizophrenia (https://pubmed.ncbi.nlm.nih.gov/35396580/). Take a look at supplemental table 1, where you can actually look at the effect sizes and allele frequencies for all genome-wide hits.

MHC is indeed the most significant, with an odds-ratio of 1.22. Note, however, the allele frequencies in cases vs controls: 93.1% in cases vs 91.4% in controls. That's not being absolutely dwarfed.

The second-most significant loci is at chromosome 7, with an odds-ratio of 1.09. My point stands: MHC is indeed the most significant loci, but it's not even remotely close to dwarfing everything else. You are just making this up out of nowhere.

If we also look at rare variation, where effect sizes can actually be large (Singh et al, 2022: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9805802/), take a look at figure 6.

We see CNVs and deletions/duplications, such as 22q11 or a deletion in the GRIA gene. Not the MHC.

And lastly, the MHC region is not removed because of its large effect size; it's because the correlation between haplotypes (linkage disequilibrium, LD) is extremely complex around that region, which is one of the few regions that contain long-range correlations. Methods that estimate the aggregate heritability, or methods predicting phenotype based on genotype, need to handle this dense correlation - which can typically be very hard. Take a look at their wiki: https://github.com/bulik/ldsc.

Expand full comment

Are you willfully misunderstanding my point or just bad at parsing?

You're right that 93.1% vs. 91.4% is a pretty tiny difference! I'm not saying "the MHC allele has a large effect size".

Like all genomic associations with schizophrenia, it has a tiny, shitty effect size. But compared to all the other genomic associations, it is more than twice as large.

If you are five feet tall, and I am more than twelve feet tall, I dwarf you. That is the ratio between these effect sizes. It's also an intron variant. If you understood what an intron is, you would not be waving this example around so proudly.
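Quick arithmetic behind the "more than twice as large" comparison, assuming it refers to the two reported odds ratios (1.22 for the MHC hit vs 1.09 for the next one):

```python
import math

print(math.log(1.22) / math.log(1.09))  # ~2.3 on the log-odds (GWAS beta) scale
print((1.22 - 1) / (1.09 - 1))          # ~2.4 measured as excess odds
```

Either way the ratio comes out a bit over two, which is roughly what the five-feet-versus-twelve-feet analogy is gesturing at; both effects remain small in absolute terms.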

The fact that you keep misusing words like "loci" is your third strike. Begone and take your 300-author paper with you. This is not gravitational wave astronomy.

Expand full comment

But you are literally wrong with "You're right that 93.1% vs. 91.4% is a pretty tiny difference! I'm not saying "the MHC allele has a large effect size" -> As I posted, there are several genetic variants with very large effect sizes - but all rare.

How am I using "loci" wrong?

And all your other claims are that the microbiome is important because all the main schizophrenia genes are immune related - I just clearly displayed that this is completely wrong.

You are just jumping all over the place without meeting any of my arguments:

1) It is ignorant and wrong to proclaim that schizophrenia genetics mainly points to immune function because the top loci is in MHC. In aggregate, it's clearly neuronal function that genetics point at.

2) the MHC loci does not "dwarf" the other associations, it's merely the strongest one out of all loci. If you rank by effect size, it's not even the largest one - it's just the combination of effect size + effective sample size.

3) You were wrong and clueless about why the MHC region is removed in typical genetic analysis.

Expand full comment

I am not interested in some inbred Algerian family with a mutation that causes a syndrome which technically meets the DSM criteria for schizophrenia.

I am interested in curing the nearly 1% of the population that suffers from this disease. Anyone who has studied schizophrenia seriously understands that it is highly heterogeneous and the diagnosis encompasses multiple distinct etiologies.

My claims about the microbiome's importance stem from the fact that it provides clear mechanisms to explain the disease's symptoms.

Regarding "loci": look it up.

Goodbye! 🖖🏼💩

Expand full comment

Ignoring the ES&P guy for a second, I'm still confused about why MHC genes should be excluded from our thinking here. The idea that the immune system affects our microbiome, which affects the phenotype of the whole organism, seems plausible to me, and while neuronal genes are the first place I'd look for a heritable mental illness, the argument seems to be that I should discount the MHC genes because they are confounded somehow? I cannot figure out the github wiki at all. Do I just need to spend some time reading about LD? Any references I should start with? I feel like this will help me understand GWAS better in general.

Expand full comment

My argument was not really spelled out properly - my bad.

You can take a look at this paper to familiarise yourself with the different heritabilities:

https://www.sciencedirect.com/science/article/pii/S0006322320316693?via%3Dihub

The SNP heritability for schizophrenia is roughly 20%, which can be interpreted as the amount of variance we can explain with common genetic variants (given large enough sample sizes).

In genetics papers, we typically estimate the heritability with LDscore regression, which is a method that recommends removing the MHC region for methodological reasons.

My point is simply that a large portion of the heritability is outside the MHC region, as ESP tried to make the claim that the MHC is by far the most important association.

For LD, perhaps check out this paper:

https://www.nature.com/articles/nrg2361

Expand full comment

Thank you very much. I will check out those two links.

Expand full comment

Thanks for sharing this. Gut microbiome monotheists are exhausting – and wrong. Tom Chivers and Stuart Ritchie did a great show on this.

Expand full comment

Something I usually find missing in these discussions is noticing that effects, in general, are not additive. E.g. "each of which individually has a small effect, adding up to a large total effect". Genes are not a D&D-like system where each contribution gives "+1" to whatever. There are complex interactions. I think of the extreme case as breaking systems on a plane: you remove one circuit controlling an engine, and nothing happens. You remove two, still good. Remove three, and suddenly the engine malfunctions, and you go straight from "it's all ok" to "plummet and crash". Obviously it's not like all biological systems are made like this (although we have plenty of redundancy inside too). The more general version is a system with different components that still sort of complement each other's functionality. E.g. a city accessible by train, road and a harbor may not function quite the same if roads are closed, but it's definitely not even 1/3rd as bad as "all roads, trains and harbor being closed".

I find this especially relevant for answer 3 to the "why keep small bad-effect genes" question, because it makes a lot of sense that you have a bunch of mutations which may cause a positive outcome (close the roads, less pollution!) while causing a bad effect that is much-less-than-additive compared to all of them activating (city is inaccessible, everybody starves). This naturally makes the "bad" effects of single genes much, much harder to see. Obviously the city example is exaggerated, and you'd see the effects of closing the roads - but when it's extended to dozens or hundreds of factors, one shouldn't even expect them to cause effects on the order of 1%, as the naive "additive" decomposition would suggest.
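A toy version of the plane-circuit example, with made-up numbers, just to show how redundancy hides individual effects:

```python
# The engine fails only when all three redundant control circuits are knocked
# out; each circuit is independently knocked out with probability 0.3.
import random

random.seed(0)
q, n = 0.3, 200_000
fail_if_ko, fail_if_ok = [], []
for _ in range(n):
    circuits_out = [random.random() < q for _ in range(3)]
    engine_fails = all(circuits_out)
    (fail_if_ko if circuits_out[0] else fail_if_ok).append(engine_fails)

extra_risk = sum(fail_if_ko) / len(fail_if_ko) - sum(fail_if_ok) / len(fail_if_ok)
print(extra_risk)  # ~0.09 (= 0.3**2)
```

Knocking out circuit 0 only adds about 9% failure risk, even though it participates in every crash it appears in; with a single non-redundant circuit the same knockout would add 100%. Spread over dozens of partially redundant factors, per-gene marginal effects naturally look tiny.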

Expand full comment

Steve Hsu argued that they are surprisingly additive (https://arxiv.org/abs/1408.3421).

Expand full comment

The evolutionary argument is straightforward: additive effects can be picked up by natural selection, and nonadditive effects can't (because of the mechanics of gamete production).

So any functionality that is the result of natural selection is going to consist almost entirely of additive effects. Then you ask "how much of the genome's functionality is the result of natural selection?".
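A tiny illustration of the gamete argument (toy two-locus example; 50/50 allele frequencies assumed):

```python
# "XOR" epistasis: fitness is boosted only when exactly one of two loci
# carries the variant. The combination matters; neither allele does on its own.
def fitness(a, b):
    return 1.0 if a != b else 0.0

# Marginal (additive) effect of allele A, averaging over the other locus at 50%:
marginal_a = (fitness(1, 0) + fitness(1, 1)) / 2 - (fitness(0, 0) + fitness(0, 1)) / 2
print(marginal_a)  # 0.0: a real fitness effect, but no additive signal to select on
```

Since gametes transmit alleles one at a time, only that marginal signal survives recombination, which is why selected-for functionality tends to end up additive.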

Expand full comment

Conditionally additive effects can still be selected for though.

E.g. if gene A is positive when you have gene B but is otherwise neutral, and gene B has non-trivial prevalence, then A can still be selected for. The same goes if you substitute B with environmental factors.

Expand full comment

I don't think that argument necessarily applies here. Yes, measurements whose outcome depends on many loosely coupled factors (better health, better brain architecture, better blood flow, better parenting, etc.), like intelligence, are likely to be roughly additive (when changing only a few). Less so for genes causing specific conditions, e.g. sickle cell.

We don't really understand what kind of thing schizophrenia is yet.

Expand full comment

Actually, when it comes to these sorts of GWAS-type studies, the results are surprisingly additive. At our current level of knowledge, epistasis turns out to really not matter all that much, in humans at least (in some other species it does), over a wide variety of different conditions, compared to the effect sizes of the genes themselves (and it's not like we haven't tested it; we have, but didn't find evidence of large effects).

Expand full comment

A guest (I can't remember which one) on Razib Khan's Unsupervised Learning talked about why genes of large effect tend to get selected out. There's often some sweet spot, and genes of large effect will tend to result in overshooting it. With lots of genes of smaller effect you get more of the "law of large numbers", resulting in something closer to average, whereas with a smaller number of large-effect genes random noise will tend to produce larger deviations from the average.
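A rough sketch of that "law of large numbers" point, with made-up numbers: hold the maximum possible genetic contribution to a trait fixed, split it across more, smaller-effect loci, and the population's spread around the average shrinks.

```python
import random, statistics

random.seed(0)

def trait_sd(n_loci, total_effect=10.0, n_people=5_000, freq=0.5):
    """SD of a trait built from 2*n_loci alleles, each worth total_effect/n_loci."""
    per_locus = total_effect / n_loci
    traits = [
        per_locus * sum(random.random() < freq for _ in range(2 * n_loci))
        for _ in range(n_people)
    ]
    return statistics.stdev(traits)

for n in (1, 10, 100, 1000):
    print(n, round(trait_sd(n), 2))
# SD falls roughly as 1/sqrt(n_loci): ~7.1 with one big-effect locus vs
# ~0.22 with a thousand small ones, so far fewer individuals overshoot
# or undershoot the sweet spot.
```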

Expand full comment

There's much to be said about this, but one thing that particularly jumped out was your comment "most random mutations are deleterious."

I have to disagree: silent or neutral mutations occur all the time. Indeed, evolution works by random mutation; if most were deleterious, life wouldn't have progressed past the most rudimentary organisms.

Expand full comment

There are a lot more ways in which a random change makes things worse than ways it makes things better. (Consider typos as an analogy.) Evolution works because it operates across many individuals over many generations, so it has the opportunity to pick out the occasional helpful mutations out of the majority of deleterious ones.

(I'm not sure about neutral ones, maybe you're right about those, but deleterious >> beneficial.)

Expand full comment

Yeah, pop genetics seems rigidly adaptationist, while all the population genetics I did in undergrad emphasised neutral selection.

Expand full comment

I think the literature agrees with you that "most nonsynonymous and nearly all synonymous mutations have no detectable effects on fitness", as these guys argue when refuting a paper that suggested the opposite: https://doi.org/10.1101/2022.07.14.500130

Expand full comment

Ultimately you have a situation where theory strongly suggests that they are not likely to be useful and are quite likely *slightly* deleterious, but the genetic architecture has evolved in such a way that most small changes that result in a viable organism are indeed likely to have no *detectable* effect on fitness. Not surprising, given that even the variants that we end up flagging with high powered GWAS often show tiny effect sizes.

Expand full comment

Isn't it the case that a significant portion of fertilized egg cells are non-viable due to harmful mutations? So if we're looking at the genetics of born babies, we're getting a distorted picture - the subset of mutations which weren't rapidly discarded already in the embryo stage.

Expand full comment

Could we steelman this to: "of those random mutations that have an effect on fitness, most are deleterious"?

Expand full comment

I think that must be true.

Expand full comment
author

Yeah, okay, I meant "most meaningful random mutations". I'll edit it in.

Expand full comment

I agree with your point in general, but I don't think 'if most were deleterious, life wouldn't have progressed past the most rudimentary organisms.' is true.

Expand full comment

Agreed; depending on just how deleterious they were and the environment they were in, this could even lead to evolution happening faster than it happened in our world (cf. bacteria developing antibiotic resistance: introducing the antibiotic makes certain alleles deleterious and leads to protective variants fixing much faster than they otherwise would have (or even would not have fixed at all)).

Expand full comment

Except that in asexual reproduction, presumably most mutations are deleterious (in sexual reproduction most are ignored because they occur only on one parent's contribution). Except that asexual reproduction like bacteria or yeast involves doublings upon doublings upon doublings so that if 10% of your culture dies due to mutations each generation, it's more or less a who cares.

Razib Khan had a post a while back saying that each individual has something like 50 new (first generation) mutations that are ignored because they are only from one parent and pretty much everyone has 1-3 (multi-generation) mutations that would be fatal if you had both copies.

Expand full comment

Most mutations are silent - have no effect on the phenotype.

Of mutations with a small effect, 50% should be beneficial (there's a 50% chance that a random small change moves you towards the optimum vs away from it).

Mutations with large effects are almost always deleterious.
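A quick numerical sketch of the intuition above, along the lines of Fisher's geometric model (parameters are arbitrary): the organism sits some distance from a fitness optimum in a multidimensional trait space, and a mutation is a random step of a given size.

```python
import math, random

random.seed(0)

def frac_beneficial(step, dims=25, dist=1.0, trials=20_000):
    """Fraction of random steps of a given size that land closer to the optimum."""
    start = [dist] + [0.0] * (dims - 1)  # optimum at the origin
    closer = 0
    for _ in range(trials):
        d = [random.gauss(0, 1) for _ in range(dims)]
        norm = math.sqrt(sum(x * x for x in d))
        new = [s + step * x / norm for s, x in zip(start, d)]
        if math.sqrt(sum(x * x for x in new)) < dist:
            closer += 1
    return closer / trials

for step in (0.01, 0.1, 0.5, 1.0, 2.0):
    print(step, round(frac_beneficial(step), 3))
# The beneficial fraction falls from ~0.5 for tiny steps toward 0 for large ones.
```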

Expand full comment

The case for 2 would be mostly the positive effects of Schizotypy. Some are known; it's not just creativity. Nice overview paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4373632/

The elephant in the room is religiosity. Like most psychological traits, that's like 50% heritable. I'm unaware of any genetic analysis of it, but I'd easily bet 10:1 it is also polygenic and significantly correlated with the Schizotypy/Schizophrenia genes. Meaning if you screen or engineer to prevent Schizophrenia, you're also screening or engineering against religiosity. In the upcoming huge culture war about genetic interventions in human reproduction, that's predictably a factor that's not going to calm things down.

Expand full comment

It's also been argued that this explains high religiosity among the ancients, etc. Relating to point 1.

Expand full comment

That could be fun to watch. Some parents or governments will want to engineer religiosity in and others will want to select it out.

Expand full comment

Especially if the theory pans out about certain types of political movements being fundamentally religious in nature.

Expand full comment
Feb 8·edited Feb 8

Schizotypy and religion is messy. Schizotypal people are not conventionally religious, though you'd get this impression from Crespi's idea of what schizotypy is. The most schizotypal religions tend to be the most materialist -- many New Religious Movements are only "religious" in the most tenuous sense, in that it's the best concept we have for rounding e.g. "UFO religions" to, or because philosophies that use the trappings of an existing religion tend to be treated as "offshoots" of it even if they reject just about every claim it makes.

Like everything else, The X-Files does this really well -- Mulder (not only the single best portrayal of a schizotypal person in any media, but the single best portrayal of *any neurodivergent person* in any media, completely by accident) is, famously, much more interested in unusual ideas as a whole than Scully, but he's far more dismissive of conventional religion. Whenever the case turns religious, she's open to it and he goes "is this organized religion? cringe".

Most research I've seen on schizotypy and religion hasn't been able to seriously grapple with the differences between simultypal and schizotypal religion, because it looks at/assumes some sort of "general factor of spirituality" that isn't reflective of the real world. Mainstream religion and alternative spirituality are more unalike than mainstream religion and mainstream irreligion are.

Expand full comment

I broadly agree, and would point to the concepts of intrinsic vs. extrinsic religiosity, which is mainstream in the (tiny, so not meaningfully mainstream itself) psychology of religion. Extrinsic religiosity is oriented towards the congregation / umma / sangha / etc. and is emphasized in the big denominations, while intrinsic religiosity is the one of solitary prayer, piety and the heartfelt weirdness of schizotypy, and is bigger among religious specialists like priests and monks, and among converts to religions, which includes almost all members of NRMs.

They're correlated, but not hugely. Your statement that they're less alike than mainstream religion and irreligion seems to me daring but not absurd.

Expand full comment

It amuses me to no end that Buddhism was more or less areligious until Buddha died and people decided it would be a lot easier to sell to the masses if you added back in deities and giant statues.

Expand full comment

Mahayana Buddhism even developed holy wars and swordpoint conversions.

Expand full comment

My wife thinks she is Buddhist. She is Vietnamese. There are none of Buddha's teachings in it, but lots of fortune telling and ancestor worship. On the other hand, it's still probably better than most western religions.

Expand full comment

> Mulder (not only the single best portrayal of a schizotypal person in any media, but the single best portrayal of *any neurodivergent person* in any media, completely by accident)

My vote would be for Harriet the Spy, the best portrayal of Asperger's that I've seen.

Expand full comment

She doesn't have Asperger's.

She's a child whose parents are totally absent from her life, raised by a maid who doesn't really parent her or care about her much. She's also a nasty piece of work, but it's not her fault, because she is left to herself and never really got connected to her family enough to be taught empathy.

You definitely knew kids like her back in that timeframe, but calling it Asperger's is bullshit, because it completely absolves the parents and other adults of responsibility.

It's a book that changes when you read it as an adult. Maybe there is a point here about all this desire to place the burdens of personality on the individual alone, in an essentialist way, but Harriet grew up without real parenting.

Expand full comment
Feb 11·edited Feb 11

I don't think it's a matter of a lack of parenting. There are a lot of signs of Asperger's, including lesser-known ones, like the way that Harriet insists on a rigid schedule and eats the same thing every day. It's to the point where I strongly suspect that the author based Harriet on someone she knew in real life with undiagnosed Asperger's.

Expand full comment

A lot of kids insist on that, and they grow out of it. In the old days that would maybe be seen as a little babyish, not as a psychiatric condition, or as a response to a bad home life. I grew up with 60s-80s kid lit; it definitely did not medicalize anywhere near as much as the post-Ritalin age.

Psychiatrists were generally there in response to issues with family life; they wouldn't start by assuming the kid had a condition. In Harriet's case it really felt like all the adults in her life were emotionally absent, and that can do a number on a perceptive kid like her.

The dark side of medicalization is putting the burden on the kid. Your parents not being there can do a lot too, and a lot of those old books pointed that out. Empathy and social skills are learned too.

Expand full comment

> Schizophrenia is bad for fitness, so if it were genetic, evolution would have eliminated those genes.

I think what is overlooked is that phenotype dispersion (is that the right term?) is good for the species, even when it is bad for the individual. If the same set of genes creates descendants all over the spectrum of a certain trait, then you end up with gay uncles who do not procreate but are still useful. Or with a spectrum of intelligence where some do more intellectual labor and others more menial. Dispersion in gene expression for any set of genes would definitely shaft the unfortunates who end up with, say, severe schizophrenia or autism, but the species as a whole would do better because of the diversity.

There, I explained group selection.

Expand full comment

The math of genetic group selection doesn't work because there's so much more variance within groups vs between groups. And gay uncles don't actually seem to do any of the things people theorize they do.

https://westhunt.wordpress.com/2013/01/10/group-selection-and-homosexuality/
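
For anyone who wants to see that variance argument in numbers, here's a toy Price-equation decomposition (a sketch only: the benefit, cost, and group structure are made up, and transmission is assumed faithful):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: altruists (z=1) pay a cost c; everyone in a group gains b times the
# group's altruist fraction.  Groups differ only mildly in altruist fraction, so
# between-group variance is small relative to within-group variance.
b, c, base = 0.5, 0.1, 1.0
n_groups, group_size = 200, 50

group_p = np.clip(rng.normal(0.5, 0.05, n_groups), 0, 1)
z = (rng.random((n_groups, group_size)) < group_p[:, None]).astype(float)

p_k = z.mean(axis=1)                 # realized altruist fraction per group
w = base + b * p_k[:, None] - c * z  # individual fitness
w_bar = w.mean()

# Price-equation decomposition of the one-generation change in altruist frequency
between = np.cov(w.mean(axis=1), p_k, bias=True)[0, 1] / w_bar
within = np.mean([np.cov(w[k], z[k], bias=True)[0, 1] for k in range(n_groups)]) / w_bar

print(f"between-group term: {between:+.5f}")  # small and positive: altruist-heavy groups do better
print(f"within-group term:  {within:+.5f}")   # larger and negative: altruists lose inside every group
print(f"net change in altruist frequency: {between + within:+.5f}")
```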

Expand full comment

This argument is exaggerated. In some circumstances group selection isn't effective; but given that we are in fact large groups of cells, in other circumstances it must be.

I agree it's not a good explanation for traits like altruism or pro-social behavior (I believe there are compelling mathematical models here), but that's a very different claim from the suggestion that genes which cause low-frequency individual harms might offer enough benefit for group selection to work.

It's kind of irrelevant for homosexuality anyway, since the gay-uncle idea is a kin selection theory, not a group selection theory.

Expand full comment

"We are in fact large groups of cells" with the same DNA.

Expand full comment
Feb 8·edited Feb 8

>If the same set of genes creates descendants all over the spectrum of a certain trait, then you end up with gay uncles who do not procreate but are still useful.

This has been debunked on here many, many times.

Gayness cannot be selected for, because you cannot select for (what is functionally) infertility. It doesn't matter that a gay uncle shares a lot of genes with his nieces and nephews, the specific genes for homosexuality have no way of being selected for. Offspring without "gay genes" will on average dominate the next generation and even if these gay uncles were useful in some way, anyone producing kids that don't reproduce will be punished in terms of gene pool representation.

Altruism is different, because the most common expression of altruism is towards one's children, or people who could raise your children, which means your altruism genes can directly help increase your individual fitness.

And the idea that an infertile gay uncle is of such great group value is taken as a given when it really shouldn't be. They consume resources without producing offspring, and 'caring for children' is by no means necessarily worth the cost of lower group fertility.

https://www.greaterwrong.com/posts/QsMJQSFj7WfoTMNgW/the-tragedy-of-group-selectionism

>Or with a spectrum of intelligence where some do more intellectual labor and others more menial.

Of course, lower-IQ menial workers still reproduce and pass on their lower-IQ genes, so this isn't at all comparable to homosexuality (yes, many gay men have historically had children with women regardless, but that's obviously in spite of, not because of, their gayness, so it's not relevant here).

Expand full comment

I don't have a strong stance on this, but if your argument is representative of the claimed oft-repeated debunking, I don't find it convincing. The gay uncle is as related to those he cares for as a grandparent is. He contributes to the propagation of the genes. If there's a competitive environment (which selection assumes), then this extra support could help produce the next generation, i.e. the family with the gay uncle has fewer offspring in his generation, but those offspring are more successful and have more offspring than their peers who didn't have gay uncles.

Expand full comment

You did not seem to address the phenotype dispersion part.

Expand full comment

> Gayness cannot be selected for, because you cannot select for (what is functionally) infertility. It doesn't matter that a gay uncle shares a lot of genes with his nieces and nephews, the specific genes for homosexuality have no way of being selected for.

I'm not arguing *for* the Gay Uncle hypothesis which AFAIK is not strongly supported, but this statement is not very accurate. All it takes is for the gene(s) to provide a substantial helping benefit and to only cause infertility in some cases, for example with an environmental trigger. The gene can then be present, unexpressed, in the individuals that are beneficiaries of helping behavior. If (benefit * unexpressed copies) > (cost * expressed copies), then the gene is selected for.

As an extreme example, this occurs in eusocial species such as ants, in which the vast majority of individuals in a colony are permanently physiologically sterile, and work to raise their younger siblings which also carry the same genes. In ants and bees this is favored by a genetic quirk that makes females more closely related to their sisters than to their own offspring, but eusociality also arose in regularly diploid species such as termites, snapping shrimps, and mole rats. Even in these, one is exactly as closely related to one's sibling and their children as to one's own children and grandchildren.

Less extremely, helping behavior by adult individuals who are physiologically fertile but do not reproduce is well documented in several bird species, and IIRC it's more common for men with several older brothers to be gay, which is both a possible environmental "switch" and a plausible reason to invest resources into one's siblings rather than direct offspring.
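
A minimal numeric version of the bookkeeping described above (Hamilton-style rB > C accounting; every number here is invented purely for illustration):

```python
def inclusive_fitness_gain(extra_nieces_nephews, own_offspring_forgone,
                           r_niece=0.25, r_child=0.5):
    """Gene-copy accounting: copies gained through helped kin minus copies
    lost by not reproducing directly (Hamilton-style rB - C)."""
    benefit = r_niece * extra_nieces_nephews
    cost = r_child * own_offspring_forgone
    return benefit - cost

# a hypothetical helper who forgoes 2 children of their own:
for extra in (2, 6, 10):
    gain = inclusive_fitness_gain(extra, own_offspring_forgone=2)
    verdict = "spreads" if gain > 0 else "does not spread"
    print(f"{extra} extra nieces/nephews -> net gene copies {gain:+.2f} ({verdict})")
```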

Expand full comment

This argument isn't correct. Indeed, if correct it would disprove all sorts of well accepted evolutionary mechanisms like selfish genes.

Consider a gene which caused an increased chance of homosexuality for the third and fourth male offspring of a single mother. All you need to select for that gene is for the benefits from the greater care/resource allocation from those gay uncles to exceed the expected increase in offspring from having them procreate as well.

The gene is selected for because those who are more likely to have the gene (kids of non-gay relatives) are more likely to successfully reproduce long term. Your error is in assuming the gene guarantees homosexuality and non-reproduction.

Expand full comment

Didn't Sasha Gusev use group differences in schizophrenia PGS to argue that they were biased by ancestry? Seems like it might be related to 1.

Expand full comment

"Studies seem to mostly support (1), for example this study of ancient hominid genomes finds that schizophrenia genes are getting less common over time"

Have you read Julian Jaynes' The Origin of Consciousness in the Breakdown of the Bicameral Mind?

Expand full comment

Explanation 3 for those genes hanging around actually seems quite plausible to me. In aging biology, one of the key theories about why we age is ‘antagonistic pleiotropy’, where pleiotropy is the technical term for genes having multiple functions in different tissues/times of life/contexts. The idea in this case is that a gene that causes an animal to grow up and reproduce a bit more quickly and efficiently will be passed on even if it goes on to cause late-life deterioration, because by the time you’ve made it to late life you’ve already passed on your genes (hopefully more quickly and efficiently thanks to the antagonistically pleiotropic gene variants you carry) so you’ve achieved your purpose, evolutionarily speaking.

It’s ‘harder’ for evolution to get rid of a gene that also has a small advantage than one that is merely bad for aging (or schizophrenia) and has no counterbalancing advantage, so explanation 3 actually seems more likely than 1, though both exist on a continuum to some extent.

Expand full comment

Isn't the argument, though, that it's bad for reproduction?

Expand full comment

It might not be bad for reproduction if, say, a 0.1% increased risk of schizophrenia is (more than?) counterbalanced by a 0.2% decreased risk of death from infectious disease or whatever.

Expand full comment

To add to this, reading the article, I thought #3 was the 'obvious' reason we have polygenic schizophrenia. If genes are individually advantageous, but don't play well together if you have many of them in your genome (the biochemistry equivalent of "too many cooks", if you will), evolutionary processes will have a hard time filtering them out completely.

Say there are genes A, B, C, D, E and F that contribute to schizophrenia, and each of them also has an upside. Say if you have four or more of them together, you get schizophrenia. So maybe one branch of human genes evolves* toward just having genes A, B and C, and another branch of human genes evolves* to have D, E and F, but it's not possible for us to see from the outside what set anyone has. Now pair up two people like this, and you don't even need to be particularly unlucky to suddenly have four of the schizophrenia risk genes, e.g. A, B, D and E.

(* my understanding is that evolution would probably not even get this far, but let's run with this for the moment.)

This is an extreme example, in that this is a very small number of genes and a low threshold, whereas in real life it's a larger number of genes with a higher threshold, but that just means evolution has an even harder time selecting against them, since e.g. if there are 30 genes and 27 is the threshold at which you get schizophrenia, someone with 26 of them is fitter than someone with e.g. 8 of them. Basically, more is better, until it suddenly isn't.

(Other caveats are that some genes may play nicer with other genes than some others, so it's not entirely linear. Maybe you just need 22 genes to develop schizophrenia if you have genes N and T, for example, but 27 of them if you have only either N *or* T. But again, that doesn't change the problem for the evolutionary process.)
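
Here's a rough simulation of the 30-loci / threshold-27 version of this (using the toy numbers above; the haploid, one-allele-per-locus inheritance is a deliberate cartoon):

```python
import numpy as np

rng = np.random.default_rng(2)

n_loci, threshold = 30, 27

parent_a = np.array([1] * 26 + [0] * 4)  # 26 risk alleles: below threshold, fine
parent_b = np.array([0] * 4 + [1] * 26)  # 26 risk alleles, but a different 26

def child(p1, p2):
    pick = rng.integers(0, 2, n_loci).astype(bool)  # each locus from a random parent
    return np.where(pick, p1, p2)

kids = np.array([child(parent_a, parent_b) for _ in range(100_000)])
over = (kids.sum(axis=1) >= threshold).mean()
print(f"fraction of children at or over the threshold: {over:.2f}")
# roughly a third of the children cross the threshold even though neither parent
# did -- selection acting on the parents never "saw" the risk
```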

Expand full comment

I'm not sure I buy this model. If there were a million independent genes that each increased your risk of schizophrenia by an absolute 1-in-a-million chance, and everyone had an independent 50/50 chance of having each gene, then it would indeed be hard to filter that out through evolution, but there also wouldn't be such a thing as being genetically prone to schizophrenia, because every single person on earth would have a chance of getting schizophrenia between 49.5% and 50.5% no matter how lucky they got with their genes.

The problem is that many small independent effects don't necessarily add up to a large difference in effects between people - a large difference between the worst possible and best possible outcomes, yes, but not necessarily between the 1st percentile and 99th percentile.

The fact that some people *can* be predicted to get schizophrenia at a much higher rate just on the basis of their genes means there's something for evolution to easily select on - if those people stop reproducing, the total amount of schizophrenia genes in the population will materially go down (because the ones evolution removed had, by assumption, like twice as many of those genes as normal). This assumes a model where these things are basically additive, but that seems roughly right to me.
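
Putting numbers on the million-loci hypothetical above (each locus carried with probability 0.5, each adding an absolute one-in-a-million to risk; purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
n_loci, per_locus_risk, n_people = 1_000_000, 1e-6, 100_000

carried = rng.binomial(n_loci, 0.5, size=n_people)  # risk loci carried per simulated person
risk = carried * per_locus_risk

print("1st percentile risk: ", np.percentile(risk, 1))
print("99th percentile risk:", np.percentile(risk, 99))
# both come out around 0.499-0.501: under these assumptions, almost nobody is
# meaningfully more "genetically prone" than anybody else
```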

Expand full comment

Would this reasoning not also apply to other polygenic traits? What model do you think is workable, then, for schizophrenia and other traits such as intelligence, etc.?

Expand full comment

I'm not saying that schizophrenia isn't polygenic! Just that it shouldn't be all three of:

(1) caused in large part by some genes with roughly additive effects on risk

(2) difficult for evolution to get rid of if it wanted to

(3) highly variable in how much genetic risk different people carry

I'm pretty agnostic as to which of those three properties fails to hold - maybe things are pretty nonlinear or non-genetic, maybe it's easy to evolve out but holds some hidden benefit (or it is rapidly being evolved out, and in the ancestral environment things were different), maybe genetic risk for schizophrenia is fake somehow. But I don't think you can get them all at once, or else evolution can apply the strategy of "filter out the people with lots of risky genes" and do pretty well.

Expand full comment

Simply by random chance, some people can wind up with a lot more deleterious alleles on a trait than others (recall that there are lots of traits and selection can only do so much on all of them per generation). The frequency of such people should not be very large though.

Expand full comment

The central limit theorem is a harsh mistress. You can directly calculate how big the outliers would be expected to be based on the postulated individual probabilities and population sizes. And it's not that big.
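
For instance, reusing the illustrative one-in-a-million-per-locus numbers from upthread, the back-of-the-envelope extreme-value calculation looks like this (a sketch, not a real genetic model):

```python
import math

n_loci, p, population = 1_000_000, 0.5, 8e9

mean = n_loci * p
sd = math.sqrt(n_loci * p * (1 - p))
# rough expected maximum of `population` i.i.d. approximately-normal draws
expected_max = mean + sd * math.sqrt(2 * math.log(population))

print(f"average loci carried: {mean:,.0f}")
print(f"unluckiest person on Earth, roughly: {expected_max:,.0f}")
print(f"extra absolute risk vs. average: {(expected_max - mean) * 1e-6:.4f}")
# about a third of a percentage point above the mean -- not that big
```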

Expand full comment

> If there were a million independent genes that each increased your risk of schizophrenia by an absolute 1-in-a-million chance …

I don’t think that’s the scenario that’s being argued. It’s not necessarily linear like that. But also, there may be several genes or combinations that are each necessary, but not sufficient. So individually they don’t do anything, but if you get all of them, you increase risk of schizophrenia by a significant degree. If you get only 90% of them, you’re just a visionary genius.

There might still be a lot of genes involved, increasing chance of each necessary condition by a small percentage, but reaching the threshold for all necessary conditions in one person is still rare.

Expand full comment

>The scare-mongering here has to be false - that is, it can’t be bad to choose an embryo at the 50th percentile of schizophrenia risk rather than the 99.9th, because half of people are at the 50th percentile of schizophrenia risk and nothing bad happens to them

I think the worry is that if we either learn to genetically engineer or do embryo selection for long enough, we could pick people with 0% schizophrenia genes (which wouldn't happen naturally) and only figure it out 20 years later, when we have a new generation of weirdly uncreative adults. I don't take this fear totally seriously, but it does imply we should be pretty careful with gene selection if/when we do get it, to avoid intense selection pressure from everyone doing it at once.

Expand full comment

Alternatively, if schizophrenia genes do boost creativity then maybe every visionary tech founder is genetically at the 99th percentile of schizophrenia risk, and removing those genes would make them just normal high-quality tech engineers. Which isn't "something bad happens to them" in a detectable way, but eliminating visionary tech founders would be bad for society.

(Again, this is low probability - it requires pretty specific assumptions - but I think it's plausible enough to worry about a bit)

Expand full comment
Feb 8·edited Feb 8

It's likely too polygenic and too rare for such selection to take place. Intelligence is much more of a continuous trait, and it's one of the most sought-after enhancements. Vastly more interest and resources will go into increasing intelligence by a few percentiles than into reducing schizophrenia risk by a few percentiles. If there were something that was obviously a massive risk factor for serious schizophrenia, that would have a high chance of being selected out. But I very much doubt many people would be interested in driving the risk as low as possible, especially given how hard that would be to do.

Also, higher-IQ schizophrenics have weaker negative symptoms on average than those with lower IQ, meaning that if you've made a population higher in average IQ, there's a chance that having schizophrenia genes will be less of a problem than it is today, and therefore there will be less pressure to eliminate them (though that relationship may not necessarily hold if we're artificially selecting for certain genes).

Also, of course, if two laymen on some blog are talking about this, these considerations are likely to be obvious enough that they may well factor into a future embryo selection framework. Silicon Valley types may actually select for a certain level of these genes precisely to increase the probability of having creative kids. Or by that stage we may have a better understanding of a direct link between alleles and creativity, and be able to select for it more directly (without increasing the risk of schizophrenia).

Expand full comment

"based on the evolutionary argument above, I doubt this one"

I think the evolutionary argument also applies to (3), not just (2).

Expand full comment

Amateur question: when you select embryos based on the presence or absence of a specific gene, is there a risk of inadvertently selecting for unrelated traits due to gene correlation? Is there such a thing as gene correlation? For instance, in choosing an embryo with a low risk for schizophrenia, might we unintentionally favour one with a higher risk for heart attacks? This would not be because the same genes cause both conditions, but rather because embryos with genes reducing schizophrenia risk might also possess genes that increase the risk of heart attacks.

Expand full comment

The term is "pleiotropy".

Expand full comment
Feb 9·edited Feb 9

Pleiotropy refers to the situation RH explicitly ruled out. What RH is asking about would be called linkage disequilibrium. And yes, linkage disequilibrium is very common - consider any two historically separate populations and you should see lots of it.
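
A minimal numerical illustration, in case it helps (haplotype frequencies invented; locus A stands in for a schizophrenia-risk variant, locus B for an unrelated heart-attack-risk variant that happens to travel with it):

```python
# four possible haplotypes across two loci, with made-up population frequencies
hap_freqs = {"AB": 0.30, "Ab": 0.10, "aB": 0.10, "ab": 0.50}

pA = hap_freqs["AB"] + hap_freqs["Ab"]  # frequency of allele A
pB = hap_freqs["AB"] + hap_freqs["aB"]  # frequency of allele B
D = hap_freqs["AB"] - pA * pB           # classic linkage disequilibrium coefficient
r2 = D**2 / (pA * (1 - pA) * pB * (1 - pB))

print(f"freq(A)={pA:.2f}, freq(B)={pB:.2f}, D={D:.2f}, r^2={r2:.2f}")
# D > 0 here: selecting embryos on the allele at locus A also shifts the
# frequency of whatever allele at locus B rides along with it
```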

Expand full comment

Pleiotropy is very uncommon.

Expand full comment

Why do you say that? That is the precisely the opposite of what I have read in various places, for example, in Plomin, *Blueprint*, and Harden, *The Genetic Lottery*.

Expand full comment

I struggle with how to think about traits that are related to how people treat you. Physical attractiveness is clearly highly heritable. Does that mean that random people smiling at my oldest child when she was a baby is a heritable trait? Surely a lifetime of people being nice to you for no reason has a big effect on someone, and if you did a gene-based study you would almost certainly find 'people are nice to you' is highly heritable, but it's a clearly environmental factor. How is this controlled for, if at all?

Expand full comment

Surely it must have a large effect? The logic of Trivers' theory of genetic conflict is that we would evolve to be robust to such environmental effects.

Expand full comment

>but it's a clearly environmental factor.

An environmental factor for what?

Within a given environment, people will be smiled at by strangers at different rates. There will be a correlation between these rates and a person's genotype, hence a heritability of being smiled at.

Expand full comment

Regarding hypothesis 2: across several cohorts in different countries, having a higher polygenic risk score for schizophrenia is positively correlated with having an artistic profession and with measures of creativity. https://www.nature.com/articles/nn.4040

Free full text on ResearchGate: https://www.researchgate.net/publication/277889916_Polygenic_risk_scores_for_schizophrenia_and_bipolar_disorder_predict_creativity

Expand full comment
Feb 8·edited Feb 8

In Theory 1, a gradual decline in prevalence may have been hampered and slowed in past times by schizophrenic "scary bosses", whose symptoms of occasional sudden morose suspicion and paranoia, possibly leading to unpredictable violence, may have helped them maintain dominance through fear. And (male) chiefs in ancient times tended to monopolize women and have lots of children.

Arguably this applies more to hypothesis 2, but with not so positive creativity in the form of menacing cunning.

Expand full comment

I think no. 1 is the correct answer. The mathsy way of expressing this is "Nearly Neutral Theory", which says that just knowing the "selection coefficient", which is a number for how deleterious the mutation is, is not enough. You also need to know the effective population size, because individuals don't evolve - populations do. In species with high effective population sizes (e.g. bacteria), slightly deleterious mutations are eliminated more quickly. The lower the effective population size (e.g. humans), the more likely that "nearly neutral" mutations are invisible to selection. If an allele isn't eliminated, the only other option is that it eventually becomes fixed, i.e. the mutated version becomes the new normal, even though it was a slight downgrade.

https://en.wikipedia.org/wiki/Nearly_neutral_theory_of_molecular_evolution
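
To see the Ne-dependence in numbers, here's a sketch using Kimura's standard diffusion approximation for the fixation probability of a new mutation (the selection coefficient is just an illustrative value):

```python
import math

def fixation_prob(s, Ne):
    """Kimura's diffusion approximation for the fixation probability of a new
    (additive) mutation with selection coefficient s in a diploid population
    of effective size Ne, starting at frequency 1/(2*Ne)."""
    if s == 0:
        return 1 / (2 * Ne)
    p0 = 1 / (2 * Ne)
    return (1 - math.exp(-4 * Ne * s * p0)) / (1 - math.exp(-4 * Ne * s))

s = -1e-5  # a very slightly deleterious mutation
for Ne in (1e3, 1e4, 1e5, 1e6):
    print(f"Ne={Ne:>9,.0f}: P(fix)={fixation_prob(s, Ne):.3e} "
          f"(neutral would be {1 / (2 * Ne):.3e})")
# when 4*Ne*|s| << 1 the mutation drifts as if neutral; when 4*Ne*|s| >> 1 it is
# efficiently purged -- "nearly neutral" depends on Ne, not just on s
```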

Expand full comment

Now add in, at least among certain groups:

(1) birth control and

(2) other newish financial/social/economic pressures to delay childbearing into one's 30s

I speculate that if (1) is correct, these factors should quickly and massively decrease the incidence of incapacitating early-onset schizophrenia, particularly in males, who often express it by age 20 or so.

By "quickly", I mean within a few generations? So, would be an interesting and somewhat controlled experiment but also a long one.

Expand full comment

Er, sorry, by "if (1) is correct" I meant Scott's Hypothesis #1 above, the "evolution hasn’t had time to remove all of them yet" bit.

Expand full comment

Why would those things cause a massively polygenic trait to decrease? If anything, shorter generations and larger populations accelerate selection!

Expand full comment

Presumably he is saying that if one goes crazy before having kids (thus becoming less likely to procreate at all) then that increases negative selection.

Expand full comment

I think the case for 2 is pretty strong, especially with the framework of something like the Diametric model which argues for the partial integration of both schizophrenic and autistic traits along a spectrum. If this model is correct it would explain the ubiquity of both autistic and schizophrenic traits across populations.

https://doi.org/10.1038/s41380-022-01543-5

Expand full comment

Huh, so 1 million years ago at the dawn of humanity, Schizophrenia rates were through the roof?

Did civilisation only progress recently because selection effects had finally pushed Schizophrenia rates low enough to permit functional societies?

Reminds me of that "The Bicameral Mind" book, it'd explain a lot of their analysis of history if everyone used to be way more Schizophrenic

Expand full comment

Yeah, it struck me as a little like the Big Bang, where (somehow) the universe starts in a very unlikely state of minimum entropy. Where did all these bad schizophrenia genes come from anyway? Are they just the bad luck of the exact population present at that big population bottleneck they speak of, some tens of thousands of years ago?

Expand full comment

Presumably they're continually being added by mutations.

Expand full comment

Sure, but explanation 1 extrapolates into a past that has way more of them than we have now. Was there a period when we acquired them much faster than we selected them away (unlike, we gather, now, when we are apparently making headway, albeit slowly, at reducing them)?

Expand full comment

I was thinking the bad schizophrenia genes were only deleterious once our cognitive capacity was great enough that our species was relying on it for survival.

We hit a tipping point where the genes went from neutral(chimps don’t care about schizophrenia) to detrimental(humans struggle to survive with schizophrenia)

And natural selection has been cleaning house since then?

Expand full comment

That’s not bad, though I suspect it underestimates how much cognition chimps do. I would think hallucinations and delusions would be countersurvival even in chimps.

I found a paper “Towards a natural history of schizophrenia” that claims non-human primates never have schizophrenia (which is surely different from just not being bothered by it?).

Their claim, I think, is that it’s basically a spandrel from evolving human-level intelligence, which was such an advantage that it outweighed the downsides of occasional schizophrenia. Perhaps since then evolution had been trying to keep the former while weeding out the latter, which would presumably make the process *especially* slow.

Expand full comment

Maybe we had different schizophrenia-causing genes back then.

Expand full comment

But we started with schizophrenia being evenly distributed through the human race, so there are no low-schizophrenia groups to be found. Or might there be very small (family-sized? village-sized?) groups with high or low schizophrenia rate?

Expand full comment

You can have "schizophrenia is evenly distributed across all populations" or you can have "schizophrenia is inherently severe/very bad", but you can't have both, because the badness of schizophrenia is not evenly distributed across all populations. The classical rejoinder to "prognosis is better in some societies than others" is misdiagnosis of non-schizophrenia as schizophrenia, so if your diagnosis rates look the same...

(The other problem for "schizophrenia is evenly distributed across all populations" is that it's not evenly distributed *within* populations, e.g. within diverse countries there are noticeable racial biases. This means either source populations need to have varying rates of schizophrenia, or there need to be major environmental factors relevant to e.g. recent immigrants, or there need to be diagnostic biases in what someone having a psychotic episode gets diagnosed with. #2 is incompatible with the 'hard' geneticist explanation. #3 assumes a very heterogenenous schizophrenia, which is probably true but also problematizes 'hard' explanations. I think everyone sleeps on #3.)

(The other other problem is that as soon as you assume "consistent across time", where "time" refers to since it was defined as a concept, everything collapses under incompatible definitions. People tend to overestimate the degree to which the definition "consistently narrows over time", but they're getting that overestimate from something very, very real.)

There are isolated areas (e.g. Kuusamo in Finland) with unexpectedly high schizophrenia rates no matter how you slice it. The prevalence of schizophrenia is not actually very clear (1% is a meme overestimate), so getting much further than that is hard. Finland-in-general stats sound suspiciously high to me (they look more like 1% than anything else does). Claims of unusually low schizophrenia prevalence in any given area don't seem to replicate well.

Expand full comment
Feb 8·edited Feb 8

You can have the genetics be evenly distributed while the environmental component (of likelihood) is not. It seems clear that the badness will also vary with culture/population, but more in degree than in kind.

Expand full comment

I don't think this explanation is true for intelligence and height.

Selection is proportional to the additive heritability on the absolute scale, which makes your explanation true for schizophrenia: If there is a lot of liability-scale heritability, but the heritability is due to many variants of little effect and the prevalence is low, that translates to very little absolute-scale heritability, and it is true that evolution would have a hard time removing it.

But for height and intelligence, the tiny effects happen on the absolute scale, which means that if they have a strong relationship to fitness, evolution would be very quick at changing them. Like I guess it wouldn't exactly lead to eliminating the variants, but it would move you into a region with diminishing returns to them, making the evolutionary aspect less relevant.
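
For concreteness, the standard way to convert the liability-scale heritability above to the observed (0/1) scale is the Dempster-Lerner/Falconer relationship; a sketch with rough illustrative numbers (scipy assumed available):

```python
from scipy.stats import norm

h2_liability = 0.8   # illustrative liability-scale heritability
K = 0.007            # illustrative prevalence

t = norm.ppf(1 - K)  # liability threshold
z = norm.pdf(t)      # normal density at the threshold

# Dempster-Lerner / Falconer relationship between the two scales
h2_observed = h2_liability * z**2 / (K * (1 - K))
print(f"threshold t = {t:.2f}, heritability on the observed 0/1 scale ≈ {h2_observed:.3f}")
# only a few percent: lots of liability-scale heritability, but very little
# heritability of the actual yes/no outcome for selection to act on
```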

Expand full comment

I'm a postdoc working in bioinformatics, with a focus on cancer & polygenic scores but with a background in (mathematical) evolutionary theory. In my experience there is a surprising amount of misinformation - or just lack of knowledge, depending on how you see it - in medicine and genomics, because many experts are so used to monogenic risks, which were all we could feasibly find for a long time, that they forget that polygenicity is most probably the norm.

Polygenicity is arguably the core of the modern synthesis that happened in the early 20th century. At that point, Mendelian inheritance had been proven, but we could see in many traits, such as height, that there was instead continuous variation. This was explained by Fisher in 1918 by large numbers of small-effect loci, see "The Correlation between Relatives on the Supposition of Mendelian Inheritance" (he didn't use the word polygenic, but it's the same concept). So we're currently mostly just retreading ground that was covered a literal century ago.

There are a lot of follow-up papers on this and closely related topics, such as mutational burden, Muller's ratchet, etc., that all show how small-effect mutations with negative fitness - which are the most common mutations to begin with - can stay in a population for a long time and even become fixed.

Expand full comment

Agreed, and I wish people would just read an introductory textbook on population genetics or something before having these debates - this stuff was covered in my undergrad classes back in the early 2000s (with the caveat that I've now forgotten most of it). It's ancient history at this point.

Expand full comment
Feb 8·edited Feb 8

I'm not sure the problem is that the knowledge was lost.

The problem appears to be more that people are unwilling to draw the correct conclusions, so they intentionally ignore the relevant knowledge.

If you make them read a textbook, they just won't apply it to the issues they care about.

Expand full comment

"Evolution hasn’t had time to remove all of them yet. Because a gene that increases schizophrenia risk 0.001% barely changes fitness at all, it takes evolution forever to get rid of it. And by that time, maybe some new mildly-deleterious mutations have cropped up that need to be selected out."

This does not make sense as a story. It is not harder for evolution to remove 100 genes of small effect than it is for it to remove 1 gene of large effect; the response to selection is controlled ONLY by (narrow-sense) heritability and NOT by how that heritability is concentrated across genes. Once you know the heritability, you know how easy it is for evolution to increase or decrease the trait; further knowledge of whether it's one gene of large effect or many of small effect does not tell you anything else about the response to selection!

It is, of course, possible for some unfit genes to remain in the population due to the fact that they are constantly reintroduced via mutation. I'm not saying purely-bad genes are impossible or anything. I'm just saying that "many genes of small effect" does not have any explanatory power for why the genes were not selected out.
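
In symbols, the heritability point above is just the breeder's equation, R = h²S; a trivial sketch with made-up numbers:

```python
h2 = 0.3   # narrow-sense heritability of the trait (illustrative)
S = -0.5   # selection differential, in phenotypic standard deviations (illustrative)

R = h2 * S  # breeder's equation: expected per-generation response to selection
print(f"expected change in the trait next generation: {R:+.2f} SD")
# whether that 0.3 comes from 10 loci or 10,000, the one-generation response is the same
```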

Expand full comment

"Most random mutations are deleterious" is so oversimplified high-school biology. Many random mutations that we know about are deleterious, because that's how we know about them. Most random mutations are completely neutral in their effect, being either silent mutations (where there is no change to amino acid sequence of resulting proteins) or in non-coding regions. For the ones that do have an effect, we pay attention to bad things but not to improvements*, so we are more likely to be unaware of the beneficial* effects of random mutations.

*And all of this chatter about "more fit," "advantageous," "improvement," "beneficial" is dependent on the environment. What is advantageous in one environment may be an extinction-level trait in another. There is no such thing as an "evolutionary mistake"--there's just a trait that may not have been selected for yet.

Expand full comment

Which is also why talk of 'dysgenic' traits is a good tell for people who don't know as much as they think they do.

Expand full comment

A trait that kills you in the womb will never be selected for.

Expand full comment
Feb 8·edited Feb 8

As a scientist in this field, I really have to disagree strongly here. Synonymous mutations aside, the majority of mutations are in fact deleterious. This is a direct result of most of biology having relatively strict limitations, so any variation is much more likely than not to be net-negative.

The bias also goes the other way than you are positing - strongly deleterious mutations will quickly make cells non-functional, so most of our studies are actually only about the set of mutations that aren't sufficiently bad. Then you have to consider that limited sample sizes mean we're biased in favor of mutations that have a higher prevalence (usually due to selection effects) - in other words, we tend to oversample the high-fitness mutations on multiple levels.

And unfortunately the same goes for the last point. I really wish the world functioned like an RPG where we all get allocated the same number of stat points and nobody is really worse off, just with some people having unusual combinations adapted to unusual environments. But the reality is the opposite: the majority of mutations simply make a certain function perform less efficiently, with no positive upside.

Expand full comment

Of course strongly deleterious mutations will quickly make cells non-functioning--or, like @TGGP notes, a trait that kills in utero will not be selected for--but that's not what Scott is talking about here. He's making the claim that random mutations with subtle effects (so subtle they, at minimum, still allow one to be born and survive to reproductive age) are still going to be net deleterious, and I don't think there's any reason to believe that is the case.

Our current biology is not the pinnacle endpoint of a great chain of being. All our current enzymatic processes are not perfect. Even if a mutation makes an enzyme perform slightly less efficiently (but still not be lethal), there will continue to be selection on the descendants of that mutant to increase efficiency.* Usually, this is not a straight reversal of the first mutation, but compensatory mutations that can lead to an even more efficient process. (*And this is assuming that "more efficient" = "superior," which it may not be. For instance, the lack of fidelity in copying genes for antibody production is what makes antibody production work; people who cannot randomly mutate the DNA in their B- and T-cells are much more at risk of dying from infections than people who have truer DNA replication.)

Disclosure and biases: my background is in studying *bacterial* mutations for drug resistance and phage resistance. Drug-resistance mutations leading to a more fit strain of bacteria, because the resulting mutant enzyme is more efficient even in the absence of antibiotics, is a well-documented outcome. It's much easier to study mutations and their potential fitness costs in single-celled critters that can have 30 generations by tomorrow than it is to study the same thing in complicated, multicellular humans who will have 30 generations in, ah, like about 700 years from now. But that's also why I think we should be more humble in our claims about what is "advantageous," "deleterious," or an "evolutionary mistake" in human alleles with subtle effects.

This isn't the same thing as saying "no one is really worse off;" some people are born with excellent combinations of traits for their environments, and others aren't. Some sets of traits are so bad that they're just fatal, as we've noted. The world isn't fair in that way. But that's not what's being talked about here. The examples cited in Scott's essay here of "advantageous" traits are clearly tied to environment: Tallness is good because, um, chicks dig it and it makes you a better hunter? Well, idk, dudes dig shorter chicks and also being short makes it easier to hide from predators and potential prey. Even in the hackneyed realm of our-hunter/gatherer-past evolutionary psych "tall is advantageous" has problems. Other examples, like "creativity" as advantageous are even more culturally-bound.

Expand full comment

What's your field? Oncology?

Because most germline mutations that make it into the population are more or less neutral. And it's the population level we're interested in here.

Expand full comment

I think this depends somewhat on how exactly you define deleterious and most.

Is it true that the majority of non-standard sequences in fact experience negative selective pressure? No doubt. And yes breaking things is easier.

At the same time, it's also true that in the very long run populations benefit from a degree of genetic variation, which is ultimately the result of said random mutations. Evolution could have selected for substantially better genetic error correction in germ-line cells than it has, but you need to balance the costs of mutations that fuck you up against the benefits of a wider gene pool that has the capability to discover beneficial mutations and allow adaptation.

Long term, the expected value of any mutation is probably around zero (harms balanced by benefits to diversity).

It seems at least plausible that most mutations could, in the right but unlikely circumstances, contribute to positive traits. In other words, there is probably some useful protein that the mutation decreases the edit distance to... so in a sense most mutations are near zero in effect, because the tiny chance they save some future descendant by enabling future mutations balances the more immediate harms.

Expand full comment

> this study of ancient hominid genomes finds that schizophrenia genes are getting less common over time

Wild speculation: this helps explain the explosion of ancient religions as opposed to the relative dearth of new gods. Far more people heard voices in ancient times, attributed them to the divine, and boom.

Expand full comment

Yeah, I had the same thought. I remember a talk by Robert Sapolsky where he says religion is due to schizophrenia-adjacent people: people with high polygenic schizophrenia scores who aren't quite schizophrenic themselves. That's where the miracles come from, he says.

Expand full comment

Yes, true. "Sybil" is a perfect example of this; she was abused by her mother, who was a Seventh-day Adventist. As I wrote before (if my comment was read), there are the chromosome studies if you dig, but having grown up with a schizophrenic in my family, whose children do have MPD/DID, I personally think it's environment. Bundy, Milligan, Dahmer, and so on were not only by-products of horrific upbringings; after "Sybil" there were also actual studies done on the physical brain.

I do have to point out, though, that in the '50s, when this happened to "Sybil" (16 personalities), MPD was a rare diagnosis, with only approximately 200 cases; it later grew widespread in the psych community. After her book and film came out in the '70s, it spiked to thousands of diagnoses across the US. In the late '80s, schizophrenia rose to 40 thousand cases. My question is: is it genetics or environment or both?

Personally, I think we have more mental illness than ever before. No one brings up the aluminum from the sky, from the '70s (chemtrails) to the present, or the Codex Alimentarius, the food codes, which are now owned by Monsanto, or the phones that jail our minds with frequency and social media, hence "cell" phones (which, btw, can be jailbroken), plus all the vaccines you all were given as children and young adults (not me, lol). Put all of this together with genetics... again, my thought process leads me to believe that it's environment. We are not born with this.