286 Comments

> intelligence, height, schizophrenia, etc - are necessarily massively polygenic, because one side of them is better for fitness than the other.

I'm curious which side of height you think is better. Being extremely tall or short is clearly bad for fitness (see the health issues of giants and dwarves). Being close to average seems optimal.


I'm tall and I almost died of cancer at 38. Height appears to raise cancer risks.

My feeling is that women are too prejudiced in favor of tall men, too heightist. That bias made more sense when height was heavily controlled by nurture, so being tall was a sign that a man had relatives with resources when he was a child to help him attain his height. And having affluent relatives is a good thing in a potential husband. But now that height is mostly controlled by nature, it's dumb for women to be so heightist.

But criticizing women's prejudices is not a high priority in our culture, so almost nobody complains about heightism.


Maybe humans just like round numbers. In which case I predict that by the year 3000, we will have diverged into 2 species:

Americans, who are all exactly 6 feet tall.

Everyone else, who are all exactly 2 meters tall.

Americans will consider the rest of the world freakishly tall, while the rest of the world will consider Americans grotesquely short.


Mabey! (comical shrug)


Gregory Cochran has some blog posts about how he thinks height is also indicative of lower mutational load, not just a good childhood environment. It doesn't seem fair to criticize preferences that are probably innate, either.


Male height might be like the peacock's tail?


Don't be too harsh on women having silly preferences for men, these are mostly inborn just like male preferences for women (nice curves, narrow waists and whatnot). If we told men to change their preferences towards wider waists - because, let's say, there are so many women with wider waists and quite few with narrow ones, and the narrow-waist types are actually worse at enduring famines - do you think men would really be able to change their preferences?

Also, women don't really prefer very tall men, just a little taller than themselves. Short women are quite fine with moderately short men. The dating-site phenomenon of ladies filtering out shorter men is probably some problem with the sites: height is the main thing they can filter on, and if they could, they'd filter on something more relevant.

Feb 8·edited Feb 8

> If we told men to change their preferences towards wider waists - because, let's say, there are so many women with wider waists and quite few with narrow ones, and the narrow-waist types are actually worse at enduring famines - do you think men would really be able to change their preferences?

This is an interesting question as to waist circumference specifically, because fatness is well attested as being historically attractive.

For just one fun example, modern Chinese women, like most women, are intensely concerned with being thin. The word for this is 瘦, and it is highly desirable. The opposite is 胖, fat, which is bad.

But if you look at the characters themselves, 胖 uses a "body" radical (ok, technically "meat", but it's the normal one for body parts), while 瘦 uses the overtly negative "disease" radical.


Hm, I'm not sure about the "fat was always beautiful" history. Nowadays worldwide polls indicate that men preferring lower-normal BMIs is almost a human universal, with the poorest and hungriest hunter-gatherers picking the BMI=23 figures and all others around 20-21. Of course, the hungriest of them preferring wider women does hint at a slight cultural or circumstantial effect. It just doesn't go very far into higher BMIs.

If you say fatness is well attested to have been historically attractive, which evidence do you mean? (Well, the hieroglyphics are some evidence, but not too strong.) As far as I understand, men generally favor both soft round curves and thin waists. Back in our constantly hungry history most women were skinny and somewhat lacking in the curves department; therefore curves perhaps got glorified and elevated when describing a beauty. While now most have good curves but lack the thin waist, so the latter gets elevated in descriptions. Does that make sense?


IIUC, historically men prefer women in their teens, who are still growing, so they won't be too fat (on average). But I also understand (contradictorily) that in many cultures men prefer women who have successfully been pregnant... which would increase the odds of "fat". I suspect that there's a multi-factor preference going on here, with different factors not always in agreement. Certainly in some cultures an extremely young wife is a sign of status. (I'm referring back to an article I read by an anthropologist multiple decades ago that claimed that among the Australian aborigines the chief was likely to have a wife as young as 8 years old. [What "wife" means in this context, however, I'm not sure.])

Feb 8·edited Feb 8

South African women wear what appear to be padded inner tubes around their waist to prevent any appearance of narrowness. (Source: watching Family Feud South Africa.) It's not universal by any means, but it appears to be a conventional way to dress yourself up.

Wikipedia showcases this painting in its article on the Judgment of Paris. ( https://en.wikipedia.org/wiki/File:The_Judgement_of_Paris.jpg ) It's from 1599 and you might notice that Hera (the one in the middle) has been drawn with rolls of fat bulging from her waist.


I've thought about that era's European paintings, with those well-fed women. They look like real women, right? My guess is that these were painted from actual live human models. And actual live women don't generally have perfect figures from a male point of view (they never had much selection pressure for that, when you think of it). When men try to paint perfect women off the top of their heads, they come up with quite different figures, like those in computer games and anime. Those have extremely narrow waists coupled with nice round hips. I'm not a man myself so it's only a guess, but it kind of looks like this is what they actually prefer? There are also plenty of ancient figurines of women with the same kind of unnaturally narrow waists from different places in the world.

On the other hand, every now and then there are strange beauty fads here and there, like the one with soft baggy jawlines necessary for European women during a century or so. And the narrow hips fad of 20th-21st century. Culture interfering with inborn biases?


>this is the main thing they can filter, if they could they'd filter something more relevant.

They could have, say, filtered on alcohol or tobacco usage. But they don't want that.


This is interesting. Alcohol and tobacco don't matter if one's looking for a short-term mate; maybe that's the reason. For a long-term relationship I'd predict many would filter out tobacco (and it would be very strange if they didn't), but certainly not alcohol - it's kinda nice to have a glass of booze together.


Cancer, like schizophrenia, doesn't always present by traditional mating age; someone could be 25 and already have two kids, so it would be too late to avoid mating with someone with the gene(s). On the other hand, being taller than average has some social advantages.

Plus, people didn't know that cancer or schizophrenia weren't caused by too much black bile or demons centuries ago.


"But criticizing women's prejudices is not a high priority in our culture, so almost nobody complains about heightism."

Brutal


It makes no sense to criticize anyone’s sex preferences. They are not based on deliberate considerations of offspring’s reproductive success. We may as well tell men to stop being more attracted to younger women because older ones are more mature, and our life spans are now long enough, and family planning technology good enough, that age at first childbirth matters less.


Our society spends a huge amount of effort criticizing some people's sex preferences, such as, recently, older men who like younger women. Criticizing female heightism might or might not be effective, but it's striking that virtually nobody does it in the mainstream media, while constantly denouncing various male -isms.


Yeah we can imagine scenarios favoring both short height (lower caloric consumption, less strain on the joints, faster recovery from injuries) and tall height (stronger, better at fighting, possible sexual selection effects for males). I wouldn't say it's definitely tilted one way or the other.

The modal human is not very tall by Western standards. In India, the average male height is 5'5. In China, it's 5'6. That's about a sixth of the world's population right there.


I've read that soldiers fall into three categories-- big (strong, but need more food and more room), fast (but not especially strong), and enduring (what it says on the label).

author

Thanks for questioning that.

My original answer was going to be that although being very tall seems bad, marginal increases in height seem good. But then there would have to be some point at which that stopped being true, and the height that evolution made us seems as likely to be that point as anything else!

I originally thought that height must be good because 1. it helps with hunting and stuff 2. at least in men it seems to raise sexual attractiveness 3. healthier people with better nutrition are taller. But it's possible that either cancer risk or difficulty getting enough nutrition counterbalance 1 and 2, and 3 is just circumventing an evolutionarily precise mechanism.

Overall I would guess height improves fitness today, but that it might not have in evolutionary times when there was more food insecurity. I've taken that out and replaced it with "strength" as an example.

Feb 8·edited Feb 8

Strength still requires more calories.

One way we can see that strength is not always selected for is that men are stronger than women. If strength were always an advantage, women would also be selected to be strong.

Intelligence also requires calories.


I can see that a human brain burns more calories than the brain of a frog; and humans are smarter than frogs.

But do smarter humans burn more calories thinking than their less gifted peers?


That seems like a near certainty; even if doing the same amount of thinking costs them fewer calories, thinking is more valuable when they do it, so they're going to do more of it.


People don't shut off their brains; people are always thinking.


In the first place, this is false.

In the second place, the amount of thinking people are currently doing is not constant.

Feb 8·edited Feb 8

Chess masters (allegedly) burn a shocking amount of calories during tournaments. Even if they require the same amount of energy at rest, smarter brains may use more energy during peak performance.


We would need to exclude the possibility that chess masters just burn more calories because of what the adrenaline does to their bodies, before we blame it all on the brain.


I had done some reading about it, and that was the conclusion: stress is what burns calories in chess masters.

The brain's baseline energy consumption is unusually high by animal-world standards, but thinking harder does not burn meaningfully more calories.

A fun fact is that the greatest chess masters do quite a lot of physical training. Turns out sport is a good way to train your organism for stress, and chess tournaments are very stressful.


I read once long ago that researchers were surprised to find the opposite, that in solving problems like math, very intelligent people used less energy. I treat such studies with high skepticism and do not know if it has been replicated.

However, having taught math and discovered to my astonishment that most people think about it radically differently than I do (which explains the percentile), I find this consistent with my experience.

As it happens, my mother is schizophrenic, which has been a lifetime of difficulty. She is completely mathematically incompetent.


Don't be too skeptical. People used to solving problems are more apt to recognize a new problem as just a reframe of one they already know how to solve. In which case it's quite likely that they could solve the problem with less effort. ... You say you're a math teacher, so consider the problem of "factor (x -1)^3 = 0". You could probably solve it trivially, but your students...(well, I don't know what level of math you teach, they may not be able to do it after an hour's work).


Completely off-topic but I felt a burning need to reply: "factor (x-1)^3" is a trick question, it's already factored! (x-1)*(x-1)*(x-1). You probably meant "factor (x^3-1)", which is indeed easy when you've done it a million times but hard if you don't know the 'difference of cubes' trick.


Your example is an interesting one and I will add some thoughts about it to my Rehchoortahn reply after breakfast. (I am not a math teacher but taught for a while, unexpectedly, an illuminating step outside my comfort zone alone in a room with a computer.)

I am not skeptical due to the conclusion, which was apparently not clear. I am skeptical of non-replicated research in general, especially with current rampant careerism, and of any studies on topics hard to unambiguously define and quantify in particular. Skepticism does not mean I prefer to doubt the conclusion; I don't know if the conclusion is true or not. Very few studies are as convincing with one experiment as was the double-slit experiment.

Most people learn by rote and that is not the bad thing it is often disdainfully considered to be by those who don't. Rote learning makes it possible for many people to effectively apply concepts they could not have originated themselves, permitting powerful cultural transmission. Most people have astonishing powers of language, pattern recognition and memory.

Personally, I am not good at math because I recognize that a new problem is a reframe of one I already know how to solve; I rarely do that at all. That would fall into the category of "how most people think of math" IMO.

What do the symbols describe? What does it mean? What does it model? Extremely important, what is arbitrary and what is essential?

How does this connect to everything else I know? Why is this approach a useful way to think about the scenario? What possible implications could it have for other situations? Is there a better way to model it? How does it relate to historic approaches and how do those constrain representation of the scenario? How is it applied and what do the authors want me to conclude? How can I identify assumptions based on those intended conclusions and step outside of those assumptions?

This seems like a lot of work as I write it down yet I memorize almost nothing and rederive on the fly most of the time. I have a lot of trouble talking about math because I bypass language mentally while doing math, which speeds it up dramatically. I usually don't recall the names for what I am doing. That made it extremely challenging (and interesting) to teach math. One can't just point and say, "Notice this, then it will be self-evident!" (Feedback was that I was a good math teacher in spite of those limitations.)


One complaint about Common Core math in the U.S. is that parents struggle to help kids with homework because the kids are taught multiplication differently than how the parents were taught.

My initial reaction is surprise. Because if the parents had more than a superficial, purely algorithmic understanding of multiplication, then the common core methods shouldn't have been a big deal.

Also, how many proofs are there of just the Pythagorean Theorem? At least 40, right?


I'd imagine people who are bad at math have more stress response when you ask them to do it: muscular tension, gritted jaw, elevated heart rate, etc.

I'd expect the effect of that to be way bigger than any change in brain function.


>that in solving problems like math, very intelligent people used less energy.

There is more and better evidence that good runners use less energy to run a certain distance.

Now, if you put smart vs smart in math battle and compared them against dumb vs dumb, that'd be more interesting.


As I recall the answer is mostly no when looking at 'how much energy is burned while doing some problem set'. Smarter people were actually slightly more efficient. But brain size is only weakly correlated with intelligence in humans, and I'm sure that in terms of passive energy consumption there's a reasonably linear relationship between that and brain size.


There will presumably be some genetic variation in how efficiently the muscles work, such that those who are higher in this trait are stronger without requiring more energy, plus a much larger variation in strength in a more resource-dependent way. The argument still applies to the former variants, which will mostly become fixed in the population, reducing the variation, while the latter will be subject to stabilising selection. The genes affecting efficiency will not all be completely fixed, for the same reason as the schizophrenia genes: minor harmful mutations are continuously arising.

This point is considerably stronger for intelligence, where it is more intuitively plausible that such efficiency variation could exist, particularly since human brains have changed a lot relatively recently, while muscles have been subject to much the same selective pressure for hundreds of millions of years.


Male height is probably like the peacock's tail?


As a female, though I find men at my eye level attractive, too, I do tend to swivel toward male height. It signals fitness in physical competition against other men. Height corresponds with relative reach, throw and somewhat with mass. It is hardly the main factor, though, easily overridden.

I agree that because it is quantifiable, it may assume a disproportionate importance in dating sites.


Humans, however, are much weaker than the other great apes. We were clearly strongly selected for being less strong.

deleted Feb 8
Comment deleted

Throwing seems a bit niche, kind of a human specialty with no real competitors. A better category would be spitting, at least then we'd have some real competition.

Long-distance running seems good. There are a few species that specialize in that. It actually is kind of amazing that humans are at or near the top.


Og throw rock, hit & kill rabbit; eat rabbit good.


This is probably true. I suspect that the trade-off is weaker muscles vs. longer life.


iirc the main trade-off is flexibility.


I would think reduced calorie consumption and the associated resistance to starvation were the main benefit. More generally, there doesn't have to be any specific *benefit* gained from less muscle (beyond reduced resource consumption) so long as the need to have them is reduced or eliminated. Whole organs and appendages get deleted by evolution when they're not useful any more, for no reason beyond 'this uses calories and protein'.


I don't think anyone claims that being weaker is not a disadvantage. So it would automatically be selected against unless there were a compensating advantage. And I don't think that it's reduced calorie consumption: orangutans frequently undergo seasons when their preferred foods aren't available, and they didn't evolve weaker muscles.

Feb 8·edited Feb 8

Also, violence advantage.

Downside (and what might have kept things in check and explain geographic variations): calorie and protein demand.

It seems optimal to have genes for a taller height and then have malnourishment check it if it takes place.

Feb 8·edited Feb 8

> Overall I would guess height improves fitness today, but that it might not have in evolutionary times when there was more food insecurity. I've taken that out and replaced it with "strength" as an example.

As I point out in a root-level comment, what you just said about height applies just as strongly to any trait where there is significant existing variation. (Such as strength and intelligence.) If one side was better than the other, the variation in the population would disappear.

Feb 8·edited Feb 8

I thought it was established that many sexual selection characteristics (thick lips in women, big foreheads and height in men) were indicators for high sex hormones during adolescence. And that high sex hormones increased susceptibility to disease and caloric needs. Thus the actual signaling was of an innate robustness to disease(since despite high sex hormone, they had in fact survived) and either robustness to caloric deficit or success at acquiring food.


I think this would have no analogy to schizophrenia, but point 2 offers a stabilising (regressing to the mean) mechanism. Supposing that height (at its higher percentiles) reduces sexual attractiveness in females, and that some of the genetic effects on height are sex-agnostic, then an extreme height polygenic score, while increasing the fitness of sons, would decrease the fitness of daughters.


Height has disadvantages. First, if you're big, you need to eat more. Big strong guys didn't survive as long in the Gulag as shrimps. Second, if you're big in body but you aren't big in heart size, your heart is going to have to work harder. Third, if tall men have tall daughters, that might hurt the daughters' marital marketability.


other cons:

cube-square law (strength): large objects have less strength per unit of volume, because muscle strength scales with cross-sectional area while mass scales with volume. This is why ants/spiders/etc are "10x" stronger than humans.

cube-square law (heat): large animals have less surface area per unit of volume, and therefore conserve heat better. This is more important for aquatic life. I guess this sort of dovetails with the "caloric intake" that others mention, but it's worth mentioning anyway.
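For the sake of arithmetic, the cube-square point can be sketched in a few lines (my own toy illustration, not from the thread; it assumes strength scales with cross-sectional area, L^2, and mass with volume, L^3):

```python
# Toy cube-square scaling. Assumptions (not physiology): strength ~ L^2
# (muscle cross-section), mass ~ L^3 (volume).

def scale(height_ratio):
    """Relative strength, mass, and strength-to-mass at a given linear scale."""
    strength = height_ratio ** 2
    mass = height_ratio ** 3
    return strength, mass, strength / mass

# Doubling linear size: 4x the strength, 8x the mass, half the relative strength.
print(scale(2.0))  # (4.0, 8.0, 0.5)
```

Run in reverse (height_ratio below 1), this is the sense in which small animals are "relatively" stronger.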


Just in terms of reproductive success in the current landscape, taller men and shorter women have more children (Europe and US, not sure about elsewhere).

In the very long run, this would be the kind of thing that should lead to increased dimorphism - but that is a hard target for evolution to operate on, since the great majority of variation in height is due to genetic variants that do more or less the same thing in both sexes. So instead we're probably close to equilibrium.


The existence of noticeable height differences by racial ancestry strongly suggests that whether or not height is beneficial is environment-dependent (that's compatible with height being attractive everywhere via a peacock-tail-type effect... ooh, they can support that huge calorie load... sexy).

Height has substantial effects on heat dissipation, so the Inuit and sub-Saharan Africans likely face different pressures.

Feb 8·edited Feb 8

Being tall might have some health risks, but you will also be stronger (important in a violent society) and, most important _by far_, tall men have an attractiveness and workplace advantage. So tall is definitely better for males.

I don't believe average is ideal for men - taller than average (within reason) is surely better. 6' and up to some inches taller?


I think that's a false assertion. Most boxers and wrestlers aren't especially tall. It's basketball players who are tall. That's an argument not for tall but rather for sturdy. But sturdy body types don't do as well in hot climates.

Simple answers are going to be wrong. A lot is cultural, and a lot is environmental (not counting health), and another lot is health. And they aren't all pulling in the same direction.

However, within any small, genetically nearly homogeneous group, being tall will be an indication that you did well growing up in the current environment. There are a range of reasons why this might be true, but they all indicate that you're probably a good mate. (An exception might be for things like acromegaly.)

Feb 8·edited Feb 8

Boxers and wrestlers are weight-classed already, so that has an inherent tradeoff (you can't be so tall you don't have weight left for muscles). Superheavyweights are _not_ short (and height correlates with reach, which is super important). Offensive linemen have an average height of 6'5".

Feb 8·edited Feb 8

The five latest UFC heavyweight champions were 6'4", 6'4", 5'11", 6'4", 6'4". And note that even the heavyweight division is weight-classed (265 lbs max), which could potentially limit some even taller guys.


You do know that virtually all combat sports have strictly controlled weight limits, right? And since taller people at the same BMI will be heavier than shorter people, we should expect height to scale with weight, which it does: https://themmaguru.com/ufc-fighter-height/

And there's usually nothing stopping smaller athletes fighting in higher weight classes, meaning that shorter fighters could bulk up and take on taller, leaner fighters. Most non-HW fighters have to dehydrate themselves to make their weight class, and so considering smaller fighters are usually more technically skilled and they wouldn't have to go through a dehydration before a fight, we should expect this to happen a lot to the point that shorter champions moving up a weight class and becoming champion should be common, but it's not. In boxing especially, being taller (at least to a point) is a huge advantage in terms of both usually having a significant reach advantage, and the fact that you punch down and your opponent has to punch upwards.

Heavyweight fighters ARE significantly taller than average, and even then there's still usually weight limits that likely keep the tallest fighters from being as competitive.

Also, NFL, MLB and NHL players, and male tennis champions, are all around four inches taller than the American male average, so it's not just an advantage in basketball.
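The "same BMI, taller means heavier" step above is just the BMI definition rearranged; a quick check (the heights and BMI value are my own illustrative numbers):

```python
# BMI = weight_kg / height_m**2, so at a fixed BMI, weight = BMI * height^2.

def weight_at_bmi(bmi, height_m):
    """Body weight (kg) implied by a given BMI and height."""
    return bmi * height_m ** 2

# Same BMI of 25, different heights:
print(weight_at_bmi(25, 1.75))  # 76.5625 kg
print(weight_at_bmi(25, 1.90))  # ~90.25 kg, i.e. ~14 kg heavier at the same build
```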


Brains are metabolically expensive. An obvious reason for chimpanzees to be less intelligent than humans. Koalas & sloths are relatively dumb because they are so optimized for saving metabolic energy on their low-nutrition diets.


All else being equal, being taller means being stronger, and having a reach advantage. You do run into square-cube problems, which is why we aren't 4 meters tall.


The big thing that's missing from this piece is the adaptive benefit of diversity and variance.

If everyone in your tribe is tall, being the first short person born might make you better suited for some niche in the community/environment that's not being exploited. Or it might just give you a huge relative advantage the one year out of 20 when famine hits.

So it's likely that things like the 'best height' are hugely contingent and contextual, which is a lot of why we have variance over those traits and maintain a polygenic store of diversity-generators for them.


A simple model of short and tall: in good times, the tall guys do better because they can hunt more and get the women; in bad times, the tall guys all starve to death and the women have to settle for shorties. Thus, the proportions of each wax and wane.


You're right that neither extreme is optimal, and height variants of large effect size are selected against because either way tends to be bad. (Big rare shortness genes seem to cause bad things, while big rare tallness genes seem to generally point to growth dysregulation in particular eg https://www.biorxiv.org/content/10.1101/2023.02.10.528019.full https://www.medrxiv.org/content/10.1101/2021.12.13.21267756.full .) But there's antagonistic pleiotropy: being taller than average is very good for male fitness, but very bad for female fitness so it operates differently by sex. (https://gwern.net/doc/genetics/selection/natural/human/dysgenics/2021-song.pdf) Since you can't choose to have exclusively female or male children, that means that tall males give back their fitness gains in their female children and vice-versa. So there's population-level stability. The optimum also differs by environment: pygmies are genetically short (https://gwern.net/doc/genetics/selection/natural/human/2019-lopez.pdf https://gwern.net/doc/genetics/selection/natural/human/2023-fan.pdf), and Peruvians might have a shortness variant being selected for (https://www.science.org/content/article/study-short-peruvians-reveals-new-gene-major-impact-height); or for a more extreme example, consider Homo floresiensis and the 'island effect'. Because it's environment-dependent, you can't even say whether height is being selected for or against: in European samples, in the very long term (thousands of years) there appears to have been selection for increased height (health? warfare eg. 
https://www.biorxiv.org/content/10.1101/690545.full https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8995853/ https://www.biorxiv.org/content/10.1101/2022.09.22.509027.full / population structure https://www.biorxiv.org/content/10.1101/2023.10.04.560881.full ?), but if you look at the most recent fitness estimates in American & UK populations, it looks like the female disadvantage more than offsets the male advantage and there's selection for decreased height there (possibly because shorter women do worse on the labor market due to the standard pro-height discrimination, and so are more likely to have kids early & more kids in general)?

But it's a large mutational target because 'height' is just the sum of all your parts (similar to temperature being the sum of all the atoms' motions in a volume), which are themselves the outcome of many processes (eg. childhood nutrition and infection, metabolic efficiency), and there are many ways to affect all those. So you can have a very large number of variants affecting it slightly without being purged quickly, because they tend to push, ever so slightly, away from the optimum average height.
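The sex-antagonism point above (tall males "give back" their fitness gains via daughters) reduces to simple arithmetic; here is a toy sketch with a made-up effect size s, my own illustration rather than anything from the cited papers:

```python
# Toy model: a height-increasing variant with sex-antagonistic fitness effects.
# s = 0.1 is a made-up effect size; offspring are half sons, half daughters.
s = 0.1
son_fitness = 1 + s       # taller sons do better
daughter_fitness = 1 - s  # taller daughters do worse
expected_offspring_fitness = 0.5 * son_fitness + 0.5 * daughter_fitness
print(expected_offspring_fitness)  # 1.0 -> the gains cancel, no net selection
```

With equal and opposite effects the variant is selectively neutral at the population level, which is the claimed equilibrium; any asymmetry between the male advantage and female disadvantage tips selection one way or the other.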


But issues related to height generally don't crop up until after typical reproductive years, so they're not going to be selected against.


Being healthy beyond reproductive years must have some benefit (helping to raise grandkids etc) otherwise wouldn't women drop dead shortly after menopause, like salmon dying after spawning?


Being taller makes you physically more powerful all else equal, which was probably ancestrally important. There is a known societal correlation between height and status that has probably been present throughout history.


You say that with height one side is better for fitness than the other. Not really, it depends on the environment. Tall people need more food, but are better at fighting. In environment with little food and little violence, shortness is likely selected for. In an opposite environment tallness is likely selected for. (This is simplified, height affects other things than just food and fighting.)

Expand full comment

Short king detected

👑

🩳

Expand full comment

The sexual preference for height is way too strong for it not to be way better for fitness. Saying this as a bitter, despairing, lonely short king.

Expand full comment

Except that this is only true for men, and it's the opposite for women, and most of the genetic variants operate similarly in both sexes. Thus the equilibrium.

Expand full comment

Women have an almost universal preference for height, making height strongly fitness-increasing.

Expand full comment

Scott! Have you heard of the microbiome? It's a big fucking soup of highly variable biochemistry that is only very loosely under genetic control. And interestingly enough, all the schizophrenia risk genes worth talking about are MHC genes! Does that not tell you something?

Like, let's talk about the actual biology of the disease rather than speculating from a theoretical/statistical perspective!

Why is the abundance of Ruminococcus gnavus 10,000x higher in the guts of schizophrenics than in healthy controls? Why is that figure nearly 1,000,000x in treatment-resistant schizophrenics?

Expand full comment

What are some of the hypotheses for the abundance of Rum. gnavus in schizophrenics?

Expand full comment

In ascending order of interesting-ness:

1. Diet effect, wherein schizophrenics mostly eat shitty (low-fiber, high-salt) foods that encourage the growth of gnavus over other bacteria [EDIT: they checked, the diets cluster with healthy controls' so this is out]

2. Direct medication side effect, e.g. it eats clozapine or something

3. Indirect medication side-effect, e.g. clozapine causes constipation which somehow creates conditions favorable for the growth of gnavus.

4. Inflammatory capsular polysaccharide drives chronic inflammation (well known feature of scz), which induces IDO/TDO, enzymes that convert tryptophan to kynurenine rather than serotonin. Kynurenine is the precursor to kynurenic acid—an NMDA antagonist elevated in scz that sits at the core of the best current theories about the origin of the disease's "positive" symptoms.

5. Gnavus's uniquely efficient & "promiscuous" tryptophan decarboxylase enzyme turns tryptophan & phenylalanine into tryptamine & phenethylamine in the gut. Colonic absorption lets these molecules circumvent first-pass metabolism that usually keeps them from being psychoactive when eaten. Elevated phenethylamine is a known feature in the blood of scz patients, and there are reported derangements of tryptamine levels.

6. Gnavus is known to possess a lyase enzyme, which lets it break down and scavenge/sequester the conditionally essential nutrient queuine. In animals, queuine deficiency leads to impairment of the aromatic amino acid hydroxylases via oxidation of their cofactor tetrahydrobiopterin, or BH4. These enzymes are responsible for conversion of tryptophan to 5-HTP, tyrosine to L-DOPA, and phenylalanine to tyrosine. Impairing them would produce a syndrome that looks a lot like the "negative" symptoms of schizophrenia, and possibly the positive ones too, because of the same so-called "kynurenine shunt" discussed in (4).

The BH4:BH2 ratio has been shown to be off in schizophrenia. BH4 is also the cofactor for nitric oxide synthase, which is responsible for vasodilation; this might be part of why schizophrenics have a much higher risk of cardiovascular disease. NOS is also the "sharp edge" of the immune system, so derangement of this system could create a "vicious cycle" where the body can't muster the usual forces to control the microbiome.

Some combination of any of these may be true for the variety of schizophrenia subtypes, and there are other bacteria with some of these functions, so I doubt gnavus is a sole or even major culprit in every case...but any of the latter explanations is more satisfying to me than "it's probably genetic, and maybe autoimmune?"

Expand full comment

This is very interesting stuff you're mentioning, but it looks like the evidence is still super tentative?

Expand full comment

Yeah, this is bleeding-edge stuff. The microbiome angle, anyway; things like the elevated phenethylamine and kynurenic acid have been topics of discussion since the '70s or earlier.

Even the association between GI dysfunction and mental illness has been known about since ancient times—but it's only in the last ten years or so that molecular biology and gene sequencing have gotten cheap enough that you could just have a hundred people shit into an Illumina to see what's going on down there.

But again, all of these are *known, concrete biological mechanisms* which could plausibly explain various features of the disease in a direct way. Held to a similar standard, calling the evidence around most schizophrenia risk alleles "tentative" would be generous.

Expand full comment

I'd been scoffing* at Chris Palmer and his promotion of keto diets as a means of treating schizophrenia. Now I feel somewhat humbled.

* https://twitter.com/TeaGeeGeePea/status/1632572607055683584

Expand full comment

I feel you. When we sequenced the human genome, we thought we could finally read straight from the book of life.

More and more, it's looking like that was just the index.

Expand full comment

It's weird how much emphasis has been placed historically on the gut (having guts, trusting one's gut, etc.), yet somehow we're shocked that the gut has a very real say in our well-being. Which is, of course, very dumb: you would think the thing responsible for consumption and conversion to energy (basically, life itself) would obviously be considered super important (you are what you eat!). But shrinks obviously don't like the bio angle, for which they'd just be plumbers; instead it has to be some quasi-philosophical investigation (about as reliable a guide to life as other types of philosophy).

Expand full comment

Right on, dude. The idea that there's something to be done about it is terrifying to a certain kind of person when they're not equipped to do that thing, or at least not easily. The genetics/microbiome dichotomy is sort of a reflection of the same shrink/plumber dichotomy you mentioned; one is rooted in just thinking a lot about the disease and trying to learn to cope with it; the other is about doing something to fix it.

Lot of "motivated reasoning" from Scott here justifying why there's no reason to explore the possibility of a cure.

Expand full comment

Do you have any cites for that last paragraph?

Expand full comment

Cmon, this is just not true. While the MHC region is the top hit in the latest GWAS, there's several reasons I disagree with this framing.

1. The MHC peak is not there in Africans.

2. Rare variants associations are not in MHC

3. Overall, most genes are related to synaptic function and neuronal genes, not immune function related genes.

4. SNP-based heritability estimates typically remove the MHC region - all our molecular heritability estimates are based on genotypes without MHC.

Expand full comment

Do you know WHY most analyses remove the MHC region?

It's because the effect sizes of every other risk allele are so absolutely dwarfed, so brutally CORNCOBBED by the relative size of the MHC signal that it fucks the scale and nothing else looks significant!

It's like doing a study on geothermal activity in the US and leaving out Hawaii! If you want to talk about the likelihood of seeing a volcanic eruption in Washington vs. Wyoming vs. Oregon, then sure, you've got to leave Hawaii out of your analysis. But if you're interested in actually studying an active volcano, you don't need to do any statistical analysis; you just need to buy one of those mylar suits and pack your bags, because it's very obvious that there's only one place with much of anything going on. This is what I meant when I said "the only risk genes worth talking about are MHC genes".

The fact that you're talking about "the latest GWAS" as if this hasn't been the status quo for decades tells me you're googling as you go and don't actually have expertise here.

Next!

Expand full comment

Alright, here we go. Another person just making shit up on the internet. Trubetskoy et al., 2022 is the largest GWAS of schizophrenia to date (https://pubmed.ncbi.nlm.nih.gov/35396580/). Take a look at Supplemental Table 1, where you can actually look at the effect sizes and allele frequencies for all genome-wide hits.

MHC is indeed the most significant, with an odds ratio of 1.22. Note, however, the allele frequencies in cases vs controls: 93.1% vs 91.4%. That's not being absolutely dwarfed.

The second-most significant loci is at chromosome 7, with an odds-ratio of 1.09. My point stands: MHC is indeed the most significant loci, but it's not even remotely close to dwarfing everything else. You are just making this up out of nowhere.
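For what it's worth, the quoted frequencies roughly reproduce that odds ratio. A quick sanity check (my own arithmetic from a naive 2x2 reconstruction, not a figure from the paper):

```python
# Naive reconstruction of an odds ratio from the quoted allele frequencies
# (my own arithmetic, not a number taken from Trubetskoy et al.).
def odds_ratio(freq_cases, freq_controls):
    # odds = freq / (1 - freq); OR = ratio of the two odds
    return (freq_cases / (1 - freq_cases)) / (freq_controls / (1 - freq_controls))

# Quoted MHC-region frequencies: 93.1% in cases vs 91.4% in controls.
print(round(odds_ratio(0.931, 0.914), 2))  # ~1.27
```

The crude reconstruction gives ~1.27 rather than the reported 1.22, which is expected, since the published estimate comes from a full regression model rather than raw frequencies.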

We can also look at rare variation, where effect sizes can actually be large (Singh et al., 2022: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9805802/). Take a look at Figure 6.

We see CNVs and deletions/duplications, such as 22q.11 or a deletion in the GRIA gene. Not the MHC.

And lastly, the MHC region is not removed because of its large effect size, it's because the correlation between haplotypes (linkage disequilibrium, LD) is extremely complex around that region, and is one of the few regions that contain long-range correlations. Methods that estimate the aggregate heritability, or methods predicting phenotype based on genotype needs to handle this dense correlation - which can typically be very hard. Take a look at their wiki: https://github.com/bulik/ldsc.

Expand full comment

Are you willfully misunderstanding my point or just bad at parsing?

You're right that 93.1% vs. 91.4% is a pretty tiny difference! I'm not saying "the MHC allele has a large effect size".

Like all genomic associations with schizophrenia, it has a tiny, shitty effect size. But compared to all the other genomic associations, it is more than twice as large.

If you are five feet tall, and I am more than twelve feet tall, I dwarf you. That is the ratio between these effect sizes. It's also an intron variant. If you understood what an intron is, you would not be waving this example around so proudly.
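To put a number on the "twelve feet vs. five feet" claim: GWAS effects are usually compared on the log-odds scale, where per-allele contributions are roughly additive. A sketch of that comparison (my framing, using the two odds ratios quoted in this thread):

```python
import math

# Per-allele GWAS effects are roughly additive on the log-odds scale,
# so compare log odds ratios rather than raw ORs (my framing, not the thread's).
beta_mhc = math.log(1.22)   # top MHC association quoted above
beta_next = math.log(1.09)  # next-strongest common association (chromosome 7)
print(round(beta_mhc / beta_next, 2))  # ~2.31: "more than twice as large"
```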

The fact that you keep misusing words like "loci" is your third strike. Begone and take your 300-author paper with you. This is not gravitational wave astronomy.

Expand full comment

But you are literally wrong with "You're right that 93.1% vs. 91.4% is a pretty tiny difference! I'm not saying 'the MHC allele has a large effect size'": as I posted, there are several genetic variants with very large effect sizes - but all rare.

How am I using "loci" wrong?

And all your other claims are that the microbiome is important because all the main schizophrenia genes are immune related - I just clearly displayed that this is completely wrong.

You are just jumping all over the place without meeting any of my arguments:

1) It is ignorant and wrong to proclaim that schizophrenia genetics mainly points to immune function because the top loci is in MHC. In aggregate, it's clearly neuronal function that the genetics point at.

2) the MHC loci does not "dwarf" the other associations, it's merely the strongest one out of all loci. If you rank by effect size, it's not even the largest one - it's just the combination of effect size + effective sample size.

3) You were wrong and clueless about why the MHC region is removed in typical genetic analysis.

Expand full comment

I am not interested in some inbred Algerian family with a mutation that causes a syndrome which technically meets the DSM criteria for schizophrenia.

I am interested in curing the nearly 1% of the population that suffers from this disease. Anyone who has studied schizophrenia seriously understands that it is highly heterogeneous and the diagnosis encompasses multiple distinct etiologies.

My claims about the microbiome's importance stem from the fact that it provides clear mechanisms to explain the disease's symptoms.

Regarding "loci": look it up.

Goodbye! 🖖🏼💩

Expand full comment

Ignoring the ES&P guy for a second, I'm still confused about why MHC genes should be excluded from our thinking here. The idea that the immune system affects our microbiome, which affects the phenotype of the whole organism, seems plausible to me, and while neuronal genes are the first place I'd look for a heritable mental illness, the argument seems to be that I should discount the MHC genes because they are confounded somehow? I cannot figure out the GitHub wiki at all. Do I just need to spend some time reading about LD? Any references I should start with? I feel like this will help me understand GWAS better in general.

Expand full comment

My argument was not really spelled out properly - my bad.

You can take a look at this paper to familiarise yourself with the different heritabilities:

https://www.sciencedirect.com/science/article/pii/S0006322320316693?via%3Dihub

The SNP heritability for schizophrenia is roughly 20%, which can be interpreted as the amount of variance we can explain with common genetic variants (given large enough sample sizes).

In genetics papers, we typically estimate the heritability with LD score regression, a method that recommends removing the MHC region for methodological reasons.

My point is simply that a large portion of the heritability is outside the MHC region, as ESP tried to make the claim that the MHC is by far the most important association.

For LD, perhaps check out this paper:

https://www.nature.com/articles/nrg2361

Expand full comment

Thank you very much. I will check out those two links.

Expand full comment

Thanks for sharing this. Gut microbiome monotheists are exhausting – and wrong. Tom Chivers and Stuart Ritchie did a great show on this.

Expand full comment

Something I usually find missing in these discussions is noticing that effects, in general, are not additive. E.g. "each of which individually has a small effect, adding up to a large total effect". Genes are not a D&D-like system where each contribution gives "+1" to whatever. There are complex interactions. I think of the extreme case as breaking systems on a plane: you remove one circuit controlling an engine, and nothing happens. You remove two, still good. Remove three, and suddenly the engine malfunctions, and you go straight from "it's all OK" to "plummet and crash". Obviously not all biological systems are made like this (although we have plenty of redundancy inside too). The more general version is a system with different components that still sort of complement each other's functionality. E.g. a city accessible by train, road, and harbor may not function quite the same if the roads are closed, but it's definitely not even 1/3rd as bad as all the roads, trains, and the harbor being closed.

I find this especially relevant for answer 3 to the "why keep small bad-effect genes" question, because it makes a lot of sense that you have a bunch of mutations which may cause a positive outcome (close the roads, less pollution!) while causing a bad effect that is much less than additive compared to all of them activating (city is inaccessible, everybody starves). This naturally makes the "bad" effects of single genes much, much harder to see. Obviously the city example is exaggerated, and you'd see the effects of closing the roads - but when it's extended to dozens or hundreds of factors, one shouldn't even expect them to cause effects on the order of 1%, as the naive "additive" decomposition would suggest.
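The airplane-circuit intuition is easy to put in numbers. A toy probability model (all figures invented for illustration):

```python
# Toy redundancy model (all numbers invented): an "engine" fails only
# when all three backup circuits are knocked out.
p_broken = 0.1  # chance each circuit is independently already broken

baseline_failure = p_broken ** 3   # ~0.001: all three must fail at once
one_knockout     = p_broken ** 2   # ~0.01: marginal effect of one mutation is tiny
full_knockout    = 1.0             # knock out all three: certain failure

# Naive additivity would predict 3 * (0.01 - 0.001) ~ 0.027 for three hits,
# wildly underestimating the actual jump to 1.0 when they co-occur.
print(baseline_failure, one_knockout, full_knockout)
```

Each individual mutation moves the failure rate by under a percentage point, which is exactly why its "bad" effect is nearly invisible to both selection and GWAS.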

Expand full comment

Steve Hsu argued that they are surprisingly additive (https://arxiv.org/abs/1408.3421).

Expand full comment

The evolutionary argument is straightforward: additive effects can be picked up by natural selection, and nonadditive effects can't (because of the mechanics of gamete production).

So any functionality that is the result of natural selection is going to consist almost entirely of additive effects. Then you ask "how much of the genome's functionality is the result of natural selection?".

Expand full comment

Conditionally additive effects can still be selected for though.

E.g. if gene A is positive when you have gene B but is otherwise neutral, and gene B has non-trivial prevalence, then A can still be selected for. The same goes if you substitute B with environmental factors.

Expand full comment

I don't think that argument necessarily applies here. Yes, traits whose measurement depends on many loosely coupled factors (better health, better brain architecture, better blood flow, better parenting, etc.), like intelligence, are likely to be roughly additive (when changing only a few). Less so for genes causing specific conditions, e.g. sickle cell.

We don't really understand what kind of thing schizophrenia is yet.

Expand full comment

Actually, when it comes to these sorts of GWAS-type studies, the results are surprisingly additive. Epistasis turns out not to matter all that much in humans, at least (in some other species it does), across a wide variety of conditions, compared to the effect sizes of the genes themselves, at least at our current level of knowledge (and it's not like we haven't tested it; we have, but didn't find evidence of large effects).

Expand full comment

A guest (I can't remember which one) on Razib Khan's Unsupervised Learning talked about why genes of large effect tend to get selected out. There's often some sweet spot, and genes of large effect will tend to result in overshooting it. With lots of genes of smaller effect you get more of the "law of large numbers", resulting in something closer to the average, whereas with a smaller number of large-effect genes random noise will tend to produce larger deviations from the average.
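The law-of-large-numbers point can be sketched with a toy simulation (all parameters invented): hold the expected trait value fixed, split it across more and more loci, and the spread around the average shrinks.

```python
import random
import statistics

random.seed(0)

def trait_sd(n_loci, n_people=5000, total_effect=10.0):
    """SD of a toy additive trait: the same total expected effect is split
    across n_loci '+' alleles, each carried with probability 1/2.
    (Toy model; all parameters invented for illustration.)"""
    per_locus = total_effect / n_loci
    values = [
        per_locus * sum(random.random() < 0.5 for _ in range(n_loci))
        for _ in range(n_people)
    ]
    return statistics.pstdev(values)

sd_few, sd_many = trait_sd(10), trait_sd(1000)
# Theory: SD = (total_effect / 2) / sqrt(n_loci), so ~1.58 vs ~0.16 here.
print(sd_few, sd_many)
```

The SD falls as 1/sqrt(n_loci), so a trait built from a thousand tiny nudges overshoots the sweet spot far less often than one built from ten big ones.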

Expand full comment

There's much to be said about this, but one thing particularly jumping out was your comment "most random mutations are deleterious."

I have to disagree: silent or neutral mutations occur all the time. Indeed, evolution works by random mutation; if most were deleterious, life wouldn't have progressed past the most rudimentary organisms.

Expand full comment

There are a lot more ways in which a random change makes things worse than ways it makes things better. (Consider typos as an analogy.) Evolution works because it operates across many individuals over many generations, so it has the opportunity to pick out the occasional helpful mutations out of the majority of deleterious ones.

(I'm not sure about neutral ones, maybe you're right about those, but deleterious >> beneficial.)

Expand full comment

Yeah, pop genetics seems rigidly adaptationist, while all the population genetics I did in undergrad emphasised neutral evolution.

Expand full comment

I think the literature agrees with you that "most nonsynonymous and nearly all synonymous mutations have no detectable effects on fitness", as these guys argue when refuting a paper that suggested the opposite: https://doi.org/10.1101/2022.07.14.500130

Expand full comment

Ultimately you have a situation where theory strongly suggests that they are not likely to be useful and are quite likely *slightly* deleterious, but the genetic architecture has evolved in such a way that most small changes that result in a viable organism are indeed likely to have no *detectable* effect on fitness. Not surprising, given that even the variants that we end up flagging with high powered GWAS often show tiny effect sizes.

Expand full comment

Isn't it the case that a significant portion of fertilized egg cells are non-viable due to harmful mutations? So if we're looking at the genetics of born babies, we're getting at a distorted picture - the subset of mutations which weren't rapidly discarded already in the embryo stage.

Expand full comment

Could we steelman this to: "of those random mutations that have an effect on fitness, most are deleterious"?

Expand full comment

I think that must be true.

Expand full comment
author

Yeah, okay, I meant "most meaningful random mutations". I'll edit it in.

Expand full comment

I agree with your point in general, but I don't think 'if most were deleterious, life wouldn't have progressed past the most rudimentary organisms.' is true.

Expand full comment

Agreed. Depending on just how deleterious they were and the environment they were in, this could even lead to evolution happening faster than it did in our world (cf. bacteria developing antibiotic resistance: introducing the antibiotic makes certain alleles deleterious and leads to protective variants fixing much faster than they otherwise would have, or even ones that would not have fixed at all).

Expand full comment

Except that in asexual reproduction, presumably most mutations are deleterious (in sexual reproduction most are masked because they occur only on one parent's contribution). But asexual reproduction, as in bacteria or yeast, involves doublings upon doublings upon doublings, so if 10% of your culture dies due to mutations each generation, it's more or less a who-cares.

Razib Khan had a post a while back saying that each individual has something like 50 new (first generation) mutations that are ignored because they are only from one parent and pretty much everyone has 1-3 (multi-generation) mutations that would be fatal if you had both copies.

Expand full comment

Most mutations are silent - have no effect on the phenotype.

Of mutations with a small effect, 50% should be beneficial (there's a 50% chance that a random small change moves you towards the optimum vs. away from it).

Mutations with large effects are almost always deleterious.
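The "50% of small changes are beneficial" claim is essentially Fisher's geometric model, and it's easy to check with a toy simulation (my construction; parameters invented): place an organism at some distance from a fitness optimum and take a mutation step of fixed size in a random direction.

```python
import math
import random

random.seed(1)

def fraction_beneficial(step, trials=100_000, distance=1.0):
    """Organism sits `distance` from the optimum in a 2-D trait space;
    a mutation is a step of fixed size in a uniformly random direction.
    'Beneficial' = ends up closer to the optimum. (Toy model, my numbers.)"""
    wins = 0
    for _ in range(trials):
        theta = random.uniform(0.0, 2.0 * math.pi)
        new_dist = math.hypot(distance + step * math.cos(theta),
                              step * math.sin(theta))
        if new_dist < distance:
            wins += 1
    return wins / trials

small, huge = fraction_beneficial(0.01), fraction_beneficial(2.5)
# ~0.5 for tiny steps; exactly 0.0 once the leap exceeds twice the distance.
print(small, huge)
```

As the step size shrinks, the beneficial fraction approaches one half; large leaps overshoot the optimum and are almost always (here, always) deleterious, matching both claims above.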

Expand full comment

The case for 2 would be mostly the positive effects of Schizotypy. Some are known, it's not just creativity, nice overview paper: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4373632/

The elephant in the room is religiosity. Like most psychological traits that's like 50% heritable. I'm unaware of any genetic analysis of it, but I'd easily bet 10:1 it is also polygenic and significantly correlated with the Schizotypy/Schizophrenia genes. Meaning if you screen or engineer to prevent Schizophrenia, you're also screening or engineering against religiosity. In the upcoming huge culture war about genetic interventions in human reproduction, that's predictably a factor that's not going to calm things down.

Expand full comment

It's also been argued that this explains the high religiosity of the ancients, etc. Relating to 1.

Expand full comment

That could be fun to watch. Some parents or governments will want to engineer religiosity in and others will want to select it out.

Expand full comment

Especially if the theory pans out about certain types of political movements being fundamentally religious in nature.

Expand full comment
Feb 8·edited Feb 8

Schizotypy and religion is messy. Schizotypal people are not conventionally religious, though you'd get this impression from Crespi's idea of what schizotypy is. The most schizotypal religions tend to be the most materialist -- many New Religious Movements are only "religious" in the most tenuous sense, in that it's the best concept we have for rounding e.g. "UFO religions" to, or because philosophies that use the trappings of an existing religion tend to be treated as "offshoots" of it even if they reject just about every claim it makes.

Like everything else, The X-Files does this really well -- Mulder (not only the single best portrayal of a schizotypal person in any media, but the single best portrayal of *any neurodivergent person* in any media, completely by accident) is, famously, much more interested in unusual ideas as a whole than Scully, but he's far more dismissive of conventional religion. Whenever the case turns religious, she's open to it and he goes "is this organized religion? cringe".

Most research I've seen on schizotypy and religion hasn't been able to seriously grapple with the differences between simultypal and schizotypal religion, because it looks at/assumes some sort of "general factor of spirituality" that isn't reflective of the real world. Mainstream religion and alternative spirituality are more unalike than mainstream religion and mainstream irreligion are.

Expand full comment

I broadly agree, and would point to the concepts of intrinsic vs. extrinsic religiosity, which is mainstream in the (tiny, so not meaningfully mainstream itself) psychology of religion. Extrinsic religiosity is oriented towards the congregation / umma / sangha / etc. and is emphasized in the big denominations, while intrinsic religiosity is the one of solitary prayer, piety and the heartfelt weirdness of schizotypy, and is bigger among religious specialists like priests and monks, and among converts to religions, which includes almost all members of NRMs.

They're correlated, but not hugely. Your statement that they're less alike than mainstream religion and irreligion seems to me daring but not absurd.

Expand full comment

It amuses me to no end that Buddhism was more or less areligious until Buddha died and people decided it would be a lot easier to sell to the masses if you added back in deities and giant statues.

Expand full comment

Mahayana Buddhism even developed holy wars and swordpoint conversions.

Expand full comment

My wife thinks she is Buddhist. She is Vietnamese. There's nothing of Buddha's teachings in it, but lots of fortune-telling and ancestor worship. On the other hand, it's still probably better than most Western religions.

Expand full comment

> Mulder (not only the single best portrayal of a schizotypal person in any media, but the single best portrayal of *any neurodivergent person* in any media, completely by accident)

My vote would be for Harriet the Spy, the best portrayal of Asperger's that I've seen.

Expand full comment

She's not Asperger's.

She's a child whose parents are totally absent from her life, raised by a maid who doesn't really parent her or care about her much. She's also a nasty piece of work, but it's not her fault, because she is left to herself and never really got connected to her family to be taught empathy.

You definitely knew kids like her back in that timeframe, but calling it Asperger's is bullshit, because it completely absolves the parents' and other adults' lack of responsibility.

It's a book that changes when you read it as an adult. Maybe there is a point here about all this desire to place the burdens of personality on the individual alone in an essentialist way, but Harriet grew up without real parenting.

Expand full comment
Feb 11·edited Feb 11

I don't think it's a matter of a lack of parenting. There's a lot of signs of aspergers including lesser known ones, like the way that Harriet insists on a rigid schedule and eats the same thing every day. It's to the point where I strongly suspect that the author based Harriet on someone she knew in real life with undiagnosed aspergers.

Expand full comment

A lot of kids insist on that, and they grow out of it. In the old days that would maybe be seen as a little babyish, or as a response to a bad home life, not as a psychiatric condition. I grew up with '60s-'80s kid lit; it definitely did not medicalize anywhere near the post-Ritalin age.

Psychiatrists were generally there in response to issues with family life; they wouldn't start by assuming the kid had a condition. In Harriet's case it really felt like all the adults in her life were emotionally absent, and that can do a number on a perceptive kid like her.

The dark side of medicalization is putting the burden on the kid. Your parents not being there can do a lot too, and a lot of those old books pointed it out. Empathy and social stuff is learned too.

Expand full comment

> Schizophrenia is bad for fitness, so if it were genetic, evolution would have eliminated those genes.

I think what is overlooked is that phenotype dispersion (is that the right term?) is good for the species, even when it is bad for the individual. If the same set of genes creates descendants all over the spectrum of a certain trait, then you end up with gay uncles who do not procreate but are still useful. Or with a spectrum of intelligence where some do more intellectual labor and others more menial. Dispersion in gene expression for any set of genes would definitely shaft the unfortunates who end up with, say, severe schizophrenia or autism, but the species as a whole would do better because of the diversity.

There, I explained group selection.

Expand full comment

The math of genetic group selection doesn't work because there's so much more variance within groups vs between groups. And gay uncles don't actually seem to do any of the things people theorize they do.

https://westhunt.wordpress.com/2013/01/10/group-selection-and-homosexuality/

Expand full comment

This argument is exaggerated. In some circumstances group selection isn't effective; however, given that we are in fact large groups of cells, in other circumstances it must be effective.

I agree it's not a good explanation for traits like altruism or pro-social behavior (I believe there are compelling mathematical models here) but that's a very different claim than the suggestion that genes which cause low frequency individual harms might offer enough benefit to allow group selection to work.

It's kinda irrelevant for homosexuality, since gay uncles is a kin-selection theory, not a group-selection theory.

Expand full comment

"We are in fact large groups of cells" with the same DNA.

Expand full comment
Feb 8·edited Feb 8

>If the same set of genes creates descendants all over the spectrum of a certain trait, then you end up with gay uncles who do not procreate but are still useful.

This has been debunked on here many, many times.

Gayness cannot be selected for, because you cannot select for (what is functionally) infertility. It doesn't matter that a gay uncle shares a lot of genes with his nieces and nephews, the specific genes for homosexuality have no way of being selected for. Offspring without "gay genes" will on average dominate the next generation and even if these gay uncles were useful in some way, anyone producing kids that don't reproduce will be punished in terms of gene pool representation.

Altruism is different, because the most common expression of altruism is towards one's children, or people who could raise your children, which means your altruism genes can directly help increase your individual fitness.

And the idea that an infertile gay uncle is of such great group value is taken as a given when it really shouldn't be. They consume resources without producing offspring, and 'caring for children' is by no means necessarily worth the cost of lower group fertility.

https://www.greaterwrong.com/posts/QsMJQSFj7WfoTMNgW/the-tragedy-of-group-selectionism

>Or with a spectrum of intelligence where some do more intellectual labor and others more menial.

Of course, lower IQ menial workers still reproduce and pass on their lower IQ genes, so not at all comparable to homosexuality (yes, many gays have historically had children with women regardless, but this is obviously in spite of, not because of their gayness so its not relevant here).

Expand full comment

I don't have a strong stance on this, but if your argument is representative of the claimed oft-repeated debunking, I don't find it convincing. The gay uncle is as much related to those he cares for as a grandparent is. He contributes to the propagation of the genes. If there's a competitive environment (which selection assumes), then this extra support could help produce the next generation, i.e. the gay uncle's generation of the family has fewer offspring, but those offspring are more successful and have more offspring than their peers who didn't have gay uncles.

Expand full comment

You did not seem to address the phenotype dispersion part.

Expand full comment

> Gayness cannot be selected for, because you cannot select for (what is functionally) infertility. It doesn't matter that a gay uncle shares a lot of genes with his nieces and nephews, the specific genes for homosexuality have no way of being selected for.

I'm not arguing *for* the Gay Uncle hypothesis which AFAIK is not strongly supported, but this statement is not very accurate. All it takes is for the gene(s) to provide a substantial helping benefit and to only cause infertility in some cases, for example with an environmental trigger. The gene can then be present, unexpressed, in the individuals that are beneficiaries of helping behavior. If (benefit * unexpressed copies) > (cost * expressed copies), then the gene is selected for.

As an extreme example, this occurs in eusocial species such as ants, in which the vast majority of individuals in a colony are permanently physiologically sterile, and work to raise their younger siblings which also carry the same genes. In ants and bees this is favored by a genetic quirk that makes females more closely related to their sisters than to their own offspring, but eusociality also arose in regularly diploid species such as termites, snapping shrimps, and mole rats. Even in these, one is exactly as closely related to one's sibling and their children as to one's own children and grandchildren.

Less extremely, helping behavior by adult individuals who are physiologically fertile but do not reproduce is well documented in several bird species, and IIRC men with several older brothers are more likely to be gay, which is both a possible environmental "switch" and a plausible reason to invest resources into one's siblings rather than direct offspring.
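The condition stated above — (benefit * unexpressed copies) > (cost * expressed copies) — can be sketched numerically. A toy calculation, with entirely made-up parameters:

```python
def net_selection(benefit, cost, frac_expressed):
    """Net per-copy fitness effect of a 'helper' allele.

    A fraction `frac_expressed` of carriers express the allele
    (becoming non-reproducing helpers, paying `cost`); the helping
    `benefit` accrues to carriers of unexpressed copies.
    Positive result => the allele spreads. Toy model only.
    """
    return benefit * (1 - frac_expressed) - cost * frac_expressed

# hypothetical numbers: expressed in 20% of carriers, helping benefit
# half as large as the fitness cost of not reproducing
print(net_selection(benefit=0.5, cost=1.0, frac_expressed=0.2))  # 0.2 > 0
```

With these made-up parameters the allele is favored despite causing infertility whenever it is expressed; push `frac_expressed` up to 1 and the result goes negative, recovering the original objection that obligate infertility cannot be selected for.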

Expand full comment

This argument isn't correct. Indeed, if correct it would disprove all sorts of well accepted evolutionary mechanisms like selfish genes.

Consider a gene which caused an increased chance of homosexuality for third and fourth male offspring of a single mother. All you need to select for that gene is that the benefits from greater care/resources allocation from those gay uncles exceed the expected increase in offspring from having them procreate as well.

The gene is selected for because those who are more likely to have the gene (kids of non-gay relatives) are more likely to successfully reproduce long term. Your error is in assuming the gene guarantees homosexuality and non-reproduction.

Expand full comment

Didn't Sasha Gusev use group differences in schizophrenia PGS to argue that they were biased by ancestry? Seems like it might be related to 1.

Expand full comment

"Studies seem to mostly support (1), for example this study of ancient hominid genomes finds that schizophrenia genes are getting less common over time"

Have you read Julian Jaynes' The Origin of Consciousness in the Breakdown of the Bicameral Mind?

Expand full comment

Explanation 3 for those genes hanging around actually seems quite plausible to me. In aging biology, one of the key theories about why we age is ‘antagonistic pleiotropy’, where pleiotropy is the technical term for genes having multiple functions in different tissues/times of life/contexts. The idea in this case is that a gene that causes an animal to grow up and reproduce a bit more quickly and efficiently will be passed on even if it goes on to cause late-life deterioration, because by the time you’ve made it to late life you’ve already passed on your genes (hopefully more quickly and efficiently thanks to the antagonistically pleiotropic gene variants you carry) so you’ve achieved your purpose, evolutionarily speaking.

It’s ‘harder’ for evolution to get rid of a gene that also has a small advantage than one that is merely bad for aging (or schizophrenia) and has no counterbalancing advantage, so actually the 3 explanation seems more likely than 1, though both exist on a continuum to some extent.

Expand full comment

Isn't the argument, though, that it's bad for reproduction?

Expand full comment

It might not be bad for reproduction if, say, a 0.1% increased risk of schizophrenia is (more than?) counterbalanced by a 0.2% decreased risk of death from infectious disease or whatever.

Expand full comment

To add to this, reading the article, I thought #3 was the 'obvious' reason we have polygenic schizophrenia. If genes are individually advantageous, but don't play well together if you have many of them in your genome (the biochemistry equivalent of "too many cooks", if you will), evolutionary processes will have a hard time filtering them out completely.

Say there are genes A, B, C, D, E and F that contribute to schizophrenia, and each of them also has an upside. Say if you have four or more of them together, you get schizophrenia. So maybe one branch of human genes evolves* toward just having genes A, B and C, and another branch of human genes evolves* to have D, E and F, but it's not possible for us to see from the outside what set anyone has. Now pair up two people like this, and you don't even need to be particularly unlucky to suddenly have four of the schizophrenia risk genes, e.g. A, B, D and E.

(* my understanding is that evolution would probably not even get this far, but let's run with this for the moment.)

This is an extreme example, in that this is a very small number of genes and a low threshold, whereas in real life it's a larger number of genes with a higher threshold, but that just means evolution has an even harder time selecting against them, since e.g. if there are 30 genes and 27 is the threshold at which you get schizophrenia, someone with 26 of them is fitter than someone with e.g. 8 of them. Basically, more is better, until it suddenly isn't.

(Other caveats are that some genes may play nicer with other genes than some others, so it's not entirely linear. Maybe you just need 22 genes to develop schizophrenia if you have genes N and T, for example, but 27 of them if you have only either N *or* T. But again, that doesn't change the problem for the evolutionary process.)
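The A-through-F toy model above can be simulated directly. A minimal sketch — the gene labels, the threshold of four, and the 1/2 transmission probability are all from the hypothetical example, not real genetics:

```python
import random

RISK_GENES = "ABCDEF"
THRESHOLD = 4  # four or more risk genes => schizophrenia, per the toy model

def child_genes(parent1, parent2):
    # each parent independently transmits each risk gene they carry
    # with probability 1/2 (ignoring linkage)
    return {g for g in RISK_GENES
            if (g in parent1 and random.random() < 0.5)
            or (g in parent2 and random.random() < 0.5)}

random.seed(0)
p1, p2 = set("ABC"), set("DEF")  # two "safe" parents, three genes each
trials = 100_000
affected = sum(len(child_genes(p1, p2)) >= THRESHOLD for _ in range(trials))
print(affected / trials)  # analytically P(Binom(6, 1/2) >= 4) = 22/64 ≈ 0.344
```

Two parents who are each safely below the threshold still produce affected children about a third of the time in this setup, which is the point: the pairing, not either genome alone, crosses the line.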

Expand full comment

I'm not sure I buy this model. If there were a million independent genes that each increased your risk of schizophrenia by an absolute 1-in-a-million chance, and everyone had an independent 50/50 chance of having each gene, then it would indeed be hard to filter that out through evolution, but there also wouldn't be such a thing as being genetically prone to schizophrenia, because every single person on earth would have a chance of getting schizophrenia between 49.5% and 50.5% no matter how lucky they got with their genes.

The problem is that many small independent effects don't necessarily add up to a large difference in effects between people - a large difference between the worst possible and best possible outcomes, yes, but not necessarily between the 1st percentile and 99th percentile.

The fact that some people *can* be predicted to get schizophrenia at a much higher rate just on the basis of their genes means there's something for evolution to easily select on - if those people stop reproducing, the total amount of schizophrenia genes in the population will materially go down (because the ones evolution removed had, by assumption, like twice as many of those genes as normal). This assumes a model where these things are basically additive, but that seems roughly right to me.
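The percentile claim can be checked with the normal approximation to the binomial; all numbers below come from the hypothetical million-gene scenario, not real data:

```python
import math

n_genes = 1_000_000       # hypothetical independent risk genes
p_carry = 0.5             # chance of carrying each one
per_gene_risk = 1e-6      # absolute risk added per gene carried

mean = n_genes * p_carry
sd = math.sqrt(n_genes * p_carry * (1 - p_carry))  # = 500 genes

z99 = 2.326  # ~99th percentile of a standard normal
lo = (mean - z99 * sd) * per_gene_risk  # 1st-percentile person's risk
hi = (mean + z99 * sd) * per_gene_risk  # 99th-percentile person's risk
print(lo, hi)  # ≈ 0.4988 and 0.5012 — everyone sits very close to 50%
```

The spread between the luckiest and unluckiest percentiles is a fraction of a percentage point, which is the comment's point: many tiny independent effects need not produce large differences between actual people.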

Expand full comment

Would this reasoning not also apply to other polygenic traits? What model do you think is workable, then, for schizophrenia and other traits such as intelligence, etc.?

Expand full comment

I'm not saying that schizophrenia isn't polygenic! Just that it shouldn't be all three of:

(1) caused in large part by some genes with roughly additive effects on risk

(2) difficult for evolution to get rid of if it wanted to

(3) highly variable in how much genetic risk different people carry

I'm pretty agnostic as to which of those three properties fail to hold - maybe things are pretty nonlinear or non-genetic, maybe it's easy to evolve out but holds some hidden benefit (or it rapidly is being evolved out, and in the ancestral environment things were different), maybe genetic risk for schizophrenia is fake somehow. But I don't think you can get them all at once, or else evolution can apply the strategy of "filter out the people with lots of risky genes" and do pretty well.

Expand full comment

Simply by random chance, some people can wind up with a lot more deleterious alleles on a trait than others (recall that there are lots of traits and selection can only do so much on all of them per generation). The frequency of such people should not be very large though.

Expand full comment

The central limit theorem is a harsh mistress. You can directly calculate how big the outliers would be expected to be based on the postulated individual probabilities and population sizes. And it's not that big.

Expand full comment

> If there were a million independent genes that each increased your risk of schizophrenia by an absolute 1-in-a-million chance …

I don’t think that’s the scenario that’s being argued. It’s not necessarily linear like that. But also, there may be several genes or combinations that are each necessary, but not sufficient. So individually they don’t do anything, but if you get all of them, you increase risk of schizophrenia by a significant degree. If you get only 90% of them, you’re just a visionary genius.

There might still be a lot of genes involved, increasing chance of each necessary condition by a small percentage, but reaching the threshold for all necessary conditions in one person is still rare.

Expand full comment

>The scare-mongering here has to be false - that is, it can’t be bad to choose an embryo at the 50th percentile of schizophrenia risk rather than the 99.9th, because half of people are at the 50th percentile of schizophrenia risk and nothing bad happens to them

I think the worry is that if we either learn to genetically engineer or do embryo selection for long enough, we could pick people with 0% schizophrenia genes (which wouldn't happen naturally) and only figure it out 20 years later when we have a new generation of weirdly uncreative adults. I don't take this fear totally seriously, but it does imply we should be pretty careful with gene selection if/when we do get it, to avoid everyone applying intense selection pressure at once.

Expand full comment

Alternatively, if schizophrenia genes do boost creativity then maybe every visionary tech founder is genetically at the 99th percentile of schizophrenia risk, and removing those genes would make them just normal high-quality tech engineers. Which isn't "something bad happens to them" in a detectable way, but eliminating visionary tech founders would be bad for society.

(Again, this is low probability - it requires pretty specific assumptions - but I think it's plausible enough to worry about a bit)

Expand full comment
Feb 8·edited Feb 8

It's likely too polygenic and too rare for such selection to take place. Intelligence is much more of a continuous trait, and it's one of the most sought-after enhancements. Vastly more interest and resources will go into increasing intelligence by a few percentiles than into reducing schizophrenia-ness by a few percentiles. If there were something that was obviously a massive risk for serious schizophrenia, then that would have a high chance of being selected out. But I very much doubt many people would be interested in driving the risk as low as possible, especially with how hard it would be to do that.

Also, higher-IQ schizophrenics have weaker negative symptoms on average than those with lower IQ, meaning that if you've made a population higher in average IQ, there's a chance that having schizophrenia genes will be less of a problem than in the current age, and therefore there will be less pressure to eliminate them (though that relationship may not necessarily hold if we're artificially selecting for certain genes).

Also, of course, if two laymen on some blog are talking about this, these considerations are likely to be obvious enough that they may well factor into a future embryo selection framework. Silicon Valley types may actually select for a certain level of these genes precisely to increase the probability of having creative kids. Or by this stage we may have a better understanding of a direct link between alleles and creativity and we're able to select for it more directly (without increasing risk of schizophrenia).

Expand full comment

"based on the evolutionary argument above, I doubt this one"

I think the evolutionary argument also applies to (3), not just (2).

Expand full comment

Amateur question: when you select embryos based on the presence or absence of a specific gene, is there a risk of inadvertently selecting for unrelated traits due to gene correlation? Is there such a thing as gene correlation? For instance, in choosing an embryo with a low risk for schizophrenia, might we unintentionally favour one with a higher risk for heart attacks? This would not be because the same genes cause both conditions, but rather because embryos with genes reducing schizophrenia risk might also possess genes that increase the risk of heart attacks.

Expand full comment

The term is "pleiotropy".

Expand full comment
Feb 9·edited Feb 9

Pleiotropy refers to the situation RH explicitly ruled out. What RH is asking about would be called linkage disequilibrium. And yes, linkage disequilibrium is very common - consider any two historically separate populations and you should see lots of it.

Expand full comment

Pleiotropy is very uncommon.

Expand full comment

Why do you say that? That is precisely the opposite of what I have read in various places, for example in Plomin, *Blueprint*, and Harden, *The Genetic Lottery*.

Expand full comment

I struggle with how to think about traits that are related to how people treat you. Physical attractiveness is clearly highly heritable. Does that mean that random people smiling at my oldest child when she was a baby is a heritable trait? Surely a lifetime of people being nice to you for no reason has a big effect on someone, and if you did a gene-based study you would almost certainly find 'people are nice to you' is highly heritable, but it's a clearly environmental factor. How is this controlled for, if at all?

Expand full comment

Surely it must have a large effect? The logic of Trivers' theory of genetic conflict is that we would evolve to be robust to such environmental effects.

Expand full comment

>but it's a clearly environmental factor.

An environmental factor for what?

Within a given environment, people will be smiled at by strangers at different rates. There will be a correlation between these rates and a person's genotype, hence a heritability of being smiled at.

Expand full comment

Regarding hypothesis 2: across several cohorts in different countries, having a higher polygenic risk score for schizophrenia is positively correlated with having an artistic profession and with measures of creativity. https://www.nature.com/articles/nn.4040

Free full text on ResearchGate: https://www.researchgate.net/publication/277889916_Polygenic_risk_scores_for_schizophrenia_and_bipolar_disorder_predict_creativity

Expand full comment
Feb 8·edited Feb 8

In Theory 1, a gradual decline in prevalence may have been hampered and slowed in past times by schizophrenic "scary bosses", whose symptoms of occasional sudden morose suspicion and paranoia, possibly leading to unpredictable violence, may have helped them maintain dominance through fear. And (male) chiefs in ancient times tended to monopolize women and have lots of children.

Arguably this applies more to hypothesis 2, but with not so positive creativity in the form of menacing cunning.

Expand full comment

I think no. 1 is the correct answer. The mathsy way of expressing this is "Nearly Neutral Theory", which says that just knowing the "selection coefficient", which is a number for how deleterious the mutation is, is not enough. You also need to know the effective population size, because individuals don't evolve - populations do. In species with high population sizes (e.g. bacteria) slightly deleterious mutations are eliminated more quickly. The lower the effective population size e.g. humans, the more likely that "nearly neutral" mutations are invisible to selection. If an allele isn't eliminated, the only other option is that it eventually becomes fixed i.e. the mutated version becomes the new normal, even though it was a slight downgrade.

https://en.wikipedia.org/wiki/Nearly_neutral_theory_of_molecular_evolution
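Kimura's diffusion approximation makes the population-size dependence concrete. A sketch using the standard fixation-probability formula for a new mutant, taking effective size equal to census size for simplicity:

```python
import math

def fixation_prob(s, N):
    """Probability that a new mutant (initial frequency 1/(2N)) with
    selection coefficient s eventually fixes; Kimura's diffusion
    approximation, assuming effective size equals census size."""
    if s == 0:
        return 1 / (2 * N)  # neutral case
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

s = -1e-4  # a slightly deleterious mutation
print(fixation_prob(s, 1_000))      # ~4.1e-4, close to the neutral 5e-4
print(fixation_prob(s, 1_000_000))  # effectively zero
```

In the small population the slightly deleterious allele fixes almost as often as a neutral one would — it is "invisible to selection" — while in the large population it essentially never fixes.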

Expand full comment

Now add in, at least among certain groups:

(1) birth control and

(2) other newish financial/social/economic pressures to delay childbearing into one's 30s

I speculate that if (1) is correct, these factors should quickly and massively decrease the incidence of incapacitating early-onset schizophrenia, particularly in males, who often express it by age 20 or so.

By "quickly", I mean within a few generations? So, would be an interesting and somewhat controlled experiment but also a long one.

Expand full comment

Er, sorry, by "if (1) is correct" I meant Scott's Hypothesis #1 above, the "evolution hasn’t had time to remove all of them yet" bit.

Expand full comment

Why would those things cause a massively polygenic trait to decrease? If anything, shorter generations and larger populations accelerate selection!

Expand full comment

Presumably he is saying that if one goes crazy before having kids (thus becoming less likely to procreate at all) then that increases negative selection.

Expand full comment

I think the case for 2 is pretty strong, especially with the framework of something like the Diametric model which argues for the partial integration of both schizophrenic and autistic traits along a spectrum. If this model is correct it would explain the ubiquity of both autistic and schizophrenic traits across populations.

https://doi.org/10.1038/s41380-022-01543-5

Expand full comment

Huh, so 1 million years ago at the dawn of humanity, schizophrenia rates were through the roof?

Did civilisation only progress recently because selection effects had finally pushed schizophrenia rates low enough to permit functional societies?

Reminds me of that "The Bicameral Mind" book; it'd explain a lot of their analysis of history if everyone used to be way more schizophrenic.

Expand full comment

Yeah, it struck me as a little like the Big Bang, where (somehow) the universe starts in a very unlikely state of minimum entropy. Where did all these bad schizophrenia genes come from anyway? Are they just the bad luck of the exact population present at that big population bottleneck they speak of, some tens of thousands of years ago?

Expand full comment

Presumably they're continually being added by mutations.

Expand full comment

Sure, but explanation 1 extrapolates into a past that has way more of them than we have now. Was there a period when we acquired them much faster than we selected them away (unlike, we gather, now, when we are apparently making headway, albeit slowly, at reducing them)?

Expand full comment

I was thinking the bad schizophrenia genes were only deleterious once our cognitive capacity was great enough that our species was relying on it for survival.

We hit a tipping point where the genes went from neutral (chimps don’t care about schizophrenia) to detrimental (humans struggle to survive with schizophrenia)

And natural selection has been cleaning house since then?

Expand full comment

That’s not bad, though I suspect it underestimates how much cognition chimps do. I would think hallucinations and delusions would be countersurvival even in chimps.

I found a paper “Towards a natural history of schizophrenia” that claims non-human primates never have schizophrenia (which is surely different from just not being bothered by it?).

Their claim, I think, is that it’s basically a spandrel from evolving human-level intelligence, which was such an advantage that it outweighed the downsides of occasional schizophrenia. Perhaps since then evolution had been trying to keep the former while weeding out the latter, which would presumably make the process *especially* slow.

Expand full comment

Maybe we had different schizophrenia-causing genes back then.

Expand full comment

But we started with schizophrenia being evenly distributed through the human race, so there are no low-schizophrenia groups to be found. Or might there be very small (family-sized? village-sized?) groups with high or low schizophrenia rate?

Expand full comment

You can have "schizophrenia is evenly distributed across all populations" or you can have "schizophrenia is inherently severe/very bad", but you can't have both, because the badness of schizophrenia is not evenly distributed across all populations. The classical rejoinder to "prognosis is better in some societies than others" is misdiagnosis of non-schizophrenia as schizophrenia, so if your diagnosis rates look the same...

(The other problem for "schizophrenia is evenly distributed across all populations" is that it's not evenly distributed *within* populations, e.g. within diverse countries there are noticeable racial biases. This means either source populations need to have varying rates of schizophrenia, or there need to be major environmental factors relevant to e.g. recent immigrants, or there need to be diagnostic biases in what someone having a psychotic episode gets diagnosed with. #2 is incompatible with the 'hard' geneticist explanation. #3 assumes a very heterogeneous schizophrenia, which is probably true but also problematizes 'hard' explanations. I think everyone sleeps on #3.)

(The other other problem is that as soon as you assume "consistent across time", where "time" refers to since it was defined as a concept, everything collapses under incompatible definitions. People tend to overestimate the degree to which the definition "consistently narrows over time", but they're getting that overestimate from something very, very real.)

There are isolated areas (e.g. Kuusamo in Finland) with unexpectedly high schizophrenia rates no matter how you slice it. The prevalence of schizophrenia is not actually very clear (1% is a meme overestimate), so getting much further than that is hard. Finland-in-general stats sound suspiciously high to me (they look more like 1% than anything else does). Claims of unusually low schizophrenia prevalence in any given area don't seem to replicate well.

Expand full comment
Feb 8·edited Feb 8

You can have the genetics be evenly distributed while the environmental component (of likelihood) is not evenly distributed. It seems clear that the badness will also vary with culture/population, but more in degree than in kind.

Expand full comment

I don't think this explanation is true for intelligence and height.

Selection is proportional to the additive heritability on the absolute scale, which makes your explanation true for schizophrenia: If there is a lot of liability-scale heritability, but the heritability is due to many variants of little effect and the prevalence is low, that translates to very little absolute-scale heritability, and it is true that evolution would have a hard time removing it.

But for height and intelligence, the tiny effects happen on the absolute scale, which means that if they have a strong relationship to fitness, evolution would be very quick at changing them. Like I guess it wouldn't exactly lead to eliminating the variants, but it would move you into a region with diminishing returns to them, making the evolutionary aspect less relevant.
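The liability-scale vs. absolute-scale distinction can be made concrete with the standard Dempster–Lerner threshold-trait conversion. A sketch with illustrative numbers (80% liability-scale heritability at 1% prevalence, roughly the figures usually quoted for schizophrenia):

```python
from statistics import NormalDist

nd = NormalDist()

def observed_scale_h2(h2_liability, prevalence):
    """Dempster-Lerner conversion: heritability of a threshold trait
    on the observed 0/1 scale, given liability-scale heritability."""
    t = nd.inv_cdf(1 - prevalence)  # liability threshold
    z = nd.pdf(t)                   # normal density at the threshold
    return h2_liability * z**2 / (prevalence * (1 - prevalence))

# e.g. 80% liability-scale heritability, 1% prevalence:
print(round(observed_scale_h2(0.8, 0.01), 3))  # ≈ 0.057
```

A trait can be 80% heritable on the liability scale yet under 6% heritable on the scale selection actually acts on, which is why selection against a rare threshold trait is so slow.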

Expand full comment

I'm a postdoc working in bioinformatics, with a focus on cancer & polygenic scores but with a background in (mathematical) evolutionary theory. In my experience there is a surprising amount of misinformation - or just lack of knowledge, depending on how you see it - in medicine and genomics, because many experts are so used to monogenic risks (for a long time they were all we could feasibly find) that they forget that polygenicity is most probably the norm.

Polygenicity is arguably the core of the modern synthesis that happened in the early 20th century. At that point, Mendelian inheritance had been proven, but in many traits, such as height, we could see continuous variation instead. This was explained by Fisher in 1918 by large numbers of small-effect loci - see "The Correlation between Relatives on the Supposition of Mendelian Inheritance" (he didn't use the word polygenic, but it's the same concept). So we're currently mostly just retreading ground that was covered a literal century ago. There are a lot of follow-up papers on this and closely related topics, such as mutational burden, Muller's ratchet, etc., that all show how small-effect, negative-fitness mutations - which are the most common mutations to begin with - can stay in a population for a long time and even become fixed.

Expand full comment

Agreed, and I wish people would just read an introductory textbook on population genetics or something before having these debates - this stuff was covered in my undergrad classes back in the early 2000s (with the caveat that I've now forgotten most of it). It's ancient history at this point.

Expand full comment
Feb 8·edited Feb 8

I'm not sure the problem is that the knowledge was lost.

The problem appears to be more that people are unwilling to draw the correct conclusions, so they intentionally ignore the relevant knowledge.

If you make them read a textbook, they just won't apply it to the issues they care about.

Expand full comment

"Evolution hasn’t had time to remove all of them yet. Because a gene that increases schizophrenia risk 0.001% barely changes fitness at all, it takes evolution forever to get rid of it. And by that time, maybe some new mildly-deleterious mutations have cropped up that need to be selected out."

This does not make sense as a story. It is not harder for evolution to remove 100 genes of small effect than it is for it to remove 1 gene of large effect; the response to selection is controlled ONLY by (narrow) heritability and NOT by how the effect is concentrated across genes. Once you know the heritability, you know how easily evolution can increase/decrease the trait; further knowledge of whether it's one gene of large effect or many of small effect does not tell you anything else about the response to selection!

It is, of course, possible for some unfit genes to remain in the population due to the fact that they are constantly reintroduced via mutation. I'm not saying purely-bad genes are impossible or anything. I'm just saying that "many genes of small effect" does not have any explanatory power for why the genes were not selected out.
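The claim above is just the breeder's equation: the response to selection depends only on narrow-sense heritability and the selection differential, with no term for how many loci are involved. A minimal illustration:

```python
def response_to_selection(h2, S):
    # Breeder's equation: R = h^2 * S, where S is the selection
    # differential. Nothing in it refers to the number of loci: one
    # gene of large effect and a thousand genes of small effect with
    # the same narrow-sense h^2 respond identically.
    return h2 * S

print(response_to_selection(0.5, 2.0))  # 1.0 either way
```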

Expand full comment

"Most random mutations are deleterious" is so oversimplified high-school biology. Many random mutations that we know about are deleterious, because that's how we know about them. Most random mutations are completely neutral in their effect, being either silent mutations (where there is no change to amino acid sequence of resulting proteins) or in non-coding regions. For the ones that do have an effect, we pay attention to bad things but not to improvements*, so we are more likely to be unaware of the beneficial* effects of random mutations.

*And all of this chatter about "more fit," "advantageous," "improvement," "beneficial" is dependent on the environment. What is advantageous in one environment may be an extinction-level trait in another. There is no such thing as an "evolutionary mistake"--there's just a trait that may not have been selected for yet.

Expand full comment

Which is also why talk of 'dysgenic' traits is a good tell for people who don't know as much as they think they do.

Expand full comment

A trait that kills you in the womb will never be selected for.

Expand full comment
Feb 8·edited Feb 8

As a scientist in this field, I really have to disagree strongly here. Synonymous mutations aside, the majority of mutations are in fact deleterious. This is a direct result of most of biology having relatively strict limitations, so any variation is much more likely than not to be net-negative.

The bias also goes the other way than you are positing - strongly deleterious mutations will quickly make cells non-functioning, so most of our studies are actually only about the set of mutations that aren't sufficiently bad. Then you have to consider that limited sample sizes mean we're biased in favor of mutations that have a higher prevalence (usually due to selection effects) - in other words, we tend to oversample the high fitness mutations on multiple levels.

And unfortunately the same goes for the last point. I'd really wish that the world functioned like an RPG where we all get allocated the same number of stat points and nobody is really worse off - just some people have unusual combinations that are adapted to unusual environments. But the reality is the opposite: the majority of mutations simply make some function perform less efficiently, with no positive upside.

Expand full comment

Of course strongly deleterious mutations will quickly make cells non-functioning--or, like @TGGP notes, a trait that kills in utero will not be selected for--but that's not what Scott is talking about here. He's making the claim that random mutations with subtle effects (so subtle they, at minimum, still allow one to be born and survive to reproductive age) are still going to be net deleterious, and I don't think there's any reason to believe that is the case.

Our current biology is not the pinnacle endpoint of a great chain of being. All our current enzymatic processes are not perfect. Even if a mutation makes an enzyme perform slightly less efficiently (but still not be lethal), there will continue to be selection on the descendants of that mutant to increase efficiency.* Usually, this is not a straight reversal of the first mutation, but compensatory mutations that can lead to an even more efficient process. (*And this is assuming that "more efficient" = "superior," which it may not be. For instance, the lack of fidelity in copying genes for antibody production is what makes antibody production work; people who cannot randomly mutate the DNA in their B- and T-cells are much more at risk of dying from infections than people who have truer DNA replication.)

Disclosure and biases: my background is in studying *bacterial* mutations for drug resistance and phage resistance. That drug-resistance mutations can lead to a more fit strain of bacteria - because the resulting mutant enzyme is more efficient even in the absence of antibiotics - is a well-documented outcome. It's much easier to study mutations and their potential fitness costs in single-celled critters that can have 30 generations by tomorrow than it is to study the same thing in complicated, multicellular humans who will have 30 generations in, ah, like about 700 years from now. But that's also why I think we should be more humble in our claims about what is "advantageous," "deleterious," or an "evolutionary mistake" in human alleles with subtle effects.

This isn't the same thing as saying "no one is really worse off;" some people are born with excellent combinations of traits for their environments, and others aren't. Some sets of traits are so bad that they're just fatal, as we've noted. The world isn't fair in that way. But that's not what's being talked about here. The examples cited in Scott's essay here of "advantageous" traits are clearly tied to environment: Tallness is good because, um, chicks dig it and it makes you a better hunter? Well, idk, dudes dig shorter chicks and also being short makes it easier to hide from predators and potential prey. Even in the hackneyed realm of our-hunter/gatherer-past evolutionary psych "tall is advantageous" has problems. Other examples, like "creativity" as advantageous are even more culturally-bound.

Expand full comment

What's your field? Oncology?

Because most germline mutations that make it into the population are more or less neutral. And it's the population level we're interested in here.

Expand full comment

I think this depends somewhat on how exactly you define deleterious and most.

Is it true that the majority of non-standard sequences in fact experience negative selective pressure? No doubt. And yes breaking things is easier.

At the same time, it's also true that in very long run populations benefit from a degree of genetic variation which is ultimately the result of said random mutations. Evolution could have selected for substantially better genetic error correction for germ line cells than it has but you need to balance the costs of mutations that fuck you up with the benefits of a wider genetic pool that has the capability to discover beneficial mutations to allow adaptation.

Long term, the expected value of any mutation is probably around zero (harms balanced by benefits to diversity).

It seems at least plausible that most mutations could, in the right but unlikely circumstances, contribute to positive traits. In other words, there is probably some useful protein that the mutation decreases the edit distance to ... so in a sense most mutations are near zero in effect, because the tiny chance they save some future descendant by enabling further mutations balances the more immediate harms.

Expand full comment

> this study of ancient hominid genomes finds that schizophrenia genes are getting less common over time

Wild speculation: this helps explain the explosion of ancient religions as opposed to the relative dearth of new gods. Far more people heard voices in ancient times, attributed them to the divine, and boom.

Expand full comment

Yeah I had the same thought. I remember a talk by Robert Sapolsky where he says religion is due to schizophrenia-adjacent people. People with high polygenic schizophrenia scores but who aren’t quite schizophrenic themselves. That’s where the miracles are from he says.

Expand full comment

Yes, true. “Sybil” is a perfect example of this: she was abused by her mother, who was a Seventh-day Adventist. As I wrote in my earlier comment (if it was read), there is the chromosome study if you dig. But growing up with a schizophrenic in my family, whose children do have MPD/DID, I personally think it is environment. Bundy, Milligan, Dahmer, and so on were not only by-products of a horrific upbringing; after “Sybil,” there were also actual studies done on the physical brain.

I do have to point out, though, that in the ’50s, when this happened to “Sybil” (16 personalities), MPD was a rare diagnosis, with only approximately 200 cases; it then spread widely in the psych community. After her book and film came out in the ’70s, it spiked to thousands of diagnoses across the US. By the late ’80s, the diagnosis had risen to 40 thousand cases. My question: is it genetics, environment, or both?

Personally, we have more mental illness than ever before. No one brings up aluminum from the sky (chemtrails), from the ’70s to the present, or the Codex Alimentarius, the food codes, which are now owned by Monsanto;

phones that jail our minds with frequency and social media (hence “cell” phones, which, by the way, can be jailbroken); plus all the vaccines you all were given as children and young adults (not me, lol). Put all of this together with genetics... again, my thought process leads me to believe that it’s environment. We are not born with this. Our brain mutates from trauma. Period. Food for large-brain thinkers’ thoughts...🤣😊💯

Expand full comment
Feb 9·edited Feb 9

Ancient religion is already perfectly explainable by attempts to answer the (then) unanswerable, with pattern matching and confirmation bias doing the rest. There's no mystery here. Humans aren't different today in any meaningful way, the difference is just that we go to a hospital rather than a temple to pray for safe childbirth, so the domain of religion has shrunk to a pinpoint.

https://acoup.blog/2019/10/25/collections-practical-polytheism-part-i-knowledge/

Expand full comment

I believe more explanation is needed than that we used to have a God-of-the-Gaps (or gods) and we've since been filling in those gaps with science. As Nietzsche complained, "1900 years and not a single new God!", despite the fact that there were still many unfilled lacunae in Nietzsche's time.

Expand full comment

*have* there been no new gods? I mean, apart from the obvious pedantic objection that Islam and Mormonism are less than 2000 years old and that Japanese Shintoism added gods on a regular basis (heck, there's even a big shrine to Emperor Meiji!), you have stuff like homeopathy and crystal healing that fill a similar purpose nowadays.

Expand full comment

I recall in an earlier post someone did bring up schizophrenia genes of larger effect. In the face of selection that's possible with rare de novo mutations.

Expand full comment
Feb 8·edited Feb 8

There are a slew of copy-number variants (deletions or duplications of a region of a chromosome) that increase the chance of schizophrenia. Compared to other neurodevelopmental disorders (including autism!), CNVs that affect SZ are rare and tend to have fairly small effects. There's one big exception (velocardiofacial syndrome), but VCFS is extremely weird and I'm not convinced it's wholly identical to schizophrenia in the general population. (It's certainly not nearly as clinically homogeneous as you'd intuit from its genetic homogeneity.) CNVs that affect SZ only survive a few generations at most, but that's because they're significantly disabling in other ways, not (primarily) because of their almost-always-small increase of schizophrenia risk.

Expand full comment

Hmmm. Genetics doesn't always work in the ways we expect.

By analogy: when we were scampering around on all fours, a few of us had various genes for smaller nimbler front legs and feet. Some were able to walk around a bit on just our back legs. Only for short bursts. And at some serious costs in terms of backache, risks of falling over, loss of speed, etc.

And yet... this apparently rubbish set of aberrations came together and paved the way for hands ... thumbs... and a species that is able to dominate the planet.

So I suspect that individual genes that facilitate schizophrenia are often positive in ones and twos, but cause a problem when they all crop up in one individual.

Expand full comment
Feb 8·edited Feb 8

Bipedal motion has nothing to do with hands and thumbs. For one thing, the hands and thumbs clearly came first - all primates have them. Many monkeys also have thumbs on their feet.

Expand full comment
Feb 8·edited Feb 8

> So many of the traits we’re most interested in - intelligence, strength, schizophrenia, etc - are 𝘯𝘦𝘤𝘦𝘴𝘴𝘢𝘳𝘪𝘭𝘺 massively polygenic, because one side of them is better for fitness than the other. If they were monogenic, evolution would have already selected for the good side, and there would be no remaining genetic variance.

This is wrong. You can easily make the case for schizophrenia, but for strength and intelligence we want to draw the 𝗼𝗽𝗽𝗼𝘀𝗶𝘁𝗲 conclusion - it is definitely not the case that one side has historically been better for fitness than the other, because the amount of variation within the population is great enough that selective pressure would have had no difficulty producing an obvious response.

(Compare schizophrenia, where variation within the population is extremely low, which is what you'd expect from selective pressure against it.)

If you're having trouble with the idea that more strength could be worse, Steven Pinker put the point pretty well (though he was talking about intelligence) when he said, in my paraphrase from memory, "People present the evolution of humanity as being driven by selection for larger brain size, with increased intelligence as a happy side effect. But this is absurd. Metabolically, the brain is a pig. Any selection on brain size alone would surely have favored the pinhead."

Expand full comment

4.) Schizophrenia is mostly a spandrel. I have no papers or examples, just another option.

Expand full comment

It's a shame to cite Liu et al. 2019 without synthesizing one of the more interesting hypotheses in that paper. Granted, the paper itself is dismissive of it. It says:

“Although it has been reported that fertility among relatives of patients with schizophrenia is increased, a large cohort study and meta-analysis identified that this increase was too small to counterbalance the reduced fitness of affected patients (Bundy et al., 2011; Power et al., 2013). In fact, MacCabe et al. (2009) showed that patients with schizophrenia had fewer grandchildren than in the general population, demonstrating that the reduced reproductivity persists into subsequent generations.”

So the authors of this paper note that non-descendant relatives of schizophrenia patients have increased fertility but don’t find this fact interesting enough to incorporate into their overall interpretation.

Yet there is a way of interpreting that information that strikes me as painting a considerably more compelling picture: the decline in schizophrenia-dispositive SNPs does not presage the eradication of schizophrenia, but is part of an equilibrium-seeking adjustment to the group-level predominance of such SNPs.

This pattern is observed for left-handedness. It is advantageous to be right-handed in a right-handed world: it aids cooperation; tools are made for you. But in conjunction with other traits, it is also advantageous to be left-handed, specifically because left-handedness is rare. The word “sinister” comes from a root meaning “left” for this reason: lefties were looked down upon, but also hated, because their left-handedness gave them particular advantages—to wit, the element of surprise in combat—as a result of which left-handedness was never eradicated despite its obvious cost.

Could schizophrenia-dispositive SNPs not have a similar role? I’ll put my cards on the table and say I believe many of them do—specifically, that they contribute to shamanic and prophetic tendencies which have a positive kin-selective effect. I'd be remiss not to cite my friend Drew Schorno as helping me form this view: https://arcove.substack.com/p/null-call
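The left-handedness dynamic described above is negative frequency-dependent selection, and a toy model shows why it stabilizes a costly variant instead of eradicating it. In this sketch the fitness function and all numbers are invented for illustration:

```python
# Negative frequency-dependent selection, as in the left-handedness
# example: a variant pays a fixed fitness cost but gains an advantage
# that shrinks as it becomes common, so it settles at an interior
# equilibrium instead of being driven to zero. Numbers are invented.

def next_frequency(p, rarity_bonus=0.2, baseline_cost=0.1):
    """One generation of selection on the variant's frequency p."""
    w_variant = 1.0 - baseline_cost + rarity_bonus * (1.0 - p)
    w_common = 1.0
    mean_w = p * w_variant + (1.0 - p) * w_common
    return p * w_variant / mean_w

p = 0.05  # start rare
for _ in range(2_000):
    p = next_frequency(p)
print(f"equilibrium variant frequency: {p:.2f}")  # stabilizes near 0.50
```

With these numbers the variant's fitness exceeds 1 exactly when it is rarer than 50%, so the population converges to that interior equilibrium from either side.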

The credit also goes to you, Scott, for helping me understand how schizophrenia-dispositive traits can be marginally beneficial. This was in the context of predictive processing: https://slatestarcodex.com/2016/09/12/its-bayes-all-the-way-up/

Switching to a metaphor I find easier to work with: schizophrenia-dispositive SNPs, or at least some subset of them, have their effect because they raise the effective temperature of cognition: long-range connections, seemingly meaningless “accidents”, fail to be ruled out immediately as they are in a neurotypical mind. The extra “heat” means that people with a lot of schizophrenia-dispositive SNPs think a lot more wild, nonsense thoughts. This is usually not good for the individual but may be very valuable for their community, because every once in a while their wild thinking leads them to notice something very important like the hidden ill intentions of a neighboring tribe or signs of an impending drought.

Schizophrenia itself is a failure mode: the cognitive temperature is so high that the community can’t keep the lines of communication open (although—note that pre-industrial societies were a lot less likely to medicalize and ostracize people with schizophrenia-like symptoms). But schizophrenia-adjacent “disorders”, “personalities”, whatever, do appear to serve a social purpose.
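For what it's worth, this "temperature" metaphor maps directly onto temperature in softmax sampling: dividing scores by a temperature before exponentiating flattens or sharpens the distribution over candidate thoughts. A toy sketch (the scores and thresholds are invented; this illustrates the metaphor, not an actual cognitive model):

```python
# Softmax sampling at two temperatures. Low T almost always picks the
# obvious option; high T frequently samples long-shot, "wild" options.
# Scores and temperatures are purely illustrative.
import math
import random

def softmax_sample(scores, temperature, rng):
    """Sample an index with probability proportional to exp(score / T)."""
    weights = [math.exp(s / temperature) for s in scores]
    r = rng.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(scores) - 1

# Scores for candidate "thoughts": one obvious, several long shots.
scores = [5.0, 1.0, 1.0, 1.0, 1.0]
rng = random.Random(0)
for T in (0.5, 5.0):
    draws = [softmax_sample(scores, T, rng) for _ in range(10_000)]
    wild = sum(d != 0 for d in draws) / len(draws)
    print(f"T={T}: fraction of non-obvious thoughts = {wild:.2f}")
```

At T=0.5 the non-obvious options are sampled well under 1% of the time; at T=5.0 they dominate, which is the "high cognitive temperature" regime the comment describes.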

This is consistent with the results that Liu et al. cite, that relatives of schizophrenia patients have some level of increased fertility while the patients themselves do not. At the same time, the net effect on the kin group is negative, which is consistent with the view that the prevalence of schizophrenia-dispositive SNPs will continue to trend downwards. I suggest that the end-point of this will not be eradication of those SNPs but a new equilibrium.

Expand full comment

"This seems less like the sort of thing that happens naturally, and more like the sort of thing you would claim if you wanted to make your theory untestable."

Can you point to where Torrey explicitly writes this in his article?

Expand full comment

Wouldn’t possibility #3 show up on genetic correlations between schizophrenia and other health states? From what I’ve seen reported in research studies, most genetic correlations of schizophrenia are with other diseases, including other forms of psychopathology, but also things like cardiovascular disorders and immune disorders. So #3 seems unlikely. Some studies have found positive correlations with educational attainment, but recent studies have shown that this is explained by the genetic overlap between bipolar and SZ, and that unique SZ genes had a negative effect on educational attainment.

Expand full comment

A (very likely?) possibility that can have an outsized effect on our evolutionary understanding is that schizophrenia is like weak ankles or some such -- it's a latent thing which isn't usually a problem until it gets nurtured with a sledgehammer or similar. In the past the other genetic effects would have been excellent for fitness except in rare cases, but in a world where weed (or, I dunno, television) is freely available it becomes vastly less adaptive.

Expand full comment

First and foremost, I would like to thank KW for this article. KW has a way of opening our minds to conversations and greater positive awareness on different topics.

As this article presents the genetic side of this matter, what about the randomness of the schizophrenic people we know? E.g., one of my non-biological grandmother's sisters, due to the insidious behavioral and thought patterns that altered her, was clinically diagnosed and institutionalized (yes, we still have a medical facility for the diagnosed disease, “Greystone”). But my mother, who is not related, stated that my aunt and uncles are also diagnosed with the same irrational disease. Interesting, right?

What I know as schizophrenia carries a stigma: this is DID and MPD, dissociative identity disorder and multiple personality disorder. This is how we speak of schizophrenia. In all of my studies I have realized it has the possibility of genetic traits (genetics now saying it has to do with chromosomes 6 and 22, unlike in the ’50s).

My question to this is: how long has this truly been genetically studied? The environment in which we were reared has evolved us, or mutated us, into something we are not.

One great example of this is “Sybil”. She grew up in the ’50s and was the product of a very abusive childhood, in part due to her religious upbringing (Seventh-day Adventist). When her story came to light, as she had 16 MPD personalities, it grew into a widespread fascination among the medical community; psychiatrists and psychologists lobbied to have MPD, or schizophrenia, put in their “bible,” the DSM (Diagnostic and Statistical Manual), and this rare diagnosis became a common one. “There were less than 200 cases in Western civilization” in or around the ’50s. “But after the book and film in the 1970s, her case sparked hundreds of thousands of diagnoses, and by the late ’80s there were 40 thousand diagnosed cases in the US alone.”

Does this make a case for genetics and environment? I think so.

For example, in 1986 (a date that coincides with the diagnostic psych manual entry for MPD), Billy Milligan had over 24 different personalities. He was diagnosed young and was said to be a psychopath, a chronic liar, and a murderer. He was seen by several psychiatrists and put on Thorazine, a common drug for MPD. He stated he was becoming worse due to the publicity, society, and medication.

This subject has always fascinated me. I see it in everyday situations: “narcissistic/pathological liars,” people who make up lies and believe their own lies and grandiosity; this has become part of the brain mutating from all of the trauma. Right? A slightly altered personality is considered either schizophrenic or MPD.

OK, those are two examples of cognitive behavior. This leads, if not monitored, to schizophrenia/MPD/DID. These different behaviors change the brain; people actually take their perceived beliefs and morph them into their own monsters or demons, your preference.

Ted Bundy, Charles Manson: even though these guys are murderers, they heightened their own perception/personality (MPD) to believe what they did was OK, and fought for it. True or not? I do not believe we inherit a gene for their delusional beliefs/thoughts through our society.

Thank you again, KW, great food for thought.

Expand full comment

Hey, nice series. I ran your article through Tutor GPT and wrote about it on my Stack! It's complicated stuff; I appreciate the insight. We are working on making diagnostics for polygenic diseases, so this is super fun to think about. As an aside, have people run a subset of these through a Markov model to simplify the gene-only interactions? Then you look at the other end of it, look at a PCA, and see which of these genes are migrating over time, to see what we're selecting for. If it's really 0.1% one way or another every generation, over 50 generations we'd just see the probability cloud jiggle for a thousand years.

Expand full comment

"Because a gene that increases schizophrenia risk 0.001% barely changes fitness at all, it takes evolution forever to get rid of it. And by that time, maybe some new mildly-deleterious mutations have cropped up that need to be selected out."

I don't understand the second part of this. Evolutionary pressures don't operate on one mutation at a time, do they?

Expand full comment

Acquiring mildly-deleterious mutations happens at a certain rate. Selecting out existing schizophrenia mutations happens at another rate. It’s like a bucket with a hole in it. If you’re filling the bucket with a hose that delivers a quart per minute, but you make a hole that drains a gallon a minute, the bucket will be empty very soon and will stay empty despite the hose. But if the hole drains only a quart per minute, it will *never* empty.
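The bucket-and-hose picture is the classic mutation-selection balance: alleles flow in at mutation rate u and are drained by selection coefficient s, settling at an equilibrium frequency near u/s. A minimal sketch with illustrative (not empirical) numbers:

```python
# Mutation-selection balance: deleterious alleles enter at rate u per
# generation ("the hose") and are removed by selection s ("the hole").
# Numbers are illustrative, not empirical estimates.

def allele_frequency_over_time(u, s, q0=0.0, generations=10_000):
    """Track frequency q of a deleterious allele under recurrent
    mutation (rate u) and selection against carriers (coefficient s)."""
    q = q0
    for _ in range(generations):
        q = q + u * (1 - q) - s * q * (1 - q)  # inflow minus outflow
    return q

# Strong selection drains the bucket to a low equilibrium (near u/s);
# weak selection lets the allele persist at a much higher frequency.
strong = allele_frequency_over_time(u=1e-5, s=0.1)
weak = allele_frequency_over_time(u=1e-5, s=0.001)
print(f"equilibrium with strong selection: {strong:.2e}")  # near u/s = 1e-4
print(f"equilibrium with weak selection:   {weak:.2e}")    # near u/s = 1e-2
```

Setting inflow equal to outflow gives u = s·q, i.e. the equilibrium q = u/s, which is why weakly selected alleles (tiny s) can sit at frequencies a hundred times higher than strongly selected ones at the same mutation rate.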

Expand full comment

>The scare-mongering here has to be false - that is, it can’t be bad to choose an embryo at the 50th percentile of schizophrenia risk rather than the 99.9th, because half of people are at the 50th percentile of schizophrenia risk and nothing bad happens to them. Schizophrenia genes can be at best fitness-neutral;

I think that you're conflating two different notions of 'bad' here.

It might be desirable to select an embryo with traits that are not maximally-reproductively-fit in the evolutionary environment, but are useful in modern humans.

And given that we're talking about variance in mental traits here, that even seem like a pretty likely scenario to me.

Expand full comment

I'm rather sure it's a combination of 1 and 3. Assuming that mutations occur almost randomly, the ones that have an advantage as well as a disadvantage will tend to be selected against (or for) more slowly. And there are a lot more ways to break something than to improve it, if it's any good at all.

I.e., the average mutation that is detrimental will be selected against more strongly if it doesn't have any advantages. And there's no reason to believe that the advantages will be related to the disadvantages, except through a VERY tortuous chain of interactions. (And, of course, almost all mutations are either neutral or detrimental. Neutral drift theory enters here importantly, but whether the "neutral mutation" is absolutely neutral is very difficult to demonstrate. E.g. some yield the exact same proteins, but make the RNA slightly less stable. Just try to prove whether that's advantageous or not.)

Additionally, some mutations are advantageous (or neutral) in some environments, and not in others. (E.g. being tall and thin is disadvantageous in really cold climates, but advantageous when the temperatures are higher. [But not too much higher. I think the advantage disappears when the average highs are over about 95F, though that's a wild guess.])

Expand full comment

It is not at all surprising that complex traits are polygenic, for a different reason than the evolutionary one described here. We have only 20,000 genes; roughly speaking, 20,000 "ingredients". How do you get from this the staggering complexity of form and function we observe? Through combinations of genes acting together. You can wire together 3, for example, to form a time-keeping oscillator (something I had my class act out a few days ago). Just as it is silly to ask which one is the oscillator gene, it is silly to expect one, or even a few "schizophrenia genes." Even height has ~1000 determinants. The idea of one gene = one trait is incorrect and sadly widespread.
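The three-gene time-keeping oscillator mentioned here is the classic "repressilator" motif (each gene represses the next in a ring, A ⊣ B ⊣ C ⊣ A). A minimal protein-only sketch; the parameters are invented and the steep Hill coefficient is a modeling convenience chosen so this reduced model oscillates:

```python
# Three-gene ring oscillator (repressilator motif): each protein's
# production is repressed by the previous gene in the ring, and each
# protein decays linearly. Parameters are illustrative.

def repressilator(steps=60_000, dt=0.01, alpha=50.0, n=4.0, decay=1.0):
    """Euler-integrate protein levels a, b, c and return the trace of a."""
    a, b, c = 1.0, 0.5, 0.25  # asymmetric start so the oscillation kicks off
    trace = []
    for _ in range(steps):
        da = alpha / (1.0 + c**n) - decay * a
        db = alpha / (1.0 + a**n) - decay * b
        dc = alpha / (1.0 + b**n) - decay * c
        a, b, c = a + da * dt, b + db * dt, c + dc * dt
        trace.append(a)
    return trace

trace = repressilator()
late = trace[len(trace) // 2:]  # discard the transient
print(f"min={min(late):.1f} max={max(late):.1f}")  # sustained swings, not a flat line
```

No single gene here "is" the oscillator: remove any one repression link and the dynamics settle to a fixed point, which is the point the comment makes about asking for "the schizophrenia gene."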

Expand full comment

Low confidence speculation here, but I have often gotten the impression that schizophrenia is sort of the opposite of autism. I wonder if the low schizophrenia risk people have high autism risk and vice versa, with "healthy normal" being a balance between the two.

Expand full comment

A significantly larger proportion of people diagnosed with autism are also diagnosed with schizophrenia than the general population, and the rare disorders that significantly increase SZ risk basically always significantly increase autism risk too (less clear the other way around, in large part because syndromic autism is damn near a synonym for diagnostic substitution). The rejoinder by the people very invested in the diametric model/imprinted brain (the 'opposites' hypothesis) is that this is a misdiagnosis based on the extreme similarities of autism and schizotypy, and that people with these disorders don't have 'distinctive' autistic traits, but 1. once you're conceding "extreme similarities" you've already lost most of the case and 2. no, they do have autistic traits that aren't also schizotypal traits, e.g. stimming. (I'm in the final stages of pre-submission editing for a very large literature review on one of the disorders Crespi uses as a case model for misdiagnosis, and came away *even more confident* than before that it's an accurate diagnosis of real comorbidity.)

Expand full comment

Very interesting. I have little confidence in how accurately these diagnoses get made but the fact that the two are correlated rather than anti-correlated certainly pushes back against the "opposites" concept.

Expand full comment

The problem of polygenicity is not exclusive to schizophrenia. Indeed most intractably difficult modern diseases are thought to be polygenic, and for the same reasons. I studied asthma in grad school, where half my lab was arguably genetics driven.

Rates of asthma have been dramatically increasing for decades (including severe asthma requiring hospitalization, so you know it's not just over diagnosis). If that's the case, shouldn't we be focusing on some environmental factor?

Enter "Danger Theory" and the Hygiene Hypothesis. The immune profile of allergies, asthma, some kinds of IBD, and other auto-inflammatory and auto-immune diseases often looks like an inappropriate activation of the immune response to a parasitic infection (an inherently polygenic process). What's that all about?

The thinking goes that humans throughout history have been exposed to constant parasite challenge, but that this has gone down dramatically in the modern era. Different people have a different 'threshold' for immune activation of a parasite, with some people allowing more infestations and others activating against the slightest threat. Why doesn't everyone activate at high levels? Well, activating the immune system is inherently destructive, so activating inappropriately can have negative side effects. Those effects are worth it when you get the activation right, but harmful when you're wrong.

Until two hundred years ago, having an immune system on high alert to parasites would have been evolutionarily advantageous (most of the time). High threshold people would be sick a lot more, since they'd allow more parasites in, while low threshold people would be out working. In today's society the dynamic is exactly the opposite, since most parasite activations are going to be wrong and therefore harmful.

If baseline immune activation is more of a rheostat than an on-off switch, it makes much more sense for this to be controlled at a population level by variable or polygenic mechanisms, than by a single mutation that 'causes' some specific baseline level of immune alertness to parasites.
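The rheostat point can be made concrete with a toy cost model: choose the activation threshold that minimizes expected harm, and watch the optimum flip when parasite prevalence drops. All costs and rates below are invented for illustration:

```python
# Toy model of the immune "rheostat": an activation threshold trades
# the cost of missed parasites against the cost of false-alarm
# inflammation. The optimal threshold shifts with parasite prevalence.
# All numbers are invented for illustration.

def expected_cost(threshold, parasite_rate,
                  miss_cost=10.0, false_alarm_cost=1.0):
    """Treat threshold in [0, 1] as P(ignore a real challenge):
    higher threshold -> more missed parasites, fewer false alarms."""
    misses = parasite_rate * threshold * miss_cost
    false_alarms = (1 - parasite_rate) * (1 - threshold) * false_alarm_cost
    return misses + false_alarms

thresholds = [k / 100 for k in range(101)]
ancestral = min(thresholds, key=lambda t: expected_cost(t, parasite_rate=0.5))
modern = min(thresholds, key=lambda t: expected_cost(t, parasite_rate=0.01))
print(f"best threshold, high parasite load: {ancestral:.2f}")
print(f"best threshold, low parasite load:  {modern:.2f}")
```

Under heavy parasite load the cheapest strategy is a hair-trigger (threshold near 0); once parasites become rare, the same cost structure favors a high threshold, i.e. what was adaptive yesterday is maladaptive today.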

Multiple studies have demonstrated genetic components to many of these immune-related diseases, but there's reason to suspect that the changing human environment/culture/society has shifted the definition of what is adaptive versus maladaptive. What was selected for yesterday may be selected against tomorrow, and that is true across millions of years of evolution. In other words, environment determines whether a gene confers fitness or not.

What is natural selection, if not long-run genetic responses to environmental stimuli? Without environmental inputs, evolution has nothing to direct it. Therefore we can expect that dramatic changes to an organism's environment will result in some adaptations becoming deleterious - especially at a population level.

What holds true for immunology almost certainly holds true for psychiatry as well. Human interactions have shifted dramatically over the past few hundred years, to the point where the human brain is operating in an environment significantly different from the training data.

(Alternately, maybe this is all just cope to avoid having to explain that we don't understand the underlying mechanisms? Even so, I think the polygenic case still holds for most human diseases I've looked at.)

Expand full comment

I think 1% of people are at 50th percentile risk, not 50%.

Your previous argument showed that dropping the extremal 1% barely affected the next generation, but collapsing the distribution probably will have significant long-term effects (hopefully positive!).

Expand full comment

Well he probably meant 50th percentile risk, *or less*. Given the way percentiles are defined the 'or less' part can seem redundant even though it's really not.

Expand full comment

Scott is right and Torrey is wrong in regard to these two objections. Evolution can fail to remove bad effects even if they are monogenic, or low polygenic, if they also have some fitness advantage. Two examples:

- Sickle cell anemia is monogenic but persisted due to its advantage regarding malaria.

- Hyperlipidemia with elevated ApoB drives cardiovascular disease. It is polygenic, but not greatly. It persisted because it was adaptive for energy use during periods of food scarcity, such as the Ice Age.

Intuitive arguments about fitness often go wrong because we are naive about the many varied things that contribute to fitness.

Expand full comment

Isn't there a negative correlation between height and longevity? That is, shorter people live longer on average and have more time to contribute to the well-being of their offspring. That alone would exert powerful selective pressure towards moderation in height across generations.

Expand full comment

"The scare-mongering here has to be false - that is, it can’t be bad to choose an embryo at the 50th percentile of schizophrenia risk rather than the 99.9th, because half of people are at the 50th percentile of schizophrenia risk and nothing bad happens to them."

This is normatively true, not best-of-all-possible-worlds true. Neurotypes have the prevalences they do because, approximately, that was about as common as worked best in ancestral environments. A trait being the midpoint or most common doesn't mean it's the best; it would be unreasonable to say people at the 50th percentile of intelligence probably don't have any problems that aren't also faced by people at the 99.9th percentile of intelligence. Humans are "about as smart as made for the best tradeoffs", which is less smart than ideal.

I don't think the neurotype carved out as NT is the best of all possible worlds, or that it should be "the highest-prevalence by a juggernaut huge margin". Of course, it has various benefits (e.g. acting on plans rather than theorizing about them forever, big-picture thought, caregiving, etc). A world with no primary care physicians would be a disaster. A world with no MBAs...uh...okay, we'll circle back to that one. I don't want a world where all the smart people are primary care physicians or MBAs, though.

Expand full comment

Here's an idea for a possible fourth explanation, but with the caveat that I'm not a biologist and I'm open to arguments if this is implausible. I've thought about this every time someone brings up the "but homosexuality must have some evolutionary advantage or it wouldn't exist, something something gay uncles" line.

Suppose you have the job of manufacturing parts to a certain tolerance. The more exact your production line, the more expensive the process - basically, once you've calibrated the machines to get the average part about right, the smaller you want the SD of your output parts' measurements to be, the more it'll cost you. You can trade off between two ends of a scale. At one extreme, make your process good enough that every single part that rolls off the line is within tolerance (with the most expensive machine ever). At the other extreme, use the cheapest machine you can bodge together, where only parts within some range, say half an SD from the mean, are within tolerance; measure each part that comes out and trash (or recycle, if you can) those that aren't. Either way, every part that makes it to the customer meets the spec. Of course, the optimal point is a balance between the cost of the materials (including how easy they are to melt down and start over) and the cost of getting the assembly line to a certain precision; if you're cutting natural diamonds you're at the opposite end of the spectrum from 3D printing plastic.

Maybe evolution is doing the same? If the optimal height of humans in some environment is 160cm, and an evolved mechanism for making everyone exactly that high even if some grow up better nourished than others would have more of a fitness or energy cost than making everyone grow "close enough", I'd expect evolution to take the latter path.

The old "group selection" post (Studies on Slack) mentioned that on a planet that gets hit by a solar flare once an eon, species will not evolve a radiation shield if that costs slightly more in energy for each individual during all the years when there's no need for it. Similarly, if schizophrenia / homosexuality / autism / pick your favourite condition is a net negative from a reproductive fitness perspective, even if a biological mechanism is available to filter it out and evolution would have the time to do so, I'd expect the filter not to exist if the cost it imposes on the individuals (such as extra energy expenditure) is in some sense higher than the reproductive fitness cost of not evolving the filter.
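The manufacturing trade-off described above can be sketched numerically: let process cost rise with precision, scrap out-of-tolerance parts, and scan for the cheapest setting. Neither the ultra-precise nor the ultra-sloppy extreme wins. The cost model and all numbers are invented for illustration:

```python
# Toy tolerance trade-off: a tighter process (smaller sd) costs more
# to run, but scraps fewer out-of-tolerance parts. All costs invented.
import math

def in_tolerance_fraction(sd, tol=1.0):
    """P(|error| < tol) for a normal process centered on spec."""
    return math.erf(tol / (sd * math.sqrt(2.0)))

def cost_per_shipped_part(sd, material_cost=1.0):
    """Assumed process cost ~ 1/sd (precision is expensive); every part
    consumes material, but only in-tolerance parts ship."""
    return (1.0 / sd + material_cost) / in_tolerance_fraction(sd)

candidates = [0.1 * k for k in range(1, 31)]  # sd from 0.1 to 3.0
best = min(candidates, key=cost_per_shipped_part)
print(f"cheapest process sd is roughly {best:.1f}")  # an interior optimum
```

With these made-up costs the optimum lands strictly between the extremes: the diamond-cutter end (sd 0.1) overpays for precision, the 3D-printer end (sd 3.0) throws away too many parts.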

Evolution has, after all, managed to create frogs and fish that have thousands of children, of which roughly 999 in every thousand get eaten or otherwise die before sexual maturity; that's the "cheap machine" end of the scale (a spectrum I believe is called r/K selection in biology textbooks).

Is it plausible that schizophrenia exists because the evolutionary cost of it not existing would be higher than otherwise? Without group selection? Or have I missed something obvious here?

Expand full comment

I understand your point that it makes more sense for both manufacturers and for nature to tolerate some imperfect products if the cost of making perfect ones is too high. But in the case of something that reduces reproductive fitness, it seems like natural selection would weed out the genes for the imperfection. There isn't some process analogous to what a manufacturer might go through, of deciding whether it's worth the effort of retooling so that products with a particular defect never make it to market. Natural selection seems to involve the "factory" retooling itself. Schizophrenics tend to have about half as many offspring as non-schizophrenics, and currently have a life expectancy about 15 years shorter than that of people without the illness. Given those stats, it seems like we need some explanation for why the genes for this illness have not been eliminated from the population via natural selection, and I don't see how there could be some analogue of the manufacturer's decision process. But maybe I am missing something about your idea?

Expand full comment

I think this is called the Cliff-Edge model. I don't know much more than that.

Expand full comment

The Cliff-Edge model has to do with obstetric selection in humans. Basically neo-natal size is highly variable and larger is better up to a critical size where delivery becomes impossible. The upshot of all this is that humans have a very high risk of obstructed labor relative to other species.

Perhaps there is something similar going on with maximizing intelligence, high variance, and overshooting the sweet spot causing schizophrenia and/or other mental disorders.

Expand full comment

The idea of "schizophrenia genes" is counterintuitive and not particularly useful. And, since schizophrenia is a mental dysfunction, it is natural to look for an association with other brain-mediated attributes such as intelligence or creativity.

But perhaps that is like looking for your lost key under the street light when you actually dropped it across the street.

There is a lot of data that shows an association between schizophrenia and every aspect of the immune system. A highly dysregulated immune response is a frequent comorbidity in schizophrenics and a promising area of research.

Further understanding of the interplay of the different arms of the immune system, the immune cell types implicated and their origin, the role of immune responses to common viruses, the gut-brain immune axis, and aspects of inflammation and autoimmunity is bound to shed a great deal of light on the causes of schizophrenia.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9082498/#:~:text=It%20has%20been%20shown%20that%20schizophrenia%20is,from%20innate%20to%20adaptive%20immunity%20and%20from.

Expand full comment

> The clearest way to resolve these questions would be to genetically engineer someone to zero schizophrenia risk and see what happened (this is beyond current technology)

Just checking, but this is also beyond current ethical boundaries, right? Right???

Expand full comment

Why? Suppose next year Purdue Pharma comes out with a drug that animal studies suggest will, if given to pregnant women, virtually eliminate the chance their child is schizophrenic (animals and non-pregnant people show no detectable harms). OK, we aren't 100% sure it won't, surprisingly, turn out to reduce some beneficial function, but schizophrenia really sucks, so if we think it's more likely to net help than hurt it seems not only ethical to try the medicine but actually unethical not to do clinical trials.

Nothing is different because it's genes rather than chemicals. In both cases you are influencing someone for the whole of their life. It's just a matter of how confident we are that this kind of selection doesn't have unexpected harms.

Expand full comment

That's involuntary experimentation on children.

Expand full comment

Studies show that the polygenic risk score for schizophrenia is not associated with creativity, indicating limited contribution of the accumulated effect of these risk variants to creativity.

However, neural inflammation and certain viral infections are strongly correlated with the development of schizophrenia, as is infection with the parasite Toxoplasma gondii.

Perhaps the correlation between schizophrenia and creativity is due, in part, to shared vulnerability factors in brain architecture and function, including neural hyper-connectivity, novelty salience, cognitive disinhibition, and emotional lability.

Thus, individuals with both a genetic predisposition to specific differences in brain wiring and function, AND an abnormal immune response to infection by certain viruses (Epstein-Barr, CMV, retroviruses, etc.) or parasites (Toxoplasma gondii), are statistically far more likely to develop schizophrenia.

It's like rolling snake eyes in a game of chance. A creative brain is only a risk when combined with a dysregulated immune response to specific but commonly encountered pathogens.

No doubt there are other pathways to the development of schizophrenia and it's all more complex than what we currently know. Future research will undoubtedly reveal far more about it.

Viral infections correlated with the development of schizophrenia:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10302918/

T. gondii as a risk factor in the development of schizophrenia:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6382596/

Accumulated effect of schizophrenic and creativity risk factors:

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6792478/

Expand full comment

I'm not clear why we're rejecting #2. Possibility #1 seems like the default hypothesis, and #2 sounds reasonable to me, in a way like "the dose makes the poison". #3 is similar, but requires that the gene influence at least two unrelated systems, so it seems less likely to happen than #2 which just requires a small effect on one system. So just on intuition I'd guess a mixture of #1 and #2 would be most likely. But this is really far from anything I know well.

Expand full comment

There are some crucial things I don't get about how polygenic risk works. Can somebody here explain a few basic things?

So there are thousands of genes that cause schizophrenia, sprinkled all over the genome. Let's say there are 5000 genes that cause schizophrenia, each of which raises risk by a tiny amount, on average .02%. Does someone have to get all 5000 of them to be schizophrenic? But the chance of someone getting all 5000 is minuscule! Is it more that you have to get, say, at least 2000 of them? So what are the chances of someone getting 2000 out of the 5000?

If these genes are scattered all over the genome, then the chance of getting any one of them is not correlated with the chance of getting any of the others. (I think. Is that right?) So then it should be possible to calculate the odds of someone getting, say, 2000 particular genes. If you take any one of these genes, what fraction of the population has it? Does one assume it's 50%? If you know how likely someone is to get any one of the genes, and approximately how many genes increase risk of schizophrenia, it should be possible to figure out what fraction of the population will be schizophrenic. And the result should be approximately 1%, since that's the fraction of the population that is. I'm sure I'm not the only one who'd like some help with understanding these basics.
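
The arithmetic the question asks for can be sketched with a toy "liability threshold" model (all numbers made up for illustration: 5000 independent loci, each risk allele present with 50% probability, and illness defined as the top 1% of total risk-allele count):

```python
import math

# Hedged sketch, NOT real schizophrenia genetics: with n independent loci,
# each allele present with probability p, the risk-allele count is
# Binomial(n, p), approximately Normal(n*p, n*p*(1-p)) for large n.
n, p = 5000, 0.5
mean = n * p                       # 2500
sd = math.sqrt(n * p * (1 - p))    # ~35.4

# If schizophrenia corresponds to the top 1% of this distribution,
# the cutoff sits ~2.326 standard deviations above the mean:
z = 2.326
threshold = mean + z * sd
print(round(threshold))  # → 2582
```

So under these toy assumptions you don't need anywhere near all 5000 risk alleles, just more than about 2582, and roughly 1% of the population exceeds that by chance.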

Expand full comment

Seems plausible to me that some of the genes that raise the risk of schizophrenia are genes that cause the person to have a harder life. You can think of them as genes that affect the individual's environment. For instance, physically attractive kids are better liked by peers and seen as smarter by teachers: they literally get more smiles. I just now looked up research on schizophrenia and physical attractiveness. There were studies that found photos of schizophrenics were rated as less attractive by judges than those of non-schizophrenic peers, but I think that result might be accounted for by schizophrenics being more likely to have poor grooming and other signs of self-neglect, and a less pleasant facial expression. But then I found one where the high school photos of people who later became schizophrenic were rated as less attractive. It's here:

A. Farina et al (see record 1978-23202-001) investigated the relation between mental illness and physical attractiveness and found that female psychiatric inpatients were less attractive than normal controls. The current study extended this investigation in 2 ways. First, 28 psychiatric inpatients were compared to 3 separate control groups of 53 low, middle, and high socioeconomic status Ss. Mental patients were judged significantly less attractive than either middle- or high-income controls but were not significantly different from low-income controls. Second, to examine physical attractiveness prior to hospitalization, attractiveness ratings of the patients' high school yearbook pictures were compared with ratings of the adjacent same-sex photographs. Patients' photographs were judged significantly less attractive than their peers' even in high school. Findings suggest that being physically unattractive may predispose an individual to a number of negative social outcomes, one of which is mental illness.

(Napoleon, T., Chassin, L., & Young, R. D. (1980). A replication and extension of "Physical attractiveness and mental illness." Journal of Abnormal Psychology, 89(2), 250–253. https://doi.org/10.1037/0021-843X.89.2.250)

Expand full comment

I realize this is somewhat off topic, but I recall from my high school biology class that researchers had isolated a gene for manic-depression. In fact two different studies, one focusing on Iceland, and the other on Pennsylvania Amish, had located two different genes, on two different chromosomes. Is this still considered true, or has it been discarded in the intervening years?

If still considered true, it would indicate that at least some mental disorders have single-gene causes.

Expand full comment
Feb 8·edited Feb 8

"why are there still even these genes of very small effect?"

Explanation 1) "Evolution hasn’t had time to remove all of them yet" seems to assume that it's harder for evolution to remove a lot of small cumulative genes than one single gene. I don't believe this to be the case.

To get an intuition, I quickly wrote a small python script which evolves a small population with 1024 binary genes under two different conditions:

a) a 50% probability of death if gene 666 is True

b) a 50% probability of death if the 666th Fourier component of the genome is >0.5

(I didn't implement recessive or dominant traits)

Both factors seem to have approximately the same half-life, even though one is a single gene and the other is a bunch of genes with small effect. I'm not posting the script right now as there is a high probability that I've made some embarrassing error, since I wrote it in 10 minutes and didn't do much testing, but the result seems plausible. Unless there is something special about the 666th Fourier component, I'd guess this to be true of any linear-transform component.
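
For the curious, here is a minimal reconstruction of this kind of experiment (not the commenter's actual script; it substitutes a plain many-gene sum for the Fourier component, which is the same idea of a linear-combination trait, and likewise ignores dominance):

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve(is_affected, n_pop=500, n_genes=1024, n_gens=40):
    """Toy Wright-Fisher-style simulation: genomes are binary vectors,
    and 'affected' individuals die with probability 0.5 before breeding.
    Returns the fraction affected in each generation."""
    pop = rng.integers(0, 2, size=(n_pop, n_genes))
    freqs = []
    for _ in range(n_gens):
        affected = is_affected(pop)
        freqs.append(affected.mean())
        survives = ~affected | (rng.random(n_pop) < 0.5)
        parents = pop[survives]
        # Random mating with free recombination: each offspring locus
        # is copied from one of two randomly chosen parents.
        a = parents[rng.integers(0, len(parents), n_pop)]
        b = parents[rng.integers(0, len(parents), n_pop)]
        mask = rng.integers(0, 2, size=(n_pop, n_genes)).astype(bool)
        pop = np.where(mask, a, b)
    return freqs

# (a) single-gene trait: affected iff gene 666 is set
single = evolve(lambda pop: pop[:, 666] == 1)

# (b) polygenic trait: affected iff more than half of 200 small-effect genes are set
poly = evolve(lambda pop: pop[:, :200].sum(axis=1) > 100)

# Both trait frequencies decay under the same selective pressure.
print(single[0], single[-1], poly[0], poly[-1])
```

Under these assumptions, selection drives both the single-gene and the polygenic trait toward low frequency, which is consistent with the comment's claim that "many small genes" is not intrinsically harder for selection to act on than one big gene.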

Expand full comment

The now-classic argument for posting your code is evolutionary: "The more eyes on the code, the fewer bugs."

Expand full comment
Feb 9·edited Feb 9

Dunno. As an excellent Genetics graduate, one who has been told that he is exceptionally creative (humble brag), and one who initially self-diagnosed and was then formally and objectively diagnosed with schizophrenia, I really, _really_ hope that there is some exceptional if rare boost to having this condition.

The alternative is too drab.

Expand full comment

either way you get to keep the creativity, though, right?

Expand full comment

True, but I don't want to be one of the few lucky schizos. I'd like to think all can tap into some creativity.

On the topic of the article, I would like to propose a 4th case that is neither the unfinished weeding out, nor the hidden selective trait, nor the hidden positive effect on strange systems.

I am thinking that the selection of schizophrenia might be situation-based (so loosely related to the selective trait). In most situations, schizophrenia is objectively detrimental; however, I can offer (albeit anecdotal) evidence that psychosis is an incredibly powerful motivator and can lead to bizarre but brave decisions.

For instance, I've heard plenty of stories of people walking 15+ km in a single day (when not normally doing so) in an episode of psychosis. Almost everybody with psychosis I know has a story from a mental institution where 5+ people, including the orderlies, could not contain them during an episode. While this sounds like subjective exaggeration, I have seen it happen with my own eyes, and it makes sense.

Basically, my argument is that most situations select against schizophrenia, however, in a small number of primal situations, it might be selected for. And that is without considering something intangible such as creativity.

Expand full comment

Even if not directly: if your family has increased risk of schizophrenia, then you should expect that you have better genes elsewhere (the people with the highest genetic burden are less likely to reproduce, so conditional on your ancestors reproducing despite carrying risk genes, they probably also carried compensating positives).

Expand full comment
Feb 9·edited Feb 9

The big risks I see of embryo selection are Goodhart's law and out-of-sample extrapolation. It's possible for gene A to be correlated with B in an unselected population while still having it be a bad idea to exercise extreme selection pressure on A, either because the correlation breaks down under selection, or because there are non-linear effects when taken to extremes, or because there are other unknown side effects.

Those aren't issues for any plausible near-term IVF selection, but could cause problems in some of the more extreme proposals (selecting from thousands of embryos or even straight out genetic engineering.)

Expand full comment

Man, you can't leave this topic alone, can you...

...oh wait, wrong poly. Well, it probably explains that one too.

Expand full comment

About the image for this piece: It's a little known fact that lions suck the schizophrenogenic genes out of their prey. For a lion they're like Reese's peanut butter cups.

Expand full comment

Chris Masterjohn has been thinking along the same lines, but came to a different conclusion. According to his logic (link: https://open.substack.com/pub/chrismasterjohnphd/p/unlocking-performance-and-longevity) most SNPs analyzed are just noise and there are only a few important ones, but scientists need a mechanistic (i.e. biochemical) model of how they interact, not brute-force statistics…

Quote: Common polymorphisms by definition on average produce the average, because they are defined by their deviation from normal. Rare disease genes on average produce a rare disease, because they are defined by their pathogenicity. Natural selection minimizes the presence of harmful genes, which makes the rarity of a gene correlate to its severity. Thus, the biggest problems have to be the few rare problems, not the many common variations.

The individuals who display a disease phenotype because they are heterozygous for more than one severe defect in related pathways have been termed ‘synergistic heterozygosity’.

Expand full comment

I have a friend with early-onset schizophrenia who considers it "a superpower that you have to work hard and learn to deal with." In other words, I think he'd take issue with a blanket statement about it decreasing fitness. Having seen his reflexes and pattern-detection in action I would find it hard to disagree; he's survived situations that would have killed someone slower on the react or less inclined to observe everything and notice connections between things. In other words, less paranoid and ready to act accordingly. Compared to him, most people are just not paying attention. Perhaps the increases in fitness for the people who are able to deal with it outweigh the decreases in fitness for those who can't. After all, we don't necessarily know who all is schizophrenic. It's not only the people who go get medicated about it. And antipsychotics decrease fitness in and of themselves, so there's a selection bias there.

None of that is to say that anything you said about the geneticity of it is wrong. His family has quite the history of it.

Expand full comment

Has your friend said much about what kinds of hard work and learning it takes to deal with it?

Expand full comment

I haven't grilled him about it but to the extent that I've asked, his main answer was absolutely bonkers amounts of meditation. And enough introspection and psychedelics to get him knowing who exactly he was and get him on a team with himself. I'm very much paraphrasing that one. But (it seems to me like) part of the key was to not freak out? Like, he was young enough that he doesn't really know what it's like to not be schizophrenic and I think that probably helped. Because he knew what schizophrenia was from an early age due to his family history with it and instead of railing against it or trying to make it an enemy he had to figure out ways to work with it and function anyway. Which is not to say that he didn't have a lot of problems, he definitely did. There's a reason I know he could survive crazy situations - it's because he ended up in them. But his experience was hard but never really the kind that is stereotyped, he didn't have enemies in his head urging him to die or kill people, so they could get along better than most. He attributes that largely to having a lot of support in his life, both to keep him from having brain demons that wanted him dead, and to give him the space to figure out how to deal with what he did/does have. That's backed up by a lot of research about schizophrenia in other cultures by the way - much more positive outcomes in places where it's not considered a terrible thing to talk to spirits or whatever and people doing it are given normal human amounts of respect and personal dignity.

Expand full comment

Are you sure he was schizophrenic and not bipolar & psychotic? I ask because Mark Vonnegut, who wrote the Eden Express to describe his psychotic episode, has done OK in life, and refers to his illness as schizophrenia, but I'm pretty sure he's bipolar with psychotic features. Also, very few schizophrenics I have seen do anything remotely like energetically work on figuring out ways to keep having a life. And their failure to do that does not have the appearance of a general lack of character and courage -- it seems more like a part of the illness. Most schizophrenics, in addition to having the positive symptoms of delusions, hallucinations, etc., also suffer from negative symptoms: poverty of thought, anhedonia, lack of insight, lack of will and energy. There's a sort of blankness and passivity that comes along with the more dramatic and weird symptoms.

Expand full comment

He's sure. I've asked him before, he's done the research to know. And he's definitely not bipolar. I'm guessing a lot of things are different when this happens when you're a little kid (it was very early onset) instead of a fully-formed adult already losing steam. But that was part of my original point: I never would have known if he hadn't told me. I'd know he was different from your given average Joe but I wouldn't know what was happening in his head at all. There are probably more than one of those people. I know a few other people who have self-diagnosed as schizophrenic or schizo-affective, none of them are medicated, and all of them (that I know about) have lives, passions, hobbies, dreams. One has a PhD. They all struggle. Some don't succeed. Everybody is different and these things affect everybody differently. I'm not denying that the symptoms you're talking about exist, or even that my friend doesn't have them to some degree, he just manages despite that. Of course the most obvious cases are the ones with the biggest problems and vice-versa. It's not really surprising if the people most obvious on the medical radar are the ones who really can't deal. That's true with every disease.

Expand full comment

(The odd thing about talking about human genetic evolution is that it's probably over. Assuming we *don't* kill ourselves and *don't* halt all interesting tech advancement, we will probably mostly move away from this form the way we have mostly moved away from caves, adobe houses, and horse carts. In the interim, knowing about evolution will help us to develop patches and also to understand how our brains work. But the following is about natural human DNA evolution as if it were a going concern.)

Genomes evolve to evolve, and without going into the ways they manage that, but anthropomorphizing: the overall genome and organism design gets selected toward *some balance* of dependable to "speculative". If we wanted to let human evolution continue by its own processes, it would *not* make sense to fix a lot of genes that contribute to schizophrenia, because then the results of lots of combinations of those and other genes would not get tested. In other words, if we assume our genome is at about the right place on the risk/reward curve, adjusting for less risk for individuals in the very next generation would mean fewer beneficial advancements on many-generation time scales. It's probably even harder to identify the meta-evolutionary mechanisms than the functional effects of current gene prevalences on odds of schizophrenia, but evolvability-tuning can be seen in simple situations and it makes sense to assume it happens in general.

I say this to add another spin on what function all those potentially harmful genes might serve. For instance, in the last couple millions to thousands of years, it may have served our genes well to invest in a lot of brain-tech startups. I believe (sorry forget source!) that humans have a particularly high ratio of brain-specific to other genes.

Another spin is that different *distributions* of characteristics in populations not only affect population-level success but also the context in which individual genes can be beneficial or detrimental. Sheer diversity can be beneficial if the body, brain and social context can accommodate it.

Expand full comment

I'm not sure we're moving away from genetic evolution. I think most people do not thrive when the ratio of virtual people (movies, games) or half-known people (social media) to real acquaintances gets as high as it is now. And AI is surely going to make the average ratio even higher. And we're not exactly thriving -- note the increasing depression scores of teens and 20-somethings. And did you know that 11% of the US is currently on an antidepressant, and the % is even higher in some European countries? So I think those who are not doing well with the present virtual/real ratios, and will do even worse with the near-future ratios, will have fewer kids, and more deaths of despair, and whatever genes lead to their failure to thrive will start fading away. Those that adapt and even do well with life as it is now will dominate, and so will their genes.

And why should natural selection stop there? In the future there will be opportunities for things like space travel and various kinds of close collaboration or partial merging with machines and artificial intelligence. We did not evolve to do well with that sort of thing, and most of us will not. But of course a few will.

So I think natural selection will continue.

Expand full comment

I was referring to body and brain evolution rather than online living. In a small number of bio-human generations, it will become nicer, more convenient and less expensive for individuals to get retrofitted to new brain and body construction technologies rather than keep up the old-fashioned kind. The high-maintenance bio brains and bodies will be a specialty interest for rich antique lovers and for reenactments.

So human DNA evolution will mostly stop. First of all, antique-lovers will want to have mostly-authentic 21st or 22nd C bodies, and also there won't be nearly the billions or millions of people still inhabiting bio bodies and interbreeding them, so most DNA evo that does happen will probably be more deliberate experiments than natural changes.

This upgrading will happen within a hundred? Two hundred?? years, an amount of time in which almost nothing naturally evolves as far as DNA of a complicated species is concerned.

I imagine people will be inhabiting human-feeling, human-*like* bodies, either physical or virtual, and will have human-like intelligences, and that many will live on Earth or in real or virtual Earth-*like* places, for a while after the original protein/DNA stuff goes out of style.

Non-bio brains will keep evolving, and if current AI is any clue, besides being tweaked deliberately, they may evolve holistically like LLM weights, and even do recombination equivalent to sexual reproduction, in which case, all the questions of evolving mental traits will be there on different runways.

Expand full comment

Taking over genetic manipulation doesn't end evolution; it just makes it way more odd (e.g. consider selection for genes making you more willing to engineer your kid's genes).

Expand full comment

I am myself part of an active, measurable genetic diffusion in humans, a mutation around 2500 years old: I am homozygous for CCR5-delta32 and apparently immune to HIV. Humans are still changing, within relatively short timeframes.

Expand full comment

There is also the possibility of non-linear effects, where you only get a bad effect if you have, say, A, B, and C, and where any one or pair of them does nothing. That would further make it difficult for evolution to get rid of these, as in most people the effect is zero.

Expand full comment

I’d highly recommend everyone listen to Robert Sapolskys lecture on schizophrenia that covered exactly this topic.

https://m.youtube.com/watch?v=nEnklxGAmak&t=1359s&pp=ygUWU2Fwb2xza3kgc2NoaXpvcGhyZW5pYQ%3D%3D

The TL;DR proposal he has for why genes that lead to schizophrenia got selected for is that they can make people more prone to magical thinking and intuitive/creative ways of perceiving things that a lot of cultures historically valued in people playing the role of shaman or spiritual leader. In a lot of these cultures these people weren't expected to be celibate and were highly valued, so the traits got passed on.

If you're trying to survive in the wilderness and believe in spirits of the dead and forces of nature, you value having someone who thinks they sense things you can't and who makes strange connections. We know that people related to schizophrenics tend to be into conspiracy theories or occult beliefs or any kind of interest where you make connections between things other people normally wouldn't. Someone obsessed with making connections that seem spurious but interesting feels like a summary of a lot of ancient mystical thinking, imo. It's not surprising people prone to that would be valued, and I also worry that we might be losing something of value by trying to engineer those traits out of existence.

Expand full comment

There might be some truth to this, but as Scott points out, most mutations are deleterious if they impact phenotype at all and therefore most genetic diversity is either bad or useless (i.e, occurs within Junk DNA.) Negative pleiotropy is more the exception than it is the norm because mutations are more likely to just break things than they are to introduce even partial tradeoffs in favour of fitness.

Expand full comment

Quick note: you can't assume that because the people with the lowest schizophrenia risk don't have other problems, it follows that there is no such effect of these genes.

1) It's possible that those individuals have other compensating genes. Maybe one genetic heritage has some other set of beneficial genes that give the same benefit with another downside, while other groups have the schizophrenia-promoting genes (you'd want to see if some mixed individuals do much better on average).

2) It's possible that the benefits aren't additive. Schizophrenia is rare, so what if each gene that increases risk offers a small benefit to those who have it, but you don't get much (any?) extra benefit for having multiple risk genes? This differs from your 3.

3) What if the 'benefit' offered by these genes is in utero or in pregnancy, say they reduce head size? If they increase the chance of being born at all, or of not dying, then looking only at those who survived tells you nothing.

4) Other really complex interactions between genes could be taking place.

Expand full comment

Possible support for possibility #2: https://www.nature.com/articles/nn.4040

Expand full comment

Excellent discussion! Just to add to point (1), I think it’s important to remember that selective pressures act via *reproductive* fitness. To my knowledge, the average age of onset in schizophrenia is near late adolescence to early adulthood. Considering that the average age of first-time mothers was 21 as recently as the 1970s, it seems totally reasonable that we might be overestimating the negative selection effects of schizophrenia, simply because most people throughout history likely began having children before they saw the full effects of their condition. I see this as similar to how many of us carry increased risk factors for cancer, heart disease, etc. that simply don’t have any effect until we’re past our main reproductive years. Clearly, there is some selective pressure acting here, or else there would not be a decline in schizophrenia over time, but my point is that it’s probably weaker than we would expect for other genetic/congenital conditions which act earlier in life.

Expand full comment

I think more intelligent people have fewer children, so it's not a slam dunk that intelligence was positively selected.

Expand full comment

I have skin in this game. We recently went through IVF and we had our embryos tested for polygenic disease risk. Our "best" embryo had significantly reduced risk for a wide range of diseases, except schizophrenia, where it was at the 85th percentile. We had other embryos with low schizophrenia risk, but higher risks of more common diseases. As you might imagine, we thought about this very carefully.

Ultimately we chose to implant the "best" embryo. Partly our reasoning was that the 85th percentile of schizophrenia risk is still <2% absolute risk. But another part of our thinking was that there's some evidence of antagonistic pleiotropy, i.e. that a higher schizophrenia polygenic risk score correlates with desirable traits.
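
For intuition on why a high percentile can still mean low absolute risk, here is a back-of-the-envelope liability-threshold calculation (all numbers illustrative, not from the commenter's actual report: ~1% population prevalence, and an assumed 7% of liability variance explained by the score):

```python
import math

def phi(x):  # standard normal CDF
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def phi_inv(p, lo=-10.0, hi=10.0):  # inverse CDF by bisection
    for _ in range(100):
        mid = (lo + hi) / 2
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

prevalence = 0.01   # ~1% lifetime risk of schizophrenia
r2 = 0.07           # assumed fraction of liability variance the PRS explains
T = phi_inv(1 - prevalence)     # liability threshold for illness
z85 = phi_inv(0.85)             # 85th-percentile PRS, in standard deviations
g = math.sqrt(r2) * z85         # genetic liability at that percentile
risk = 1 - phi((T - g) / math.sqrt(1 - r2))
print(f"{risk:.3f}")  # → 0.017, i.e. under 2% absolute risk
```

So under these assumptions an 85th-percentile score nudges lifetime risk from ~1% to ~1.7%, which matches the "<2% absolute risk" figure quoted above.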

This paper (https://www.nature.com/articles/s41467-018-05510-z) for example finds that:

"Higher educational attainment (EA) is negatively associated with schizophrenia (SZ). However, recent studies found a positive genetic correlation between EA and SZ. We find strong genetic dependence between EA and SZ that cannot be explained by chance, linkage disequilibrium, or assortative mating. Instead, several genes seem to have pleiotropic effects on EA and SZ, but without a clear pattern of sign concordance. Our results reveal that current SZ diagnoses aggregate over at least two disease subtypes: one part resembles high intelligence and bipolar disorder (BIP), while the other part is a cognitive disorder that is independent of BIP."

There is also some evidence that polygenic risk scores for schizophrenia predict creativity: (https://www.nature.com/articles/nn.4040)

So I do think selecting for the lowest possible schizophrenia polygenic risk score is potentially a bad idea, unless you also have PGS scores for lots of other traits to ensure you're not inadvertently selecting against them.

The embryo we picked is now a happy little baby crawling around as I type this. Hopefully we made the right decision!

Expand full comment

Conceptualizing a creativity gene is unnecessary.

People living longer × rising cost of raising children × easily available birth control = children born later in life

The rate of de novo SNP mutations is proportional to the age of the father at conception (between ages 20 and 40 the paternal mutation rate triples; the maternal rate does not). This is perhaps ordinary entropic change of the paternal genome, revealed by non-genetic improvements in longevity due to global health and nutrition at population levels.

If you were to graph US average age of father at conception and US rate of schizophrenia and autism you should be able to see the effect of aging on accumulated genetic defects, completely independently of a “schizophrenic gene” which would be selected against strongly of course.

Unsurprisingly, the rates of autism and schizophrenia have been rising in recent decades. Likewise, schizophrenia is more prevalent in males than females, which would make sense if it were partly Y-linked or paternally influenced. Finally, schizophrenia should be growing fastest in world regions where lifespan, and with it paternal age at conception, is improving most rapidly. I would hypothesize it is most pronounced in central and sub-Saharan Africa.

Genes that suppress reproduction can be maintained at equilibrium in a population if they enhance familial reproductive success, even at the cost of an individual's own offspring. This has long been discussed and modeled [Dawkins, E. O. Wilson]. A gene group that enhances longevity would then raise rates of schizophrenia and autism without any biochemical relationship to the specific root issue.
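The kin-selection condition this alludes to is Hamilton's rule: an allele that costs its bearer c offspring can still persist if the benefit b it confers on relatives, weighted by relatedness r, exceeds that cost. A minimal sketch (the numbers are illustrative, not from the comment):

```python
# Hamilton's rule: a reproduction-suppressing trait is favored when r * b > c,
# where r = relatedness to the beneficiary, b = extra offspring the
# beneficiary gains, c = offspring the bearer forgoes.
def favored(r, b, c):
    return r * b > c

# Helping a full sibling (r = 0.5) gain 3 extra offspring at a cost of 1:
print(favored(0.5, 3, 1))   # True  -> trait can persist
# Same help directed at a nephew (r = 0.25):
print(favored(0.25, 3, 1))  # False -> trait is selected against
```

This is why "enhances familial reproductive success even at the cost of individual offspring" is enough to hold such genes at equilibrium.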

Consider disruption of NMDA receptor interaction with multiple modulatory transmitters: a failure to attenuate sensory prediction error. Schizophrenia can be induced chemically by disrupting this system; schizophrenics are strikingly resistant to illusions normally created by imprecise perception; and schizophrenics cannot involuntarily predict and track motion with their eyes. First-degree relatives also show eye-tracking dysfunction without full-blown schizophrenia. It may take only a few more age-induced SNPs to cross the line, with epigenetic aging mechanisms pushing the system over it.

I’m gay and have no interest in having kids. At the individual level this selects against reproductive fitness, whether as a single gene or a gene group. Yet the frequency of gay men in the population is relatively stable. Whatever produces non-reproducing offspring may enhance the reproduction of siblings indirectly, without leaving a biochemical signature.

Also, the “last brother being gay” effect is driven by late maternal age at conception: maternal antibodies to testosterone, developed during sequential prior male pregnancies (even miscarriages), alter testosterone imprinting on the fetal brain starting at nine weeks. It is another case where what appears to be a gene effect is actually due to epigenetic factors tied to aging, health care, and nutrition at the population level.

This article had many comments; apologies if any of this is a rehash.

Expand full comment

I believe most evolutionary theory and thinking is skewed towards considering humans as individual organisms. While of course our bodies are discrete organisms, when it comes to mind, we are necessarily linked. The brain is, among other things, a social organ. Mine, at least, seems quite busy and consumed with understanding how my larger social world works, its power dynamics, and what I need to do to have social value. What if schizophrenia is polygenic with many traits that map out and model meaning structures and complex social dynamics?

We are fundamentally complex social primates. So complex, in fact, that we cannot live alone. By the time any of us turns 12 (it is extremely rare for schizophrenia to manifest before this age), our families and tribes have nursed us, fed us, taken care of us, and bonded with us at no small sacrifice. Homo sapiens as I understand us would not lightly stop taking care of a kid who hears voices, often supporting them until past reproductive age.

H. sapiens evolved as tribes. We keep our elders around, for their wisdom and knowledge is a strong fitness advantage - for the whole tribe. We take care of each other, to a degree that is not rational.

Now I'm going to stereotype. Would love to hear others' data points.

The people I know who struggle with schizophrenia seem (anecdotally) to be above average when it comes to sensitivity and spirituality. I wonder if there are linkages between schizophrenia and mystics or shamans. As I observe mind and try to locate "self," I readily get lost in a sea of connections, social conditioning, culture, and my group's foundational myths and narratives - to a degree that might make me less functional, but also possibly mystical in my inner modeling of complex networks of human meaning, purpose, attention, and care.

I would posit that schizophrenia is polygenic and related to a host of genes that assist in mapping and modeling complex human social and meaning structures, and that it would require extremely fine tuning to select out without producing asociality or sociopathy.

Or maybe I have just watched *A Beautiful Mind* too many times?🤷‍♀️

Expand full comment