"But if IQ is >55% heritable and educational attainment is <10% heritable, does this require us to believe that IQ only barely affects success in education? A certain sort of contrarian might relish this conclusion. "
Success in education is quantized (you either have a 4-year degree or you do not) and noisy (all 4-year degrees are the same ...) right? Certainly compared to IQ measurements?
I don't know why I should find this as implausible as Scott suggests. Yes, surely what grade you get in a STEM class is highly correlated with IQ, but how many years of higher education you manage to stick to.. I wouldn't be *that* surprised if that was not correlated or even negatively correlated, with IQ.
You could convince me that the day to day practice of psychology or psychiatry taps into a meaningfully different form of intelligence than a STEM education.
But I challenge you to make even a facially plausible argument that academic sociology requires a meaningfully different form of intelligence than a STEM degree.
It could be negatively correlated at one range of the spectrum but certainly not across all of it. We know that fewer morons become doctors. We know that PHD tend to be above average. Perhaps you meant that the correlation might be weaker than linear?
The technical definition of "moron" is someone with an IQ between 50 and 70. This is somebody with the mental age of between 7 and 12. None of them are doctors. Not sure what you mean by midway but Google/AI suggests that the average IQ of MD's is between 120 and 130. I would probably agree with this estimate. It seems silly to describe a group centered between the top 15% and the top 5% as morons.
No I mean it might also be negatively correlated. What are there most of, people who get a five year degree in the humanities getting straight Ds, or people getting three year degrees in STEM getting straight As? The former have "higher educational attainments" than the latter.
I'm not really saying it *is* negatively correlated, but I wouldn't be surprised, and the 55% to 10% drop isn't even a little bit surprising. Like, at all.
That is what I meant by a particular range of the IQ distribution. Probably, nobody with a IQ below 75 or above 125 is in either of those groups. Also, not sure what you mean by 3 year vs. 5 year. A BS in STEM usually takes 4 (or 6 for some of us). What humanities degree is someone getting in 5? And are we sure that the ranking for "educational attainment" might not rank a BSEE above a MS-basketweaving?
Here in Europe, all BSc. degrees (typical of STEM) are three years, and master's typically five.
I am just assuming they measure educational attainments in number of years, regardless of field and what grades people got, but I have no idea, I guess we could check.
I think they use highest obtained diploma because that’s the data that is easily obtainable. It does translate roughly to years of education on average even though it is obviously a very stupid metric, especially since fields of studies have become so varied and often very disconnected to objective measurement.
I think nowadays education is a meaningless metric and it is largely linked to many of academia’s issues and also the terrible state of politics (for people who obtain absurd amounts of power/legitimacy while being rather dumb, look nowhere else than political « science »).
The guy above seems to think that education has a high pass filter but I have seen a study showing that actually there are some people with masters that are around 80-90 IQ.
is educational attainment measured in years? I know many who studied a bit longer, because they had enough money and student life is great, or did an Erasmus semester (studying abroad, with the goal (by the EU itself) on having a good time and connecting with other students).
I agree about the bottom end of the distribution, but are you really saying no one with an IQ over 125 gets top grades in a bachelor's STEM degree and then leaves academia? They all go on to do graduate degrees rather than going into (better paid) industry?
I am absolutely not saying that. I said that the result is generally not linear (a doubling of IQ does not guarantee a doubling of educational attainment). And not negative across the majority of the IQ distribution (It is possible that dumber people get more of some particular degree over a narrow range of IQ but not across the entire IQ distribution).
My understanding is that most "years of education" measures would treat me as having a 4 year degree even though I graduated early, the same as a student (who I briefly partnered with in my first semester) who started years before me but graduated at the same time.
How do we know this? I have a Ph.D. and I can assure you that native intelligence had little to do with completing the program. Work ethic (a completely different trait on the "Big 5" scale) is far more important.
If you have a Ph.D. then you should have enough reading comprehension to understand what I wrote. Were there numerous people in your Ph.D. program that was too dumb to make it out of 3rd grade? A few who would have found bagging groceries to be intellectually challenging? If not then educational attainment is not negatively correlated with IQ across the entire spectrum of IQ.
You're right, I didn't read your post carefully enough. What you are describing is a floor effect -- someone must have a minimum IQ just to get into higher education, but above that academic performance would be uncorrelated. That seems plausible.
I don't think that it would be truly uncorrelated. I suspect that it would be more weakly (but positively) correlated. The original comment that I objected to just said that smart people were more likely to forgo higher education and thus IQ and education were negatively correlated. I suspect that the correlation is always positive but that over narrow ranges of IQ effects like that described (forgoing higher education) likely attenuate the correlation (it is possible that it goes negative over narrow regions but I would guess that it does not).
In Europe it's not common to normalize grades. For instance one of the harder maths classes I took had an extreme almost perfect exponential distribution.
Also that tests are unreliable is not actually a problem for statistical correlation. Lots of processes are highly stochastic, yet have strong measurable correlations.
It's like the old joke: What do you call someone who graduates at the bottom of their class in medical school? Doctor.
There's going to be a huge amount of noise added when "Scraped a passing grade in a Communications BA at their local state school" and "Top of their Harvard Theoretical Physics class" are averaged into one group. Enough to make a 55% heritable trait look like a 10% heritable trait? Maybe.
who knows! I still haven't been able to "solve" that Raven's Progressive Fuckery that Scott posted many moons ago, yet I have a post-honor dropout from a prestigious CS program and working in "tech" and even I'm usually scared how fast I notice things on the screen when there's a shitton of boring server logs. Not all patterns (or anti-patterns?) are made equal apparently :)
I have not looked into it much but I think Raven Progressive Matrices are an example of over specification/specialization.
You can’t define intelligence by simple evaluating performance at a single narrow task, otherwise the guy from Rain Man who could memorize a phone book would be considered extra intelligent when it’s not really the case.
There is a correlation between various tasks performance but the reason to use multiple test subjects is precisely to average out weird outliers (that may come from genetics or environmental pressures).
Also, the IQ tests in general exist solely in academic/schooling oriented form for efficiency/convenience but I believe you could come up with various « real world » tests (solving problems with whatever you have at hands) and they give an even better picture.
Yes, the formal version of this is that robust IQ tests need ~as many different kinds of intellectual tasks as they can reasonably fit into paper or electronic format, and g-factor is strongly but not overwhelmingly correlated with IQ.
Also, heritable means generational, and college enrollment has nearly doubled in the last 30 years.
You'd expect low heritability for college enrollment just because of historical trends that make one generation very different from the next on this measure at a macro level.
Remember, heritability is always a ration between genes and environment. If you're measuring something with a large environmental contribution, heritability will be lower.
In general you should expect to see educational attainment become a less useful signal over time as college has become more of a class marker and screens less.
If the study's "educational attainment" is mostly just whether or not someone has a 4-year college degree, then I'm guessing that the most significant factor by far is how much their possibly adoptive) family and their high-school teachers feel it is imperative that they have a four-year degree. Even someone with a moderately low IQ can get a four-year degree in *something* if they're pushed and supported enough, and on the flip side there are probably a lot of high-IQ kids from blue-collar families who wind up putting that to work in a trade or running a small business.
So, yeah, not surprising if this sort of "educational attainment" is mostly environmental.
Might be illuminating if you could break it down by, e.g., of the subset of people who enrolled in a four-year college, how many graduated with an intellectually challenging degree, graduated but only after 5-6 years, graduated but with a degree in underwater basketweaving or whtanot, and how many just dropped out. There'd still be a strong environmental component, I'd expect, but also a stronger correlation with IQ.
> Might be illuminating if you could break it down by, e.g., of the subset of people who enrolled in a four-year college, how many graduated with an intellectually challenging degree,
To MM's point above, you can get roughly back to the "top 2%" threshold that a degree used to mean 100 years ago if you just filter by "STEM grad degree" from any school. This is <4% of the pop, and at least half those are foreign students.
And if you go by "T20 undergrad degree," which still is a genuine quality filter, it's ~0.5% of the pop.
That's quite a myth. 100 years ago land grant unis already existed, less-than-rigorous courses already existed (pick up a stats or econ course from the 20s if you wanna have a laugh), and education was much less math-heavy at all levels. Sure, it's kinda impressive these guys were somewhat proficient in Latin (much less than you'd think if you do not know Latin: they were translating Ceasar, not Lucretius), but you could get in without knowing any calculus or even precalc.
I think that this idea that if x% went to college, then it must have been the smartest x% of the population is just projecting backward something that's very contingent to our society. 100 years ago, nobody would have thought that. Sure, there was a floor (but again, hardly a prohibitive one) to get a degree, but beyond that it was mostly a matter of social class and interest. You could get almost any job, even being an engineer or a lawyer, without a degree, which obviously dramatically changed the calculus for a smart lad.
Heritability of IQ increases with age. Years of education is heavily determined when people are young and their parents have more ability to make them go to school when they don't want to.
IQ scores are less reliable at younger ages. They are still useful, and were developed to detect people with learning disabilities, but usefulness increases closer to adulthood.
Lots of speculation here about what “educational attainment” actually refers to. In these UK GWAS papers it almost always comes from UK Biobank Field 6138 (“Qualifications”) (https://biobank.ndph.ox.ac.uk/ukb/field.cgi?id=6138
), which is just the highest credential someone reports (they have MA/PhD in some versions, I think).
Researchers then convert that credential into “years in education” using a fairly confusing system that doesn’t correspond to real schooling length. For example, GCSEs are coded as 10 years, and a Master’s degree gets coded as ~21 years.
Pretty terrible, but this is the best you can do with the dataset. If they measured grades, university attended, and subject, educational attainment would start tracking IQ far more closely.
Surely there's a bunch of studies out there already showing a strong correlation between IQ and educational attainment, we don't need to make weird roundabout guesses?
If this is the UK it’ll be 3 years, and if you’re not distinguishing between universities then age will probably swamp all genetic factors (% going to university has doubled since the early 90s). I’d also expect it to track class fairly closely, particularly in older generations, which will create genetic confounders and make it look more heritable than it is.
If it’s looking at postgraduate education, I’m not sure they’ve picked the right country. Academia is broadly looked down on in the UK so PhDs will mostly negatively track employability among the moderately intelligent, and non-matriculation MAs/MScs will be disproportionately concentrated among people going who struggled in the graduate job market first time round.
You have to take seriously that the modern educational system may be actively selecting against high intelligence overall.
It is obviously selecting against disagreeableness which seems like a requirement for an IQ above 120. How much nonconformity can you get away with and still make it through a master’s degree?
A part for that, there are so many other factors influencing educational attainment. Even if you buy that all good things are correlated, the more factors matter, the more the importance of each shrinks. Sure, conscentiousness is correlated with IQ, but it's hardly collinear, so the smart-lazy and dumb-diligent lower the correlation. Parental income might be correlated (bc parents' IQ affects both), but again, hardly collinear, so smart-poor and rich-dumb dampens the correlation further. Add a good measure of random shocks, mental illness, truly orthogonal factors (like your state's education policies), and it's not unbelievable that the correlation might not be as big as one expects
It still suggests a lot of "missing heritability" relative to typical results from twin and adoption studies. I'd be interested to see if there's any way to compute estimates for shared environmental effects from this data, though, which is what nurturists should be looking for.
The fight isn't really about that, it's a status competition to determine whose assumptions generally deserve to be taken as the null hypothesis (among the narrow circle of contrarian autists who don't just defer to their tribe's priesthood).
Except, at least for social phenomena, they don't provide any actual mechanisms.
There is not a single hereditarian in the world that I can go to and get an answer to which gene, or genes, are responsible for the difference in intelligence between von Neumann and Donald Trump (insofar as there is a genetic explanation for that difference). While that is one specific example, this is a globally valid critique too.
If don't know what genetic material is causing the phenotype, then by definition you can't know the variation that exists in that genetic material. If you don't have the genetic variation, you can't make a claim about heritability.
If you don't know what genetic material is causing the phenotype then by definition you can never make any claim about heritability.
(1) I very explicitly am restricting my comments to social phenomena. It is literally the very first thing I said in this thread.
(2) The newness of a claim has precisely 0 to do with whether or not the claim is correct.
Humanity has known about quarks for less than 0.001% of our existence. Does that mean quarks don't exist? I am sorry knowledge progresses and that this progress makes you uncomfortable.
Well, to be fair, if you want to privilege environmental explanations, you have to provide an environmental mechanism. We have one, upbringing. Certainly goes a long way toward explaining the performance difference between von Neumann and Trump.
"A claim no one has ever made about height, etc." is Oliver explaining why it is an isolated demand for rigor, not a positive argument for the hereditarian case /per se./
(There *is* a pretty obvious story to tell here about discomfort—but I don't think it's the one you're telling.)
(2) Suppose Bob has assembled a bicycle. Alice is skeptical on quality of that bicycle and says "your pile of junk will break really soon". If Alice makes this claim on day 0, does it have same probability of being correct as at day 100 when Bob already traveled 1000 miles?
It is possible to narrow a cause down to a large area without narrowing it down to a smaller area.
I can know that my car accelerates when I press the gas pedal without knowing the internals of how my car works.
I can know that my feet hurt when I wear a particular pair of shoes without knowing what it is about those shoes that causes the foot pain.
I can know that having higher population in a Civilization video game increases science output without knowing which bytes in the executable file perform that calculation.
Your claim that we can't know that a cause is genetic without pinpointing the exact responsible genes seems obviously false to me. I also note that you didn't actually provide any supporting reasoning; you just repeated the claim 3 times with no support.
(My model of stupid internet arguments is warning me that you are likely to motte-and-bailey with a silly definition of "know", e.g. claiming that you meant "know with literally 100% certainty". If you choose to reply, please say something smarter than that.)
It's more that if we have two competing explanations of why pressing the car pedal causes the car to move faster, the more detailed and plausible mechanism is probably correct. This corresponds to a claim that environmental mechanisms are fairly well mapped out, but the genetic ones aren't (this in spite of the fact that genetic measures are far more precise than environmental ones).
AIUI, it is widely agreed that both environment and heredity have non-zero effects, and the argument is about the _strength_ of the effects. You seem to be arguing as if there were a presumption that only one of the effects is real? Even if we agreed that a certain mechanism was "more plausible", that is not the same as being stronger.
Also, Jared was trying to say the claim was categorically disallowed, but you seem to be making an argument about probabilities. If we're going to allow the claim at all and advance to the question of likelihood, then shouldn't we be adding up all the evidence on both sides from years of scientific experiments, rather than squinting at vague high-level patterns like "how detailed is this hypothesis?" I would think the debate is well past the point where this sort of observation has serious relevance.
But, regarding the specific claims:
Humans often have an intuition that detailed stories are more likely than less-detailed ones, but this is actually false; more details mean more ways the story could be wrong, and therefore lower intrinsic likelihood. (See: conjunction fallacy.)
Now, if a hypothesis makes a lot of detailed predictions, and those predictions are later verified, then THAT would lend support to the hypothesis. But that's impressive specifically BECAUSE those details make the hypothesis unlikely, all else being equal.
Regardless, that seems different from the question of whether the explanation comes from a domain that is well-studied. Studying a domain has a chance of uncovering evidence that would support a related hypothesis. However, if you study the domain and fail to find such evidence, that actually makes the hypothesis LESS likely, because you had a chance of finding evidence and didn't find it. (See: conservation of expected evidence.)
And of course, if studying the domain DID uncover specific evidence, you should be citing the actual evidence, not merely the fact that the domain is well-studied.
But, again, all of this seems pretty irrelevant to a long-running debate about the strength of these effects.
The demand for causal granularity will never end, they will claim we can't model particle physics well enough to prove the causality of protein encoding.
Thank you for supporting my position that we can't make a claim about heritability given our current understanding of genetics. I do appreciate it when people provide support for my positions.
(1) "Can you identify all genes, and only those genes, that are responsible for the difference in intelligence between von Neumann and a hamster?"
No.
(2) "If no, must we believe that the difference between human and hamster intellectual capacity is socially determined?"
Are you seriously saying you are uncertain on the question of whether the difference in intelligence between humans and hamsters is socially or genetically determined, and you need more information before you decide one way or the other.
The difference between hamsters and humans is an interaction between genetic influences and environmental ones, an interaction effect that has been called "Natural Selection." The problem is that this process cannot explain the differences between individual human beings (for theoretical reasons). It could have potentially explained the differences between certain human populations (race) but it didn't. But given the importance of natural selection to the phenotypic characteristics of a species, including humans, some sort of interaction between upbringing and inheritance should probably be given additional weight.
I think you're conflating two questions: (a) "what determined the current difference in intellectual capacity between any particular human & any particular hamster: upbringing, or genetics?" and (b) "what made things that way in the first place?"
(a) is "it's the genes", and supports the point Jake makes.
(b) is "natural selection", and isn't germane.
One could make some sort of argument along the lines of "well, you can't explain why different groups of people would evolve different intellectual capacities, so it can't be due to any heritable element"—but that's a different argument (and also wrong).
Common anti-hereditarian answer is that humans and hamsters are different species so therefore argument is invalid. (Silently throwing away notion that each species originated with accumulation of small changes)
If I were given nothing but a running wheel and an oversized straw water bottle, I would develop an acute desire to sprint in place! Us social determinists are far more intelligent than retarded hereditarians. Our environments must have been far more conductive to intellectual growth.
Your joking, but if there were in fact a consistent and reliable IQ difference between the two groups of researchers, that would have dome interesting implications for the two theories.
You remind me of the academic virologists who were furious back in mid 2020 that anyone would claim that a viral mutant was more transmissible than wild type without also knowing the biochemical details of why it was more transmissible. Good old Vince Raccianello.
Hereditarians at least know you can clone these individuals and get their traits. Yet human cloning is fordibben, for deceit of anti-hereditarians could not be exposed.
Cloning von Neumann's environment is not forbidden. Yet you can't have average couple, give them book "How to raise a genuis" and other stuff and get a genius later. What we do have though, is some cherry-picked studies of educational interventions that give goodharted points than usually fade after several years after end of interventions.
Theists used exactly this argument to dismiss evolution, eg. "missing" fossil records, the "irreducible" complexity of the eye, etc. The flaws of this response are hopefully obviously by now. That the precise factor is not yet known is not sufficient to refute our knowledge that it must exist.
personally, for me it's not case closed on the most-investigated traits like education and IQ, but Gusev has convinced me there's some bad twin-studies out there, and I am starting to wonder about the personality results. Twins can plausibly affect each other's self reports in weird ways more than test scores.
Didn't Laura Baker find stunningly high heritability estimates for childhood antisocial behaviour in the Southern California Twin Registry, using a multi-rater approach to reduce noise in the signal? That can't all be pinned on self-report effects.
We don't know, and that's the problem (if we did, we could correct for it). It seems that this question could be answered with more research, but I am not aware that anyone has done that research.
My mom and her relatives eat like saints and still struggle with out of control high blood pressure, which seems to be a heritable family trait.
Mom's diet is Really Saintly. I know she's not secretly eating junk because she doesn't LIKE junk. She was raised in a health food house and she prefers health food.
As I youth, I use these anecdotes to conclude that humanity consistently underrates the influence of genetic factors on high blood pressure. I now know that conclusion was epistemically unsound, but I wonder if I didn't accidentally stumble upon the truth, anyway.
I feel like there's a much better approach to this question: just go through the study measurements and calculate how much assortative mating there is on each trait.
If they have lots of brothers, sisters, and cousins, presumably they also have lots of mothers and fathers.
(They seem to be covering about 0.6% of the population of England, so in the abstract everyone could be unrelated to everyone else, but that doesn't appear to be the case.)
I don’t know the acronyms/jargon. I think the meaning is:
ACE twin h² = the heritability estimate (h²) derived from a classical twin model.
Sr = sibling regression (don’t know what that is)
RDR = Relatedness Disequilibrium Regression. The new method estimating heritability using random variation in relatedness among siblings (not relying on twin assumptions). It tends to give different h² estimates than twin models.
Twin studies systematically underestimate environmental effects because no one is deliberately going to give a baby to adoptive parents who are abusive or poor. You're not comparing Twin A, whose parents paid for tutoring and a house in a great school district, with a Twin B who grew up food insecure and had to start help paying rent at 16, much less a Twin B who grew up hiding bruises and watching their parents bounce in and out of jail. Ruling out any family that's below-average on financial and emotional stability creates a restriction of range effect that greatly reduces the measurable contribution of the environment.
I'm a complete idiot on this stuff, so this may be completely confused, but is it weird that population structure could affect hard biomedical traits? If white blood cell count is correlated to some population substructure that people *do* assortively mate on, would that work? Is it just implausible that there is such a substructure that correlates with white blood cell count? Or am I being dumb and misunderstanding the proposed mechanism here?
Yeah, I saw his comment as I was finishing up mine.
If I recall correctly, there were some results a few years ago where taking polygenic scores for fairly hardcore biomedical traits (my memory says blood pressure or heart disease, but I can't find exactly what I was thinking of) from GWASes that had been performed on Europeans, and applying them to Africans found them to be much less predictive; at the time I remember reading that undiscovered population structure was thought to be a contributing factor.
Is there a reason to believe a similar effect wouldn't apply here? Or has subsequent research found that there was a different explanation for what was going on with those polygenic scores? Or something else?
It's pretty typical that polygenic scores developed from looking at one human population don't generalise as well to others, but I imagine the authors are aware of this problem and took pains to either correct for those effects or restrict their sample to a relatively homogenous population. (There is such a thing as overcorrecting for population structure as well- c.f. the sociologist's fallacy.)
I think Scott once wrote that much of attractiveness is an indicator of resistance to disease. Both Angelina Jolie's lips and Brad Pitt's forehead indicate high sex hormones during adolescence . High sex hormones make you more susceptible to disease, so that attractiveness might well be a stand-in for high white blood cell count (immune system function)...
I mean, as someone who was very immune to Angelina Jolie's charms, I find this a little bit of a just-so story, but yeah, stuff kind of along these lines doesn't sound totally implausible to me
I feel the same way about her but speaking as somebody who has spent time hanging out with a supermodel, you might be surprised how powerful they are in person.
I think it's fairly well understood that people look much worse on camera?
If you want to make a career out of looking good on camera, you could try to invest in strategies that reduce the penalty the camera applies to you, but it's easier to just look better, which also improves your appearance on camera.
My understanding of the story presented was that survival of adolescence with higher than normal sex hormones indicated an unusually strong underlying immune system.
More susceptible during adolescence when the sex hormones were abnormally high. Back to normal during the time when making was likely to occur (and thus sexual/genetic selection). (edit: If you have high sex hormones but still survive to reproductive age, you must have a top notch immune system.)
A bit like how a peacock's tail is itself a disadvantage (bulky, heavy, etc), so for a peacock to be successful despite that disadvantage demonstrates that he has other valuable advantages.
The OP is saying high levels of sex hormones make you more susceptible to disease, so if you have them but still haven't died of a disease, we can infer you also have a good immune system.
Yeah, this is a great point. Isn't one of the nurturist arguments that twin studies give high heritability for things like peanut allergies that we know are not genetic?
I'd love to know what this study design would say about that.
My impression (from a paywalled Lyman Stone article that I only skimmed, so very much a low confidence impression) is that twin studies yield implausibly high estimates for peanut allergy specifically, even though it's known that there's a large environmental component.
Maybe some or all of that is wrong, but I'm not sure it's incompatible with allergies in general being highly heritable
Note that heritability does not mean there cannot be large cohort changes if the environment changes. Since allergies often have to do with exposure (allergy = the immune system has incorrectly learned to attack a non-dangerous targets with large force), this is not so surprising.
Yeah, maybe I'm borrowing too much of Lyman's framing of what's plausible; I think the broad point still stands that looking at a case where we are very confident that there are environmental interventions that can produce broad cohort effects so the causal model shouldn't be "100% caused by genes, no plausible environmental intervention could possibly matter" would help calibrate people on how to think about such a study.
Do we see that environmental pathway showing up in a lower heritability estimate then we get from twin studies? Is the drop bigger than for other traits? Etc.
Isn't the main environmental component that peanut allergies are less likely if the mother eats peanuts during pregnancy? And twin studies measure pre-birth effects as part of genetics, so of course that one's going to come out too high.
(Though, for all I know it could just be that peanut-allergic embryos get miscarried and replaced by ones with no-peanut-allergy genes later, rather than truly an environmental effect.)
I think my saying "implausibly high" is maybe detracting from my main point: peanut allergies are a place where twin studies yield high heritability estimates, even though there can be large environmental effects.
Intuitively, other ways of measuring heritability might be expected to find lower estimates if they are somehow able to capture those environmental variables in a way twin studies can't, but where we already have a decent idea of what those environmental variables are so seeing a lower heritability wouldn't have us debating if it's because we're missing rare variants, GxG interactions, etc... We *know* there's environmental effects, so expect those to explain at least some of the decline in heratibility.
But then we can use that to calibrate our sense of how meaningful the differences are between twin studies heritability and this new method heritability: it's already been mentioned that height behaves differently than other traits, presumably because it is very strongly genetically caused and we're not missing much environmental stuff in our modern environment; ideally peanut allergy could function as an endpoint on the other end: here's what it looks like for a trait where we know twin studies are failing to see a big environmental effect.
Then you take other traits of interest and see, do they behave more like height, or more like peanut allergy, and then that's a good first estimate of whether the missing heritability is because of undetected environmental effects or not.
But without peanut allergy, we're left debating how to interpret what this study says for those non-height traits.
Allergies are definitely "genetic", but "heritability" is supposed to be a measure of how much of observed variation is correlated with genetics, I think (so that, properly measured, heritability can differ between cultures, subpopulations, and environments). I heard that "wearing earrings" was somewhat heritable but not genetic, which leaves me too confused to say more.
Heritability is a population (and sample) statistic, so it can trivially differ between different populations, times etc. If you reduce non-genetic variance in some phenotype, heritability increases.
Pretty much every behavior is heritable to some extend because they reflect dispositions to act in various ways. It is of course not random who wears earrings. This phenotype has a quite high heritability because women wear them often and men don't (men and women differ genetically of course). And there is non-random genetically linked variation within the sexes.
I realized after writing that that "wearing earrings" has a red herring quality, since it's influenced by sex, which is genetic but not inherited.
TIL a better illustration of how heritability differs from inheritability: "speaking English" is partly heritable (though "speaking Turkish" is not).
This comes from dynomight's article on heritability which is the best explanation I've seen (only like 15% unclear): https://dynomight.net/heritable/ (thanks to Niclas for the link)
The problem with twin studies on allergies seems is that classical twin studies assume genetic contribution is fully linear. Height is linear and has optimum for given environment -- easy trait. Allergy is exponential answer to small inputs and is never good, so linear prediction from genes works bad. So it shows too low environment influence than it really is.
> Nurturists argued that the twin studies must be wrong; hereditarians argued that missing effect must be in hard-to-find genes.
Sorry if this is a stupid question, but when you say "genes", do you mean just the actual genes (or maybe even just CDS) ? Because there are lots of regulatory regions that influence gene expression (e.g. promoters or silencers), but aren't strictly speaking parts of genes.
The paper analyzed whole genome sequencing; that would include the genetic material that encodes for promoters and silencers. If they had only looked at the transcriptome (DNA that is transcribed into mRNA) then the regulatory regions would not have been included.
How much of the biomedical data could be explained by SES or life history more broadly? Rich people are likelier to exercise, eat healthier, and get better medical care (in the US). They’re also less stressed and negatively selected for disabling health outcomes (can’t be a CEO if you have schizophrenia). We know nutrition affects height; why not IQ or white count?
Don’t people do assortative mating on health and therefore indirectly on biomedical things like white blood cell count. Note, I don’t have much expertise in this area, so if this is a really stupid question, I’m sorry, but this is the obvious doubt I had.
Whenever there is a post on Heritability I recommend people read Dynomight's "Heritability puzzlers", which illuminate what heritability even is (which is not what you might intuitively think!)
Am I... reading this right? In the first post, did he say, basically, that if genetic heritability is high, then changing the environment (probably, usually) won't do much, but in the second post, he shows that deliberately changing environment in certain ways will make heritability swing to whatever arbitrary value you want?
I suppose the easiest way to reconcile the two is by pointing out that, in practice, "changing environment" in such a way as to get you whatever arbitrary heritability value you want is really really difficult.
Yes. As the first article points out, heritability only tells you about how the trait varies in the current typical environment. So if in your current environment, some people eat more fish and others eat more wheat, but very few people eat yogurt, heritability tells you nothing about the effects of eating yogurt. Importing yogurt for everyone could change the heritability of some trait.
This is a population-vs-individual-level thing. In the first post, he's saying that changing an individual's environment, within the normal range that exists in that population, won't do much. In the second post, he's talking about changing the environment of populations (all the "islands" examples) and that can change the heritability a huge amount.
The important thing to understand is that "heritability" as a concept is only defined with respect to some population.
I've always had problems with that word because people seem to want to define it in a certain to win a fight. I think I have a handle on it now. Highly recommended.
Seconding that everyone should read this before forming an opinion on the topic. Heritability is a very precise and unintuitive statistical concept. It is _not_ the percentage that genes contribute to a trait, like many people think. That's not an approximate description, it's just incorrect.
OK, so if I understand correctly, the study shows that heritability is in a range that tells us an important fraction of many traits is inherited, as the hereditarians have claimed, and the amount we can predict is limited, as the nurturists claim - but either way, everyone is agreeing about the discovery of the missing heritability. And it's not like we're doing something like measuring size of elementary particles or the pull of gravity, where we have specific concrete theories that would be falsified if they are actually 0.001% higher than predicted. And (now that we know the range pretty well, at least,) I simply don't think the exact number for heritability of different traits matters for basically anything practical.
It's unfortunate (but demonstrably the case, given how science is done today,) that we can't do truth seeking in science without factionalist approaches that don't change anything about the substantive conclusions we should come to. But if we need to talk about it, I really wish we didn't talk about the resulting clarity as being about which side won, even if they can't stop doing so.
Exactly the point. In areas like physics, the specific numbers are relevant enough that knowing details leads to new insights and changes what we believe about the world. Unlike here.
Then we got "dark energy" which turned out to be even more significant. To me, who is not in the field, it seems like the law of gravity might need a revision rather than more patching. Perhaps something on the lines of MOND?
No, heritability does not tell us what fraction of the trait is heritable! Heritability is not a measure of how strongly traits are inherited. The number of fingers you have is damn near 100% inherited and has heritability ~0.
Historically, popularizers of certain kinds of intelligence research (Charles Murray and Emil Kirkegaard are the biggest offenders here) have lied about this because they expected the heritability of IQ to be high and wanted to convince people that this means intelligence is robust to environmental intervention, so the confusion is understandable. I think that understanding why this is not what heritability means will shed light on why people care about the differences in estimates in cases like this one. (The upshot is: agreeing about the number is less important than figuring out which kinds of methods estimate the number correctly, and the latter is much more substantive.)
EDIT: Apparently Kirkegaard is in the comments here, so I should substantiate this accusation! Here he is in this thread claiming that hard-coded traits are necessarily highly heritable: https://www.astralcodexten.com/p/the-good-news-is-that-one-side-has/comment/183846734. I have explained this to him before and he has acknowledged it (more accurately he's claimed that every time he makes incorrect claims he actually implicitly means the correct ones and anybody who doesn't substitute in correct claims for all the incorrect claims he makes is an idiot so it's not a problem if he's technically lying, but same difference), this is how I know he's lying and not merely wrong. Of course since this happened years ago on a different platform I don't have receipts, so anybody reading this is welcome to believe that Kirkegaard is merely wrong, I can't convincingly prove his intent to an outside observer here.
Seems like I phrased this poorly, but I think you're misunderstanding the point being made. When I said "heritability is in a range that tells us an important fraction of many traits is inherited," I should probably have said "heritability is in a range that tells us an important fraction of many traits varies based on inheritance."
But you went off on a tangent, and seemed to imply that if we have really good methods, the number doesn't matter. That seems clearly wrong - no matter what the new methods are, unless this work is using bad methods, we've bounded the possible range to where no one should care.
Also, you can complain about other people's bad faith updates of their views and refusing to admit they were wrong if you want, but it seems unhelpful, especially when being included as a response to what I said.
I agree that the edit makes my comment much less relevant to you. I added it because I didn't want to make an unsubstantiated claim about someone other people were likely to come across if they were reading the comments sequentially but it's not particularly substantive to my response to you, sorry for the tangent.
I think you still have significant misunderstandings about what heritability is. Your new phrasing is correct as a definition of heritability, but is not a plausible summary of hereditarianism. Hereditarianism is the claim that intelligence is "innate", in the sense that it's unlikely to change drastically under environmental modification, which is basically a claim about what the mechanisms of intelligence look like. Heritability is deliberately mechanism-agnostic (this is part of what makes it a good thing to care about! Mechanisms are really really hard in genetics, it's nice to have something that works without knowing them), so knowing the heritability alone just doesn't say anything about which side is correct.
The point about methods is that because we don't have access to counterfactual rollouts of the same genome in different environments we can never estimate "true" heritability, and if you define heritability extensionally then technically every trait has heritability 1 because people have unique genotypes. So to estimate heritability, given that we don't have access to counterfactuals, we (roughly) break genetic influences on a trait down into their components, make assumptions about which influences can be safely ignored, then design studies that accumulate the non-ignored components into a number. Different methods miss out on different things, this is particularly true for intelligence, but as we use more methods and accumulate more estimates of heritability, we become more able to piece together the relative importance of different types of genetic influence.
As a simplified model, assume Method 1 for determining heritability treats nonlinear gene-gene interactions as negligible and Method 2 doesn't. Say that both methods agree for Trait A but they disagree for Trait B. We can conclude that gene-gene interactions probably aren't important for Trait A, but they might be important for Trait B, at least in the environment we were investigating. Unlike the raw number this *does* say something about the mechanism, which is the thing we really want to get at.
(In practice things are harder than this even for relatively simple traits and intelligence is a really damn complicated trait so everything is underdetermined, but the point is that the differences in estimates yielded by different methods matter a lot even in cases where we don't care exactly what the number is.)
"...is not a plausible summary of hereditarianism. Hereditarianism is the claim that intelligence is 'innate', in the sense that it's unlikely to change drastically under environmental modification, which is basically a claim about what the mechanisms of intelligence look like."
I don't think that's true, at least as written. No-one is arguing that lead exposure or iodine insufficiency or repeated head trauma doesn't change intelligence. You seem to be making a stronger claim, or at least need to make such a claim to disagree with the hereditarians - you need to say that there is significant room for variation in the environmental *on the positive side*, and that we can increase intelligence from the current level among even the richest people by changing things. (Or you need to say that it's functionally impossible to do so with genetics, which requires hereditability to be effectively zero, which is not true.)
"I don't think that's true, at least as written. No-one is arguing that lead exposure or iodine insufficiency or repeated head trauma doesn't change intelligence."
Sure, I'm being a bit imprecise here. Maybe I should say "plausible environmental modification under shared environment distributions" instead. What I mean to say, and what I understand to be the hereditarian position, is something like: "Individual differences in intelligence are largely up to genetics in typical cases. Of course there are interventions which have important effects on intelligence, you can give anybody brain damage. But these effects are only so subtle and only so strong. Exposure to lead changes intelligence a lot and is pretty easy to pin down compared to the genomic project we have on hand. Access to better schooling has a smaller influence and is harder to pin down. We should expect that, the harder an environmental factor is to pin down specifically, the smaller its influence on intelligence. Moreover, even if we catalog and control for all possible environmental influences, significant differences in intelligence will persist and will be explainable by genetics. On the policy side, we should expect interventions meant to address group differences in intelligence to fail unless they address genetic differences."
"You seem to be making a stronger claim, or at least need to make such a claim to disagree with the hereditarians - you need to say that there is significant room for variation in the environmental *on the positive side*, and that we can increase intelligence from the current level among even the richest people by changing things."
I agree with the first half of this - if the hereditarian position is false, then either intelligence is not meaningfully measurable at all (which I don't believe) or differences in intelligence can be to some extent addressed by reasonable environmental interventions, and in particular it must be possible for reasonable environmental interventions to basically close intelligence gaps to the maximum at the group level. I think this is probably the case, but genetics is hard and we're not sure how intelligence works - I'm something like 75% confident that hereditarianism is false in this sense.
The second half, the claim that a rejection of hereditarianism entails the possibility of uniformly-beneficial environmental interventions (not just closing gaps by bringing all subpopulations to the highest standard, but also raising the waterline even at the top) is complete nonsense and I'm not sure where you got it from.
"(Or you need to say that it's functionally impossible to do so with genetics, which requires hereditability to be effectively zero, which is not true.)"
Again I think you just don't understand heritability and need to read more of the standard explainers. High heritability and high genetic influence, and even causal influence, can coexist with a complete rejection of hereditarianism! It could be the case that people who look weird have less access to social settings where they become smart, and since appearance is largely genetic, this provides a causal mechanism by which genes influence intelligence. Genetic intervention could succeed if this were true, we could change the distribution of appearance genes so that nobody ends up weird-looking and this would eliminate the intelligence gap. But this would be a disproof of hereditarianism: a genetic intervention could make the difference here, but an environmental intervention (just treating people the same regardless of how they look) could have had the same effect! Heritability does not tell you the mechanism and hereditarianism is dependent on claims about mechanism.
EDIT: I think maybe I misunderstood your last parenthetical and should have given the converse objection instead? Nonzero heritability doesn't mean that genetic intervention is needed to change a trait, but low heritability (even zero) doesn't mean the opposite of this either. Heritability is only defined up to a distribution on environments - it's entirely possible for hereditarianism to be true even when heritability is low, because the current distribution on environments might suppress intelligence to far below its genetic "potential", making people look equally-smart even when genes play a big and robust role in intelligence.
Right, so the important and meaningful part of the hereditarian position, the part that has implications for actions and policies, is the question of what could increase intelligence, *both in lagging subpopulations and in the most intelligent subpopulations*. That's what the hereditarians claim to care about, and what I think matters - though different people focus on different parts of the question. Your example of appearance being a moderator is correct, but I agree that it would be irrelevant, as the actual argument isn't about technical heritability.
If intelligence at the high end is unchangeable by genetics, but at the low end is changeable directly, some hereditarians will declare victory, others will declare defeat. (And I guess if it's very changeable at the high end but not at the low end, the reverse will occur, but that seems implausible mechanistically.)
But overall, if plausible environmental changes can change intelligence by significantly less than genetic changes, and genetic changes can have a large effect, I think both groups of hereditarians can and will declare victory, in terms of the implications they care about.
I must say I'm baffled by where the battle lines have been drawn. The natural lines would seem to be a more literal reading of nature vs nurture, that is, heredity+noise vs nurture, but the actual debate seems to be between heredity vs everything else?
Since we have no effective interventions to increase g beyond stuff like avoiding malnutrition, the possibilities on the table are "Your IQ depends on the lottery of who your parents are" and "Your IQ depends on a bunch of other lotteries as well", which practically don't seem very different to me.
In an already rich society? Can you give me a list of the interventions (it's a bit late for me and my kids, but could use it for the grandkids one day).
OK, but this is a short list so far. We have: 1. Avoid smoking, and 2. Increase motivation (who's motivation for what? Isn't intrinsic motivation for one's good behaviour also partly genetic, sth to do with dopamine levels? Or is it some other motivation?), 3. Something else that is not direct.
This doesn't help me much, neither does it help the Health Minister who wants to raise or equalise the population g-scores. Is there something I misunderstood?
A good example of this would be something like fingerprints. What your fingerprints look like depends on highly stochastic factors within the womb, like the precise flow of the amniotic fluids and whatnot. It's not genetic however; you're not going to pass your fingerprints on to your children. Even identical twins have different fingerprints. Yet, your fingerprints are just as immutable as any genetic trait. Even if a large portion of heritability is environmental, some significant portion of it could be like fingerprints in that it is nevertheless lottery-based and fixed.
The next dystopian essay I ever want to read ought to be about using artificial wombs to engineer custom fingerprints. Possibly with product logo placement. No more dystopian news is allowed until I read that.
Once upon a time, people started using artificial wombs.
Some concerned onlookers warned that it could stunt normal neurological development and human attachment. They pointed to studies of adult adoptees talking about the trauma of separation from their birth mothers. They pointed to advocates against gestational surrogacy who worried about similar separation trauma plus other problems like human trafficking. They pointed to studies of NICU babies who did much better with more skin-to-skin contact than with less, even when all their basic medical needs were taken care of.
But these concerned onlookers didn't have the funding to conduct long-term studies, so the science was never settled one way or the other.
As AGI got better, it claimed most jobs. Most goods in society became cheaper, and salaries lower - more or less a wash. But for the upper classes who could still find highly paid jobs, finding new and expensive ways to impress their peers was as popular as ever.
Designer babies came into fashion. Who wouldn't want to be able to pick only their smartest, healthiest, most attractive, most conscientious offspring? Here was something other than real estate that was worth saving up for.
But gestation, birth, breastfeeding, and parenting were a harder sell. Those highly paid jobs weren't keen on letting people out of the workforce for a few months, much less a few years. Taking time off for mothering would make them fall behind the ever-increasing pace of change. No company would hire them back.
So artificial wombs, formula, and nannies came to the rescue. Consumption decisions multiplied: Customize your child's fingerprints! Add extra omega 3s to your formula! Hire a trilingual nanny!
And as those became glamorous among the upper classes, the shreds of what was left of the middle class aspired to them as well.
Artificial wombs were expensive. That's one thing that made them such a status symbol. With declining wages, they were a real stretch for many.
When the first company started offering corporate logo placement in custom fingerprints in exchange for steep discounts on artificial wombs, can you guess what the reaction was?
Trick question. They didn't just start offering them right away. They focus grouped the hell out of it first. With AI participants, of course.
What they found was that simulated participants found a steep discount insulting. It watered down the status signal. A corporate logo would broadcast "my parents are second-class strivers, and this is the only way they could afford this procedure."
Slight discounts? Less insulting, but still not worth it for most interested consumers.
Zero discount? Opinions were mildly positive. Several AI respondents said they liked how it allowed them to signal affinity for the vibes of a company while also demonstrating the ability to commit.
When the focus groups were wrapped up, the final decision was to offer corporate logo fingerprinting for a 50% upcharge over the standard custom fingerprinting charge.
Yes, on the other extreme is good looks. It's obviously extremely genetic (see identical twins), but only somewhat heritable (in the precise sense that beautiful parents can have ugly children and vice versa). But the exact ratio of genetic vs heritable vs stochastic is much less interesting than how mutable the trait is. And people seem to be aware of this when discussing looks, hence we have a lot more discussion about interventions like skincare and plastic surgery than about heritability. Intelligence just seems to attract bad discussion, perhaps not helped by the fact that there are no serious post-birth interventions to discuss.
In the case of good looks, it's because of non-linear genetic effects rather than stochastic environmental factors, where it's not just about having the right genes but having the right combinations of genes. This is why ugly parents can beget good-looking children and good-looking parents can beget ugly children. Because while the genes themselves are passed on, specific combinations of those genes are not.
Although, from what I've read, the genetic causes of intelligence specifically are mostly linear. Non-linear effects are minimal.
There are serious post-birth interventions that protect intelligence. They're just not low-hanging fruit in the developed world anymore. But developing countries see gains from them. E.g. decent nutrition and avoiding lead exposure.
Regarding beauty, I seem to recall it was found to have a substantial component of symmetry to it. So if your womb and circumstances can produce a very symmetrical child, you're a winner in this regard.
As for intelligence, people seem to forget that even those damn hereditarians think there is a 20-50% environmental component to it (at the moment).
A person with 4 good looking grandparents and 2 good looking parents is quite likely to be good looking. The thing is, one can look like a grandparent or have a unique combination of the parents' features. But it is sort of similar to IQ: good looking parents have better looking children on average.
No “who your parents are” is NOT synonymous with “heritability”. That’s actually entirely specifically NOT what heritability is supposed to measure. Who your parents are include this huge messy mass of confounding variables like wealth and status and education and the like.
What heritability is *trying* to measure is the specific genes you inherited from your parents, and their effect on your life. Independent from all the other factors your parents contributed to your life! Maybe you’re aware of this distinction but just misspoke.
I am aware but trying to be (perhaps overly) brief. In any case, my point is that we have no effective interventions and nobody proposing any potential interventions. In that case, what exactly are the stakes of the debate? Why do we care if IQ is 20% genetic and 80% other luck as opposed to 80% genetic and 20% other luck?
From the perspective of the individual, those other factors can be considered luck. But from the perspective of a society designer, there's actually quite a bit of control there. There are real policy implications here.
Perhaps there's an implicit "optimising" assumption in all this -- so less about intervention post fact and more about planning reproduction??
For a simple example, if IQ is 80% heritable and highly immutable, AND YOU CARE ABOUT PRODUCING MAXIMUM IQ CHILDREN, you'll be much more careful about who you reproduce with in terms of IQ specifically. If IQ is 20% heritable and the rest is various more or less random but some non random luck (unlikely but not impossible candidates off the top of my head: intrauterine environment, consumption of certain chemicals at the time sperm was produced, egg quality, stress during pregnancy, nutrition during pregnancy, breastfeeding until the age of 2, avoiding childhood infections/ inflammation, cosleeping until 6mo, never being shouted at ever, exposure to trace pesticides before 1yo, having Mozart played during labour, mother under 21, bilingualism, high fat diet, etc etc etc) then IF YOU CARE TERRIBLY MUCH ABOUT OPTIMISING OFFSPRING IQ your decision process will be completely different.
And that "optimising drive" is not obvious at all, because while avoiding sub-normal IQ seems very reasonable, aiming to maximise it strikes me as not obvious a personal or societal goal at all.
But maybe there's also something relevant to interventions in already existing extreme cases or designing education system (can't see how heritable immutability would differ from any other tho).
Yeah lol I was confused my impression of you was not that you’d make this misunderstanding so I added the potentiality at the end.
Why do we care? Because it impacts policy of course. People are getting rid of accelerated courses in schools because they think it’s all socially cultivated. Countless other examples.
> People are getting rid of accelerated courses in schools because they think it’s all socially cultivated.
That's not a coherent statement. If you think that faster progress in accelerated courses is "all socially cultivated", that doesn't provide an argument for eliminating the accelerated courses. The faster progress still happened! It's an argument for eliminating the non-accelerated courses.
People are getting rid of accelerated courses in schools because they want to prevent some students from getting ahead of other students. This is something that can be partially achieved by getting rid of accelerated courses, even though the opposite, preventing some students from falling behind other students, cannot be achieved by getting rid of the fingerpainting courses.
Sure, that's true, but in this case people are happy to declare in public that what they want is for white children to stop outperforming black children.
I've never seen someone make the argument "the gifted class automatically helps whichever students are admitted to it, and that's why no one should ever be admitted to it".
(I should note that in doing research for this comment, I did find mention of Dallas adopting a new policy of enrolling students in "advanced" or "ordinary" classes automatically according to their standardized test scores. The fact that this was new does lend a good amount of support to the argument that "it's all socially constructed".
But here the solution was to stop doing something stupid and do something reasonable instead, not to stop doing something stupid and do something worse instead.)
Yeah lol I was confused my impression of you was not that you’d make this misunderstanding so I added the potentiality at the end.
Why do we care? Because it impacts policy of course. People are getting rid of accelerated courses in schools because they think it’s all socially cultivated. Countless other examples.
> Why do we care if IQ is 20% genetic and 80% other luck as opposed to 80% genetic and 20% other luck?
Because "other luck" can be isn't a literal random number generator, it's potentially environmental stuff that can be changed and optimised.
And of course all of this debate is taking place in a social context where we have certain genetic clusters within society, and some of these clusters perform better economically than others, and this is something that a lot of people care deeply about.
> And of course all of this debate is taking place in a social context where we have certain genetic clusters within society, and some of these clusters perform better economically than others, and this is something that a lot of people care deeply about.
This is in fact something I just investigated and wrote about - yes, there's an obvious PMC / non-PMC divide in America, but it goes much deeper than this.
Amazingly, if you cut by ~6 income tiers across Americans to create 6 clusters of people, there's a significant amount of differentiation in the overall stats (education, BMI, spousal BMI, time spent on phones, hours worked per week, number of close confidantes, diet quality, and much more) between those 6 tiers, and if you take a combined "education and / or income" measure of homogamy, these castes breed with ~93% homogamy!
What is the "actual Indian caste system" homogamy? About 95%.
America has essentially ALREADY differentiated into castes!
One thing I don't understand though, is why there is no emulation between the castes.
If environment *actually* mattered, then the caste just below the next one should be able to observe what they're doing different, and do it, and thereby raise their own children's attainment. Even in a noisy and not-rigorously-executed world, if environment ACTUALLY mattered 50-70%, which is the proportion that a Gusev or other anti-hereditarian would advocate for, then that's an absolutely massive amount of arbitrage and lift you should be able to tap into, in an area a lot of people really care about (offspring's success and attainment).
But you essentially never see this happening successfully, anywhere in the world. Why is that?
Once people are solidly stratified into castes, they often care more about themselves and their kids being at the middle or top of their caste than about advancing to somewhere near the bottom of the next caste up. If you live in a middle-class neighborhood where most people didn't finish college and make less than $100k, you might be satisfied to be the guy with a state school degree making $150k, especially if the alternative is to go into debt to attend a fancier school with less fat people who have different tastes and will scorn you as a prole, and then stretch to afford housing in the right district and always be the poorest one in every social gathering, etc.
This is not an iron rule and of course there is still some intentional class mobility in categories like immigrants, parents who screw up their own path and underperform but try to set their kids up to rise, remaining shreds of actual meritocracy, etc. But lots of people are trying to win within their caste rather than change everything.
Great insight. Though I will say even slight IQ differences as you go up the class ladder combine with the environment/wealth advantages of those already in the elite classes to become a formidable barrier for any aspiring new entrant.
> If it is mostly genetic, it means eugenics programs actually have a chance of working.
What's funny is that in literally any other species, this is just plainly obvious and not controversial at all.
If we'd been taking positive eugenics seriously in humans, we'd have had ~8 positively selected generations since Francis Galton was advocating for eugenics. 8 generations isn't much, we're definitely hampered by being long-lived and slow breeding and developing, but you can actually accomplish a surprising amount.
What can you do in only 8 generations?
> Between 2000 and 2016, US dairy cattle breeders, by applying selection pressure to increase the productive life, achieved an increase of about 10 months
> After eight generations of selection, the percentage of dogs with an excellent hip quality score (as assessed by an extended view hip score) increased from 34 to 93% in German Shepherd Dogs and from 43 to 94% in Labrador retrievers.
> In dogs, it's generally thought that it takes ~7-8 generations to get a new measurably distinct *breed* entirely
> More generally, traits with a heritability of at least 15% are considered good candidates for genetic selection. Essentially everything we care about in humans (intelligence, height, strength, conscientousness, neuroticism, mental illness, health, etc) is way above 15% heritability, and even the Gusevs and other anti-hereditarians will tell you that.
But suddenly when it comes to HUMANS, oh no, selection for traits couldn't POSSIBLY work.
Now, just imagine if we had applied that level of selection at a large scale to a significant human population for 8 generations. That population would enjoy significant buffs in things like intelligence, conscientiousness, and health.
Yes, if everyone is tall and smart and pretty, we're theoretically less "diverse." But that's the bad kind of diversity - we want the kind of diversity where healthy and highly capable people all specialize in what they most care about, and self-actualize at higher levels than would be possible if they were sicker, dumber, and less capable.
Eh... intelligent people are harder to align, are more capable of being a threat to the collective, and are generally harder to please. Ideally we'd want different castes specialized for different tasks, sort of like an ant hive.
It's interesting that you associate yourself grammatically with the selected population, saying WE'D be less diverse, WE'D have had 8 generations of buffs, etc. Maybe you identify more with the higher-functioning end of humanity by default - and not with the people whose genes would be extinguished under the kind of eugenics we reserve for animals.
How would you convince someone whose in-group/out-group associations were different than yours that "we" should all want a eugenic future?
> How would you convince someone whose in-group/out-group associations were different than yours that "we" should all want a eugenic future?
I mean, if I were god-emperor, I'd have made gengineering legal 10 years ago and we'd already be putting SNP edits into kids, and I'd make some amount of those publicly funded.
So you wouldn't convince them, but you WOULD let them choose from a menu of healthier, naturally muscular, prettier, higher IQ, and whatever, and the state would pay for some number of those genemods being put into their kids, depending on cost.
In other words, let the parents decide!
And there's SO many SNP's that do great stuff that we could literally do today if all the governments of the world didn't suck:
In the US there are education gaps (which translate to wage gaps) in different populations. Not only the usual racial stuff, but also between eg working class vs intelligentsia. Is this caused by social causes (eg discrimination) or is this innate?
Then migration is an effective intervention for a gene pool of a country. Does it matter for an English colony if future immigrants are German, Irish, Chinese, Latin-Hispanics or Somalians? Or are people blank slates?
The short answer is that it’s obviously about immigration.
The longer answer is that there are no lotteries. There’s plenty of things that influence which genes pass to the next generation. That’s what life has been about for most of its history. For individual communities being smart about “immigration” mostly trumps everything else, but there are other ways, which apply in a global scale as well. (“immigration” here can also refer to things like the Darwin family mostly marrying within themselves. Not that I am in favor of that sort of tactic specifically.)
Also, the other effects are only lotteries to the extent that we don’t know what they are. If we understood them better, we would certainly try to control them.
I don't understand why "heritability" of a trait is expected to be remotely consistent in different populations and studies. Neither the amount of genetic variation, nor "environmental" effects, nor measurement errors seem like they ought to be universally consistent. If two different studies in different countries with different recruitment strategies find people have different average heights, should scientists divide into factions believing "people are height X" and "people are height Y"? I would understand the confusion if the estimates differed by 100x.
Yeah, I think this is a key point - heritability is a ratio between genetic and environmental factors, if you use two different populations with different environmental variance, you should expect different heritability ratings.
I'm not sure whether I'm missing something obvious or everyone else is missing something obvious... it feels like everyone is trying to deduce the singular 'true' heritability value for factors like intelligence, without reference to environmental context. But heritability is a ratio between genetic and environment, there *is* no 'true' value independent of environment.
It's like saying 'I want to measure the TRUE universal temperature, independent of the context of season.' The temperature depends on the season, measuring it at different times will give different results and that's not a mysterious anomaly or a mistake in your methods.
A lot of EA people believe they are genetically superior to others (in terms of IQ - however they want to measure that) and that’s why they should rule the world to best mold it. That’s where most of this comes.
If you can say a sizable enough portion is due to environment then they don’t have the “I’m superior” argument anymore. They have “I was born into a better environment” argument - which you could put any baby in. I’m sure the more white supremacist part of rationalists would disagree with that part too but that’s just the world we in.
Also, why would my claim of superiority depend on the cause of my superiority? If Alice is smarter than you because of the wonderful childhood environment she was raised in, and Bob is smarter than you because of his two Nobel-prizewinning parents, is there any reason for Alice to feel *less* superior to you than Bob?
Pretty sure even this blog has said the same stuff, man. There was no lack of articles a couple years ago about “actually our autistic children are superior and will take over the world.”
That’d be fine tbh. I don’t think anyone has qualms about that. However, a lot of people here want a more genetic/inherent IQ measurement. Not one that relies on how well you studied a test or prepared or whatever. The goal posts will always move because people want IQ to show that they’re genetically superior to you - not that they just studied a lot more. The point of these studies is to prove the validity of an IQ test being mostly genetic and therefore if you’re better at said test then you’re genetically superior and therefore better qualified to rule over others and be treated better and be the authority of mankind.
> The goal posts will always move because people want IQ to show that they’re genetically superior to you
Maybe there is a simpler explanation.
1. There is a thing called intelligence.
2. Some significant (maybe not half, but certainly more than a tenth) portion is genetic.
Those are pretty straightforward facts, but there are some people who refuse to acknowledge 1 and/or 2, and it's maddening to lots of smart people. Not because they want to be homo superior, but because people are saying wrong stupid things. The same way it's maddening to smart people when someone says vaccines cause autism or that the moon landing was fake.
Saying dumb wrong things is a very effective form of nerd sniping.
Because we started off with claims of staggeringly high - 70-80% - heritability found by some twin studies, and implausibly low, <20%, heritability found by GWAS on small numbers of SNPs. This is a classic case where bits of the educated public feel they really have to keep fighting even when the gap has shrunk to almost nothing. But at least one person in these comments is going to try to defend the old, high twin study numbers. They always do.
It's just that trying to reverse engineer something which was not even made to be comphensible to humans to start with, is difficult.
In many ways news studies are much worse than the old ones. Twins studies used twins registries, and researchers went to all of them and asked them to participate, and gave them a real 60-minute IQ test, not a crappy 2-minute test (and them claim it as "IQ" on social media). But for biobanks, volunteer bias is huge -- average person in biobank is richer and more educated than average Briton, people with undergraduate degrees is almost as frequent as those with high school. Vast majority of these people who even showed up, do not take even this crappy 2-minute test or measurements for many other traits. It just happens it's not important for height, but important for others traits, esp. mental diseases (people with some kinds of mental diseases will try to hide from UKBB and the others will participate more) is rampant. They don't even try to correct for it.
No, they're not bad. But I think we're forced to conclude they included something that is not simply additive inheritance as if it were.
Forget intelligence for a moment, because all the data is not yet in there, and consider BMI. I am cribbing from Sasha Gusev here, of course. Classic twin studies give you 65-96% heritability. Sibling regression gives you 39-55% and whole genome GREML gives you 28-34%. So, true additive heritability we can be confident is at most 34%, then you have another 10-15% that's probably GxG interactions with some amount of environmental confounding to get to the SR numbers. But then to get to the highest twin study numbers you need to find another 25%+.
We have really run out of places where 25% heritability could be hiding. Its not in ultra-rare variants because we have large whole genome samples. It could be in very high order GxG interactions, but then shouldn't lower order interactions that show up in SR also look significant? If its not that, it has to be environment. We don't know how, but we've run out of other places to look, haven't we?
And since the existing gap between GWAS and twin studies exists across all traits, wouldn't you expect the twin study estimates to be similarly high across all traits?
No, we don't even bother applying such simple things such as adjusting for volunteer bias and unreliability. If we are not doing such simple things, we are also not doing other more complicated things.
We are not trying to include on how mother genes interact with foetus genes in womb.
It's not like doing WGS magically gives you information about how rare variants affect phenotype. GWAS is underspecified problem - it has more free variables than samples. They can count number of rare variants in an individual and add it to predictor with some weight but true effect of these is larger.
Also, we actually know the mechanism how BMI changes to environment inputs.
I think the point is, what we're really interested in is a causal estimate of genetic vs environmental factors: for a given set of environmental changes that we think of as reasonable/plausibly in our control, what effect do they have on the distribution of iq.
Under certain assumptions about what kind of environmental variation is captured by these studies, they function as a proxy for that question.
So the "heritability" isn't really what's at issue; what's at issue is how much of the set of plausible environmental interventions is ruled out as having a material causal impact.
You can have impactful environmental interventions with 80% heritability or fail to have them with 30% heritability. If you want to know about environmental interventions, then study environmental interventions.
(To take an example from the dynomight post linked above, say you have a dictator that injects every redhead with carcinogens, and this is the only source of cancer. Then cancer is 100% heritable, yet an environmental intervention could be 100% effective.)
I agree, and if you wanted to say 'what is the true heritability ratio of IQ for everyone born in the UK between 2000 and 2020', that's absolutely a real and singular number that you could measure.
All I'm saying is that it will be a *different* number from 'what is the heritability of IQ in this set of 200 twins' or etc., and that this isn't mysterious or requiring of an explanation.
Sure, but I guess most people think that the sources of environmental/genetic variation should be prettttty similar in both cases, so either it's a little mysterious if they're too different, or it's a little mysterious what the differences in the sources of genetic/environmental variation are.
Isn't it pretty straightforwardly likely that, even in cases where twins are separated at birth, they have more similar environments than what you find across a sample of 347,630 people?
This is what I keep being mystified that we're not talking about... there is no singular 'true' heritability value for any factor. Heritability values are always the ratio between the influence of genes vs the influence of environment, and so every population you study will have a different 'true' heritability value based on how influential their environment was.
Ad absurdum to illustrate the concept: Imagine you live in a society where, every time twins are born, the one that comes out second is immediately given an icepick lobotomy.
In this society, twin studies find that the heritability of IQ is around 1%; first-borns have IQs 50 points higher than their twins on average, it seems like genes don't even matter. But when we look at a genetic study of the general population, we find that heritability for IQ is 50%! Who is correct, the twin studies saying 1%, or the genpop studies saying 50%? What could possibly explain this mysterious difference in results?
(The answer is icepicks.)
That's an exaggeration, but you see what I mean - we shouldn't *expect* those two studies to produce the same heritability value, because they are studying different populations with different environmental factors. Two heritability studies on two different populations should almost always have different 'true' heritability values. That's not a mistake or a mystery, that's just a normal result of this being a ratio between genetics and environment, and different populations having different environments.
Some twin studies look at twins raised together, so that the environment is controlled for, giving very high heritability ratings.
Some twin studies look at identical twins separated at birth, controlling for genetics so that we can get a measure of just the environment across their two circumstances.
But even when identical twins are separated at birth, the adoption agencies have standards for who they allow to take kids, and the people who want to adopt are more like each other than like the general population. They will still have less variance in environment than you'd see in a genpop sample of 350,000
So, none of this is very mysterious to me. It just seems like you're measuring something using different methods that you'd expect to give different results, and getting different results. I'm not sure I understand what is 'missing' beyond those factors.
It feels to me like people are trying to reify some universal notion of the singular 'true' heritability of some factor, independent of context from the environment. Which is something that obviously just doesn't exist, since that measure is a ratio between genetic and environmental contribution.
If that's *not* what people are trying to do, then I'm missing something about this whole project.
Right, but why do you expect the environmental variation in twin studies to be identical to the environmental variation across 350k genpop members?
Saying 'well all of those people are in 'normal' environments, so they should all be the same' is begging the question.
If all those people were in 100% identical environments, then the heritability for every single trait would be 100%. That's definitional to what a heritability value *is*.
The relative influence of variance in the environment to variance in the genes is exactly the thing we are *trying to measure* here.
You can define what population you care about - 'all British people', 'everyone above the poverty line in a first-world nation', 'twins raised in the same household', etc. - and get the 'true' heritability value for that population.
But each of those populations will still have *different* heritability values, because they come from difference environmental distributions.
Again: if they didn't, every heritability value would be 100%.
(now - if you put people in 100% identical environments, and also had a 100% perfect way of measuring genetic impact on IQ, you might find that the heritability of IQ and the genetic contribution of IQ are two different numbers, because there's some amount of biological 'noise' that happens in development independent of either genes or environment. This could be an interesting measure of 'missing heritability', if you could measure it... but current studies can't measure it, because the environments aren't actually identical!)
It is a fairly natural assumption, maybe people in twin studies are more likely to get head injuries or members of the UK biobank are guaranteed to get sufficient Iodine as toddlers, but your prior should be no significant difference.
The relevant metric here is not whether there's a statistical difference in the average, it's whether there's different levels of variance.
You don't need to get a head injury to change you IQ. Whether or not your parents read to you as a child can have an impact.
And again, the question is not whether one of these groups has their parents read to them more often than the other group. The question is whether one population has more variance across the millions of tiny environmental factors like this one that could potentially have some small cumulative effect. Two population can have the same average with wildly different variance.
My impression of the literature is that that's not really the case, but rather it's that the effects are small and not universal, and only apply to certain metrics,
But having small and inconsistent effects on a limited number of metric for a single factor like this is very important to heritability calculations, if the effect is nonetheless real and casual. Remember that there are ~thousands of similar environmental effects, and remember that we are concerned with population variance among those effects rather than the cumulative end result they average out to.
If you read the literature on childhood reading and said 'this sounds like a small and tenuous result, I'm going to dismiss it as an important part of the question of how to raise a child', that's correct.
If you read the literature on childhood reading and said 'this sounds like a small and tenuous result, I'm going to dismiss it as an important part of the question of heritability', that's wrong.
The environmental contribution to heritability values is *sometimes* on large factor like malnutrition or brain damage, but usually it's thousands or millions of tiny individual factors with cumulative effects.
Looking at any one factor and saying 'the effect of this one factor is small, therefore I can dismiss it, therefore I can dismiss all small environmental factors, therefore environment can't explain heritability differences' is begging the question. By this logic, no river can ever wash away a stone, because the river is made of individual small droplets which are each incapable of moving it on their own.
> If all those people were in 100% identical environments, then the heritability for every single trait would be 100%. That's definitional to what a heritability value *is*.
No, not at all. If all environments were completely identical, variation would still come from genetics and chance. You seem to be assuming that chance doesn't exist, which is hugely false.
>(now - if you put people in 100% identical environments, and also had a 100% perfect way of measuring genetic impact on IQ, you might find that the heritability of IQ and the genetic contribution of IQ are two different numbers, because there's some amount of biological 'noise' that happens in development independent of either genes or environment. This could be an interesting measure of 'missing heritability', if you could measure it... but current studies can't measure it, because the environments aren't actually identical!)
Chance exists, but in a standard heritability calculation it is included under 'environment'.
Or, more precisely, the heritability value is the ratio of the variance in a trait explained by genetics to the variance explained by everything else, and both the environment and chance are combined under 'everything else'.
Fro Wikipedia:
>The concept of heritability can be expressed in the form of the following question: "What is the proportion of the variation in a given trait within a population that is not explained by the environment or random chance? Other causes of measured variation in a trait are characterized as environmental factors, including observational error."
Really, I think everyone involved in this comment section should read the Wikipedia entry for 'Heritability' if they haven't yet, it explains a lot of important things clearly:
>Heritability measures the fraction of phenotype variability that can be attributed to genetic variation. This is not the same as saying that this fraction of an individual phenotype is caused by genetics. For example, it is incorrect to say that since the heritability of personality traits is about 0.6, that means that 60% of your personality is inherited from your parents and 40% comes from the environment. In addition, heritability can change without any genetic change occurring, such as when the environment starts contributing to more variation. As a case in point, consider that both genes and environment have the potential to influence intelligence. Heritability could increase if genetic variation increases, causing individuals to show more phenotypic variation, like showing different levels of intelligence. On the other hand, heritability might also increase if the environmental variation decreases, causing individuals to show less phenotypic variation, like showing more similar levels of intelligence. Heritability increases when genetics are contributing more variation or because non-genetic factors are contributing less variation; what matters is the relative contribution. Heritability is specific to a particular population in a particular environment.
> Really, I think everyone involved in this comment section should read the Wikipedia entry for 'Heritability' if they haven't yet
Did you read it? Here's some text you quoted from wikipedia:
>> The concept of heritability can be expressed in the form of the following question: "What is the proportion of the variation in a given trait within a population that is not explained by the environment or random chance?
And here's some text you provided yourself:
> If all those people were in 100% identical environments, then the heritability for every single trait would be 100%. That's definitional to what a heritability value *is*.
These two statements are, obviously, incompatible with each other.
> Chance exists, but in a standard heritability calculation it is included under 'environment'.
There are plenty of (experimental) circumstances where we can comfortably say that the environments are in fact 100% identical and we nevertheless see wide variation in phenotype between cloned research organisms. It is not the case that if you put people in identical environments, measured heritability for all traits would be 100%. That is not a part of the definition of heritability, and it also isn't true.
This is a very important point to make, because people commonly assume that if something isn't determined by genetics, it can be changed by some kind of intervention. You just stated that people making that assumption are right, which -- again -- isn't true.
... Ok, I think the issue here is that you think I'm using the word 'environment' to mean 'specific factors relating to nurture and life experiences' and excluding things like random chance and observer error. Ie. you think there are at minimum three parts of a heritability calculation - genetics, environment, and chance. Is that correct?
If that's the issue, read the quote from Wikipedia again:
> Other causes of measured variation in a trait are characterized as environmental factors, including observational error.
In common technical parlance, the word 'environment' here includes *all* non-genetic factors, *including* random chance and observer error.
It also includes what we call the 'individual environment', which includes things like which side of a petri dish a cell is dividing on... no actual real-world experiment has or could have 100% identical individual environments, you'd need atomic-level precision.
Saying 'a 100% identical environment' *means* 100% identical random chance and observer error and individual environment, along with all other types of environmental factors.
In which case, yes, you would get 100% heritability, because nothing but genes would explain any of the variation.
Now, if your takeaway here is '..but it's stupid to call chance and observer error "environment"', then I agree, but this is a historical accident of how the topic was first discussed by early scientists, and we're kind of stuck with it. It's indeed confusing, and I suspect is causing a lot of confusion here in this thread, but... that's the way the semantics worked out.
If you think your point is *not* related to a semantic miscommunication of this type, and you think you are making a precise mathematical argument that contradicts my claims, then I'm unfortunately going to need you to describe it in much more detail and specificity than 'These two statements are, obviously, incompatible with each other.' Based on my understanding of the semantics and the math, they are not at all incompatible, so I'd need you to teach me what you think I'm missing.
People living in Britain born between the 1930s and 2020s is NOT a “normal” environment and is NOT a random selection human environments generally. The sample is all living in a WIERD environment with standardized education and mass media and etc…
The representative sample for the past 10,000 years of selection was an asian, autocratic, agrarian, illiterate, and poor society.
Consider height and BMI. These clearly have a strong genetic influence. But in an AAAIP society the children with low WEIRD BMI and high WEIRD height are dead. Right? Traits that look attractive in one context are lethal in another context. Also a different set of genes are responsible for height and BMI genes that make children sympathetic beggars are directly tied to caloric intake and probability of survival in an AAAIP context. Genes optimized for near-starvation-level consumption of rice and cabbage have different relevance in a British diet of dairy, sugar, and meat.
British children have a narrow environmental context because they all live in WIERD Britain. Inside that sample of highly correlated environments the major component of variance is left to genetics. But when the sample is increased and globalized you get a wider and wider range of environments.
The point is that heritability *always increases* for small, non-random, non-diverse samples. Many important environmental factors are standardized and mass-produced. Heritability systematically increases or decreases depending on the sampling.
The point is that heritability is not a property of individual, but is a property of some specific population (or sample) in specific environment. Btw, why 10000 years? Why not 200 mya?
Btw, you forgot that this study includes only White British, not all people who live in Britain. Include all british residents (and avoid volunteer bias), get much higher heritability
Its also true that if you have really high order GxG interactions, that required 4 or more distant SNPs to take effect, they are only going to show up in identical twins.
Isn't it pretty straightforwardly likely that, even in cases where twins are separated at birth, they have more similar environments than what you find across a sample of 347,630 people? // no, you made it up.
Also, the number of people out of these who took "FI" (not a real IQ test) is much smaller, probably about ten thousands.
> Either people are somehow assortative mating on blood pressure, or else these remain the strongest evidence of some deeper problem.
Everything is correlated with everything else, isn't it? Maybe I'm assortative mating on blood pressure because the people with the best senses of humor have the most correctly pressurized blood.
Or like, isn't high blood pressure much more common among black Americans? (Not sure about Black people more broadly). I'd also guess it correlates with diet and smoking, which in turn correlate with income and education. So if you assortively mate based on race, income, and education--pretty much the canonical variables for assortive mating, surely?--would you not expect to see the effects of assortive mating on blood pressure as well?
What you're discussing as the environmentarians' new argument is not about missing heritability, it is an entirely unsubstantiated argument that they've somehow won some debate no one has been having, about a difference between biometric methods rather than between biometric methods and molecular ones. Missing heritability is about the latter, not the former.
But to even make the former argument credibly–if that's even possible!–means mustering new facts. Proponents need to add meat to the idea instead of simply retreating to a seemingly irrelevant argument: they need to show that the pedigree estimates have moved biometric estimates down, or are anomalously low, or whatever other permutation of their argument that they wish to settle on. Noting that other studies find different results fails to do that. They'd need to show this discrepancy in one sample, because phenotyping (inclusive of trait measurement, sampling, etc. See: https://x.com/cremieuxrecueil/status/1938391982667116808) has major consequences that can plausibly explain any discrepancy they're claiming (but not showing) exists now.
For example, BMI heritability moves up from childhood into adulthood and then down again into old age. The UKBB sample is an older one, and so it could have lower heritability for the trait in actuality. Age-related heterogeneity could also impact estimates from both the pedigree and molecular methods in the study, likely adding noise. This is one of many potential issues that has to do with the novel issue of the absolute level of estimates–not the missing heritability problem per se (unless people are positing sources of bias that drive a wedge between biometric and molecular methods).
I'm not an expert in this subject, but if the gene-variants identified predict literally zero percent of any phenotypic trait how would you compare the biometric predictions with molecular predictions?
Look... I don't have a masters in molecular genetics so I'm sure a lot of this is going over my head, but the first page you linked to (https://yanglab.westlake.edu.cn/software/gcta/#Overview) opens with "GCTA (Genome-wide Complex Trait Analysis) is a software package initially developed to estimate the proportion of phenotypic variance explained by all genome-wide SNPs for a complex trait, but has been greatly extended for many other analyses of data from genome-wide association studies."
I don't see a specific bullet-entry for GREML-LDMS (the method used by Yengo et al) on the same page, but the entry for GREML does define it as "estimating the proportion of variance in a phenotype explained by all SNPs". The first result I get searching for GREML-LDMS (here- https://www.nature.com/articles/ng.3390) opens with a paragraph saying that ∼17 million imputed variants explain 56% of variance for height and 27% of variance for BMI.
Are you saying these methods estimate the percentage of phenotypic variance that *would* be predicted by a PGS, given perfect knowledge about the genome and gene expression, but that the specific variants in question and/or their effects are not actually identified?
> Are you saying these methods estimate the percentage of phenotypic variance that would be predicted by a PGS, given perfect knowledge about the genome and gene expression, but that the specific variants in question and/or their effects are not actually identified?
I can't speak for Crem, because I'm not sure I understand the argument he's making. -)
But you are correct that the GREMLs estimate the potential explanatory power of a PGS without actually building a PGS or pinpointing causal variants. (And in case you're wondering, GREML doesn’t build a polygenic score because it calculates the variance of all SNPs collectively. It's not designed to estimate individual SNP effects at all.)
Basic GREML usually gives a lower estimate of heritability that all SNPs can collectively explain, while GREML-LDMS typically gives higher estimates, and (to the best of my understanding) that estimate is the *upper limit* of the heritability that SNPs can collectively explain. Saying that the GREML-LDMS results that place IQ heritability at the lower ranges which twin studies give, and thus support the twin study estimates, isn't quite correct. Rather, some GREML-LDMS studies say that heritability *may* approach the lower end of the twin study ranges. But it doesn't really confirm they reach that range.
But GREML-LDMS IQ heritability estimates vary widely across studies. Because (a) different studies measure IQ using different tests, (b) samples differ in age (remember IQ tends to go down with age), and (c) ancestry (e.g., some studies include family data and others don't). Overall, IQ heritability estimates from GREML-LDMS-type methods are higher than older GREML estimates, but less consistent across studies (than, say, compared to the results we get for height).
I hope this helps. I've tried to be very careful with my wording and edited it a few times to improve clarity. Any mistakes are either due to my clumsiness as a writer or a misunderstanding of the underlying theory (but I think I understand the theory in broad strokes). I'm perfectly willing to be corrected, but please provide some links if you do (so I can improve my own understanding).
Crem is wrong on many of the points above but regarding variance explained by polygenetic scores the current state of the art is about 16% per Herasight's new paper
I appreciate the link, but I'm more curious about how well the PGS's produced by this specific paper would predict associated traits. (Or, if PGS are not involved here at all, how biometric vs. molecular comparisons are even possible.)
Whether or not Crem is wrong (and I'll admit I don't really understand his point), Yengo et al use GREML-LDMS. The various GREMLs calculate the variance of all SNPs collectively — without actually building a PGS or pinpointing causal variants.
Herasight uses PGS which they mix into their secret sauce and call it GogPGT, and they claim, "our PGS achieves a standardized regression coefficient with fluid intelligence of β = 0.406 (SE 0.009), corresponding to an R2 of 16.4% (95% confidence interval: [15.1%, 17.9%])."
A β=0.406 is a moderate-to-strong relationship in behavioral genetics. But an R2 of 16.4% means the genetic score explains about one-sixth of the differences in fluid-intelligence scores among people in their very large UK Biobank sample.
I'm withholding an opinion on Herasight because I haven't tried to pick apart their paper yet.
Yes, isn’t it the case that, a couple of years ago, nurturists would point to GWAS studies and suggest a heritability of IQ of about 15%, whereas now they are accepting ~ 30%? In other words, the lowest value that one can plausibly argue for seems to be to have shifted upwards markedly.
~30% is still incredibly far from the claims some hereditarians make about things that common sense would suggest are important (such as values you are raised with, how you are treated as a child, behaviors and mannerisms of family that raised you, etc.) not mattering at all such that adoption is a bad idea at best, parenting isn't important, etc. etc.
Those conclusions would only be false if the shared environmental component of such traits were shown to be large. There are also non-systematic, non-shared environmental influences on these traits which are neither nurture nor nature, but as such it's unlikely they can be manipulated or improved. (This could include everything from random hormone fluctuations in the womb to personal choices in adult life, which would probably appear random to external observers. Although I guess personal choices could also *be* random hormone fluctations.)
But at the same time, last time this was discussed here people were still waiving around very old twin study estimates of 80%+ for IQ as if they were plausible.
No-one has yet produced a convincing explanation of why such an estimate is wrong. Twin studies are also the most examined and critiqued of the various methods. And the various newer methods have not yet settled down to producing consistent and replicated values.
There are lots of potential reasons why they’re wrong. Given that all other methods are converging on lower values, the only question is which reason is correct.
There’s not really “convergence” of the other methods yet, there is unexplained scatter. And yes there are suggestions as to why twin studies are wrong, but as yet no suggestion is properly convincing.
Right, but there's unexplained scatter among twin study and between them and other lineage based methods also. When I talk about convergence, I mean that if there was any purely genetic component left out of whole genome GWAS, we would see a gap between GWAS-WGS, RDR and SR. GWAS will not show any non-additive effects as heritable, where SR will show most GxG interactions as heritable, with RDR somewhere in the middle. But we don't see that - for the traits where we have a full three-way comparison between those methods we have very close results, and the very high numbers from some twin studies are an outlier.
There are three possible explanations here:
1. There is some problem with twin studies, but we don't know which of the many problems identified it actually is that's causing the discrepancy.
2. There is some very high order GxG interaction that means very closely related individuals (ie. twins) share far more traits than you'd expected based only on lower order ineractions that will show up in SR.
3. Some traits, intelligence in particular, just so happen to really be 80% heritable as shown by older twin studies, even though we can now show that other traits shown by older twin studies to by 80% heritable are only ~30-50% heritable.
Until this week, I thought the likely explanation was 2. But recent SR studies show lower order interactions aren't important so it seems a bit of a reach to supposed that higher order interactions are. 3 is ... very implausible. So right now I am defaulting to 1. Where am I wrong here?
That depends on the method used to arrive at 30%. Within-family controls are the only way to truly deconfound environmental factors and they typically cut the genetic component by something like half (compared to just using ancestry components and other measured covariates). So we’d back to 15%.
Isn’t the study Scott is talking about based on GREML? If so, the 30% still contains environmental influences. Are there RDR/SR studies that also find 30% for IQ?
Within-family removes some but introduces others, such as sibling rivalry and makes volunteer bias much worse. I wonder if they ever correct for birth order effects.
>(compared to just using ancestry components and other measured covariates)
which is already overcorrection, as these variables aren't independent of genetics.
Somehow Markel et al. 2025 finds 75% of IQ (averaging different tests actually).
Another brilliant piece. Thank you! Just love your topics, research, analysis and style.
Gender may also influence results. My daughter studied Behavioral Genetics at The University of Colorado Boulder. Her master’s thesis was on alcoholism. I’m oversimplifying, but her twin studies uncovered that alcoholism in men was, for the most part, stress related or socially induced. Victory for the nurturists. On the other hand, they found genetic markers in women which predisposed them to alcoholism. Hereditarians rejoice.
Seems to me that height is the one real outlier here, and that should be for the fairly obvious reason that absent serious nutritional deficiencies or very advanced age, there is almost nothing environmentally that's going to effect this. My mom, sister, aunts, and female cousins are all roughly the same height, despite drastically different lifestyles and diets, etc. I assume this study did not use subjects living in places where nutrition might actually stunt height?
But everything else on here is definitely effected by lifestyle, even if you have different genetic baselines. Diabetes and cholesterol depend a lot on your diet. White blood cell count depends a lot upon your level of exposure to viruses and bacteria...I would guess if you work in a kindergarten your WBC is going to be higher than if you work alone in a room on a computer. I can't see a single thing here that I wouldn't expect to be highly modifiable by environment and lifestyle, other than height.
The one that really stands out to me as surprising is how low the heritability for neuroticism came out. That's kind of a shocker. But also I assume based on self report rather than directly measurable, so I'm going to guess that one might not be particularly reliable?
> My mom, sister, aunts, and female cousins are all roughly the same height
Not to nitpick, but unless your family is severely inbred, that your mom, sister, aunts, and female cousins are all the same height doesn't tell you much information about heritability because your genes are expected to vary quite a bit too, especially with more distant relatives like aunts and cousins. You only share 25% of genes with your aunts and 12.5% with your cousins.
IDK, we all all share the same grandpa and he was 6'5" back in the olden days when most people were short. We are all just a titch shy of 5'10" which is about 98th percentile for female height...to me this just shows that the tall genes from grandpa are very strong and hard to dilute. I should have added my nieces too, this far. No one has married a relative, but grandpa's height genes seem to have passed down basically untouched. Though all of his progeny going down to great grand children are female, so idk if that makes a difference.
Prenatal environment seems highly plausible here as a reason why twin studies would consistently show higher estimates for heritability across the board. Identical prenatal environment for twins should almost certainly have some impact that would bias all of the results for heritability upward in the way shown here. Especially in older studies or studies involving adopted children where FASD may be more common.
One of the major types of twin studies is comparing identical (same genes, same prenatal environment) to fraternal twins (1/2 same genes, same prenatal environment). Adoption and twin studies and comparing them has been one way to look at the effect of the prenatal environment. Strictly speaking the prenatal environment may not be exactly the same between the two cases (identical twins are much more likely to share a placenta for ex.), but they are probably quite similar relative to the population at large.
Could it be the case that IQ (and some of the other traits examined) is more heritable at the extreme ends than in the middle? Suppose that the 32% of people who score outside one SD from the mean (over 115 or below 85) are inheriting nearly all of that whereas the people scoring within one SD of the mean are not and their variation within that range is largely environmental.
If that hypothesis were true, what result would we expect to see from a study like this?
Why would this be the case though? If you have two 100 IQ parents, shouldn't IQ be as heritable( whatever that % is) for you as it is for someone with two 120 IQ parents?
I think what you're really getting at is that because of long-time class barriers and assortative mating, the 140 IQs in the population had a higher mean to revert to compared to the average population one of 100 IQ.
It could be the case because outlier results are genuinely different in some way, is what I'm saying. I don't think every point of IQ is the same, a 90 and a 110 are more similar people than a 115 and a 135 are.
Some people are amazing endurance runners, and some people have heart disorders that make sustained cardio impossible or dangerous. Both of these fringe outcomes seem to be very heritable. But if you didn't inherit either of those things then you're just a normie whose ability to run a half-marathon in a certain range of times is mostly due to conditioning, technique and a bunch of minor physical traits that don't correlate with each other. So the ability to run a marathon below 2:30:00, or not to be able to run one at all even with training, could both be heritable in a world where the difference between a 4:45:00 and 5:30:00 (common range for amateurs who trained up to one) is NOT heritable. In the absence of the genes responsible for the rarest outcomes, other things dominate.
Likewise, perhaps there is some group of genes that pass on being a genius or an imbecile, but if you don't get either of those then you land within 1 SD of the mean and where you end up in that range isn't highly correlated to genetics. In that world, if my theory were correct, two 140 IQ professors having a child who ends up being 115 is explained by A) child did not inherit the genius trait, but B) wound up on the high end of the normie grouping due to good upbringing.
Schizophrenia does seem to be common in families of geniuses, like Einstein's. Obviously Asperger's is also quite common among geniuses themselves, and often their family members. But being an imbecile( IQ under 80?) is not common in families of geniuses among those people who don't have a genetic disorder.
IQ tells you who is more likely to be a genius in certain intellectual fields, but it obviously doesn't guarantee it. For music, one study even found a 97 IQ person who had spectacular performance on a brain function related to proficiency in music. But even without IQ, the specific strong areas of certain groups are quite evident if you are paying attention. Like say China's engineering proficiency.
I see what you mean but I still don't think you're correct. I could be wrong, but from what I've read what matters most to your potential IQ range is the scores of your parents/grandparents and ethnic group mean.
In some groups, a 145 IQ is quite rare; in others, it's not uncommon. I don't know the exact variance possibilities, like whether two 90 IQ parents can even have a 145 IQ, or if it is possible but just extremely unlikely.
But we do know that regression to the mean depends on what IQ your parents and grandparents were. For example, two 140 IQ parents are likelier to have their four kids be in the 128-140 range, with the skew toward the low or high end of that range depending on that of the grandparents. Imagine two of the 4 grandparents were 140s, while the other two are 120s. This would skew the grandkids toward being around 130, despite their parents both being 140s. This is kind of a simplified explanation, but shows how certain ethnic groups develop different average IQs, rather than simply reverting to some ancestral average that is common to all humans.
I don't see your substantive issue with what I wrote. Inbreeding's results depend a lot on who's doing it. It's bad en masse, but in some groups it leads to conserved differential IQ levels and more birth defects/genetic disorders as well.
Yes, this sort of thing is possible, and it's part of what makes heritability hard to interpret–you can get different heritability numbers depending on how the study participants were selected.
However I'm not sure if there's any particular reason to believe IQ should be more heritable at the tails than in the center.
I think my naive assumption was that the heritability might look like an M shape. The most extreme fringes could have gained/lost those last few points as the result of some environmental stimulation, the normies would have a lot of variance as well, but that significantly above or below the norm people are sort of their own breed. That would account for the world I've seen around me in my criminal law practice, you seem to observe consequences of a child's home life in ordinary folks, but two educated professionals adopting a trailer trash baby from a multigenerational clan of hillbillies reliably find themselves 15 years later with a dimwitted teenager stealing their valuables to sell for drugs. I don't know if that IS the case, but I've learned that heritability can be higher or lower for different quintiles in some other things (reading proficiency was the one I saw) and was curious if it applied here.
If I remember correctly, twin studies produce estimates close to 'broad-sense heritability', ie all genetic effects that make MZ twins more similar than DZ twins (additive effect of variants but also dominance, and epistasis). On the other hand, other methods, including for example sib-rregression, usually produce estimates of 'narro-sense heritability', i.e. of only the additive effects of variants. It is therefore expected that almost all other methods produce lower heritability estimates than twin studies.
Also, assortative mating decreases the heritability measured with twins, it does not increase them. Twin study heritability measurement is based on the comparison of the ressemblance between fraternal and identical twins. With assortative mating for a given trait, fraternal twins share more than 50% of their additive genetic variance on average for that trait, but identical twins are still ~100% identical genetically for that trait. The difference between the ressemblance of fraternal versus identical twins is therefore reduced, which will in turn reduce, not inflate, heritability estimates. And population stratification does not really inflate heritability either (though it really must be taken into account for many genetic analyses).
I actually asked Sasha Gusev about this yesterday. He pointed me at simulation data that shows sibling regression captures 2nd order GxG interaction completely, and very high amounts up to the 4th order. That's obviously less than twin studies, but SR is also less confounded by environmental factors than twin studies. So if SR shows lower heritability, its either very high order interactions or environment. But the other thing we should consider here is high enough order GxG interactions are basically only going to occur in identical twins anyway, so are they really heritable at all?
I'm just happy that even the "anti-hereditarian position" is apparently now ca 30% heritability. Before in these kinds of online discussion, that used to be the low end of the hereditarians, while the anti-hereditarians argued for only negligible heritability, if not going full blank-slate.
Why does it make you happy? Wouldn't it be better if it was closer to blank-slate, because it would mean any problems can be fixed with simple environmental changes rather than complex racist programmes or gene hacking? Kind of giving away the hereditarian hand here.
Imagine two sides have a long term argument about deaths in WWII. Maybe it's about number of people killed in the Holocaust.
There is some new consensus on some minimum number of deaths. The side that was arguing that they believed *more* died in the Holocaust says "I'm just happy that the argument has shifted ..."
Is this happiness from joy that lots of people have died?
Or is the happiness that the reality they previously conceived is correct? (And thus the deaths are being acknowledge?)
Being happy about final agreement on the facts does not entail happiness about the facts themselves. You just seem awfully eager to attribute malice to hereditarians.
Of course I am, a large plurality of them are racist and want me and my friends to be segregated or forcefully removed from the gene pool. It is perfectly rational to attribute malice as a first reaction.
I doubt very much that you have any persuasive evidence of that. Your personal experiences aren't really persuasive given the drive for algorithmic engagement bait.
Here describes a powerful network of these types of people, headed by Emil kirkegaard, who is mentioned in this article and posts on this blog. Not sure why you are pretending not to understand things.
Are these studies taking into account that gene A might increase educational attainment in isolation but decrease it in the presence of another gene which itself had little or no impact? We know that increased IQ is correlated with both educational attainment and drug experimentation. If B is associated with addiction then B itself could have little independent effect but could toggle the polarity of the impact of "A." Also, presumably not all genes are additive. We know that many are since we end up with a normal distribution but undoubtedly some are multiplicative. I am just curious if their model is flexible enough to expect percentages more accurate than what we see already.
They almost always assume a linear effect. That said there's some justification from an evolutionary standpoint why linearish effects are more likely for selected widely poly-genetic traits.
I absolutely agree that linear(ish) effects are going to dominate. That is why the distributions are "normal." My point was that the nonlinear effects could easily explain why the model still has marked room for improvement.
The article implied that this was the first testing looking at the entire genome rather than the obvious 0.01% of responsible genes. This was enough to nearly double the observed effect. It seems plausible that nonlinear effects in these previously unexamined genes could further increase the observed effect. Not claiming that it is likely to end the discussion but it seemed worth discussing.
See New World (i.e. Americas) monkeys where in most species only female heterozygotes have trichromatic vision. Unlike us, they didn't get duplication of longer-wavelength color receptor and therefore stuck in situation where strong selection pressure can't make all of them have trichromatic vision.
There's an additional problem that heritable traits can still be affected by nurture. I don't believe the tale that IQ tests aren't effected by education, the problem is that they are mostly affected down.
In 10 years of teaching I have seen over and over again that smart kids can become stupid if they are surrounded by stupid people and cultures, but very few stupid kids can become smart no matter where they are raised.
So a genetic test could easily result in findings of a high heritability for inteligence that doesn't correlate to tested IQ.
Heritability is a function of environment. Heritability of height will be different someplace with a lot of endemic parasitic infections and malnutrition than someplace without those things, because more of the variation in height will be caused by parasite load or lack of food while growing. It's not a constant always and everywhere.
That's what I'm saying, but I just want to add that it only really goes one way. A person who would be 6'4" in ideal circumstances can end up shorter, but a person who would only be 5" won't get taller.
I think the way this shows up in our data is that as our societies get richer and more functional, heritability goes up. Which seems backwards at first, but makes sense when you understand that what happened was that we fixed the broken stuff that was stunting peooles' development. Einstein raised in a mud hut and never shown a book doesn't discover any new physics, and neither does Bozo the Clown. Einstein given a very enriching environment and lots of opportunity to learn and explore physics can discover some new physics; Bozo the Clown will not manage this no matter what opportunities he is given.
> Nurturists argued that the twin studies must be wrong; hereditarians argued that missing effect must be in hard-to-find genes.
I mean, I feel like the nurturist whole starting point is just wrong/bad. It seems very difficult to accept that the twin studies are wrong, they're straightforward and should be fairly accurate.
Whereas trying to _find_ the cause of the heredity is naturally very difficult, fraught, and likely to be wrong.
I don't see that there's really much room for the nurturist argument even before this study.
Agreed; for this very reason, I predicted—in the comments for Scott's last post on this topic—that we'd find most of the missing heritability sooner or later. I still predict that we'll find e.g. IQ to be at least ~50% heritable, and probably more.
Asking for clarification: doesn't assortative mating (AM) mean that twin-studies estimates of heritability are *underestimates*, that is, lower than the true value?
That's because AM means that DZ twins are genetically more similar than otherwise, so the MZ-DZ difference is less, to the phenotypic difference is caused by a smaller genetic difference, so the genetic influence is actually higher than calculated?
This is at odds with the above quote: "But this same argument can be deployed against the nurturists’ favorite explanations for high twin study numbers: population stratification and assortative mating. These could be expected to affect socially-relevant and environmentally-mediated traits like educational attainment. But nobody assortative-mates on white blood cell count, ..."
In contrast, Scott in the previous "more than you wanted to know" post says:
"But this [AM] is the opposite of what you would need to “discredit” twin studies - if this bias is true, then everything is more genetic than twin studies think.
"I’m only mentioning this one here because some anti-hereditarians argue that you can’t trust twin studies because of assortative mating, without mentioning that this can only bias them down."
Yeah it seems that assortative mating might increase general gene IQ correlations in the population but *within family* (like twins) it should lead to underestimating genetic contribution (because family members vary less genetically than you’d expect)
I think I mostly update towards hereditarians here - my impression was that as a study this one isn't that different from the range we've seen before (so estimates should stay within that prior range), but its main distinguishing feature is the test for rare gene effects, in which it supports hereditarians.
>my impression was that as a study this one isn't that different from the range we've seen before (so estimates should stay within that prior range), but its main distinguishing feature is the test for rare gene effects, in which it supports hereditarians.<
That's a good way to think about it (that I didn't think of & haven't seen anyone else mention)—thanks for pointing it out!
Epigenetics has negligible effects overall. It gets far more press coverage than it deserves, because it's cool and breaks the rules. But most epigenetic effects are just going to be that your cells methylate some DNA, then clear that all up before turning into your kids. Relatively little of it is passed down generationally, and even then most of that is in sperm-activated vs egg-activated genes (AKA the reason you can't get a healthy mammal by fertilizing an egg with another egg).
> studies overestimate this because of assortative mating and population stratification. This affects biomedical traits like white blood cell count just as much as behavioral traits, because shut up.
Is it obvious that this is wrong? Couldn't WBC count and heart rate be strongly correlated with stratification and sorting traits like class, overall health, having hobbies that improve health, etc? Like we know that health and SES are correlated, right? In fact this doesn't seem *that* surprising.
< So they gave these people a short crappy IQ-like test with a lot of random noise. Past studies estimated the reliability of this test at 0.61 (low). It’s easy to statistically correct for this;
I don't understand how it is easy to correct for the crappiness of this test. I wouldn't bring this up just to nitpick stats. I think most people would be startled by the idea you can statistically correct for test crappiness. If it were, you could just dash off a short crappy test of anything you want to measure, then statistically correct later for its flaws. And it seems like correcting for what was wrong with the IQ-like test makes a significant difference in what you can conclude about IQ heritability here. Never mind the math, and the kinds of reliability that are calculated when professionals validate a test -- just do a thought experiment. Say you have a yardstick made out of very stretchy elastic, and that yardstick also swells and shrinks with the amount of humidity. You can calculate its reliability by measuring the same person's height 2 days in a row. So say you find that across subjects a person's height measured with the yardstick on day one only predicts their measured height on day 2 with 61% accuracy. So we know that 39% of the info about their height is getting obscured by noise, most of it due to the yardstick's stretchiness and sensitivity to humidity, but there is no way to figure out what information was in the lost 39%. Our yardstick only captured 61% of the info about people's height. How can you compensate statistically for that?
A couple things I think, though I’m no expert. First this isn’t the only study done, so it’s typical to compare the results of a study to more precise and reliable known tests and past studies. If your results are abnormal, and there’s a good reason to think they’re abnormal, you can totally “statistically correct” for a noisy test like the one they did.
Secondly Scott actually gave one clear reason to think the test was abnormal right after this snippet, being the “healthy volunteer effect”.
I don't think you're taking into account the way this test is abnormal. If it differed systematically from some good test we had, then we could use the info about how it differed to correct its score. In terms of my yardstick analog, that would be a situation in which we knew our yardstick, which was supposedly 100" long, was actually 110 inches long. Then we could correct all its results just by multplying them by 100/110. But the yardstick -- and the crappy test -- do not vary systematically, they vary in unpredictable ways -- they are full of noise.
In statistics one common way to test for reliability of a measure is to use it to measure 2 things that should be the same, such as a single person's height 2 days in a row, and see how well the measures agree. If the measures agree poorly, say only 61% of the time, all we know is that the measures are unreliable. They do not differ systematically in a way that's correctable, they're just full of noise.
You're correct that the missing information can't be recovered on an *individual* basis. However, on a *group* basis, it can be possible to recover group-level statistics if the distribution of the noise is well understood.
In your yardstick example, suppose you know that the yardstick added noise with (a) mean 0cm (b) standard deviation 5cm (c) no correlation to what's being measured. Then, by using the properties of how mean and variance combine, you can infer that the real heights would have the same mean but standard deviation of sqrt(stdev^2 - (5cm)^2).
However, the assumptions about the noise are quite important. If the statistical properties of the noise aren't well understood, you can't infer much about the de-noised distribution.
So if I'm understanding you right, if the noise in the crappy IQ-type test has the right statistical properties we could know mean and standard deviation of the IQ scores for the subject population as a whole. But how would that be helpful in answering questions about how closely each subject's genetic data correlates with their IQ test data? We need both genetic and IQ test data about each individual to do that. Or am I missing something here?
For correlation and heritability, it's still possible to subtract out noise, but the math is more involved and the assumptions need to be stronger. (When Scott says it's "easy" to correct for noisy data, that should be interpreted as "easy for a working statistician".) In particular, you'll need some assumption about the noise being uncorrelated to both IQ and genes. This might not be exactly correct, but it's usually "correct enough" to still give useful results.
I apologize I'm not sure I can give a good explanation of the math without actually writing out the formulas. But from a high-level view, measurement noise will always bring correlation/heritability towards zero, and the exact size of the effect can be calculated if you know enough properties of the noise and data.
Yes, no chance to recover any individual information. But the point here is that noise will smooth out any correlations. I.e., if you have two traits that are perfectly correlated, but each of your measurements is noisy, you'll get a correlation less than one between them based on your noisy data. But if you roughly know how noisy your measurements are (assumptions about noise, mileage might vary etc.) you can calculate what the maximum correlation that you can measure is. If you then divide your observed correlation by the maximum or whatever you'll get something like the actual correlation of the traits, if measured perfectly. Not foolproof, but should work reasonably well if the noise isn't too large.
Let's take a simpler example, because real statistics is too complicated for me. Say you have two binary variables A and B and you measure one of them entirely accurately while the other one is 60% reliable (symmetrically). If your measures agree 60% of the time, you know that A = B, if they agree 40% of the time, you know that A = !B, and between those two points, the true concordance is a linear function of the measured concordance.
I'd always though that, just because of the way developmental biology works, there ought to be a lot of non-linearity in behavioral influences from genetics in ways which would not be fully captured by a GWAS, even one including rare variants.
Why is the assumption that if the “true” heritability was higher, it would be genetic variability instead of missing heritability? It seems that the variance explained by genetics is fixed regardless of interpretation?
Something that rarely comes up in these discussions is that genetically determined traits affect parts of the environment that then affect outcome. For example, physical attractiveness affects how kids are seen & treated by peers and teachers. Of particular note is the halo effect of physical attractiveness -- attractive kids are seen as smarter, for instance. Various chronic health problems that are heritable, such as asthma, affect school attendance, energy level and attention, and those surely affect life attainments.
To be fair, good looking people do have advantages in life, but when it comes to pure brain horsepower, many top achievers and brainiacs are definitely not very much above the average, if at all. You can easily do a quick check of say the people in the Manhattan Project: no one is hideous, but there are few who could've had an alternative career path in Hollywood.
I have been looking for this type of summary, even if it is not conclusive, and I thank you.
Over the last decade or two there has been increasing emphasis on non-shared environment, which has a random element. It has been give a 50% number. If this random/non-shared element is 30-70% depending on the trait, that would thread the needle on this. That's not evidence in itself, but it gives a place to look for the keys being where they were dropped rather than under the streetlight.
Decent summary. I wouldn't say this settles anything but it's a great start to hopefully more WGS GREML like studies. I hope they include structural variants next study. Maybe look into the reliability of the biomedical measures too, they are not necessarily highly reliable (in retest sense) because many such values can have large short term fluctuations, sometimes daily. I don't know what measures the family studies used.
For blood pressure and the like is there potential for assortative mating based on smoking (and perhaps some other lifestyle traits, though that would be a big one)? Or did they correct in some way for those? Anecdotally, smoking preference is a pretty big filter on mating preferences and is pretty strongly associated with blood pressure and various other bio-medical factors (though I guess how much of smoking preference is genetic?).
Transgenerational epigenetic inheritance and the correlations of separated twins aside (likely to live in same region/country, etc):
People fail to understand how determinant the gestation is, the age of the parents, their metabolic health, the quantities of oligoelements, the duration of the gestation, etc. They impact the total quality of the body that is being generated (nephron number, etc) see e.g. neural tube defects and autism studies on vitamin D showing statistically eradication https://vitamindwiki.com/pages/autism-risk-is-reduced-by-vitamin-d-early-pregnancy-or-chlldhood-umbrella-review/
Transgenerational epigenetic inheritance is, AFAIK, a non-starter, *in re* heritability / genetic-components-of cognitive traits: it's not going to affect anything.
And what makes you believe that.. For starters there are more than 228 imprinted genes in humans and many known transgenerational epigenetic disorders in humans known among other things as imprinting disorders. Albeit rare their mere existence and the fact that partial imprinting exist make it an underresearched api surface. https://en.wikipedia.org/wiki/Genomic_imprinting
IIRC, due to the Weissmann barrier, it is very difficult for stable inheritance to result from (environmentally induced) epigenetic changes—although I think it's possible *in principle,* it would, as said, have to be difficult/rare—and there is very little evidence (again, AFAIK; I know a few studies claimed to find some a while back... but the years have not been kind to their conclusions, I think, heh) that anything significant herein occurs in humans.
Imprinting is indeed an epigenetic mechanism (so I was wrong to say that TEI isn't going to affect *anything*); that said, imprints are *not* environmentally acquired, but rather (the "plans" thereof) are genetically hard-coded—sc., variation in imprinting will be captured as broad-sense heritability by twin studies—and I don't think we expect much smooth variation in complex, polygenic traits (such as *g*) to result from variation (read: errors) in imprinting anyway: those seem to *mostly* cause binary, you-have-it-or-you-don't disorders, as you mentioned.
I am a moderate hereditarian. But also, I think twin studies are highly problematic, because twins are rarely "separated at birth" in the relevant sense.
Imagine a pair of identical twins who are given up for adoption in the Midwest. One ends up with a family in, say, Illinois, the other in Wisconsin. Then they're raised independently, right? No, not at all! Notice that one did not end up Burundi. Certainly one did not end up in 10,000 BC. Not only did both remain in the U.S., they likely were placed with families with similar socioeconomic and cultural backgrounds. This is very far from statistically random in the grand scale of human populations over history.
It is accepted that a heritability estimate only pertains to the range of environments sampled in the study (indeed that is necessarily the case). But that is anyhow what we are mostly interested in: what factors matter for the kids in the range of environments that are typical in a given nation at a given time.
But there's an equivocation here, and my claim is that this equivocation leads us into trouble. On the one hand, sure, "heritability" as a statistic is defined only relative to a specified population, not humanity as a whole. On the other hand, this means that "heritability" cannot be straightforwardly interpreted as "genetically caused," which is what people intuitively want it to mean. It only means "genetically correlated in a specified population," which is a quite different thing.
You can compute a heritability statistic for "receiving a PhD," and you'll get a non-zero number. But it's simply obvious from first principles that receiving a PhD is not genetically caused: humans with those exact genes in a different environment (say, in 500 AD) would not receive PhDs. People tend to say "well, of course," and dismiss this consideration, but they haven't really reckoned with how it confounds reasoning about genetic causation.
Nothing ever has “a” single cause, there is always a mixture of multiple causes. But it is still meaningful to talk about the relative influence of genetic factors versus environmental factors in a given place and time.
“This is already well-known. It’s part of the definition of heritability.”
Except that doesn’t stop people, including Scott, from talking as if heritability measured genetic causation, when it only measures genetic correlation, resulting in analytic mischief. I call that a "difficulty."
I dunno, man... I don't think anyone thinks that saying that "obtaining a PhD has a heritable component", or "...is partly 'genetically caused'", means or implies /causation in the sense that "even in AD 500, your genes could make you get a PhD"/—"causation" is probably a bad term to use here in any case, but the relevant sense thereof would maybe be more like "propensity" or "ability".
I gave the PhD example because it's so obvious that it's hard to deny. The point is that this is a characteristic of the heritability statistic in general, even for less obvious examples. It's fundamentally a measure of correlation, not of causation. It measures the proportion of phenotypic variation that correlates with genetic variation, but the latter need not be causal.
That's not true though, is it? The interest is in determining what factors matter for global populations. That is why hereditarians are always discussing IQ differences between nations with entirely different geographies and histories.
That's /another/ interest, rather than "*the* interest", I'd say; but any proposed relevance of "heritability as determined for environment A" to those in environments B & C is probably going to be predicated upon the possibility that some portion of the former is robust to the changes in environment (e.g., while different soil will certainly change—lower—the heritability of various traits within some particular crop, relative to the heritability of those traits as assessed in identical soil, teosinte selected for ear-size will nevertheless produce larger ears in either case than will totally wild teosinte, and in either case it is—to some extent—a heritable trait).
Right. Only rarely are identical twins raised in really different environments, like the two pairs of identical twins that got mixed up in the maternity ward in Colombia, with 2 being raised middle class in the capital and other 2 being raised in a village in the jungle. They came out pretty different on their IQ tests.
Thanks for diving unto this breach again. I'm going to set IQ and EA aside for the many confounding factors you list in the post but I think it's worth expanding on your final question about biomedical traits for which a sizable molecular/twin gap still exists. Let's take the possible explanations one at a time:
1. Assortative mating. Twin studies (and Sibling-Regression and RDR) are deflated by assortative mating and pedigree/GREML studies are inflated by it. So Assortative Mating cannot explain the gap. Also most biometric traits are not under strong assortment (e.g. mate correlation on biomarker traits in Horwitz et al is <0.1 - https://pmc.ncbi.nlm.nih.gov/articles/PMC10967253/).
2. Gene-Gene Interactions. Twin studies are inflated by GxG, Sibling Regression includes GxG, RDR/GREML do not include GxG. The fact that Sibling Regression on average produces estimates still much lower than twin studies on average suggests GxG does not explain the gap (on average).
3. Gene-(Shared) Environment Interactions. Operates the same as [2] except for "fixed" shared environments (e.g. across your entire cohort) which inflate every estimate.
4. Equal Environment Assumption Violation. Twin studies are inflated (or at least biased) when this assumption is violated, whereas the other methods are not. For medical traits, this would include something like one MZ twin finds out she has a lump, so the other goes in for screening and both are diagnosed with breast cancer while DZs are less likely to do so.
5. Measurement Error. Since twins tend to be measured together, this would deflate the non-twin estimates. For biomedical traits, age is typically already a covariate and instrumental measurement error is low. So this would imply that more specific timing (time of day, season, age) of the measurement adds a substantial amount of environmental noise.
6. Publication Bias / QRP / Replication Crisis. Many twin studies were published in the "bad old days" where research practices were less strict.
For 5, wouldn't that apply to fraternal twins as well?
For 2, in what sense are twin studies inflated by GxG? If there's a high-order GxG reaction that basically only occurs in identical twins, are you counting that as inflating heritability since it's not really relevant to most people?
Finally, how can you quantify how much heritability from rare variants you expect to be missing? If this study closed some of the missing heritability gap with more thorough sequencing, when should you be confident that you've found it all?
Yes, for (5) I do mean that applies to both sets of twins, and therefore reduces the influence of environment in the twin study. For (2), see here for the details (https://theinfinitesimal.substack.com/i/169938925/in-twins-epistasis-makes-the-shared-environment-look-like-genes) but the basic idea is that the twin ACE model assumes a linear decay from MZs to DZs but GxG produces a quadratic decay, which the ACE model treats as *extra* genetic variation. In the example in the post, if you have a trait with 20% additive heritability and 40% epistatic heritability (so the true *broad* sense heritability is 20+40=60%), the twin ACE model will estimate an additive heritability of 80% -- i.e. an overestimate even of the true broad sense h2.
With respect to rare variants, Sibling Regression includes the effect of all rare variants and RDR includes the effect of most of them; both methods produce estimates that are much lower than twin studies *on average*. Separately, various evolutionary models have attempted to estimate the selection parameter on a given trait and, from that, extrapolate the expected rare variant heritability component; these estimates are also generally very low (~10% of total heritability).
Thank you for engaging in the comments here! I hope it is ok to add some questions:
Regarding (2), is there a good comparison of Sibling Regression and RDR/GREML estimates that might allow us to interpret the difference between them as the extent of GxG and GxE interaction effects?
General question: In the second graph shown in this post, why are the categories labeled: Common/Rare/Missing/Non-Genetic? Shouldn't the last one at most be interpreted as "Not directly genetic", because of the possibility of the above mentioned possible interaction effects?
Sure thing. Table 1 from Young et al. 2018 (https://pmc.ncbi.nlm.nih.gov/articles/PMC6130754/table/T1/) investigated these methods in simulation. You can see that when there is (10%) epistasis, Sibling Regression estimates the total broad sense heritability (50%) whereas RDR estimates the total narrow sense heritability (40%). The same principle applies for GxE but is highly dependent on the structure of the environmental interaction, so it is harder to simulate. And yes, depending on how you think about GxE, it could account for some of the so-called "non-genetic" component.
Is there a specific reason you speak of Gene-(Shared) Environment Interactions in (3)? Are Gene-(Non-Shared) Environment Interactions not possible for some reason?
How do they account for grades (and to an extent educational attainment) being judged on a curve? It's easy enough to compare your height to your neighbor's, that's measured separately and objectively. But if your neighbor gets the top spot in the class and their pick of graduate schools, you don't and you may not.
The amount of shared environment of identical twins and even fraternal twins is massive. These results don't seem all that surprising given that context. For physiological things the example of blood pressure is directly related to obesity which obviously has a lot of environmental effects. The whole debate seems to be about what extrema things lie on and the truth is about midway and both sides are declaring victory, which is a sort of boring result. The most interesting thing seems to be that neuroticism is a lot lower than twin studies estimate. Maybe that indicates that depression is a lot more tractable than it seems, or at least that there's some kind of childhood intervention which could make it less likely.
One thing that struck me when reading this was that while the sample was large, it was all British people. I'd naively expect the average amount of shared environment between two randomly-selected British people to also be pretty large. They'll both be living in a developed country with universal healthcare and education, live within a relatively small distance from one another in similar climates with a similar set of environmental contaminants. I would naively expect the effect of environment on outcomes of two randomly selected humans from anywhere on the planet to be quite a bit larger.
If you included people in papua new guinea in the sample the degree of heritability of educational attainment would probably look a lot less. But I should have made clear that all types of twins share in utero environment which is extremely difficult to tease out from genetics and that isn't shared with more distant relatives and that specifically may account for a lot of the different results in these studies, indicating that heritability is actually on the lower end of the currently disputed range
Oh, God, now I feel dumb, NOT Soren Kirkegaard, "Kirkegaard's adjusted number" just sounded like "Russell's teapot" or some principal I wasn't familiar with and google wasn't helping
"[T]win studies find that most traits are at least 50% genetic, sometimes much more."
What I find interesting about these studies, when applied to social phenomena (ie - "success", intelligence, etc), is how many confounding variables there are that don't get addressed. Which makes sense. It is functionally impossible to properly stratify the data to do this comparison correctly.
The only thing we can do is put our interpretation of necessarily incomplete data.
Interesting that the only trait to be more heritable in this study than twin studies was height, which I'd expect to be the most heritable of those metrics.
I know everyone here knows heritable is not wholly genetically determined, as it was classically known.
And our host was using heritable as shorthand for genetically heritable. Which is fine, we all get it.
My personal belief with gene/culture interaction there is a large error band for say academic achievement. Put someone with high genetic potential for IQ in a boiler room academic environment...that will be expressed if they have any interest in putting one over on their peers. Likewise, where being well read is seen as nerdy, you ain't reading a lot of books.
"But if IQ is >55% heritable and educational attainment is <10% heritable, does this require us to believe that IQ only barely affects success in education? A certain sort of contrarian might relish this conclusion. "
Many high-IQ pupils end up drop-outs because of the immense tedium of being shackled to a classroom of people of less ability.
When we talk about heritability of IQ, do we correct for measurement error?
A single person tested with different tests repeatedly on different days under different conditions will not get the same score each time, they might get scores that vary by ±10 points or so. So even if the underlying g were 100% heritable, you'd still expect to see some kind of substantial black bar in the actual observed IQ test scores... unless this is already corrected for somehow.
Associative mating on white blood cell count seems very plausible if white blood cell count correlates with any behavioural traits not well captured in the other measurements. My sense was that everything would correlate with something. Has anyone ever tested directly for assortative mating in a weird-seeming trait like this? Who knows, maybe white blood cell count correlates with phlegmatic personality
According to 23andMe and their enormous data set and thousands of questions they ask their users, they find assortative mating on virtually every possible trait: more than 97%. In fact there were only two things they could find where people did the opposite of assortative mating, which was that night owls are more likely to pair up with early birds, and that people with a poor sense of direction tend to pair up with people with a good one. Every other thing tested, married couples are more alike than random chance.
Could you cite or explain your "virtually every possible trait" and "more than 97%" claims? This ( https://blog.23andme.com/articles/23andme-couples-correlated ) seems to be the source of your overall statement, but the specific claims do not appear to be supported.
It might be the last refuge of a hereditarian scoundrel, but it's important to note that this is still just short-read sequencing data. If most of the effects are from longer scale inversions then we'll need long-read (Nanopore/PacBio) data to conclusively sort this question out.
Here's my hypothesis for why longer scale inversions are an especially interesting variable to keep an eye on:
"Only 10 - 20% direct causal heritability, which would be a total nurturist victory."
Not sure about that. Anything other than zero is already a bit edgy. If you put a sign that says "in this house, we believe that 20% of the variance in educational attainment is due to genetic factors" in your front yard, what would people assume about you?
Even the nurturist win seems like a hereditarian win to me. 20% of IQ being heretitable is a lot. And if I'm understanding things right, it'd be especially relevant for the right end of the bell curve. If to get a 1st percentile IQ, you need near perfect genetics and near perfect environment, then only people with parents who have the good IQ genes will even be the in the 1st percentile. That's the core of the hereditarian argument most of the time as I usually hear it.
This just isn't what heritability is. To wit: if you believe that racism based on skin color explains most of the variation in intelligence between people (you shouldn't believe this but it's possible), then you'd expect intelligence to be highly heritable, because under this model the genes that code for different levels of melanin would be the IQ genes. Heritability is mechanism-agnostic and cannot typically settle arguments between competing mechanistic explanations.
"This approach [GREML] does not have the advantage of using within-family variation, and therefore will include any environmental influences that are correlated with genetics, such as familial factors or stratification. That means GREML-WGS estimates are essentially untethered from narrow- or broad- sense heritability because they can include entirely non-genetic variance."
"This affects biomedical traits like white blood cell count just as much as behavioral traits, because" - infectious disease history, diet, exercise? These are a few obvious possibilities that an extreme non-expert like me can spot instantly.
“does this require us to believe that IQ only barely affects success in education?”
This is unremarkable. No educational attainment data discerns between an education degree from Swamp U and a physics degree from Cal Tech. The modern push for everyone to get “some” degree hides the signal.
I agree that physics likely requires more calculative intelligence than education, but in terms of institution I feel like that's heavily swamped by the other factors which go to who gets accepted. We can imagine someone who is more than intelligent enough to perform well at a high level program but doesn't get in because they don't have the right family connections/political views/high school/wealth/ethnic background/whatever.
I'd imagine capability is a variable, but I'm not sure of the signal to noise ratio
Physics is for sure one of the most cognitively demanding fields. Even very competent verbal ability people( i.e. lawyers) can struggle immensely with relatively simple problems.
Oh boy let's call it ~50/50 and move on. It's both, if 40/60 that doesn't make much difference to me. I'm reading Thomas Sowell's "Black Rednecks and White Liberals" and one of his points is that ghetto black culture makes a difference in life outcome. Let's ID those cultures that help people succeed and encourage those. (Or visa versa, discourage those that lead to failure.)
I'm about 1/2 way through. Culture is important. We knew this but he brings it home. The first Sowell book I read was "A conflict of visions" which I really liked.
That was the one I read too. I can't remember how, exactly (it was a few years ago now) but I remember having the impression it kind of lost its way from being a relatively interesting book of political philosophy and became more of a "and this is why my political views are best". But I wadnt exactly in the best headspace at the time so its plausible it misread it
Yes he does show a preference for the constrained vision. I wasn't bothered too much by this because I thought he did a fair job of explaining the unconstrained vision. (And I also find myself feeling some preference for the constrained vision.)
You're offhandedly dismissing assortative mating based on white blood cell count as absurd, but I think quite a lot of information about those sorts of biochemical factors might actually be available through scent and other sensory impressions available during pre-sex intimacy, which then influences relationship decisions subconsciously. At the very least, otherwise-invisible blood pressure problems can certainly cause erectile dysfunction.
The entirety of academia is compromised. I don't see why the opposition should be trusted less. Obviously they shouldn't be blindly trusted, but no one should be.
I don't think you get to cancel him for belonging to a white nationalist organization. I am capable of discussing research findings regarding issues I care a lot about without turning into a lying sleazebag. Quite likely you are too. How do you know a someone who has committed to the white nationalist point of view is not also capable of that? Because his beliefs are Wrong and Bad? There are people, not all of them dumb or demonic, who think *your* beliefs are wrong and bad. And I myself have discovered several times that some stuff *I*believed was wrong and bad. I appear not to be made out of a different and purer substance than Kirkegaard is. What, you never made a discovery like that about yourself?
These issues are directly connected to his political agenda, and one side of this argument is dramatically more useful to him. That vested interest would make his words suspect even if his political agenda was not nakedly evil. And it is.
We're not discussing lawnmower brands. On an issue like that it probably doesn't matter how awful his politics are. But this issue is at the heart of his racist project.
We can't listen to people who put forth data and arguments regarding things directly related to their aganda? Scott posts about AI risk, EA and other topics directly connected to his agenda. How come you're reading his blog?
If Kirkegaard is posting lies about research findings or interpreting true findings in a ridiculous way, point that out here. If he does it in the present discussion, point that out in replies to his posts. If you can't rebut him, why are you so sure he's wrong?
And besides, it is evident from the comments here that there are quite a few posting who are quite interested in the subject and know a lot about genetics and statistics. If you (assuming you are knowledgeable and not just having an ick response) can't rebut Kirkegaard, watch to see if others do. Call on them to.
This isn't name calling. This is a very simple, but very sound, argument backing up the claim that he should not be trusted or listened to.
Do you disagree with that, or do you just think it's terribly impolite to point this sort of thing out? Do you think we're under some sort of moral obligation to pretend a snake is not a snake?
Because I don't. I think that, if you know someone is not to be trusted, you should treat them as such. You should not pretend otherwise. Pretense is an enemy to the truth.
If you want my position on how hereditary various personal traits are, by the way, I have no clue. Not my field, and not a topic I'm terribly interested in. Usually wouldn't comment on a post about it at all, if not for the obvious.
>These issues are directly connected to his political agenda, and one side of this argument is dramatically more useful to him.<
This applies just as much to the modal academic working in this area (population genetics, quantitative genetics, social psych, etc.)—or, if you have a slightly rosier view of the Academy: to at least a 𝘴𝘶𝘣𝘴𝘵𝘢𝘯𝘵𝘪𝘢𝘭 𝘱𝘳𝘰𝘱𝘰𝘳𝘵𝘪𝘰𝘯 thereof—as to Emil, though.
E.g.: I have, more than once, looked up the author(s) of some paper or another, and found statements such as: "...racism is 𝘴𝘰 𝘦𝘷𝘪𝘭 that, if the evidence turned out to undeniably support the existence of some heritable between-race cognitive differential, we should suppress it." (That one in specific is from Turkheimer—but he's not alone; this sort of sentiment doesn't seem to be at all uncommon.)
I think Emil clearly knows his stuff—just look at his actual work;¹ you'll find no raging race-hatred, only solid science (& much better statistical practice than is typical!)—but even if he was manifestly a poor researcher, or an outright propagandist, his work would probably 𝘴𝘵𝘪𝘭𝘭 be useful as an antidote to the prevailing bias.
(But then, I am 𝘢𝘭𝘴𝘰 a terrible person who believes that e.g. "Blank Slatism" is, of a near-certainty, wildly wrong; and—though perhaps my ability to even notice such things has been lost due to my moral turpitude—I've never once seen Emil say anything like "death to all other races!" or "round 'em up in camps, boys!", or the like... so I'm likely to be a bit biased in his favor, myself.)
¹(rather than, esp., his Wikipedia page; I contribute a fair amount on Wiki, and cherish it, and everyone I've interacted with on the site has been swell... but I must admit: there's a 𝘭𝘰𝘵 of left-wing partisanship to be found thereon. it's rather disappointing; I just stay away from any topics likely to be contentious—which is sort of "ceding the field to the enemy", as it were, I know; but.. I just don't have the heart for it, any more, if you know what I mean.)
Yeah, this has Isolated Demand for Rigor written all over it. How should I evaluate economics papers supporting free trade from think tanks that have a broad position of supporting free trade? How about papers showing benefits from universal pre-K from center-left think tanks?
For that matter, how should I evaluate academic papers from professors at universities that have taken a lot of left-wing or progressive public stances on political issues?
Approximately nobody applies these standards to the arguments from their own side.
Sure it is possible racists can be objective about science, this is all good and well, until he becomes involved with something where the racism actually does matter (like, working for a white nationalist organisation that tries to influence public policy), at which point your defence doesn't really hold up.
So, since approximately every university, think tank, and funder in the world is opposed to racism in any form, how should we evaluate the research coming from people receiving funding from those organizations when they discover racial discrimination in policing or employment or education?
"Ad hominem attacks are totally legitimate when they're aimed at people I don't like" and "anyone I don't like is a white supremacist" is a fun combination.
(I don't know what organisation he actually works for, but the term "white supremacist" label is so degraded these days that I don't deem the accusation worth looking into unless it comes with supporting information and from a trustworthy source who goes out of their way to show they're working in good faith.)
Shrug. Even if he's an actual, literal Nazi who has a shrine to Hitler in his bedroom, that may well make him an odious person but it doesn't mean his argument is wrong.
If you have evidence the study was performed poorly or fraudulently, present it. If you have evidence he has substantially misrepresented it, present it. If you have specific rational errors he has committed, point them out and describe why they matter.
His politics and general moral failings are not relevant. I guess you could argue that people of certain beliefs are substantially more likely to lie, but I'm certainly capable of citing specific examples from my own personal experience of academics who describe themselves as anti-racist lying to approximately the same level of severity, so that also seems like a dead-end, logically speaking.
The only reason you'd mention his character unless that is what was being discussed (like say I was saying he's a great guy and he's an actual literal Nazi) is because you are trying to discredit his character. Which is not what is being discussed here.
Man is literally a lobbyist. It's his job. He should not be treated as an equal participant in a scientific debate; he should be treated as a political spokesperson, or an advertiser. This would be the case even if the thing he was advertising for wasn't, well, evil.
Pretending that bad faith is good faith never works.
I notice you have yet to actually point out an actual lie or error or substantial misrepresentation he's actually made on this specific topic, continuing to make ad hominem points.
(I'm happy to stipulate he's a bad person. But the annoying thing is moral odiousness is not the same thing as incorrect on any specific topic.)
Do you have any actual specific errors or lies/misrepresentations? If yes, this whole pointing out his odiousness seems irrelevant. If not, it's ad hominem fallacy.
It's not about whether he's a good person or a bad one. And it's not about the points he made, either.
If I see him talking up a specific brand of lawnmower, fine. And if I hear a lawnmower salesman giving his opinion about genetics, fine. But you can't trust the lawnmower salesman about lawnmowers, and you can't trust the racism salesman about genetics.
Almost every person in academia has been turned into political activist, just for the left-wing. If they speak against left-wing, they risk losing their job. Why should we assume all academics are arguing in good faith?
Eric Turkheimer (1990) wrote "If it is ever documented conclusively, the genetic inferiority of a race on a trait as important as intelligence will rank with the atomic bomb as the most destructive scientific discovery in human history. The correct conclusion is to withhold judgment"
So he straight admits he will lie. And somehow people like Turkheimer are "good faith" to you.
I appreciate that you put "Counter-Semitism", "Ethno-Nationalism", and "Eco-Fascism" in the little description that shows up when I mouse over your name; lets me know where you're coming from.
Would be nice if Emil and his arguments came with such clear warnings.
Well, it's because the stuff he writes is generally pretty well-supported & precisely articulated. You won't tend to find anything worthy of a caveat on the modal EK post, IME.
In case nobody has mentioned these popular something-somethng hypotheses, it's all caused by microbiomes and/or breastfeeding and/or childhood exposure to allergens and/or some seemingly innocuous parenting choice that is actually ruining your kid.
This appears to me to be classic "nobody agreed ahead of time what evidence would be sufficient to upgrade/downgrade their belief on whatever topic is actually being argued about"
Firstly, of all the traits to track, IQ as tested in an IQ test is, I would suggest, a poor example because it requires both an innate intelligence (probably inherited) PLUS a childhood that at least did not suppress it. I would suggest that some parents may find precocious children annoying and from a young age,suppress or punish children that display high intelligence. Some of those children will be damaged and fear displaying their intelligence for life.
Secondly, i have always assumed the 'logic of genetics is similar to a circuit board logic; "If this, this and this, and not this, then that." In other words, a dozen genes may be required to be "on" and three more "off" to produce a certain trait. So tracking the "on" or "off" of individual traits would not give you the i formation about the final trait unless the combinations were tracked too.
In short, it is more complex that our science can currently work out!
IMHO best explanation is that both sides are missing an X factor that is currently undetected. I'm going to commit the narrative fallacy because otherwise this comment is too vague, but my loony idea about what is happening is that there's semi-heritable parasites that we can't detect that are influencing development.
Genes are only half the picture of genetics; you can't implant a human fetus into a live-birth shark's womb and expect it to work, because the instructions are, in and of themselves, incomplete. You also need a "compiler" which matches those instructions; pregnancy involves things like adjusting hormone levels at different points of development, which in turn change which parts of the instructions are being read (sort of). Is this process part of "heritable" or part of "nurture"? It's both.
“But if IQ is >55% heritable and educational attainment is <10% heritable, does this require us to believe that IQ only barely affects success in education?”
Over half of Brits attend universities, most of which are accreditation mills for the gullible and illiterate. My uncle is an English literature lecturer at two and complains that he’s forced to pass kids who lack basic KS3 (11-14 year old) writing skills.
Others exist because the government decided to buff the numbers by forcing degree requirements on professions that used to be devoid of them (nursing and policing, for example).
In other words: educational attainment is a very noisy indicator of anything, and we shouldn’t be surprised by low heritability scores, or dramatic generational variance in heritability.
"Emil and Cremieux argue that we know why this study found low heritability of IQ. It’s because you can’t give 347,630 people a full-length IQ test. So they gave these people a short crappy IQ-like test with a lot of random noise. Past studies estimated the reliability of this test at 0.61 (low). It’s easy to statistically correct for this; when you do so, you find that if the test had been better, this study would have estimated the heritability of IQ at 55%."
wait, what? "It's easy to statistically correct for" a short-form IQ test by making a guess about what it would have been if the exam had been more comprehensive? You're doing a conjecture-based ballpark estimate and certifying it as AA?
Thank you! I was wondering the same thing while I was reading Scott's take. How can you make a correction and just assume the data will move in the direction of your hypothesis? It sounds like we can conclude is that the CI could include the 55% claim, but we should be clear that this 'correction' doesn't actually provide evidence of that claim? (I'm still not exactly clear on the correction/assumption being made.)
Did you look at the example he gave where this assumption breaks down? Are there any obvious factors present in that example which should have led us to expect a strong likelihood the assumption to fail even before analyzing the results, factors which clearly wouldn't cause problems for the 55% claim here?
'Just to provide an illustration of what I mean, Willoughby et al estimate the heritability of IQ using two measures: the ICAR-16 and the vocabulary subtest of the WAIS-R. The vocabulary test has a higher reliability (0.93) than the ICAR-16 (0.81), so intuitively, you'd expect its heritability to be higher than that of the ICAR-16.
But that's not actually what they find. The vocabulary heritability was just 12% while the ICAR-16 heritability was 42%. So in this case, the less reliable cognitive assessment had the higher heritability.'
Ah, I should caveat that technically the reliability measurements I cited are a bit apples-and-oranges here.
The vocab reliability was measured using test-retest correlations while the ICAR-16 reliability was measured using item correlations. I wasn't able to find an exact apples-to-apples reliability measure that was available for both tests.
I don't know what's going on with Willoughby but it seems more plausible that there's a systematic flaw in the experimental design than that two independent sources of noise are correlated.
Sure, but in order for this apples and oranges aspect to fully/mostly account for the ICAR-16 test having higher heritability than the vocabulary test, I'd think you'd have to adopt a hypothesis where not only would an apples-to-apples test reverse the relationship in reliability (it would show that ICAR-16 actually has higher reliability in predicting IQ than vocabulary), but would reverse it by just the right amount to account for the 42% heritability of ICAR-16 vs 12% for vocabulary. That certainly could be true, but unless you assign pretty much all your credence to that hypothesis, you should be assigning some significant amount of credence to the idea that there's something else going on.
You seem to be saying in your last comment that any alternative (besides some other systemic flaw different than the apples-to-oranges reliability comparison) would be the unlikely idea that "two independent sources of noise are correlated". But I would imagine there could be plausible explanations other than random noise for why each of the tests are only partially reliable as IQ predictors, which might allow the relative heritability of each test to differ from their relative reliability. For example, some models of IQ suggest it's about nonlinear relationships between a lot of distinct mental abilities (including learned tips and tricks in solving certain kinds of problems), a bit like the network theories of mental disorders discussed in an old SSC post at https://slatestarcodex.com/2016/12/14/ssc-journal-club-mental-disorders-as-networks/ . In this case, it might be that lack of reliability is due to a kind of one-sidedness where the test gets at certain abilities relevant to IQ but not at a sufficiently diverse swath of them. Then two different tests could be one-sided in different ways, but the one-sided abilities in test A might be more heritable than the one-sided abilities in test B even if test B was more correlated with IQ overall.
edit: I realized I was mixing up reliability and accuracy in the last paragraph above--if we just want to know how well a test predicts g, that would be its accuracy as a measure of g. But doing some quick searching, the page at https://www.accuracyresearch.com/blog/accuracy-vs-precision-vs-reliability/ says that reliability "encompasses both accuracy and precision", where precision would just measure how consistent a person's scores would be on a test if they retook it a bunch of times (with the analogy of a bunch of arrows that are tightly clustered in the same region of a target even if it's far from the bullseye). Does anyone know the definition of "reliability" that would have been used here, whether it could be influenced by how accurate the two tests are as a measure of g, as opposed to just their precision?
Some other sources seem to say that reliability is sometimes just used as a synonym for precision. But even in this case, I think it could be that a less reliable/precise test is more accurate as a measure of g--in terms of the arrows/target analogy, one group of arrows could be more tightly clustered but with an average farther from the bullseye, while another group could be less clustered but with an average closer to the bullseye. So this might still allow for something like the idea I talked about above where both tests are one-sided in different ways, but the one-sided abilities in the less precise test are the more heritable ones.
Maybe dumb question - but have they considered mother-child epigenetic influences during pregnancy? This would show up as heritability in twin studies, but would not be captured in GWAS
> But they couldn’t do a twin study, because most people in their sample did not have twins.
Why not? Their sample is 350,000 people. With identical twins at 1 in ~400 births, there should be around 800 people in the sample who have identical twins. With fraternal twins being more common than identical twins... is a 1,500-person twin study just not a viable concept? How many people have been in other twin studies?
The UKBB probably has many sample members who are one of a pair of twins. But it doesn’t sample the other twin (except by chance, and that’s too rare to be useful).
> (except by chance, and that’s too rare to be useful)
What is the role of chance here? I calculated for another comment that the sample includes about 0.6% of the population of England. A bit more than one in every 200 people.
At that level of coverage, if inclusion was random, I wouldn't expect much in the way of family links between sampled people, but that doesn't seem to have been an issue?
Sure, I'm not challenging that. I'm saying that coverage appears to be nonrandom because the study had no troubles finding relatives within the sample base to do their measurements on.
I guess if we assume that everyone has 5 "relatives" and coverage is uniformly random, 3% of people in the study will have a relative in the study and we can break that down into ~5000 pairs of relatives.
Definitely non-random - it’s focused geographically around a small number of health centres. But even if the 0.6% figure were say 6% because of this, it would still be too small.
If it's driven by volunteering, I might expect identical twins to be much more overrepresented than that on the grounds that what seems like a good idea to one identical twin is overwhelmingly likely to seem like a good idea to the other one too.
(I'm not saying this applies, but it did seem at least somewhat relevant.)
> That is, if I and my neighbor are 50.001% genetically similar, and I and my other neighbor are 49.999% genetically similar, how much more do I resemble my first neighbor than my second neighbor?
Yes, I'm aware in broad strokes of what you're referring to.
Mostly I'm just making a joke. However, in any context where "50.00% similarity" might actually hold between two humans (with "similarity" being contextually defined), I'm pretty sure that those humans must have a parent-child relationship.
(Full siblings are 50% similar (in a sense) on average but not exactly. A parent-child relationship involves exactly 50% similarity (in the same sense, but not in every possible sense).)
Assortative mating on blood pressure doesn't strike me as that crazy if you assume it's actually getting at neuroticism, SES, diet, or some other variable.
What makes us think people do not assortative mate based on white blood cell count and blood pressure? These are health indicators! Many of them flow directly into beauty, family life, economic outcomes. And having a healthy partner makes a huge difference for your quality of life.
"But if IQ is >55% heritable and educational attainment is <10% heritable, does this require us to believe that IQ only barely affects success in education?"
sqrt(0.10/0.55)=0.43, which seems like a pretty healthy effect size?
Very much not any sort of expert on this, but does this account for epigenetics and the possibility that my parents' nurture (as opposed to my direct nurture) might affect my outcomes?
"The Good News Is That One Side Has Definitively Won The Missing Heritability Debate, reports Scott Alexander @slatestarcodex. Actually, the real debate was won long ago: heritability of intelligence is substantially above 0. There can't be an exact "correct" estimate, because heritability is a proportion and it mathematically depends on variation in the sample. Also obvious (at leas to me): GWAS has to underestimate heritability compared to adoption & twin studies, since the latter estimate effects of the entire genome with all its interactions, & the former is capped by the number of measurable genes. (Still nice to see the gap closing.)"
The main issue is what are we going to learn / have learned after all this patient work
I'd argue not a lot more than. different traits are differently heritable with the exact numbers still to be nailed down. Or equally different traits are differently non heritability conditioned with the exact numbers still to be nailed down.
But if we did nail the exact numbers down do any of us expect the percentages on most traits to be game changing in practical or research terms.
As expected, Scott got lost in minutiae while missing the Crux of the debate:
- Nurture's observed positive impact on variation is at its MAXIMUM, rich parents are spending everything on anything to boost their kids. If a factor like height is 20% Nurture, that is roughly the MAX which Nurture could ever add to height
- Heritability is generating the MINIMUM possible variation, because of Regression towards the Mean among dozens of genes, numerous alleles interacting; we are basically mutts-descendant-from-a-genetic-bottleneck, which means we do NOT display the upper-bound of heritable variance; our heritable variations are at their minimum.
So, if Nuture can, under ideal circumstances, only generate +10 IQ per standard deviation observed, while rare alleles are at MINIMUM generating +5 IQ per stdev, that leads to the question: "What is the MAXIMUM variance possible, when heritable traits have been selected in plants and animals?" How different is a cow from an auroch, in standard deviations of auroch?
I'm still confused by the reliability adjustment. The test in question is the UK Biobank Fluid Intelligence Test. Based on it's supposed to measure fluid reasoning, not g[1]. In factor analysis, fluid reasoning is a separate factor from g with separate, oblique loadings. It is thought to have its own heritability. This breaks the assumptions underlying the reliability adjustment, namely that all variance is explained by either g or noise. The test likely loads on g and fluid reasoning.
It's also worth noting that the test lasts 2-minutes and contains 14 question. It's entirely plausible that it's loading on many other higher-order and test-specific factors.
Note: My understanding of g and factor analysis mostly comes from a cursory review of John B. Carroll’s Human Cognitive Abilities: A Survey of Factor-Analytic Studies (1993). Happy to be corrected if I've misinterpreted anything.
> Twin studies, adoption studies, and pedigree studies overestimate this because of assortative mating and population stratification
And presumably because of partially-controlled environmental variation: families from the Minnesota Twin study families would be more environmentally similar than randomly-selected American adoptive families, American families writ large, or families worldwide
The article sets up a false dichotomy that the main difference between the "hereditarian" and "nurturist" positions is the degree of heritability. The true "nurturist" position is this: Behavioral characteristic differences among people may or may not be influenced by heredity. However, family, twin, and adoption studies are based on false assumptions and other major problem areas, and are therefore unable to detect possible genetic influences. Claims that causal genes have been found at the molecular genetic level are questionable. "Heritability estimates" are highly misleading and should be abandoned in all areas of human behavioral research. Much more, but I will leave it there.
The elephant in the room is that despite nearly all governments in last century being anti-hereditarian yet the gaps are still here.
Why are anti-herederatians are busy arguing? Do they need to attract general population votes so their interventions be made in practice? No, they already had total political victory and had got almost every law they wished and gaps are still here. So they aren't even trying to find interventions that work and concentrate on badmouthing hereditarians.
If anti-hereditarians were right, they would be instead of ridiculing National IQ lists be busy producing lists of which Baby Einstein Toys provide best value for the dollar spent.
"A certain sort of contrarian might relish this conclusion." Yup. Count me in. There are soooo many problems with IQ tests. Starting with the fact that they do not measure intelligence (because we don't have an independent objective definition of what intelligence is).
"The biomedical traits confuse me the most; it’s still hard to square the twin studies with the sib-regression and molecular estimates. Either people are somehow assortative mating on blood pressure, or else these remain the strongest evidence of some deeper problem."
I have a suggestion. All these people are still testing for main effects: the influence of heredity vs. non-heredity. They aren't going to pin this down until they start testing for an interaction. The formal definition of an interaction is "When the impact of one variable on the outcome variable depends on the impact of another variable." Impact is defined here as the correlation of a change in value in the independent variable with a change in the dependent variable. How correlated the DV is with a change in an IV depends on a change in another IV. That's an interaction.
In other words, heredity is more important when you hold environment relatively stable. When it isn't ("I grew up in a war zone") then it declines in importance. This means you have to test across childhood in different extreme environments. I suspect that those long black bars indicate variation across environment (I'm guessing some variety of poverty). The sample was restricted to "British people", did they include immigrants? I predict that if you extend the sample to more diverse regions around the world the black bars would grow. Even Height might fall to somewhere in the 60's range (due to variations in diet).
Note that I am *NOT* claiming that diet is more important than genes in determining height. I'm saying that diet influences how important genes are in determining height (and vice versa). If you are starving, it doesn't matter who your ancestors were, except to the extent that you are the tallest corpse in the mass grave (and your kids are dead too, so no one is inheriting anything).
This implies that genes should be becoming more important as time goes by, because standards of living are improving globally. One day the hereditarians will be right, but it hasn't happened yet.
I'm not gonna be convinced by either side of this debate until someone shows me an IQ test that is universally applicable across language groups, easy to administer, and does not respond to having taken similar tests in the past.
There is almost certainly some sort of general intelligence that is to some degree determined genetically, I just find it hard to believe that any of the IQ tests I've seen measure it. EG, I've participated in some number of studies involving IQ tests because I returned to college as an adult in a place where people care about this and signed up to all the database programs at said college, so I kept getting invited to these studies and: I notice that you can boost your scores significantly by
Being well rested
Having just the right amount of caffeine in your blood
Having taken a similar test before
Having the lights in the room be less distracting
Having the test printed out on a printer with a certain DPI/On an OLED vs. LCD screen
Having enough background noise but not too much background noise
I don't know, Venus being in retrograde or something
These tests are noisy as shit; they probably are not phrenology in 2025, but they aren't anywhere close to eg. a blood panel in terms of predictive power. As it stands, I don't think they are good enough to definitively isolate signal from noise.
Retest correlations for IQ are > 0.9 so those factors can't plausibly matter much. Yes IQ tests don't measure intelligence perfectly but don't pretend like intelligence isn't real and doesn't vary between people. Some people are very clearly much smarter than other people and it's not because they're well rested. There is no set of environmental interventions that could ever ever ever turn either one of us into Einstein or von Neumann. That's just common sense. Squabbling over exactly how accurate the tests are is missing the forest for the trees.
In my view comments like yours are the equivalent of sticking your fingers in your ears. IQ is real, significantly genetic, and predicts many many life outcomes. If you're going to argue against that then you might as well be a creationist.
I am not sure your retest metric is accurate is the thing. I have never seen any truly adversarial work on the subject but I also don't care enough to look deeply. If you have such a study please show me and I promise I'll at least read the abstract and the methods. As it stands, IQ testers testing their tests and getting back an A- doesn't move me.
I've only gone through the first study so far, and it fails to address my objection: every person chosen in the sample shared and language and cultural group (as far as I can tell, I had to find the study elsewhere and they have not published their demos or raw as far as I can tell), the tests chosen were exactly the tests I objected to so it is unsurprising the results I think are bogus were bogus together, and the administration was shared across each test and each testee (hahaha balls) in the same time unit, their methods were not pre-registered, and they did some statistical fishing that is ok in a soft science ala sociology or economics but I would find fucky in a hard science.
This next bit is unfair and not being considered: Sample size == way too small. I think this is just how it is though, current IQ tests are way too expensive to run well to get a significant size. I am electing to ignore this, but in any other situation I would reject out of hand a sociological study with +-500 participants as insignificant.
This paper is not so bad it prejudices me to the concept, but it fails to move me from my position. I am done with my coffee though, so the next paper will have to wait.
>every person chosen in the sample shared and language and cultural group
Yeah this is a standard bad-faith trope that people who have an ideological opposition to IQ use. Cultural bias is precluded by ensuring that measure invariance isn't violated. If you really want to understand this read "The g factor" by Arthur Jensen. This has been debated by people smarter than you for 100 years. You're not going to come up with an objection that hasn't been thoroughly investigated and dismissed.
> it fails to address my objection
What objection, where? Your initial objection was that retest reliability is low. You said nothing abut cultural fairness, which is addressed with measure invariance. This is clear goalpost moving and confirms my suspicion of bad faith on your part.
Like I said, this is missing the forest for the trees. You can nitpick various things about IQ but that's all you'll be doing. It's the single most well-validated measure in all of psychology and it's very clearly consistent with the large-scale social patterns that one finds in the world. If you don't want to accept that then as far as I'm concerned you can go sit with the creationists. Reality doesn't go away just because you refuse to believe in it.
>Yeah this is a standard bad-faith trope that people who have an ideological opposition to IQ use. Cultural bias is precluded by ensuring that measure invariance isn't violated.
Yeah, and smarter people than you have thought it was a valid objection for just as long.
You can't just say "This effect has been controlled for" and then not explain how, and then expect me to just ignore the obviously visible results of said effect.
Re. the rest of your post: If I pick all the nits of a study and then it blows away in the wind like a ghost, whoops, it turned out those were load bearing nits, pardon me if I don't construct my entire value structure about it.
This is banged out on a 15 so It wont be deep but listen: I am not a heritability denyer, on account of having eyes. I also am not a credulous IQ hyperbooster, also on account of having eyes. if you want me to get on the IQ determinist train, you are gonna have to provide more evidence than one questionable study of less than 500 people and an un-preregistered meta-analysis.
I'm struggling to see why there is this qualitative distinction between "hereditarians" and "nurturists" if everyone agrees that most traits are somewhere between 30%-70% hereditary. Like, yeah if we're doing quantitative utilitarian calculations on whether a given environmental intervention will be worthwhile then a difference between a particular trait being 30% environmental and 60% environmental is going to give us a 2x coefficient on the possible effectiveness. But broadly it seems like you some of each trait can be improved with environmental changes, and some of it can't unless you resort to eugenics or gene editing.
Why would people categorize themselves or others using these labels? I would expect a "hereditarian" to be like 80%+ or a "nurturist" to be 20%-. If everyone's in the middle then why is there so much fighting?
Because it's a proxy battle for the culture war. Hereditarians like to say that racial differences are unalterably genetic so all these expensive education programs are a waste of money and blank slaters like to call hereditarians racist so they can justify their redistribution programs. If everything is genetic then progressive arguments about racial oppression fall apart.
Why should one care for racists, if hereditarians are right, its good news, we can just incentivize smarter people from poorer countries/ groups to have more kids over 50 to 100 yrs. Smart people having more kids is the solution and the reason for smartness to increase in human evolution . Breeding works even in animals and plants. More smart, more non zero sum game dynamics and no one will have to care for racists.
Virtually all accusations of racism are made in bad faith. It's become little more than a blanket defense for bad behavior. The concept has no place in productive conversation.
you’re correct that understanding race is real doesn’t mean we have to go full Hitler, but it’s absolutely assbackwards to then find the conclusion: “this means we just need MORE redistributionism!” No, we need eugenics. Conscious, explicit, ethical, and thorough eugenics.
It’s also hilarious that you basically said everything racists say is right, but being racist is still bad (?)
Well there’s the question of whether intelligence is really what we want to optimize for.
There’s a good argument (and is the one most accepted by psychologists who use IQ in diagnosis), that IQ differences are so small that they only really serve for negative diagnosis not positive ones.
So that, the difference between someone with a 110 and a 130 isn’t the same as the difference between a 80 and a 100.
I really don’t understand this debate. We have separated twins at birth. Despite a wide difference in rearing, they are still found to be nearly identical. The only thing they share in common is their ancestry… which, therefore, is the important factor.
That’s it. There’s your conclusion. Anything else is ridiculous until proven otherwise.
I am autist enough not to care for what racists have to say on this topic. To me, its quite common sense that eugenics works in plants and animals. we call that breeding. We need more numbers of bio computer to solve problems, we can do with more smarter people . Without a single new idea , even 100 yrs ago, if we said, ok, smart parents should have more kids from all poorer communities, would it be bad?. It can be done even now. People want equality among groups, it can be done. Lefts denial of genetics is troublesome as eventually science will catch them with their pants down . And the history that will be written later will be one of discrediting them entirely and of potentially keeping humanity backward for couple of generations due to moral panic. Coming from India, Here the DEI is insane enough and getting worse every day. And no one believes in IQ tests/working memory tests, one doesnt know why, thats easiest test requiring least preparation to identify talent universally across populations.
Huh, I wouldn't be surprised if white blood cell count somehow correlated with how often and for how long you get sick (though I dunno in which direction), and it doesn't sound that absurd to me that people would assortatively mate based on that to some extent; and blood pressure correlates with diet and anticorrelates with physical activity and people pretty definitely do assortatively mate based on that. (I'd expect the author of "Society Is Fixed, Biology Is Mutable" to have figured this out too! Or am I missing something?)
I remain confused by heritability studies, including this one. How do they account for prenatal influences? Isn’t it a misnomer to call it heritability?
Why not perform this research on dogs? There are hundreds of recognized breeds with distinct characteristics: size, color, coat, conformation, even personality (retrievers, herders, and trackers have very different instinctive behaviors). These characteristics are unquestionably hereditary - the very definition of a breed.
The characteristics assessed in these studies could be measured for dogs (except smoking, education, and number of children). Dog generations are much shorter than human, so inheritance could be measured. Breed-mixing has become popular (e.g. "labradoodle", "bullwhip", "cava-tzu", "schnocker"), which offers lots of experimental data.
While I personally think that would be an interesting study, I'm not sure it would help much to resolve the human question. The genetic effect is only half of heritability (the numerator, in particular). The other half is environmental effects. Even if one can assume dogs' genetic effects are similar to humans', it would be too much to assume the environmental variance is similar.
Dog breeding is a pretty good demonstration of the ways I'd expect effective human eugenics programs to go badly. (None of the ones we've ever had operated at enough scale to do anything but let some bureaucrats mistreat some unfortunate people.). We probably end up with people who are 6'5" and blonde and chiseled and get great SATs, but are missing all kinds of other things that weren't optimized for / were traded off against to get there, the way Dachshunds are *really Dachshundy* in ways that often cause them back and other problems.
B) "The heritability of IQ at a particular concert on November 2, 2024 might be about 15%."
The answer is B: because the stated implication is that the statistic is for a known population at an unrepeatable event, although the idea that someone was running around giving IQ tests is implausible. A, meanwhile, is completely wrong, always, whether written or spoken, because it carries an inference that a rule might be found for all human beings at all times. Heritability is a measure of differences (if there are no differences, then the heritability of that trait is 0%) in a trait's expression for a particular population (and only that population) at a particular moment in time (the time of measurement, and only the time of measurement) that can be attributed to genetics (and no-one knows how much of an IQ score is attributable to genetics, and heritability studies provide zero information on this). It is NEVER generalizable. The misunderstanding of the term "heritability" isn't just a source of confusion, it's the foundation of the entire political project to find out if one group of people is genetically smarter than the other. It's such a badly named concept that it ought to be retired. A good source of clarity on this is the short book The Mirage of a Space Between Nature and Nurture, by Evelyn Fox Keller.
I haven't particularly followed this debate, but shouldn't the people attempting to find "explanatory genes" (what year is it, 2003?) have come to the extremely obvious conclusion of gene interactions dominating the traits, where no particular collection of QTLs linearly regresses onto height in the same way you can't get a faster civic by replacing your airbox with your mom's lycra lampshade?
Of course people assortative mating on blood pressure. Blood pressure correlates with which activities you prefer, what drinks you like, how good you are at sports, all sorts of things. And it's affected by diet so partners will be even closer then they should be according to their genes.
Scott, is it possible that heterogeneity across researchers—particularly in how “nurture” and “heritability” are defined and operationalized—is contributing to conflicting conclusions?
I remain unconvinced in the absence of clear, reproducible demonstrations of explanatory and predictive power—ideally evidence showing the ability to reliably engineer predictable outcomes under controlled conditions to account for the "non-heritable" factors. To date, I am not aware of such evidence. Without standards of prediction, control, or constructive validation, these claims risk resting more on interpretive frameworks than on rigorously testable theory. In that context, I worry that methodological and philosophy-of-science standards may be weaker than acknowledged, and that disciplinary siloing plays a larger role than is often admitted.
What is the mechanism I ask? It can't possibly be hand-waving.
But this still doesn't answer the important social questions:
1) are the differences between populations in socially relevant traits like IQ due to genetic differences or environmental differences?
2) are the differences between populations in social outcomes like education level or crime rate due to genetic differences or environmental differences?
These two questions are what people really disagree on and this paper does not advance that debate. Many hereditarians will see that individual differences in IQ are ~30-50% genetic and then suggest that differences between groups are explained to similar or same degree by genetics.
This may be either really stupid or really obvious, but has anyone tried doing one of those population simulations (like those wolves vs. sheep vs. grass things) for how much heritability vs. nurture works best? It seems to me like a "tradition vs. progress" type thing, where heritability helps to keep useful traits that were hard to achieve, and nurture helps to obtain new traits to adapt to the current situation. I guess I'm arguing for a "inherited nurturability" type thing.
"But if IQ is >55% heritable and educational attainment is <10% heritable, does this require us to believe that IQ only barely affects success in education? A certain sort of contrarian might relish this conclusion. "
Success in education is quantized (you either have a 4-year degree or you do not) and noisy (all 4-year degrees are the same ...) right? Certainly compared to IQ measurements?
When you say "retarded", what do you mean? As in diagnosed as having a learning disability, or just not very smart?
Great point!
I don't know why I should find this as implausible as Scott suggests. Yes, surely what grade you get in a STEM class is highly correlated with IQ, but how many years of higher education you manage to stick to.. I wouldn't be *that* surprised if that was not correlated or even negatively correlated, with IQ.
Modern-day education might have these problems, yes.
I assume that of sociology students.
Not if there is more than one type of IQ, and sociology taps into a different one than STEM.
You could convince me that the day to day practice of psychology or psychiatry taps into a meaningfully different form of intelligence than a STEM education.
But I challenge you to make even a facially plausible argument that academic sociology requires a meaningfully different form of intelligence than a STEM degree.
It could be negatively correlated at one range of the spectrum but certainly not across all of it. We know that fewer morons become doctors. We know that PHD tend to be above average. Perhaps you meant that the correlation might be weaker than linear?
The technical definition of "moron" is someone with an IQ between 50 and 70. This is somebody with the mental age of between 7 and 12. None of them are doctors. Not sure what you mean by midway but Google/AI suggests that the average IQ of MD's is between 120 and 130. I would probably agree with this estimate. It seems silly to describe a group centered between the top 15% and the top 5% as morons.
Using the colloquial definition of "moron" because the whole point is to question the utility of the IQ score in this context.
So, you think the colloquial definition of moron includes IQ 120-130???
Don't you need a rather impressive SAT to get into medical school?
No I mean it might also be negatively correlated. What are there most of, people who get a five year degree in the humanities getting straight Ds, or people getting three year degrees in STEM getting straight As? The former have "higher educational attainments" than the latter.
I'm not really saying it *is* negatively correlated, but I wouldn't be surprised, and the 55% to 10% drop isn't even a little bit surprising. Like, at all.
That is what I meant by a particular range of the IQ distribution. Probably, nobody with a IQ below 75 or above 125 is in either of those groups. Also, not sure what you mean by 3 year vs. 5 year. A BS in STEM usually takes 4 (or 6 for some of us). What humanities degree is someone getting in 5? And are we sure that the ranking for "educational attainment" might not rank a BSEE above a MS-basketweaving?
Here in Europe, all BSc. degrees (typical of STEM) are three years, and master's typically five.
I am just assuming they measure educational attainments in number of years, regardless of field and what grades people got, but I have no idea, I guess we could check.
I think they use highest obtained diploma because that’s the data that is easily obtainable. It does translate roughly to years of education on average even though it is obviously a very stupid metric, especially since fields of studies have become so varied and often very disconnected to objective measurement.
I think nowadays education is a meaningless metric and it is largely linked to many of academia’s issues and also the terrible state of politics (for people who obtain absurd amounts of power/legitimacy while being rather dumb, look nowhere else than political « science »).
The guy above seems to think that education has a high pass filter but I have seen a study showing that actually there are some people with masters that are around 80-90 IQ.
is educational attainment measured in years? I know many who studied a bit longer, because they had enough money and student life is great, or did an Erasmus semester (studying abroad, with the goal (by the EU itself) on having a good time and connecting with other students).
I agree about the bottom end of the distribution, but are you really saying no one with an IQ over 125 gets top grades in a bachelor's STEM degree and then leaves academia? They all go on to do graduate degrees rather than going into (better paid) industry?
Probably most of my friends are counterexamples.
I am absolutely not saying that. I said that the result is generally not linear (a doubling of IQ does not guarantee a doubling of educational attainment). And not negative across the majority of the IQ distribution (It is possible that dumber people get more of some particular degree over a narrow range of IQ but not across the entire IQ distribution).
My understanding is that most "years of education" measures would treat me as having a 4 year degree even though I graduated early, the same as a student (who I briefly partnered with in my first semester) who started years before me but graduated at the same time.
How do we know this? I have a Ph.D. and I can assure you that native intelligence had little to do with completing the program. Work ethic (a completely different trait on the "Big 5" scale) is far more important.
If you have a Ph.D. then you should have enough reading comprehension to understand what I wrote. Were there numerous people in your Ph.D. program that was too dumb to make it out of 3rd grade? A few who would have found bagging groceries to be intellectually challenging? If not then educational attainment is not negatively correlated with IQ across the entire spectrum of IQ.
You're right, I didn't read your post carefully enough. What you are describing is a floor effect -- someone must have a minimum IQ just to get into higher education, but above that academic performance would be uncorrelated. That seems plausible.
I don't think that it would be truly uncorrelated. I suspect that it would be more weakly (but positively) correlated. The original comment that I objected to just said that smart people were more likely to forgo higher education and thus IQ and education were negatively correlated. I suspect that the correlation is always positive but that over narrow ranges of IQ effects like that described (forgoing higher education) likely attenuate the correlation (it is possible that it goes negative over narrow regions but I would guess that it does not).
Yes, surely what grade you get in a STEM class is highly correlated with IQ, "
no. class grades are normalized on the sample of students in the class and class tests are highly unreliable
In Europe it's not common to normalize grades. For instance one of the harder maths classes I took had an extreme almost perfect exponential distribution.
Also that tests are unreliable is not actually a problem for statistical correlation. Lots of processes are highly stochastic, yet have strong measurable correlations.
It's like the old joke: What do you call someone who graduates at the bottom of their class in medical school? Doctor.
There's going to be a huge amount of noise added when "Scraped a passing grade in a Communications BA at their local state school" and "Top of their Harvard Theoretical Physics class" are averaged into one group. Enough to make a 55% heritable trait look like a 10% heritable trait? Maybe.
who knows! I still haven't been able to "solve" that Raven's Progressive Fuckery that Scott posted many moons ago, yet I have a post-honor dropout from a prestigious CS program and working in "tech" and even I'm usually scared how fast I notice things on the screen when there's a shitton of boring server logs. Not all patterns (or anti-patterns?) are made equal apparently :)
I have not looked into it much but I think Raven Progressive Matrices are an example of over specification/specialization.
You can’t define intelligence by simple evaluating performance at a single narrow task, otherwise the guy from Rain Man who could memorize a phone book would be considered extra intelligent when it’s not really the case.
There is a correlation between various tasks performance but the reason to use multiple test subjects is precisely to average out weird outliers (that may come from genetics or environmental pressures).
Also, the IQ tests in general exist solely in academic/schooling oriented form for efficiency/convenience but I believe you could come up with various « real world » tests (solving problems with whatever you have at hands) and they give an even better picture.
Yes, the formal version of this is that robust IQ tests need ~as many different kinds of intellectual tasks as they can reasonably fit into paper or electronic format, and g-factor is strongly but not overwhelmingly correlated with IQ.
Also, heritable means generational, and college enrollment has nearly doubled in the last 30 years.
You'd expect low heritability for college enrollment just because of historical trends that make one generation very different from the next on this measure at a macro level.
Remember, heritability is always a ration between genes and environment. If you're measuring something with a large environmental contribution, heritability will be lower.
Yes in 100 years the percentage of US population that had a bachelors degree went from 2% to 38%. These populations are very different on most scales.
Yeah and this study used a British sample – over 50% Brits go to university.
I am not sure that quantization matters if your sample size is large enough.
In general you should expect to see educational attainment become a less useful signal over time as college has become more of a class marker and screens less.
If the study's "educational attainment" is mostly just whether or not someone has a 4-year college degree, then I'm guessing that the most significant factor by far is how much their possibly adoptive) family and their high-school teachers feel it is imperative that they have a four-year degree. Even someone with a moderately low IQ can get a four-year degree in *something* if they're pushed and supported enough, and on the flip side there are probably a lot of high-IQ kids from blue-collar families who wind up putting that to work in a trade or running a small business.
So, yeah, not surprising if this sort of "educational attainment" is mostly environmental.
Might be illuminating if you could break it down by, e.g., of the subset of people who enrolled in a four-year college, how many graduated with an intellectually challenging degree, graduated but only after 5-6 years, graduated but with a degree in underwater basketweaving or whtanot, and how many just dropped out. There'd still be a strong environmental component, I'd expect, but also a stronger correlation with IQ.
> Might be illuminating if you could break it down by, e.g., of the subset of people who enrolled in a four-year college, how many graduated with an intellectually challenging degree,
To MM's point above, you can get roughly back to the "top 2%" threshold that a degree used to mean 100 years ago if you just filter by "STEM grad degree" from any school. This is <4% of the pop, and at least half those are foreign students.
And if you go by "T20 undergrad degree," which still is a genuine quality filter, it's ~0.5% of the pop.
Plenty of brilliant people not in STEM, and plenty of cheaters in STEM, any more. Especially among those foreign students.
So maybe. But I dunno.
That's quite a myth. 100 years ago land grant unis already existed, less-than-rigorous courses already existed (pick up a stats or econ course from the 20s if you wanna have a laugh), and education was much less math-heavy at all levels. Sure, it's kinda impressive these guys were somewhat proficient in Latin (much less than you'd think if you do not know Latin: they were translating Ceasar, not Lucretius), but you could get in without knowing any calculus or even precalc.
I think that this idea that if x% went to college, then it must have been the smartest x% of the population is just projecting backward something that's very contingent to our society. 100 years ago, nobody would have thought that. Sure, there was a floor (but again, hardly a prohibitive one) to get a degree, but beyond that it was mostly a matter of social class and interest. You could get almost any job, even being an engineer or a lawyer, without a degree, which obviously dramatically changed the calculus for a smart lad.
Heritability of IQ increases with age. Years of education is heavily determined when people are young and their parents have more ability to make them go to school when they don't want to.
Are you implying that once you are free to not “educate” yourself your IQ goes down? That detracts from IQ as a useful measurement doesn’t it?
IQ scores are less reliable at younger ages. They are still useful, and were developed to detect people with learning disabilities, but usefulness increases closer to adulthood.
Lots of speculation here about what “educational attainment” actually refers to. In these UK GWAS papers it almost always comes from UK Biobank Field 6138 (“Qualifications”) (https://biobank.ndph.ox.ac.uk/ukb/field.cgi?id=6138
), which is just the highest credential someone reports (they have MA/PhD in some versions, I think).
Researchers then convert that credential into “years in education” using a fairly confusing system that doesn’t correspond to real schooling length. For example, GCSEs are coded as 10 years, and a Master’s degree gets coded as ~21 years.
Pretty terrible, but this is the best you can do with the dataset. If they measured grades, university attended, and subject, educational attainment would start tracking IQ far more closely.
Surely there's a bunch of studies out there already showing a strong correlation between IQ and educational attainment, we don't need to make weird roundabout guesses?
Yes!
If this is the UK it’ll be 3 years, and if you’re not distinguishing between universities then age will probably swamp all genetic factors (% going to university has doubled since the early 90s). I’d also expect it to track class fairly closely, particularly in older generations, which will create genetic confounders and make it look more heritable than it is.
If it’s looking at postgraduate education, I’m not sure they’ve picked the right country. Academia is broadly looked down on in the UK so PhDs will mostly negatively track employability among the moderately intelligent, and non-matriculation MAs/MScs will be disproportionately concentrated among people going who struggled in the graduate job market first time round.
You have to take seriously that the modern educational system may be actively selecting against high intelligence overall.
It is obviously selecting against disagreeableness which seems like a requirement for an IQ above 120. How much nonconformity can you get away with and still make it through a master’s degree?
A part for that, there are so many other factors influencing educational attainment. Even if you buy that all good things are correlated, the more factors matter, the more the importance of each shrinks. Sure, conscentiousness is correlated with IQ, but it's hardly collinear, so the smart-lazy and dumb-diligent lower the correlation. Parental income might be correlated (bc parents' IQ affects both), but again, hardly collinear, so smart-poor and rich-dumb dampens the correlation further. Add a good measure of random shocks, mental illness, truly orthogonal factors (like your state's education policies), and it's not unbelievable that the correlation might not be as big as one expects
It doesn't strike me as particularly close, the hereditarians won. Imperfect studies show most but not all of the effect they predicted, as expected.
It still suggests a lot of "missing heritability" relative to typical results from twin and adoption studies. I'd be interested to see if there's any way to compute estimates for shared environmental effects from this data, though, which is what nurturists should be looking for.
It feels very “God of the Gaps” at this point.
Having a huge argument about whether things are 30% or 50% heritable seems like a pretty misguided fight.
The fight isn't really about that, it's a status competition to determine whose assumptions generally deserve to be taken as the null hypothesis (among the narrow circle of contrarian autists who don't just defer to their tribe's priesthood).
Actually, it's about which set of assumptions should be taken as the basis of public policy. Those are huge stakes.
I think the people arguing for 30% are often people who were arguing for 0% but realised it wasn't credible.
The rejoinder to your point is so obvious that I won't bother stating it.
No you need to state it.
I guess it's 30% heritable, 20% environment, and 50% divine intervention.
True. That or it calls into question the whole genome analysis technique. One or the other
It is a fairly new technique with all the flaws that come from that.
I do wonder how much of the mitochondria DNA they analyzed.
Mtdna is very small and usually doesn't have much to do with regular phenotypes. At least that's what most people think.
Except, at least for social phenomena, they don't provide any actual mechanisms.
There is not a single hereditarian in the world that I can go to and get an answer to which gene, or genes, are responsible for the difference in intelligence between von Neumann and Donald Trump (insofar as there is a genetic explanation for that difference). While that is one specific example, this is a globally valid critique too.
Seems a classic isolated demand for rigour.
Uhm... no. This is the entire point.
If don't know what genetic material is causing the phenotype, then by definition you can't know the variation that exists in that genetic material. If you don't have the genetic variation, you can't make a claim about heritability.
If you don't know what genetic material is causing the phenotype then by definition you can never make any claim about heritability.
A claim no one has ever made about height and no one made about intelligence till a couple of years ago.
Okay?
(1) I very explicitly am restricting my comments to social phenomena. It is literally the very first thing I said in this thread.
(2) The newness of a claim has precisely 0 to do with whether or not the claim is correct.
Humanity has known about quarks for less than 0.001% of our existence. Does that mean quarks don't exist? I am sorry knowledge progresses and that this progress makes you uncomfortable.
Well, to be fair, if you want to privilege environmental explanations, you have to provide an environmental mechanism. We have one, upbringing. Certainly goes a long way toward explaining the performance difference between von Neumann and Trump.
"A claim no one has ever made about height, etc." is Oliver explaining why it is an isolated demand for rigor, not a positive argument for the hereditarian case /per se./
(There *is* a pretty obvious story to tell here about discomfort—but I don't think it's the one you're telling.)
(2) Suppose Bob has assembled a bicycle. Alice is skeptical on quality of that bicycle and says "your pile of junk will break really soon". If Alice makes this claim on day 0, does it have same probability of being correct as at day 100 when Bob already traveled 1000 miles?
It is possible to narrow a cause down to a large area without narrowing it down to a smaller area.
I can know that my car accelerates when I press the gas pedal without knowing the internals of how my car works.
I can know that my feet hurt when I wear a particular pair of shoes without knowing what it is about those shoes that causes the foot pain.
I can know that having higher population in a Civilization video game increases science output without knowing which bytes in the executable file perform that calculation.
Your claim that we can't know that a cause is genetic without pinpointing the exact responsible genes seems obviously false to me. I also note that you didn't actually provide any supporting reasoning; you just repeated the claim 3 times with no support.
(My model of stupid internet arguments is warning me that you are likely to motte-and-bailey with a silly definition of "know", e.g. claiming that you meant "know with literally 100% certainty". If you choose to reply, please say something smarter than that.)
We could and did measure heritability before we even knew what genes were.
It's more that if we have two competing explanations of why pressing the car pedal causes the car to move faster, the more detailed and plausible mechanism is probably correct. This corresponds to a claim that environmental mechanisms are fairly well mapped out, but the genetic ones aren't (this in spite of the fact that genetic measures are far more precise than environmental ones).
Are you saying that environmental mechanisms influencing IQ are fairly well mapped out?
>environmental mechanisms are fairly well mapped out, but the genetic ones aren't
If this was true, the outcome gaps between demographics would be already closed by now.
AIUI, it is widely agreed that both environment and heredity have non-zero effects, and the argument is about the _strength_ of the effects. You seem to be arguing as if there were a presumption that only one of the effects is real? Even if we agreed that a certain mechanism was "more plausible", that is not the same as being stronger.
Also, Jared was trying to say the claim was categorically disallowed, but you seem to be making an argument about probabilities. If we're going to allow the claim at all and advance to the question of likelihood, then shouldn't we be adding up all the evidence on both sides from years of scientific experiments, rather than squinting at vague high-level patterns like "how detailed is this hypothesis?" I would think the debate is well past the point where this sort of observation has serious relevance.
But, regarding the specific claims:
Humans often have an intuition that detailed stories are more likely than less-detailed ones, but this is actually false; more details mean more ways the story could be wrong, and therefore lower intrinsic likelihood. (See: conjunction fallacy.)
Now, if a hypothesis makes a lot of detailed predictions, and those predictions are later verified, then THAT would lend support to the hypothesis. But that's impressive specifically BECAUSE those details make the hypothesis unlikely, all else being equal.
Regardless, that seems different from the question of whether the explanation comes from a domain that is well-studied. Studying a domain has a chance of uncovering evidence that would support a related hypothesis. However, if you study the domain and fail to find such evidence, that actually makes the hypothesis LESS likely, because you had a chance of finding evidence and didn't find it. (See: conservation of expected evidence.)
And of course, if studying the domain DID uncover specific evidence, you should be citing the actual evidence, not merely the fact that the domain is well-studied.
But, again, all of this seems pretty irrelevant to a long-running debate about the strength of these effects.
The demand for causal granularity will never end, they will claim we can't model particle physics well enough to prove the causality of protein encoding.
Can you identify all genes, and only those genes, that are responsible for the difference in intelligence between von Neumann and a hamster?
If no, must we believe that the difference between human and hamster intellectual capacity is socially determined?
Thank you for supporting my position that we can't make a claim about heritability given our current understanding of genetics. I do appreciate it when people provide support for my positions.
(1) "Can you identify all genes, and only those genes, that are responsible for the difference in intelligence between von Neumann and a hamster?"
No.
(2) "If no, must we believe that the difference between human and hamster intellectual capacity is socially determined?"
No.
Are you seriously saying you are uncertain on the question of whether the difference in intelligence between humans and hamsters is socially or genetically determined, and you need more information before you decide one way or the other.
The difference between hamsters and humans is an interaction between genetic influences and environmental ones, an interaction effect that has been called "Natural Selection." The problem is that this process cannot explain the differences between individual human beings (for theoretical reasons). It could have potentially explained the differences between certain human populations (race) but it didn't. But given the importance of natural selection to the phenotypic characteristics of a species, including humans, some sort of interaction between upbringing and inheritance should probably be given additional weight.
I think you're conflating two questions: (a) "what determined the current difference in intellectual capacity between any particular human & any particular hamster: upbringing, or genetics?" and (b) "what made things that way in the first place?"
(a) is "it's the genes", and supports the point Jake makes.
(b) is "natural selection", and isn't germane.
One could make some sort of argument along the lines of "well, you can't explain why different groups of people would evolve different intellectual capacities, so it can't be due to any heritable element"—but that's a different argument (and also wrong).
Common anti-hereditarian answer is that humans and hamsters are different species so therefore argument is invalid. (Silently throwing away notion that each species originated with accumulation of small changes)
If I were given nothing but a running wheel and an oversized straw water bottle, I would develop an acute desire to sprint in place! Us social determinists are far more intelligent than retarded hereditarians. Our environments must have been far more conductive to intellectual growth.
Your joking, but if there were in fact a consistent and reliable IQ difference between the two groups of researchers, that would have dome interesting implications for the two theories.
Hereditarians are smarter but both sides are probably at least +1 st dev. Most people don’t have basic reading comprehension.
You remind me of the academic virologists who were furious back in mid 2020 that anyone would claim that a viral mutant was more transmissible than wild type without also knowing the biochemical details of why it was more transmissible. Good old Vince Raccianello.
Hereditarians at least know you can clone these individuals and get their traits. Yet human cloning is fordibben, for deceit of anti-hereditarians could not be exposed.
Cloning von Neumann's environment is not forbidden. Yet you can't have average couple, give them book "How to raise a genuis" and other stuff and get a genius later. What we do have though, is some cherry-picked studies of educational interventions that give goodharted points than usually fade after several years after end of interventions.
You don’t think that if we clone Trump the twin-Donald wouldn’t be very similar?
Theists used exactly this argument to dismiss evolution, eg. "missing" fossil records, the "irreducible" complexity of the eye, etc. The flaws of this response are hopefully obviously by now. That the precise factor is not yet known is not sufficient to refute our knowledge that it must exist.
Re "are people assortative mating on blood pressure":
1. maybe not, but on diet and other related lifestyle variables.
2. according to Young assortative mating can't explain differences between ACE twin h2 and SR/RDR estimates https://bsky.app/profile/sashagusevposts.bsky.social/post/3m6cjpcocrs2o
personally, for me it's not case closed on the most-investigated traits like education and IQ, but Gusev has convinced me there's some bad twin-studies out there, and I am starting to wonder about the personality results. Twins can plausibly affect each other's self reports in weird ways more than test scores.
Didn't Laura Baker find stunningly high heritability estimates for childhood antisocial behaviour in the Southern California Twin Registry, using a multi-rater approach to reduce noise in the signal? That can't all be pinned on self-report effects.
But it can be attributed to social interaction between the two twins' (raised separately) households, resulting in contaminated data.
How often do the twins' families know eachother enough for this to rise above the background noise of any other social interactions?
We don't know, and that's the problem (if we did, we could correct for it). It seems that this question could be answered with more research, but I am not aware that anyone has done that research.
While the there could be some assertive mating on blood pressure, but it is incredibly unlikely to be this strong.
My mom and her relatives eat like saints and still struggle with out of control high blood pressure, which seems to be a heritable family trait.
Mom's diet is Really Saintly. I know she's not secretly eating junk because she doesn't LIKE junk. She was raised in a health food house and she prefers health food.
As I youth, I use these anecdotes to conclude that humanity consistently underrates the influence of genetic factors on high blood pressure. I now know that conclusion was epistemically unsound, but I wonder if I didn't accidentally stumble upon the truth, anyway.
Many such cases
https://www.youtube.com/watch?v=TrnJCP7_aPI
Supposing people assortatively mate on lifestyle (e.g. gymgoers marry gymgoers), and lifestyle affects blood pressure, how would that factor in?
I feel like there's a much better approach to this question: just go through the study measurements and calculate how much assortative mating there is on each trait.
If they have lots of brothers, sisters, and cousins, presumably they also have lots of mothers and fathers.
(They seem to be covering about 0.6% of the population of England, so in the abstract everyone could be unrelated to everyone else, but that doesn't appear to be the case.)
I don’t know the acronyms/jargon. I think the meaning is:
ACE twin h² = the heritability estimate (h²) derived from a classical twin model.
Sr = sibling regression (don’t know what that is)
RDR = Relatedness Disequilibrium Regression. The new method estimating heritability using random variation in relatedness among siblings (not relying on twin assumptions). It tends to give different h² estimates than twin models.
Twin studies systematically underestimate environmental effects because no one is deliberately going to give a baby to adoptive parents who are abusive or poor. You're not comparing Twin A, whose parents paid for tutoring and a house in a great school district, with a Twin B who grew up food insecure and had to start help paying rent at 16, much less a Twin B who grew up hiding bruises and watching their parents bounce in and out of jail. Ruling out any family that's below-average on financial and emotional stability creates a restriction of range effect that greatly reduces the measurable contribution of the environment.
Does assortative mating on biometrics mediated by shared diet make any sense?
I don't know, but ask yourself where variation in diet comes from to begin with. Assortative mating isn't required as an element of the explanation.
I'm a complete idiot on this stuff, so this may be completely confused, but is it weird that population structure could affect hard biomedical traits? If white blood cell count is correlated to some population substructure that people *do* assortively mate on, would that work? Is it just implausible that there is such a substructure that correlates with white blood cell count? Or am I being dumb and misunderstanding the proposed mechanism here?
I'd be curious about this as well. I think Ruben suggested something similar here.
https://www.astralcodexten.com/p/the-good-news-is-that-one-side-has/comment/183800288
Yeah, I saw his comment as I was finishing up mine.
If I recall correctly, there were some results a few years ago where taking polygenic scores for fairly hardcore biomedical traits (my memory says blood pressure or heart disease, but I can't find exactly what I was thinking of) from GWASes that had been performed on Europeans, and applying them to Africans found them to be much less predictive; at the time I remember reading that undiscovered population structure was thought to be a contributing factor.
Is there a reason to believe a similar effect wouldn't apply here? Or has subsequent research found that there was a different explanation for what was going on with those polygenic scores? Or something else?
It's pretty typical that polygenic scores developed from looking at one human population don't generalise as well to others, but I imagine the authors are aware of this problem and took pains to either correct for those effects or restrict their sample to a relatively homogenous population. (There is such a thing as overcorrecting for population structure as well- c.f. the sociologist's fallacy.)
I think Scott once wrote that much of attractiveness is an indicator of resistance to disease. Both Angelina Jolie's lips and Brad Pitt's forehead indicate high sex hormones during adolescence . High sex hormones make you more susceptible to disease, so that attractiveness might well be a stand-in for high white blood cell count (immune system function)...
> boobs = butts and attraction to butts came first, this is evolutionary psychology 101
Butts came first; that's evolutionary psychology 101. The first half of your claim isn't actually supported by anything.
And none of his comment actually refutes the claim being made, either.
I mean, as someone who was very immune to Angelina Jolie's charms, I find this a little bit of a just-so story, but yeah, stuff kind of along these lines doesn't sound totally implausible to me
I feel the same way about her but speaking as somebody who has spent time hanging out with a supermodel, you might be surprised how powerful they are in person.
Tell us more.
I think it's fairly well understood that people look much worse on camera?
If you want to make a career out of looking good on camera, you could try to invest in strategies that reduce the penalty the camera applies to you, but it's easier to just look better, which also improves your appearance on camera.
No, what I want to hear is relevant anecdotes about your encounters with the super model. No irony intended.
If high sex hormones make you more susceptible to disease, wouldn't that mean that is a poor proxy for immune system function?
My understanding of the story presented was that survival of adolescence with higher than normal sex hormones indicated an unusually strong underlying immune system.
Ahh I think you just meant "make you less susceptible to disease"
More susceptible during adolescence when the sex hormones were abnormally high. Back to normal during the time when making was likely to occur (and thus sexual/genetic selection). (edit: If you have high sex hormones but still survive to reproductive age, you must have a top notch immune system.)
No, more susceptible.
A bit like how a peacock's tail is itself a disadvantage (bulky, heavy, etc), so for a peacock to be successful despite that disadvantage demonstrates that he has other valuable advantages.
The OP is saying high levels of sex hormones make you more susceptible to disease, so if you have them but still haven't died of a disease, we can infer you also have a good immune system.
She got cancer, though [Edit: or thought she might. Bad choice, if all this is correct...]
I would have liked to see data on something clearly not genetic included in the study, as a control. Birthday, perhaps? Or favorite Football team?
Yeah, this is a great point. Isn't one of the nurturist arguments that twin studies give high heritability for things like peanut allergies that we know are not genetic?
I'd love to know what this study design would say about that.
Allergies are definitely heritable because the innate immune system is largely hard coded.
My impression (from a paywalled Lyman Stone article that I only skimmed, so very much a low confidence impression) is that twin studies yield implausibly high estimates for peanut allergy specifically, even though it's known that there's a large environmental component.
Maybe some or all of that is wrong, but I'm not sure it's incompatible with allergies in general being highly heritable
Why is it implausibly high? This large Dutch study reported 66% based on self-reports. https://www.cambridge.org/core/journals/twin-research-and-human-genetics/article/heritability-of-selfreported-asthma-and-allergy-a-study-in-adult-dutch-twins-siblings-and-parents/44AFAD907928B4283098FAD160476541
Note that heritability does not mean there cannot be large cohort changes if the environment changes. Since allergies often have to do with exposure (allergy = the immune system has incorrectly learned to attack a non-dangerous targets with large force), this is not so surprising.
Yeah, maybe I'm borrowing too much of Lyman's framing of what's plausible; I think the broad point still stands that looking at a case where we are very confident that there are environmental interventions that can produce broad cohort effects so the causal model shouldn't be "100% caused by genes, no plausible environmental intervention could possibly matter" would help calibrate people on how to think about such a study.
Do we see that environmental pathway showing up in a lower heritability estimate then we get from twin studies? Is the drop bigger than for other traits? Etc.
Isn't the main environmental component that peanut allergies are less likely if the mother eats peanuts during pregnancy? And twin studies measure pre-birth effects as part of genetics, so of course that one's going to come out too high.
(Though, for all I know it could just be that peanut-allergic embryos get miscarried and replaced by ones with no-peanut-allergy genes later, rather than truly an environmental effect.)
I think my saying "implausibly high" is maybe detracting from my main point: peanut allergies are a place where twin studies yield high heritability estimates, even though there can be large environmental effects.
Intuitively, other ways of measuring heritability might be expected to find lower estimates if they are somehow able to capture those environmental variables in a way twin studies can't, but where we already have a decent idea of what those environmental variables are so seeing a lower heritability wouldn't have us debating if it's because we're missing rare variants, GxG interactions, etc... We *know* there's environmental effects, so expect those to explain at least some of the decline in heratibility.
But then we can use that to calibrate our sense of how meaningful the differences are between twin studies heritability and this new method heritability: it's already been mentioned that height behaves differently than other traits, presumably because it is very strongly genetically caused and we're not missing much environmental stuff in our modern environment; ideally peanut allergy could function as an endpoint on the other end: here's what it looks like for a trait where we know twin studies are failing to see a big environmental effect.
Then you take other traits of interest and see, do they behave more like height, or more like peanut allergy, and then that's a good first estimate of whether the missing heritability is because of undetected environmental effects or not.
But without peanut allergy, we're left debating how to interpret what this study says for those non-height traits.
Does that clarify what I'm getting at?
There are genetic predispositions, but the actual allergy is created based on environment. Allergies come from an adaptive immune response.
I'd imagine there is a large gene x environment effect which can show up as heritability.
offtopic: do you believe aves is pseudosuchia in 2025, or is it just joke?
Just a joke.
Allergies are definitely "genetic", but "heritability" is supposed to be a measure of how much of observed variation is correlated with genetics, I think (so that, properly measured, heritability can differ between cultures, subpopulations, and environments). I heard that "wearing earrings" was somewhat heritable but not genetic, which leaves me too confused to say more.
Heritability is a population (and sample) statistic, so it can trivially differ between different populations, times etc. If you reduce non-genetic variance in some phenotype, heritability increases.
Pretty much every behavior is heritable to some extend because they reflect dispositions to act in various ways. It is of course not random who wears earrings. This phenotype has a quite high heritability because women wear them often and men don't (men and women differ genetically of course). And there is non-random genetically linked variation within the sexes.
I realized after writing that that "wearing earrings" has a red herring quality, since it's influenced by sex, which is genetic but not inherited.
TIL a better illustration of how heritability differs from inheritability: "speaking English" is partly heritable (though "speaking Turkish" is not).
This comes from dynomight's article on heritability which is the best explanation I've seen (only like 15% unclear): https://dynomight.net/heritable/ (thanks to Niclas for the link)
The problem with twin studies on allergies seems is that classical twin studies assume genetic contribution is fully linear. Height is linear and has optimum for given environment -- easy trait. Allergy is exponential answer to small inputs and is never good, so linear prediction from genes works bad. So it shows too low environment influence than it really is.
Average letter in their name (as in A=1, Z=26). Distance to ocean.
> Nurturists argued that the twin studies must be wrong; hereditarians argued that missing effect must be in hard-to-find genes.
Sorry if this is a stupid question, but when you say "genes", do you mean just the actual genes (or maybe even just CDS) ? Because there are lots of regulatory regions that influence gene expression (e.g. promoters or silencers), but aren't strictly speaking parts of genes.
I think this is anything that gets inherited with DNA.
Evolutionists' gene: any DNA that codes for something that can be selected for. Molecular bio gene: protein coding region.
The paper analyzed whole genome sequencing; that would include the genetic material that encodes for promoters and silencers. If they had only looked at the transcriptome (DNA that is transcribed into mRNA) then the regulatory regions would not have been included.
That makes sense, thanks !
Excellent! Thanks!
How much of the biomedical data could be explained by SES or life history more broadly? Rich people are likelier to exercise, eat healthier, and get better medical care (in the US). They’re also less stressed and negatively selected for disabling health outcomes (can’t be a CEO if you have schizophrenia). We know nutrition affects height; why not IQ or white count?
Don’t people do assortative mating on health and therefore indirectly on biomedical things like white blood cell count. Note, I don’t have much expertise in this area, so if this is a really stupid question, I’m sorry, but this is the obvious doubt I had.
Whenever there is a post on Heritability I recommend people read Dynomight's "Heritability puzzlers", which illuminate what heritability even is (which is not what you might intuitively think!)
Link: https://dynomight.net/heritable/
Seconding this, along with the article on traits being more than 100% heritable: https://dynomight.net/heritability/
(wow that's quite a similar url, different article though)
Am I... reading this right? In the first post, did he say, basically, that if genetic heritability is high, then changing the environment (probably, usually) won't do much, but in the second post, he shows that deliberately changing environment in certain ways will make heritability swing to whatever arbitrary value you want?
I suppose the easiest way to reconcile the two is by pointing out that, in practice, "changing environment" in such a way as to get you whatever arbitrary heritability value you want is really really difficult.
Yes. As the first article points out, heritability only tells you about how the trait varies in the current typical environment. So if in your current environment, some people eat more fish and others eat more wheat, but very few people eat yogurt, heritability tells you nothing about the effects of eating yogurt. Importing yogurt for everyone could change the heritability of some trait.
This is a population-vs-individual-level thing. In the first post, he's saying that changing an individual's environment, within the normal range that exists in that population, won't do much. In the second post, he's talking about changing the environment of populations (all the "islands" examples) and that can change the heritability a huge amount.
The important thing to understand is that "heritability" as a concept is only defined with respect to some population.
I've always had problems with that word because people seem to want to define it in a certain to win a fight. I think I have a handle on it now. Highly recommended.
According to this article, the technical definition of heritabilty is to the layman definition of heritability as IQ is to intelligence.
I propose that heritability scientists immediately starts using HQ or heritability quotient to avoid further confusion!
I always appreciate these types of posts. It is very easy to get lost in the sauce. This is a good way to come back to first principles.
Thank you.
Seconding that everyone should read this before forming an opinion on the topic. Heritability is a very precise and unintuitive statistical concept. It is _not_ the percentage that genes contribute to a trait, like many people think. That's not an approximate description, it's just incorrect.
OK, so if I understand correctly, the study shows that heritability is in a range that tells us an important fraction of many traits is inherited, as the hereditarians have claimed, and the amount we can predict is limited, as the nurturists claim - but either way, everyone is agreeing about the discovery of the missing heritability. And it's not like we're doing something like measuring size of elementary particles or the pull of gravity, where we have specific concrete theories that would be falsified if they are actually 0.001% higher than predicted. And (now that we know the range pretty well, at least,) I simply don't think the exact number for heritability of different traits matters for basically anything practical.
It's unfortunate (but demonstrably the case, given how science is done today,) that we can't do truth seeking in science without factionalist approaches that don't change anything about the substantive conclusions we should come to. But if we need to talk about it, I really wish we didn't talk about the resulting clarity as being about which side won, even if they can't stop doing so.
We -did- detect gravity being much stronger than predicted. Way more than .001%. Hence "dark matter".
Exactly the point. In areas like physics, the specific numbers are relevant enough that knowing details leads to new insights and changes what we believe about the world. Unlike here.
Then we got "dark energy" which turned out to be even more significant. To me, who is not in the field, it seems like the law of gravity might need a revision rather than more patching. Perhaps something on the lines of MOND?
No, heritability does not tell us what fraction of the trait is heritable! Heritability is not a measure of how strongly traits are inherited. The number of fingers you have is damn near 100% inherited and has heritability ~0.
Historically, popularizers of certain kinds of intelligence research (Charles Murray and Emil Kirkegaard are the biggest offenders here) have lied about this because they expected the heritability of IQ to be high and wanted to convince people that this means intelligence is robust to environmental intervention, so the confusion is understandable. I think that understanding why this is not what heritability means will shed light on why people care about the differences in estimates in cases like this one. (The upshot is: agreeing about the number is less important than figuring out which kinds of methods estimate the number correctly, and the latter is much more substantive.)
EDIT: Apparently Kirkegaard is in the comments here, so I should substantiate this accusation! Here he is in this thread claiming that hard-coded traits are necessarily highly heritable: https://www.astralcodexten.com/p/the-good-news-is-that-one-side-has/comment/183846734. I have explained this to him before and he has acknowledged it (more accurately he's claimed that every time he makes incorrect claims he actually implicitly means the correct ones and anybody who doesn't substitute in correct claims for all the incorrect claims he makes is an idiot so it's not a problem if he's technically lying, but same difference), this is how I know he's lying and not merely wrong. Of course since this happened years ago on a different platform I don't have receipts, so anybody reading this is welcome to believe that Kirkegaard is merely wrong, I can't convincingly prove his intent to an outside observer here.
Seems like I phrased this poorly, but I think you're misunderstanding the point being made. When I said "heritability is in a range that tells us an important fraction of many traits is inherited," I should probably have said "heritability is in a range that tells us an important fraction of many traits varies based on inheritance."
But you went off on a tangent, and seemed to imply that if we have really good methods, the number doesn't matter. That seems clearly wrong - no matter what the new methods are, unless this work is using bad methods, we've bounded the possible range to where no one should care.
Also, you can complain about other people's bad faith updates of their views and refusing to admit they were wrong if you want, but it seems unhelpful, especially when being included as a response to what I said.
I agree that the edit makes my comment much less relevant to you. I added it because I didn't want to make an unsubstantiated claim about someone other people were likely to come across if they were reading the comments sequentially but it's not particularly substantive to my response to you, sorry for the tangent.
I think you still have significant misunderstandings about what heritability is. Your new phrasing is correct as a definition of heritability, but is not a plausible summary of hereditarianism. Hereditarianism is the claim that intelligence is "innate", in the sense that it's unlikely to change drastically under environmental modification, which is basically a claim about what the mechanisms of intelligence look like. Heritability is deliberately mechanism-agnostic (this is part of what makes it a good thing to care about! Mechanisms are really really hard in genetics, it's nice to have something that works without knowing them), so knowing the heritability alone just doesn't say anything about which side is correct.
The point about methods is that because we don't have access to counterfactual rollouts of the same genome in different environments we can never estimate "true" heritability, and if you define heritability extensionally then technically every trait has heritability 1 because people have unique genotypes. So to estimate heritability, given that we don't have access to counterfactuals, we (roughly) break genetic influences on a trait down into their components, make assumptions about which influences can be safely ignored, then design studies that accumulate the non-ignored components into a number. Different methods miss out on different things, this is particularly true for intelligence, but as we use more methods and accumulate more estimates of heritability, we become more able to piece together the relative importance of different types of genetic influence.
As a simplified model, assume Method 1 for determining heritability treats nonlinear gene-gene interactions as negligible and Method 2 doesn't. Say that both methods agree for Trait A but they disagree for Trait B. We can conclude that gene-gene interactions probably aren't important for Trait A, but they might be important for Trait B, at least in the environment we were investigating. Unlike the raw number this *does* say something about the mechanism, which is the thing we really want to get at.
(In practice things are harder than this even for relatively simple traits and intelligence is a really damn complicated trait so everything is underdetermined, but the point is that the differences in estimates yielded by different methods matter a lot even in cases where we don't care exactly what the number is.)
"...is not a plausible summary of hereditarianism. Hereditarianism is the claim that intelligence is 'innate', in the sense that it's unlikely to change drastically under environmental modification, which is basically a claim about what the mechanisms of intelligence look like."
I don't think that's true, at least as written. No-one is arguing that lead exposure or iodine insufficiency or repeated head trauma doesn't change intelligence. You seem to be making a stronger claim, or at least need to make such a claim to disagree with the hereditarians - you need to say that there is significant room for variation in the environmental *on the positive side*, and that we can increase intelligence from the current level among even the richest people by changing things. (Or you need to say that it's functionally impossible to do so with genetics, which requires hereditability to be effectively zero, which is not true.)
"I don't think that's true, at least as written. No-one is arguing that lead exposure or iodine insufficiency or repeated head trauma doesn't change intelligence."
Sure, I'm being a bit imprecise here. Maybe I should say "plausible environmental modification under shared environment distributions" instead. What I mean to say, and what I understand to be the hereditarian position, is something like: "Individual differences in intelligence are largely up to genetics in typical cases. Of course there are interventions which have important effects on intelligence, you can give anybody brain damage. But these effects are only so subtle and only so strong. Exposure to lead changes intelligence a lot and is pretty easy to pin down compared to the genomic project we have on hand. Access to better schooling has a smaller influence and is harder to pin down. We should expect that, the harder an environmental factor is to pin down specifically, the smaller its influence on intelligence. Moreover, even if we catalog and control for all possible environmental influences, significant differences in intelligence will persist and will be explainable by genetics. On the policy side, we should expect interventions meant to address group differences in intelligence to fail unless they address genetic differences."
"You seem to be making a stronger claim, or at least need to make such a claim to disagree with the hereditarians - you need to say that there is significant room for variation in the environmental *on the positive side*, and that we can increase intelligence from the current level among even the richest people by changing things."
I agree with the first half of this - if the hereditarian position is false, then either intelligence is not meaningfully measurable at all (which I don't believe) or differences in intelligence can be to some extent addressed by reasonable environmental interventions, and in particular it must be possible for reasonable environmental interventions to basically close intelligence gaps to the maximum at the group level. I think this is probably the case, but genetics is hard and we're not sure how intelligence works - I'm something like 75% confident that hereditarianism is false in this sense.
The second half, the claim that a rejection of hereditarianism entails the possibility of uniformly-beneficial environmental interventions (not just closing gaps by bringing all subpopulations to the highest standard, but also raising the waterline even at the top) is complete nonsense and I'm not sure where you got it from.
"(Or you need to say that it's functionally impossible to do so with genetics, which requires hereditability to be effectively zero, which is not true.)"
Again I think you just don't understand heritability and need to read more of the standard explainers. High heritability and high genetic influence, and even causal influence, can coexist with a complete rejection of hereditarianism! It could be the case that people who look weird have less access to social settings where they become smart, and since appearance is largely genetic, this provides a causal mechanism by which genes influence intelligence. Genetic intervention could succeed if this were true, we could change the distribution of appearance genes so that nobody ends up weird-looking and this would eliminate the intelligence gap. But this would be a disproof of hereditarianism: a genetic intervention could make the difference here, but an environmental intervention (just treating people the same regardless of how they look) could have had the same effect! Heritability does not tell you the mechanism and hereditarianism is dependent on claims about mechanism.
EDIT: I think maybe I misunderstood your last parenthetical and should have given the converse objection instead? Nonzero heritability doesn't mean that genetic intervention is needed to change a trait, but low heritability (even zero) doesn't mean the opposite of this either. Heritability is only defined up to a distribution on environments - it's entirely possible for hereditarianism to be true even when heritability is low, because the current distribution on environments might suppress intelligence to far below its genetic "potential", making people look equally-smart even when genes play a big and robust role in intelligence.
Right, so the important and meaningful part of the hereditarian position, the part that has implications for actions and policies, is the question of what could increase intelligence, *both in lagging subpopulations and in the most intelligent subpopulations*. That's what the hereditarians claim to care about, and what I think matters - though different people focus on different parts of the question. Your example of appearance being a moderator is correct, but I agree that it would be irrelevant, as the actual argument isn't about technical heritability.
If intelligence at the high end is unchangeable by genetics, but at the low end is changeable directly, some hereditarians will declare victory, others will declare defeat. (And I guess if it's very changeable at the high end but not at the low end, the reverse will occur, but that seems implausible mechanistically.)
But overall, if plausible environmental changes can change intelligence by significantly less than genetic changes, and genetic changes can have a large effect, I think both groups of hereditarians can and will declare victory, in terms of the implications they care about.
I must say I'm baffled by where the battle lines have been drawn. The natural lines would seem to be a more literal reading of nature vs nurture, that is, heredity+noise vs nurture, but the actual debate seems to be between heredity vs everything else?
Since we have no effective interventions to increase g beyond stuff like avoiding malnutrition, the possibilities on the table are "Your IQ depends on the lottery of who your parents are" and "Your IQ depends on a bunch of other lotteries as well", which practically don't seem very different to me.
In an already rich society? Can you give me a list of the interventions (it's a bit late for me and my kids, but could use it for the grandkids one day).
OK, but this is a short list so far. We have: 1. Avoid smoking, and 2. Increase motivation (who's motivation for what? Isn't intrinsic motivation for one's good behaviour also partly genetic, sth to do with dopamine levels? Or is it some other motivation?), 3. Something else that is not direct.
This doesn't help me much, neither does it help the Health Minister who wants to raise or equalise the population g-scores. Is there something I misunderstood?
They're not known by Aristocat either.
A good example of this would be something like fingerprints. What your fingerprints look like depends on highly stochastic factors within the womb, like the precise flow of the amniotic fluids and whatnot. It's not genetic however; you're not going to pass your fingerprints on to your children. Even identical twins have different fingerprints. Yet, your fingerprints are just as immutable as any genetic trait. Even if a large portion of heritability is environmental, some significant portion of it could be like fingerprints in that it is nevertheless lottery-based and fixed.
The next dystopian essay I ever want to read ought to be about using artificial wombs to engineer custom fingerprints. Possibly with product logo placement. No more dystopian news is allowed until I read that.
We look forward to reading yours.
Once upon a time, people started using artificial wombs.
Some concerned onlookers warned that it could stunt normal neurological development and human attachment. They pointed to studies of adult adoptees talking about the trauma of separation from their birth mothers. They pointed to advocates against gestational surrogacy who worried about similar separation trauma plus other problems like human trafficking. They pointed to studies of NICU babies who did much better with more skin-to-skin contact than with less, even when all their basic medical needs were taken care of.
But these concerned onlookers didn't have the funding to conduct long-term studies, so the science was never settled one way or the other.
As AGI got better, it claimed most jobs. Most goods in society became cheaper, and salaries lower - more or less a wash. But for the upper classes who could still find highly paid jobs, finding new and expensive ways to impress their peers was as popular as ever.
Designer babies came into fashion. Who wouldn't want to be able to pick only their smartest, healthiest, most attractive, most conscientious offspring? Here was something other than real estate that was worth saving up for.
But gestation, birth, breastfeeding, and parenting were a harder sell. Those highly paid jobs weren't keen on letting people out of the workforce for a few months, much less a few years. Taking time off for mothering would make them fall behind the ever-increasing pace of change. No company would hire them back.
So artificial wombs, formula, and nannies came to the rescue. Consumption decisions multiplied: Customize your child's fingerprints! Add extra omega 3s to your formula! Hire a trilingual nanny!
And as those became glamorous among the upper classes, the shreds of what was left of the middle class aspired to them as well.
Artificial wombs were expensive. That's one thing that made them such a status symbol. With declining wages, they were a real stretch for many.
When the first company started offering corporate logo placement in custom fingerprints in exchange for steep discounts on artificial wombs, can you guess what the reaction was?
Trick question. They didn't just start offering them right away. They focus grouped the hell out of it first. With AI participants, of course.
What they found was that simulated participants found a steep discount insulting. It watered down the status signal. A corporate logo would broadcast "my parents are second-class strivers, and this is the only way they could afford this procedure."
Slight discounts? Less insulting, but still not worth it for most interested consumers.
Zero discount? Opinions were mildly positive. Several AI respondents said they liked how it allowed them to signal affinity for the vibes of a company while also demonstrating the ability to commit.
When the focus groups were wrapped up, the final decision was to offer corporate logo fingerprinting for a 50% upcharge over the standard custom fingerprinting charge.
Yes, on the other extreme is good looks. It's obviously extremely genetic (see identical twins), but only somewhat heritable (in the precise sense that beautiful parents can have ugly children and vice versa). But the exact ratio of genetic vs heritable vs stochastic is much less interesting than how mutable the trait is. And people seem to be aware of this when discussing looks, hence we have a lot more discussion about interventions like skincare and plastic surgery than about heritability. Intelligence just seems to attract bad discussion, perhaps not helped by the fact that there are no serious post-birth interventions to discuss.
In the case of good looks, it's because of non-linear genetic effects rather than stochastic environmental factors, where it's not just about having the right genes but having the right combinations of genes. This is why ugly parents can beget good-looking children and good-looking parents can beget ugly children. Because while the genes themselves are passed on, specific combinations of those genes are not.
Although, from what I've read, the genetic causes of intelligence specifically are mostly linear. Non-linear effects are minimal.
> This is why ugly parents can beget good-looking children and good-looking parents can beget ugly children.
You don't need any nonlinear genetic effects for this state of affairs to hold.
There are serious post-birth interventions that protect intelligence. They're just not low-hanging fruit in the developed world anymore. But developing countries see gains from them. E.g. decent nutrition and avoiding lead exposure.
Regarding beauty, I seem to recall it was found to have a substantial component of symmetry to it. So if your womb and circumstances can produce a very symmetrical child, you're a winner in this regard.
As for intelligence, people seem to forget that even those damn hereditarians think there is a 20-50% environmental component to it (at the moment).
A person with 4 good looking grandparents and 2 good looking parents is quite likely to be good looking. The thing is, one can look like a grandparent or have a unique combination of the parents' features. But it is sort of similar to IQ: good looking parents have better looking children on average.
No “who your parents are” is NOT synonymous with “heritability”. That’s actually entirely specifically NOT what heritability is supposed to measure. Who your parents are include this huge messy mass of confounding variables like wealth and status and education and the like.
What heritability is *trying* to measure is the specific genes you inherited from your parents, and their effect on your life. Independent from all the other factors your parents contributed to your life! Maybe you’re aware of this distinction but just misspoke.
I am aware but trying to be (perhaps overly) brief. In any case, my point is that we have no effective interventions and nobody proposing any potential interventions. In that case, what exactly are the stakes of the debate? Why do we care if IQ is 20% genetic and 80% other luck as opposed to 80% genetic and 20% other luck?
From the perspective of the individual, those other factors can be considered luck. But from the perspective of a society designer, there's actually quite a bit of control there. There are real policy implications here.
Perhaps there's an implicit "optimising" assumption in all this -- so less about intervention post fact and more about planning reproduction??
For a simple example, if IQ is 80% heritable and highly immutable, AND YOU CARE ABOUT PRODUCING MAXIMUM IQ CHILDREN, you'll be much more careful about who you reproduce with in terms of IQ specifically. If IQ is 20% heritable and the rest is various more or less random but some non random luck (unlikely but not impossible candidates off the top of my head: intrauterine environment, consumption of certain chemicals at the time sperm was produced, egg quality, stress during pregnancy, nutrition during pregnancy, breastfeeding until the age of 2, avoiding childhood infections/ inflammation, cosleeping until 6mo, never being shouted at ever, exposure to trace pesticides before 1yo, having Mozart played during labour, mother under 21, bilingualism, high fat diet, etc etc etc) then IF YOU CARE TERRIBLY MUCH ABOUT OPTIMISING OFFSPRING IQ your decision process will be completely different.
And that "optimising drive" is not obvious at all, because while avoiding sub-normal IQ seems very reasonable, aiming to maximise it strikes me as not obvious a personal or societal goal at all.
But maybe there's also something relevant to interventions in already existing extreme cases or designing education system (can't see how heritable immutability would differ from any other tho).
Yeah lol I was confused my impression of you was not that you’d make this misunderstanding so I added the potentiality at the end.
Why do we care? Because it impacts policy of course. People are getting rid of accelerated courses in schools because they think it’s all socially cultivated. Countless other examples.
> People are getting rid of accelerated courses in schools because they think it’s all socially cultivated.
That's not a coherent statement. If you think that faster progress in accelerated courses is "all socially cultivated", that doesn't provide an argument for eliminating the accelerated courses. The faster progress still happened! It's an argument for eliminating the non-accelerated courses.
People are getting rid of accelerated courses in schools because they want to prevent some students from getting ahead of other students. This is something that can be partially achieved by getting rid of accelerated courses, even though the opposite, preventing some students from falling behind other students, cannot be achieved by getting rid of the fingerpainting courses.
I regrettably inform you the vast majority of people have some sort of incoherent ideology.
Sure, that's true, but in this case people are happy to declare in public that what they want is for white children to stop outperforming black children.
I've never seen someone make the argument "the gifted class automatically helps whichever students are admitted to it, and that's why no one should ever be admitted to it".
(I should note that in doing research for this comment, I did find mention of Dallas adopting a new policy of enrolling students in "advanced" or "ordinary" classes automatically according to their standardized test scores. The fact that this was new does lend a good amount of support to the argument that "it's all socially constructed".
https://www.the74million.org/article/dallas-isds-opt-out-policy-dramatically-boosts-diversity-in-its-honors-classes/
But here the solution was to stop doing something stupid and do something reasonable instead, not to stop doing something stupid and do something worse instead.)
Yeah lol I was confused my impression of you was not that you’d make this misunderstanding so I added the potentiality at the end.
Why do we care? Because it impacts policy of course. People are getting rid of accelerated courses in schools because they think it’s all socially cultivated. Countless other examples.
> Why do we care if IQ is 20% genetic and 80% other luck as opposed to 80% genetic and 20% other luck?
Because "other luck" can be isn't a literal random number generator, it's potentially environmental stuff that can be changed and optimised.
And of course all of this debate is taking place in a social context where we have certain genetic clusters within society, and some of these clusters perform better economically than others, and this is something that a lot of people care deeply about.
> And of course all of this debate is taking place in a social context where we have certain genetic clusters within society, and some of these clusters perform better economically than others, and this is something that a lot of people care deeply about.
This is in fact something I just investigated and wrote about - yes, there's an obvious PMC / non-PMC divide in America, but it goes much deeper than this.
Amazingly, if you cut by ~6 income tiers across Americans to create 6 clusters of people, there's a significant amount of differentiation in the overall stats (education, BMI, spousal BMI, time spent on phones, hours worked per week, number of close confidantes, diet quality, and much more) between those 6 tiers, and if you take a combined "education and / or income" measure of homogamy, these castes breed with ~93% homogamy!
What is the "actual Indian caste system" homogamy? About 95%.
America has essentially ALREADY differentiated into castes!
https://performativebafflement.substack.com/p/america-has-already-differentiated
One thing I don't understand though, is why there is no emulation between the castes.
If environment *actually* mattered, then the caste just below the next one should be able to observe what they're doing different, and do it, and thereby raise their own children's attainment. Even in a noisy and not-rigorously-executed world, if environment ACTUALLY mattered 50-70%, which is the proportion that a Gusev or other anti-hereditarian would advocate for, then that's an absolutely massive amount of arbitrage and lift you should be able to tap into, in an area a lot of people really care about (offspring's success and attainment).
But you essentially never see this happening successfully, anywhere in the world. Why is that?
Once people are solidly stratified into castes, they often care more about themselves and their kids being at the middle or top of their caste than about advancing to somewhere near the bottom of the next caste up. If you live in a middle-class neighborhood where most people didn't finish college and make less than $100k, you might be satisfied to be the guy with a state school degree making $150k, especially if the alternative is to go into debt to attend a fancier school with less fat people who have different tastes and will scorn you as a prole, and then stretch to afford housing in the right district and always be the poorest one in every social gathering, etc.
This is not an iron rule and of course there is still some intentional class mobility in categories like immigrants, parents who screw up their own path and underperform but try to set their kids up to rise, remaining shreds of actual meritocracy, etc. But lots of people are trying to win within their caste rather than change everything.
Great insight. Though I will say even slight IQ differences as you go up the class ladder combine with the environment/wealth advantages of those already in the elite classes to become a formidable barrier for any aspiring new entrant.
If it is mostly genetic, it means eugenics programs actually have a chance of working.
> If it is mostly genetic, it means eugenics programs actually have a chance of working.
What's funny is that in literally any other species, this is just plainly obvious and not controversial at all.
If we'd been taking positive eugenics seriously in humans, we'd have had ~8 positively selected generations since Francis Galton was advocating for eugenics. 8 generations isn't much, we're definitely hampered by being long-lived and slow breeding and developing, but you can actually accomplish a surprising amount.
What can you do in only 8 generations?
> Between 2000 and 2016, US dairy cattle breeders, by applying selection pressure to increase the productive life, achieved an increase of about 10 months
> After eight generations of selection, the percentage of dogs with an excellent hip quality score (as assessed by an extended view hip score) increased from 34 to 93% in German Shepherd Dogs and from 43 to 94% in Labrador retrievers.
> In dogs, it's generally thought that it takes ~7-8 generations to get a new measurably distinct *breed* entirely
> More generally, traits with a heritability of at least 15% are considered good candidates for genetic selection. Essentially everything we care about in humans (intelligence, height, strength, conscientousness, neuroticism, mental illness, health, etc) is way above 15% heritability, and even the Gusevs and other anti-hereditarians will tell you that.
Source: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8450581/
But suddenly when it comes to HUMANS, oh no, selection for traits couldn't POSSIBLY work.
Now, just imagine if we had applied that level of selection at a large scale to a significant human population for 8 generations. That population would enjoy significant buffs in things like intelligence, conscientiousness, and health.
Yes, if everyone is tall and smart and pretty, we're theoretically less "diverse." But that's the bad kind of diversity - we want the kind of diversity where healthy and highly capable people all specialize in what they most care about, and self-actualize at higher levels than would be possible if they were sicker, dumber, and less capable.
Why wouldn't we want this?
Eh... intelligent people are harder to align, are more capable of being a threat to the collective, and are generally harder to please. Ideally we'd want different castes specialized for different tasks, sort of like an ant hive.
It's interesting that you associate yourself grammatically with the selected population, saying WE'D be less diverse, WE'D have had 8 generations of buffs, etc. Maybe you identify more with the higher-functioning end of humanity by default - and not with the people whose genes would be extinguished under the kind of eugenics we reserve for animals.
How would you convince someone whose in-group/out-group associations were different than yours that "we" should all want a eugenic future?
> How would you convince someone whose in-group/out-group associations were different than yours that "we" should all want a eugenic future?
I mean, if I were god-emperor, I'd have made gengineering legal 10 years ago and we'd already be putting SNP edits into kids, and I'd make some amount of those publicly funded.
So you wouldn't convince them, but you WOULD let them choose from a menu of healthier, naturally muscular, prettier, higher IQ, and whatever, and the state would pay for some number of those genemods being put into their kids, depending on cost.
In other words, let the parents decide!
And there's SO many SNP's that do great stuff that we could literally do today if all the governments of the world didn't suck:
https://performativebafflement.substack.com/p/the-case-for-gengineering-and-my
> Why do we care if IQ is 20% genetic and 80% other luck as opposed to 80% genetic and 20% other luck?
Each option sets different expectations on the expected outcomes of social engineering efforts. That's potentially a big deal.
In the US there are education gaps (which translate to wage gaps) in different populations. Not only the usual racial stuff, but also between eg working class vs intelligentsia. Is this caused by social causes (eg discrimination) or is this innate?
Then migration is an effective intervention for a gene pool of a country. Does it matter for an English colony if future immigrants are German, Irish, Chinese, Latin-Hispanics or Somalians? Or are people blank slates?
The short answer is that it’s obviously about immigration.
The longer answer is that there are no lotteries. There’s plenty of things that influence which genes pass to the next generation. That’s what life has been about for most of its history. For individual communities being smart about “immigration” mostly trumps everything else, but there are other ways, which apply in a global scale as well. (“immigration” here can also refer to things like the Darwin family mostly marrying within themselves. Not that I am in favor of that sort of tactic specifically.)
Also, the other effects are only lotteries to the extent that we don’t know what they are. If we understood them better, we would certainly try to control them.
I don't understand why "heritability" of a trait is expected to be remotely consistent in different populations and studies. Neither the amount of genetic variation, nor "environmental" effects, nor measurement errors seem like they ought to be universally consistent. If two different studies in different countries with different recruitment strategies find people have different average heights, should scientists divide into factions believing "people are height X" and "people are height Y"? I would understand the confusion if the estimates differed by 100x.
Yeah, I think this is a key point - heritability is a ratio between genetic and environmental factors, if you use two different populations with different environmental variance, you should expect different heritability ratings.
I'm not sure whether I'm missing something obvious or everyone else is missing something obvious... it feels like everyone is trying to deduce the singular 'true' heritability value for factors like intelligence, without reference to environmental context. But heritability is a ratio between genetic and environment, there *is* no 'true' value independent of environment.
It's like saying 'I want to measure the TRUE universal temperature, independent of the context of season.' The temperature depends on the season, measuring it at different times will give different results and that's not a mysterious anomaly or a mistake in your methods.
Yeah, this sounds correct. Why are people having a big fight over 30% heritability vs 50%, anyway?
A lot of EA people believe they are genetically superior to others (in terms of IQ - however they want to measure that) and that’s why they should rule the world to best mold it. That’s where most of this comes.
If you can say a sizable enough portion is due to environment then they don’t have the “I’m superior” argument anymore. They have “I was born into a better environment” argument - which you could put any baby in. I’m sure the more white supremacist part of rationalists would disagree with that part too but that’s just the world we in.
{citation needed}
Also, why would my claim of superiority depend on the cause of my superiority? If Alice is smarter than you because of the wonderful childhood environment she was raised in, and Bob is smarter than you because of his two Nobel-prizewinning parents, is there any reason for Alice to feel *less* superior to you than Bob?
Pretty sure even this blog has said the same stuff, man. There was no lack of articles a couple years ago about “actually our autistic children are superior and will take over the world.”
I don't think you'd be any happier if they said they had superior IQ and it was because they worked really hard at it.
That’d be fine tbh. I don’t think anyone has qualms about that. However, a lot of people here want a more genetic/inherent IQ measurement. Not one that relies on how well you studied a test or prepared or whatever. The goal posts will always move because people want IQ to show that they’re genetically superior to you - not that they just studied a lot more. The point of these studies is to prove the validity of an IQ test being mostly genetic and therefore if you’re better at said test then you’re genetically superior and therefore better qualified to rule over others and be treated better and be the authority of mankind.
> The goal posts will always move because people want IQ to show that they’re genetically superior to you
Maybe there is a simpler explanation.
1. There is a thing called intelligence.
2. Some significant (maybe not half, but certainly more than a tenth) portion is genetic.
Those are pretty straightforward facts, but there are some people who refuse to acknowledge 1 and/or 2, and it's maddening to lots of smart people. Not because they want to be homo superior, but because people are saying wrong stupid things. The same way it's maddening to smart people when someone says vaccines cause autism or that the moon landing was fake.
Saying dumb wrong things is a very effective form of nerd sniping.
EDIT mmmm is an extremely obvious troll
Because we started off with claims of staggeringly high - 70-80% - heritability found by some twin studies, and implausibly low, <20%, heritability found by GWAS on small numbers of SNPs. This is a classic case where bits of the educated public feel they really have to keep fighting even when the gap has shrunk to almost nothing. But at least one person in these comments is going to try to defend the old, high twin study numbers. They always do.
Old twin studies aren't bad.
It's just that trying to reverse engineer something which was not even made to be comphensible to humans to start with, is difficult.
In many ways news studies are much worse than the old ones. Twins studies used twins registries, and researchers went to all of them and asked them to participate, and gave them a real 60-minute IQ test, not a crappy 2-minute test (and them claim it as "IQ" on social media). But for biobanks, volunteer bias is huge -- average person in biobank is richer and more educated than average Briton, people with undergraduate degrees is almost as frequent as those with high school. Vast majority of these people who even showed up, do not take even this crappy 2-minute test or measurements for many other traits. It just happens it's not important for height, but important for others traits, esp. mental diseases (people with some kinds of mental diseases will try to hide from UKBB and the others will participate more) is rampant. They don't even try to correct for it.
No, they're not bad. But I think we're forced to conclude they included something that is not simply additive inheritance as if it were.
Forget intelligence for a moment, because all the data is not yet in there, and consider BMI. I am cribbing from Sasha Gusev here, of course. Classic twin studies give you 65-96% heritability. Sibling regression gives you 39-55% and whole genome GREML gives you 28-34%. So, true additive heritability we can be confident is at most 34%, then you have another 10-15% that's probably GxG interactions with some amount of environmental confounding to get to the SR numbers. But then to get to the highest twin study numbers you need to find another 25%+.
We have really run out of places where 25% heritability could be hiding. Its not in ultra-rare variants because we have large whole genome samples. It could be in very high order GxG interactions, but then shouldn't lower order interactions that show up in SR also look significant? If its not that, it has to be environment. We don't know how, but we've run out of other places to look, haven't we?
And since the existing gap between GWAS and twin studies exists across all traits, wouldn't you expect the twin study estimates to be similarly high across all traits?
>We have really run out of places
No, we don't even bother applying such simple things such as adjusting for volunteer bias and unreliability. If we are not doing such simple things, we are also not doing other more complicated things.
We are not trying to include on how mother genes interact with foetus genes in womb.
It's not like doing WGS magically gives you information about how rare variants affect phenotype. GWAS is underspecified problem - it has more free variables than samples. They can count number of rare variants in an individual and add it to predictor with some weight but true effect of these is larger.
Also, we actually know the mechanism how BMI changes to environment inputs.
I think the point is, what we're really interested in is a causal estimate of genetic vs environmental factors: for a given set of environmental changes that we think of as reasonable/plausibly in our control, what effect do they have on the distribution of iq.
Under certain assumptions about what kind of environmental variation is captured by these studies, they function as a proxy for that question.
So the "heritability" isn't really what's at issue; what's at issue is how much of the set of plausible environmental interventions is ruled out as having a material causal impact.
You can have impactful environmental interventions with 80% heritability or fail to have them with 30% heritability. If you want to know about environmental interventions, then study environmental interventions.
(To take an example from the dynomight post linked above, say you have a dictator that injects every redhead with carcinogens, and this is the only source of cancer. Then cancer is 100% heritable, yet an environmental intervention could be 100% effective.)
I agree, and if you wanted to say 'what is the true heritability ratio of IQ for everyone born in the UK between 2000 and 2020', that's absolutely a real and singular number that you could measure.
All I'm saying is that it will be a *different* number from 'what is the heritability of IQ in this set of 200 twins' or etc., and that this isn't mysterious or requiring of an explanation.
Sure, but I guess most people think that the sources of environmental/genetic variation should be prettttty similar in both cases, so either it's a little mysterious if they're too different, or it's a little mysterious what the differences in the sources of genetic/environmental variation are.
Isn't it pretty straightforwardly likely that, even in cases where twins are separated at birth, they have more similar environments than what you find across a sample of 347,630 people?
This is what I keep being mystified that we're not talking about... there is no singular 'true' heritability value for any factor. Heritability values are always the ratio between the influence of genes vs the influence of environment, and so every population you study will have a different 'true' heritability value based on how influential their environment was.
Ad absurdum to illustrate the concept: Imagine you live in a society where, every time twins are born, the one that comes out second is immediately given an icepick lobotomy.
In this society, twin studies find that the heritability of IQ is around 1%; first-borns have IQs 50 points higher than their twins on average, it seems like genes don't even matter. But when we look at a genetic study of the general population, we find that heritability for IQ is 50%! Who is correct, the twin studies saying 1%, or the genpop studies saying 50%? What could possibly explain this mysterious difference in results?
(The answer is icepicks.)
That's an exaggeration, but you see what I mean - we shouldn't *expect* those two studies to produce the same heritability value, because they are studying different populations with different environmental factors. Two heritability studies on two different populations should almost always have different 'true' heritability values. That's not a mistake or a mystery, that's just a normal result of this being a ratio between genetics and environment, and different populations having different environments.
Some twin studies look at twins raised together, so that the environment is controlled for, giving very high heritability ratings.
Some twin studies look at identical twins separated at birth, controlling for genetics so that we can get a measure of just the environment across their two circumstances.
But even when identical twins are separated at birth, the adoption agencies have standards for who they allow to take kids, and the people who want to adopt are more like each other than like the general population. They will still have less variance in environment than you'd see in a genpop sample of 350,000
So, none of this is very mysterious to me. It just seems like you're measuring something using different methods that you'd expect to give different results, and getting different results. I'm not sure I understand what is 'missing' beyond those factors.
It feels to me like people are trying to reify some universal notion of the singular 'true' heritability of some factor, independent of context from the environment. Which is something that obviously just doesn't exist, since that measure is a ratio between genetic and environmental contribution.
If that's *not* what people are trying to do, then I'm missing something about this whole project.
We are only really interested in what happens in normal environments in roughly the current time period.
Pointing out the heritability would be different in 1400s China or if one twin got a rare disease isn't particularly interesting to us.
Right, but why do you expect the environmental variation in twin studies to be identical to the environmental variation across 350k genpop members?
Saying 'well all of those people are in 'normal' environments, so they should all be the same' is begging the question.
If all those people were in 100% identical environments, then the heritability for every single trait would be 100%. That's definitional to what a heritability value *is*.
The relative influence of variance in the environment to variance in the genes is exactly the thing we are *trying to measure* here.
You can define what population you care about - 'all British people', 'everyone above the poverty line in a first-world nation', 'twins raised in the same household', etc. - and get the 'true' heritability value for that population.
But each of those populations will still have *different* heritability values, because they come from difference environmental distributions.
Again: if they didn't, every heritability value would be 100%.
(now - if you put people in 100% identical environments, and also had a 100% perfect way of measuring genetic impact on IQ, you might find that the heritability of IQ and the genetic contribution of IQ are two different numbers, because there's some amount of biological 'noise' that happens in development independent of either genes or environment. This could be an interesting measure of 'missing heritability', if you could measure it... but current studies can't measure it, because the environments aren't actually identical!)
It is a fairly natural assumption, maybe people in twin studies are more likely to get head injuries or members of the UK biobank are guaranteed to get sufficient Iodine as toddlers, but your prior should be no significant difference.
The relevant metric here is not whether there's a statistical difference in the average, it's whether there's different levels of variance.
You don't need to get a head injury to change you IQ. Whether or not your parents read to you as a child can have an impact.
And again, the question is not whether one of these groups has their parents read to them more often than the other group. The question is whether one population has more variance across the millions of tiny environmental factors like this one that could potentially have some small cumulative effect. Two population can have the same average with wildly different variance.
> "You don't need to get a head injury to change you IQ. Whether or not your parents read to you as a child can have an impact."
There's strikingly little evidence for this based on adoption studies, at least within first-world populations.
My impression of the literature is that that's not really the case, but rather it's that the effects are small and not universal, and only apply to certain metrics,
But having small and inconsistent effects on a limited number of metric for a single factor like this is very important to heritability calculations, if the effect is nonetheless real and casual. Remember that there are ~thousands of similar environmental effects, and remember that we are concerned with population variance among those effects rather than the cumulative end result they average out to.
If you read the literature on childhood reading and said 'this sounds like a small and tenuous result, I'm going to dismiss it as an important part of the question of how to raise a child', that's correct.
If you read the literature on childhood reading and said 'this sounds like a small and tenuous result, I'm going to dismiss it as an important part of the question of heritability', that's wrong.
The environmental contribution to heritability values is *sometimes* on large factor like malnutrition or brain damage, but usually it's thousands or millions of tiny individual factors with cumulative effects.
Looking at any one factor and saying 'the effect of this one factor is small, therefore I can dismiss it, therefore I can dismiss all small environmental factors, therefore environment can't explain heritability differences' is begging the question. By this logic, no river can ever wash away a stone, because the river is made of individual small droplets which are each incapable of moving it on their own.
> If all those people were in 100% identical environments, then the heritability for every single trait would be 100%. That's definitional to what a heritability value *is*.
No, not at all. If all environments were completely identical, variation would still come from genetics and chance. You seem to be assuming that chance doesn't exist, which is hugely false.
Already addressed:
>(now - if you put people in 100% identical environments, and also had a 100% perfect way of measuring genetic impact on IQ, you might find that the heritability of IQ and the genetic contribution of IQ are two different numbers, because there's some amount of biological 'noise' that happens in development independent of either genes or environment. This could be an interesting measure of 'missing heritability', if you could measure it... but current studies can't measure it, because the environments aren't actually identical!)
Chance exists, but in a standard heritability calculation it is included under 'environment'.
Or, more precisely, the heritability value is the ratio of the variance in a trait explained by genetics to the variance explained by everything else, and both the environment and chance are combined under 'everything else'.
Fro Wikipedia:
>The concept of heritability can be expressed in the form of the following question: "What is the proportion of the variation in a given trait within a population that is not explained by the environment or random chance? Other causes of measured variation in a trait are characterized as environmental factors, including observational error."
Really, I think everyone involved in this comment section should read the Wikipedia entry for 'Heritability' if they haven't yet, it explains a lot of important things clearly:
>Heritability measures the fraction of phenotype variability that can be attributed to genetic variation. This is not the same as saying that this fraction of an individual phenotype is caused by genetics. For example, it is incorrect to say that since the heritability of personality traits is about 0.6, that means that 60% of your personality is inherited from your parents and 40% comes from the environment. In addition, heritability can change without any genetic change occurring, such as when the environment starts contributing to more variation. As a case in point, consider that both genes and environment have the potential to influence intelligence. Heritability could increase if genetic variation increases, causing individuals to show more phenotypic variation, like showing different levels of intelligence. On the other hand, heritability might also increase if the environmental variation decreases, causing individuals to show less phenotypic variation, like showing more similar levels of intelligence. Heritability increases when genetics are contributing more variation or because non-genetic factors are contributing less variation; what matters is the relative contribution. Heritability is specific to a particular population in a particular environment.
> Really, I think everyone involved in this comment section should read the Wikipedia entry for 'Heritability' if they haven't yet
Did you read it? Here's some text you quoted from wikipedia:
>> The concept of heritability can be expressed in the form of the following question: "What is the proportion of the variation in a given trait within a population that is not explained by the environment or random chance?
And here's some text you provided yourself:
> If all those people were in 100% identical environments, then the heritability for every single trait would be 100%. That's definitional to what a heritability value *is*.
These two statements are, obviously, incompatible with each other.
> Chance exists, but in a standard heritability calculation it is included under 'environment'.
There are plenty of (experimental) circumstances where we can comfortably say that the environments are in fact 100% identical and we nevertheless see wide variation in phenotype between cloned research organisms. It is not the case that if you put people in identical environments, measured heritability for all traits would be 100%. That is not a part of the definition of heritability, and it also isn't true.
This is a very important point to make, because people commonly assume that if something isn't determined by genetics, it can be changed by some kind of intervention. You just stated that people making that assumption are right, which -- again -- isn't true.
... Ok, I think the issue here is that you think I'm using the word 'environment' to mean 'specific factors relating to nurture and life experiences' and excluding things like random chance and observer error. Ie. you think there are at minimum three parts of a heritability calculation - genetics, environment, and chance. Is that correct?
If that's the issue, read the quote from Wikipedia again:
> Other causes of measured variation in a trait are characterized as environmental factors, including observational error.
In common technical parlance, the word 'environment' here includes *all* non-genetic factors, *including* random chance and observer error.
It also includes what we call the 'individual environment', which includes things like which side of a petri dish a cell is dividing on... no actual real-world experiment has or could have 100% identical individual environments, you'd need atomic-level precision.
Saying 'a 100% identical environment' *means* 100% identical random chance and observer error and individual environment, along with all other types of environmental factors.
In which case, yes, you would get 100% heritability, because nothing but genes would explain any of the variation.
Now, if your takeaway here is '..but it's stupid to call chance and observer error "environment"', then I agree, but this is a historical accident of how the topic was first discussed by early scientists, and we're kind of stuck with it. It's indeed confusing, and I suspect is causing a lot of confusion here in this thread, but... that's the way the semantics worked out.
If you think your point is *not* related to a semantic miscommunication of this type, and you think you are making a precise mathematical argument that contradicts my claims, then I'm unfortunately going to need you to describe it in much more detail and specificity than 'These two statements are, obviously, incompatible with each other.' Based on my understanding of the semantics and the math, they are not at all incompatible, so I'd need you to teach me what you think I'm missing.
This completely misses the argument that is being made by darwin...
People living in Britain born between the 1930s and 2020s is NOT a “normal” environment and is NOT a random selection human environments generally. The sample is all living in a WIERD environment with standardized education and mass media and etc…
The representative sample for the past 10,000 years of selection was an asian, autocratic, agrarian, illiterate, and poor society.
Consider height and BMI. These clearly have a strong genetic influence. But in an AAAIP society the children with low WEIRD BMI and high WEIRD height are dead. Right? Traits that look attractive in one context are lethal in another context. Also a different set of genes are responsible for height and BMI genes that make children sympathetic beggars are directly tied to caloric intake and probability of survival in an AAAIP context. Genes optimized for near-starvation-level consumption of rice and cabbage have different relevance in a British diet of dairy, sugar, and meat.
British children have a narrow environmental context because they all live in WIERD Britain. Inside that sample of highly correlated environments the major component of variance is left to genetics. But when the sample is increased and globalized you get a wider and wider range of environments.
What is your point?
I am interested in the modern West not 500 B.C. China
The point is that heritability *always increases* for small, non-random, non-diverse samples. Many important environmental factors are standardized and mass-produced. Heritability systematically increases or decreases depending on the sampling.
The point is that heritability is not a property of individual, but is a property of some specific population (or sample) in specific environment. Btw, why 10000 years? Why not 200 mya?
Btw, you forgot that this study includes only White British, not all people who live in Britain. Include all british residents (and avoid volunteer bias), get much higher heritability
Its also true that if you have really high order GxG interactions, that required 4 or more distant SNPs to take effect, they are only going to show up in identical twins.
Isn't it pretty straightforwardly likely that, even in cases where twins are separated at birth, they have more similar environments than what you find across a sample of 347,630 people? // no, you made it up.
Also, the number of people out of these who took "FI" (not a real IQ test) is much smaller, probably about ten thousands.
> Either people are somehow assortative mating on blood pressure, or else these remain the strongest evidence of some deeper problem.
Everything is correlated with everything else, isn't it? Maybe I'm assortative mating on blood pressure because the people with the best senses of humor have the most correctly pressurized blood.
Or like, isn't high blood pressure much more common among black Americans? (Not sure about Black people more broadly). I'd also guess it correlates with diet and smoking, which in turn correlate with income and education. So if you assortively mate based on race, income, and education--pretty much the canonical variables for assortive mating, surely?--would you not expect to see the effects of assortive mating on blood pressure as well?
Here you go
https://gwern.net/everything
I don't see a debate remaining, I see a red herring being used to apparently keep it open.
The actual missing heritability is largely gone and a large chunk of the residual seems like it can be tackled right away (https://x.com/cremieuxrecueil/status/1988746015877525683). This is the crux of the matter.
What you're discussing as the environmentarians' new argument is not about missing heritability, it is an entirely unsubstantiated argument that they've somehow won some debate no one has been having, about a difference between biometric methods rather than between biometric methods and molecular ones. Missing heritability is about the latter, not the former.
But to even make the former argument credibly–if that's even possible!–means mustering new facts. Proponents need to add meat to the idea instead of simply retreating to a seemingly irrelevant argument: they need to show that the pedigree estimates have moved biometric estimates down, or are anomalously low, or whatever other permutation of their argument that they wish to settle on. Noting that other studies find different results fails to do that. They'd need to show this discrepancy in one sample, because phenotyping (inclusive of trait measurement, sampling, etc. See: https://x.com/cremieuxrecueil/status/1938391982667116808) has major consequences that can plausibly explain any discrepancy they're claiming (but not showing) exists now.
For example, BMI heritability moves up from childhood into adulthood and then down again into old age. The UKBB sample is an older one, and so it could have lower heritability for the trait in actuality. Age-related heterogeneity could also impact estimates from both the pedigree and molecular methods in the study, likely adding noise. This is one of many potential issues that has to do with the novel issue of the absolute level of estimates–not the missing heritability problem per se (unless people are positing sources of bias that drive a wedge between biometric and molecular methods).
What percentage of real-world variance in IQ or other traits can the polygenic scores from this study predict? If you know?
This study is not about polygenic scores, so 0%.
I'm not an expert in this subject, but if the gene-variants identified predict literally zero percent of any phenotypic trait how would you compare the biometric predictions with molecular predictions?
You're misunderstanding what I'm saying. Yengo et al. is not about polygenic scores, so it says nothing about polygenic scores.
The molecular methods in question are in the category of methods like https://yanglab.westlake.edu.cn/software/gcta/, not https://github.com/zhilizheng/SBayesRC (as used for PGS computation).
Look... I don't have a masters in molecular genetics so I'm sure a lot of this is going over my head, but the first page you linked to (https://yanglab.westlake.edu.cn/software/gcta/#Overview) opens with "GCTA (Genome-wide Complex Trait Analysis) is a software package initially developed to estimate the proportion of phenotypic variance explained by all genome-wide SNPs for a complex trait, but has been greatly extended for many other analyses of data from genome-wide association studies."
I don't see a specific bullet-entry for GREML-LDMS (the method used by Yengo et al) on the same page, but the entry for GREML does define it as "estimating the proportion of variance in a phenotype explained by all SNPs". The first result I get searching for GREML-LDMS (here- https://www.nature.com/articles/ng.3390) opens with a paragraph saying that ∼17 million imputed variants explain 56% of variance for height and 27% of variance for BMI.
Are you saying these methods estimate the percentage of phenotypic variance that *would* be predicted by a PGS, given perfect knowledge about the genome and gene expression, but that the specific variants in question and/or their effects are not actually identified?
> Are you saying these methods estimate the percentage of phenotypic variance that would be predicted by a PGS, given perfect knowledge about the genome and gene expression, but that the specific variants in question and/or their effects are not actually identified?
I can't speak for Crem, because I'm not sure I understand the argument he's making. -)
But you are correct that the GREMLs estimate the potential explanatory power of a PGS without actually building a PGS or pinpointing causal variants. (And in case you're wondering, GREML doesn’t build a polygenic score because it calculates the variance of all SNPs collectively. It's not designed to estimate individual SNP effects at all.)
Basic GREML usually gives a lower estimate of heritability that all SNPs can collectively explain, while GREML-LDMS typically gives higher estimates, and (to the best of my understanding) that estimate is the *upper limit* of the heritability that SNPs can collectively explain. Saying that the GREML-LDMS results that place IQ heritability at the lower ranges which twin studies give, and thus support the twin study estimates, isn't quite correct. Rather, some GREML-LDMS studies say that heritability *may* approach the lower end of the twin study ranges. But it doesn't really confirm they reach that range.
But GREML-LDMS IQ heritability estimates vary widely across studies. Because (a) different studies measure IQ using different tests, (b) samples differ in age (remember IQ tends to go down with age), and (c) ancestry (e.g., some studies include family data and others don't). Overall, IQ heritability estimates from GREML-LDMS-type methods are higher than older GREML estimates, but less consistent across studies (than, say, compared to the results we get for height).
I hope this helps. I've tried to be very careful with my wording and edited it a few times to improve clarity. Any mistakes are either due to my clumsiness as a writer or a misunderstanding of the underlying theory (but I think I understand the theory in broad strokes). I'm perfectly willing to be corrected, but please provide some links if you do (so I can improve my own understanding).
Crem is wrong on many of the points above but regarding variance explained by polygenetic scores the current state of the art is about 16% per Herasight's new paper
https://herasight.substack.com/p/cogpgt
No, I'm not wrong on any point above and you can't support your contention because it's incorrect.
And you're contributing to __browsing's confusion. Yengo et al. is not about polygenic scores.
I appreciate the link, but I'm more curious about how well the PGS's produced by this specific paper would predict associated traits. (Or, if PGS are not involved here at all, how biometric vs. molecular comparisons are even possible.)
Jeez, seems like you should say how you think he's wrong.
Evidence: trust me bro! What, are you a racist or something?
Whether or not Crem is wrong (and I'll admit I don't really understand his point), Yengo et al use GREML-LDMS. The various GREMLs calculate the variance of all SNPs collectively — without actually building a PGS or pinpointing causal variants.
Herasight uses PGS which they mix into their secret sauce and call it GogPGT, and they claim, "our PGS achieves a standardized regression coefficient with fluid intelligence of β = 0.406 (SE 0.009), corresponding to an R2 of 16.4% (95% confidence interval: [15.1%, 17.9%])."
A β=0.406 is a moderate-to-strong relationship in behavioral genetics. But an R2 of 16.4% means the genetic score explains about one-sixth of the differences in fluid-intelligence scores among people in their very large UK Biobank sample.
I'm withholding an opinion on Herasight because I haven't tried to pick apart their paper yet.
https://cdn.prod.website-files.com/68795c69c46b98b838f692d1/68f76c0e23645840d3f97a86_Within%20family%20validation%20of%20a%20new%20polygenic%20predictor%20of%20general%20cognitive%20ability.pdf
(1) Which part of the genome is responsible for the genetic component of inteligence?
(2) What is the range of possibilities for this part of the genome?
I feel like the headline here should be "~everyone agrees that ~everything is at least 30% heritable."
Yes, isn’t it the case that, a couple of years ago, nurturists would point to GWAS studies and suggest a heritability of IQ of about 15%, whereas now they are accepting ~ 30%? In other words, the lowest value that one can plausibly argue for seems to be to have shifted upwards markedly.
~30% is still incredibly far from the claims some hereditarians make about things that common sense would suggest are important (such as values you are raised with, how you are treated as a child, behaviors and mannerisms of family that raised you, etc.) not mattering at all such that adoption is a bad idea at best, parenting isn't important, etc. etc.
Those conclusions would only be false if the shared environmental component of such traits were shown to be large. There are also non-systematic, non-shared environmental influences on these traits which are neither nurture nor nature, but as such it's unlikely they can be manipulated or improved. (This could include everything from random hormone fluctuations in the womb to personal choices in adult life, which would probably appear random to external observers. Although I guess personal choices could also *be* random hormone fluctations.)
But at the same time, last time this was discussed here people were still waiving around very old twin study estimates of 80%+ for IQ as if they were plausible.
No-one has yet produced a convincing explanation of why such an estimate is wrong. Twin studies are also the most examined and critiqued of the various methods. And the various newer methods have not yet settled down to producing consistent and replicated values.
There are lots of potential reasons why they’re wrong. Given that all other methods are converging on lower values, the only question is which reason is correct.
There’s not really “convergence” of the other methods yet, there is unexplained scatter. And yes there are suggestions as to why twin studies are wrong, but as yet no suggestion is properly convincing.
Right, but there's unexplained scatter among twin study and between them and other lineage based methods also. When I talk about convergence, I mean that if there was any purely genetic component left out of whole genome GWAS, we would see a gap between GWAS-WGS, RDR and SR. GWAS will not show any non-additive effects as heritable, where SR will show most GxG interactions as heritable, with RDR somewhere in the middle. But we don't see that - for the traits where we have a full three-way comparison between those methods we have very close results, and the very high numbers from some twin studies are an outlier.
There are three possible explanations here:
1. There is some problem with twin studies, but we don't know which of the many problems identified it actually is that's causing the discrepancy.
2. There is some very high order GxG interaction that means very closely related individuals (ie. twins) share far more traits than you'd expected based only on lower order ineractions that will show up in SR.
3. Some traits, intelligence in particular, just so happen to really be 80% heritable as shown by older twin studies, even though we can now show that other traits shown by older twin studies to by 80% heritable are only ~30-50% heritable.
Until this week, I thought the likely explanation was 2. But recent SR studies show lower order interactions aren't important so it seems a bit of a reach to supposed that higher order interactions are. 3 is ... very implausible. So right now I am defaulting to 1. Where am I wrong here?
That depends on the method used to arrive at 30%. Within-family controls are the only way to truly deconfound environmental factors and they typically cut the genetic component by something like half (compared to just using ancestry components and other measured covariates). So we’d back to 15%.
Isn’t the study Scott is talking about based on GREML? If so, the 30% still contains environmental influences. Are there RDR/SR studies that also find 30% for IQ?
Within-family removes some but introduces others, such as sibling rivalry and makes volunteer bias much worse. I wonder if they ever correct for birth order effects.
>(compared to just using ancestry components and other measured covariates)
which is already overcorrection, as these variables aren't independent of genetics.
Somehow Markel et al. 2025 finds 75% of IQ (averaging different tests actually).
Another brilliant piece. Thank you! Just love your topics, research, analysis and style.
Gender may also influence results. My daughter studied Behavioral Genetics at The University of Colorado Boulder. Her master’s thesis was on alcoholism. I’m oversimplifying, but her twin studies uncovered that alcoholism in men was, for the most part, stress related or socially induced. Victory for the nurturists. On the other hand, they found genetic markers in women which predisposed them to alcoholism. Hereditarians rejoice.
Hereditarians never claimed that alcohol usage and smoking were 80% genetic. Twin studies found shared environment for these traits but not IQ.
Or would the male stress-induced alcoholism be more due to lack of social safety net, whereas women more easily bond with ppl & ask for help.
That’s a possibility, but the controlling finding was that there was a genetic marker in women for alcoholism for women. None for men.
Seems to me that height is the one real outlier here, and that should be for the fairly obvious reason that absent serious nutritional deficiencies or very advanced age, there is almost nothing environmentally that's going to effect this. My mom, sister, aunts, and female cousins are all roughly the same height, despite drastically different lifestyles and diets, etc. I assume this study did not use subjects living in places where nutrition might actually stunt height?
But everything else on here is definitely effected by lifestyle, even if you have different genetic baselines. Diabetes and cholesterol depend a lot on your diet. White blood cell count depends a lot upon your level of exposure to viruses and bacteria...I would guess if you work in a kindergarten your WBC is going to be higher than if you work alone in a room on a computer. I can't see a single thing here that I wouldn't expect to be highly modifiable by environment and lifestyle, other than height.
The one that really stands out to me as surprising is how low the heritability for neuroticism came out. That's kind of a shocker. But also I assume based on self report rather than directly measurable, so I'm going to guess that one might not be particularly reliable?
> My mom, sister, aunts, and female cousins are all roughly the same height
Not to nitpick, but unless your family is severely inbred, that your mom, sister, aunts, and female cousins are all the same height doesn't tell you much information about heritability because your genes are expected to vary quite a bit too, especially with more distant relatives like aunts and cousins. You only share 25% of genes with your aunts and 12.5% with your cousins.
IDK, we all all share the same grandpa and he was 6'5" back in the olden days when most people were short. We are all just a titch shy of 5'10" which is about 98th percentile for female height...to me this just shows that the tall genes from grandpa are very strong and hard to dilute. I should have added my nieces too, this far. No one has married a relative, but grandpa's height genes seem to have passed down basically untouched. Though all of his progeny going down to great grand children are female, so idk if that makes a difference.
Prenatal environment seems highly plausible here as a reason why twin studies would consistently show higher estimates for heritability across the board. Identical prenatal environment for twins should almost certainly have some impact that would bias all of the results for heritability upward in the way shown here. Especially in older studies or studies involving adopted children where FASD may be more common.
One of the major types of twin studies is comparing identical (same genes, same prenatal environment) to fraternal twins (1/2 same genes, same prenatal environment). Adoption and twin studies and comparing them has been one way to look at the effect of the prenatal environment. Strictly speaking the prenatal environment may not be exactly the same between the two cases (identical twins are much more likely to share a placenta for ex.), but they are probably quite similar relative to the population at large.
Surrogacy could be used to test prenatal vs genetic effects.
Could it be the case that IQ (and some of the other traits examined) is more heritable at the extreme ends than in the middle? Suppose that the 32% of people who score outside one SD from the mean (over 115 or below 85) are inheriting nearly all of that whereas the people scoring within one SD of the mean are not and their variation within that range is largely environmental.
If that hypothesis were true, what result would we expect to see from a study like this?
Why would this be the case though? If you have two 100 IQ parents, shouldn't IQ be as heritable( whatever that % is) for you as it is for someone with two 120 IQ parents?
I think what you're really getting at is that because of long-time class barriers and assortative mating, the 140 IQs in the population had a higher mean to revert to compared to the average population one of 100 IQ.
It could be the case because outlier results are genuinely different in some way, is what I'm saying. I don't think every point of IQ is the same, a 90 and a 110 are more similar people than a 115 and a 135 are.
Some people are amazing endurance runners, and some people have heart disorders that make sustained cardio impossible or dangerous. Both of these fringe outcomes seem to be very heritable. But if you didn't inherit either of those things then you're just a normie whose ability to run a half-marathon in a certain range of times is mostly due to conditioning, technique and a bunch of minor physical traits that don't correlate with each other. So the ability to run a marathon below 2:30:00, or not to be able to run one at all even with training, could both be heritable in a world where the difference between a 4:45:00 and 5:30:00 (common range for amateurs who trained up to one) is NOT heritable. In the absence of the genes responsible for the rarest outcomes, other things dominate.
Likewise, perhaps there is some group of genes that pass on being a genius or an imbecile, but if you don't get either of those then you land within 1 SD of the mean and where you end up in that range isn't highly correlated to genetics. In that world, if my theory were correct, two 140 IQ professors having a child who ends up being 115 is explained by A) child did not inherit the genius trait, but B) wound up on the high end of the normie grouping due to good upbringing.
Schizophrenia does seem to be common in families of geniuses, like Einstein's. Obviously Asperger's is also quite common among geniuses themselves, and often their family members. But being an imbecile( IQ under 80?) is not common in families of geniuses among those people who don't have a genetic disorder.
IQ tells you who is more likely to be a genius in certain intellectual fields, but it obviously doesn't guarantee it. For music, one study even found a 97 IQ person who had spectacular performance on a brain function related to proficiency in music. But even without IQ, the specific strong areas of certain groups are quite evident if you are paying attention. Like say China's engineering proficiency.
IQ tests are modified to favor ethnic groups? Aren't you confusing them with something else?
School cirruculas are changed to favor some ethnic groups.
I see what you mean but I still don't think you're correct. I could be wrong, but from what I've read what matters most to your potential IQ range is the scores of your parents/grandparents and ethnic group mean.
In some groups, a 145 IQ is quite rare; in others, it's not uncommon. I don't know the exact variance possibilities, like whether two 90 IQ parents can even have a 145 IQ, or if it is possible but just extremely unlikely.
But we do know that regression to the mean depends on what IQ your parents and grandparents were. For example, two 140 IQ parents are likelier to have their four kids be in the 128-140 range, with the skew toward the low or high end of that range depending on that of the grandparents. Imagine two of the 4 grandparents were 140s, while the other two are 120s. This would skew the grandkids toward being around 130, despite their parents both being 140s. This is kind of a simplified explanation, but shows how certain ethnic groups develop different average IQs, rather than simply reverting to some ancestral average that is common to all humans.
I don't see your substantive issue with what I wrote. Inbreeding's results depend a lot on who's doing it. It's bad en masse, but in some groups it leads to conserved differential IQ levels and more birth defects/genetic disorders as well.
Yes, this sort of thing is possible, and it's part of what makes heritability hard to interpret–you can get different heritability numbers depending on how the study participants were selected.
However I'm not sure if there's any particular reason to believe IQ should be more heritable at the tails than in the center.
I think my naive assumption was that the heritability might look like an M shape. The most extreme fringes could have gained/lost those last few points as the result of some environmental stimulation, the normies would have a lot of variance as well, but that significantly above or below the norm people are sort of their own breed. That would account for the world I've seen around me in my criminal law practice, you seem to observe consequences of a child's home life in ordinary folks, but two educated professionals adopting a trailer trash baby from a multigenerational clan of hillbillies reliably find themselves 15 years later with a dimwitted teenager stealing their valuables to sell for drugs. I don't know if that IS the case, but I've learned that heritability can be higher or lower for different quintiles in some other things (reading proficiency was the one I saw) and was curious if it applied here.
If I remember correctly, twin studies produce estimates close to 'broad-sense heritability', ie all genetic effects that make MZ twins more similar than DZ twins (additive effect of variants but also dominance, and epistasis). On the other hand, other methods, including for example sib-rregression, usually produce estimates of 'narro-sense heritability', i.e. of only the additive effects of variants. It is therefore expected that almost all other methods produce lower heritability estimates than twin studies.
Also, assortative mating decreases the heritability measured with twins, it does not increase them. Twin study heritability measurement is based on the comparison of the ressemblance between fraternal and identical twins. With assortative mating for a given trait, fraternal twins share more than 50% of their additive genetic variance on average for that trait, but identical twins are still ~100% identical genetically for that trait. The difference between the ressemblance of fraternal versus identical twins is therefore reduced, which will in turn reduce, not inflate, heritability estimates. And population stratification does not really inflate heritability either (though it really must be taken into account for many genetic analyses).
I actually asked Sasha Gusev about this yesterday. He pointed me at simulation data that shows sibling regression captures 2nd order GxG interaction completely, and very high amounts up to the 4th order. That's obviously less than twin studies, but SR is also less confounded by environmental factors than twin studies. So if SR shows lower heritability, its either very high order interactions or environment. But the other thing we should consider here is high enough order GxG interactions are basically only going to occur in identical twins anyway, so are they really heritable at all?
Only occur in identical twins or only be measurable in identical twins?
Measurable, but for the purposes of studies like this, you can’t study something that’s not measurable.
I'm just happy that even the "anti-hereditarian position" is apparently now ca 30% heritability. Before in these kinds of online discussion, that used to be the low end of the hereditarians, while the anti-hereditarians argued for only negligible heritability, if not going full blank-slate.
No argument there.
Why does it make you happy? Wouldn't it be better if it was closer to blank-slate, because it would mean any problems can be fixed with simple environmental changes rather than complex racist programmes or gene hacking? Kind of giving away the hereditarian hand here.
Imagine two sides have a long term argument about deaths in WWII. Maybe it's about number of people killed in the Holocaust.
There is some new consensus on some minimum number of deaths. The side that was arguing that they believed *more* died in the Holocaust says "I'm just happy that the argument has shifted ..."
Is this happiness from joy that lots of people have died?
Or is the happiness that the reality they previously conceived is correct? (And thus the deaths are being acknowledge?)
Being happy about final agreement on the facts does not entail happiness about the facts themselves. You just seem awfully eager to attribute malice to hereditarians.
Of course I am, a large plurality of them are racist and want me and my friends to be segregated or forcefully removed from the gene pool. It is perfectly rational to attribute malice as a first reaction.
I doubt very much that you have any persuasive evidence of that. Your personal experiences aren't really persuasive given the drive for algorithmic engagement bait.
https://www.theguardian.com/world/2024/oct/16/revealed-international-race-science-network-secretly-funded-by-us-tech-boss
Here describes a powerful network of these types of people, headed by Emil kirkegaard, who is mentioned in this article and posts on this blog. Not sure why you are pretending not to understand things.
Are these studies taking into account that gene A might increase educational attainment in isolation but decrease it in the presence of another gene which itself had little or no impact? We know that increased IQ is correlated with both educational attainment and drug experimentation. If B is associated with addiction then B itself could have little independent effect but could toggle the polarity of the impact of "A." Also, presumably not all genes are additive. We know that many are since we end up with a normal distribution but undoubtedly some are multiplicative. I am just curious if their model is flexible enough to expect percentages more accurate than what we see already.
They almost always assume a linear effect. That said there's some justification from an evolutionary standpoint why linearish effects are more likely for selected widely poly-genetic traits.
I absolutely agree that linear(ish) effects are going to dominate. That is why the distributions are "normal." My point was that the nonlinear effects could easily explain why the model still has marked room for improvement.
People have tested for nonlinear effects and their effect is negligible as far as I know (in particular, dominance).
The article implied that this was the first testing looking at the entire genome rather than the obvious 0.01% of responsible genes. This was enough to nearly double the observed effect. It seems plausible that nonlinear effects in these previously unexamined genes could further increase the observed effect. Not claiming that it is likely to end the discussion but it seemed worth discussing.
Wait, this is huge. How did people test for non linear effects, or 'switch' effects, or horseshoe effects? These are famously difficult to test!
See New World (i.e. Americas) monkeys where in most species only female heterozygotes have trichromatic vision. Unlike us, they didn't get duplication of longer-wavelength color receptor and therefore stuck in situation where strong selection pressure can't make all of them have trichromatic vision.
I propose this for the pithiest answer to nature versus nurture: Nurture governs as much as possible. And nature governs how much is possible.
There's an additional problem that heritable traits can still be affected by nurture. I don't believe the tale that IQ tests aren't effected by education, the problem is that they are mostly affected down.
In 10 years of teaching I have seen over and over again that smart kids can become stupid if they are surrounded by stupid people and cultures, but very few stupid kids can become smart no matter where they are raised.
So a genetic test could easily result in findings of a high heritability for inteligence that doesn't correlate to tested IQ.
Heritability is a function of environment. Heritability of height will be different someplace with a lot of endemic parasitic infections and malnutrition than someplace without those things, because more of the variation in height will be caused by parasite load or lack of food while growing. It's not a constant always and everywhere.
That's what I'm saying, but I just want to add that it only really goes one way. A person who would be 6'4" in ideal circumstances can end up shorter, but a person who would only be 5" won't get taller.
This can throw off a lot of the numbers.
I think the way this shows up in our data is that as our societies get richer and more functional, heritability goes up. Which seems backwards at first, but makes sense when you understand that what happened was that we fixed the broken stuff that was stunting peooles' development. Einstein raised in a mud hut and never shown a book doesn't discover any new physics, and neither does Bozo the Clown. Einstein given a very enriching environment and lots of opportunity to learn and explore physics can discover some new physics; Bozo the Clown will not manage this no matter what opportunities he is given.
It's. Fine but I like to meet up with you
> Nurturists argued that the twin studies must be wrong; hereditarians argued that missing effect must be in hard-to-find genes.
I mean, I feel like the nurturist whole starting point is just wrong/bad. It seems very difficult to accept that the twin studies are wrong, they're straightforward and should be fairly accurate.
Whereas trying to _find_ the cause of the heredity is naturally very difficult, fraught, and likely to be wrong.
I don't see that there's really much room for the nurturist argument even before this study.
Agreed; for this very reason, I predicted—in the comments for Scott's last post on this topic—that we'd find most of the missing heritability sooner or later. I still predict that we'll find e.g. IQ to be at least ~50% heritable, and probably more.
Asking for clarification: doesn't assortative mating (AM) mean that twin-studies estimates of heritability are *underestimates*, that is, lower than the true value?
That's because AM means that DZ twins are genetically more similar than otherwise, so the MZ-DZ difference is less, to the phenotypic difference is caused by a smaller genetic difference, so the genetic influence is actually higher than calculated?
This is at odds with the above quote: "But this same argument can be deployed against the nurturists’ favorite explanations for high twin study numbers: population stratification and assortative mating. These could be expected to affect socially-relevant and environmentally-mediated traits like educational attainment. But nobody assortative-mates on white blood cell count, ..."
In contrast, Scott in the previous "more than you wanted to know" post says:
"But this [AM] is the opposite of what you would need to “discredit” twin studies - if this bias is true, then everything is more genetic than twin studies think.
"I’m only mentioning this one here because some anti-hereditarians argue that you can’t trust twin studies because of assortative mating, without mentioning that this can only bias them down."
Yeah it seems that assortative mating might increase general gene IQ correlations in the population but *within family* (like twins) it should lead to underestimating genetic contribution (because family members vary less genetically than you’d expect)
I think I mostly update towards hereditarians here - my impression was that as a study this one isn't that different from the range we've seen before (so estimates should stay within that prior range), but its main distinguishing feature is the test for rare gene effects, in which it supports hereditarians.
>my impression was that as a study this one isn't that different from the range we've seen before (so estimates should stay within that prior range), but its main distinguishing feature is the test for rare gene effects, in which it supports hereditarians.<
That's a good way to think about it (that I didn't think of & haven't seen anyone else mention)—thanks for pointing it out!
What is the place of epigenetics in that debate ? Is it equivalent to nurturism ? But epigenetic processes are also a part of genetics isn't it ?
Would epigenetic factors be different for identical vs fraternal twins?
Epigenetics has negligible effects overall. It gets far more press coverage than it deserves, because it's cool and breaks the rules. But most epigenetic effects are just going to be that your cells methylate some DNA, then clear that all up before turning into your kids. Relatively little of it is passed down generationally, and even then most of that is in sperm-activated vs egg-activated genes (AKA the reason you can't get a healthy mammal by fertilizing an egg with another egg).
Thanks for that debunking.
> studies overestimate this because of assortative mating and population stratification. This affects biomedical traits like white blood cell count just as much as behavioral traits, because shut up.
Is it obvious that this is wrong? Couldn't WBC count and heart rate be strongly correlated with stratification and sorting traits like class, overall health, having hobbies that improve health, etc? Like we know that health and SES are correlated, right? In fact this doesn't seem *that* surprising.
< So they gave these people a short crappy IQ-like test with a lot of random noise. Past studies estimated the reliability of this test at 0.61 (low). It’s easy to statistically correct for this;
I don't understand how it is easy to correct for the crappiness of this test. I wouldn't bring this up just to nitpick stats. I think most people would be startled by the idea you can statistically correct for test crappiness. If it were, you could just dash off a short crappy test of anything you want to measure, then statistically correct later for its flaws. And it seems like correcting for what was wrong with the IQ-like test makes a significant difference in what you can conclude about IQ heritability here. Never mind the math, and the kinds of reliability that are calculated when professionals validate a test -- just do a thought experiment. Say you have a yardstick made out of very stretchy elastic, and that yardstick also swells and shrinks with the amount of humidity. You can calculate its reliability by measuring the same person's height 2 days in a row. So say you find that across subjects a person's height measured with the yardstick on day one only predicts their measured height on day 2 with 61% accuracy. So we know that 39% of the info about their height is getting obscured by noise, most of it due to the yardstick's stretchiness and sensitivity to humidity, but there is no way to figure out what information was in the lost 39%. Our yardstick only captured 61% of the info about people's height. How can you compensate statistically for that?
A couple things I think, though I’m no expert. First this isn’t the only study done, so it’s typical to compare the results of a study to more precise and reliable known tests and past studies. If your results are abnormal, and there’s a good reason to think they’re abnormal, you can totally “statistically correct” for a noisy test like the one they did.
Secondly Scott actually gave one clear reason to think the test was abnormal right after this snippet, being the “healthy volunteer effect”.
I don't think you're taking into account the way this test is abnormal. If it differed systematically from some good test we had, then we could use the info about how it differed to correct its score. In terms of my yardstick analog, that would be a situation in which we knew our yardstick, which was supposedly 100" long, was actually 110 inches long. Then we could correct all its results just by multplying them by 100/110. But the yardstick -- and the crappy test -- do not vary systematically, they vary in unpredictable ways -- they are full of noise.
In statistics one common way to test for reliability of a measure is to use it to measure 2 things that should be the same, such as a single person's height 2 days in a row, and see how well the measures agree. If the measures agree poorly, say only 61% of the time, all we know is that the measures are unreliable. They do not differ systematically in a way that's correctable, they're just full of noise.
You're correct that the missing information can't be recovered on an *individual* basis. However, on a *group* basis, it can be possible to recover group-level statistics if the distribution of the noise is well understood.
In your yardstick example, suppose you know that the yardstick added noise with (a) mean 0cm (b) standard deviation 5cm (c) no correlation to what's being measured. Then, by using the properties of how mean and variance combine, you can infer that the real heights would have the same mean but standard deviation of sqrt(stdev^2 - (5cm)^2).
However, the assumptions about the noise are quite important. If the statistical properties of the noise aren't well understood, you can't infer much about the de-noised distribution.
So if I'm understanding you right, if the noise in the crappy IQ-type test has the right statistical properties we could know mean and standard deviation of the IQ scores for the subject population as a whole. But how would that be helpful in answering questions about how closely each subject's genetic data correlates with their IQ test data? We need both genetic and IQ test data about each individual to do that. Or am I missing something here?
For correlation and heritability, it's still possible to subtract out noise, but the math is more involved and the assumptions need to be stronger. (When Scott says it's "easy" to correct for noisy data, that should be interpreted as "easy for a working statistician".) In particular, you'll need some assumption about the noise being uncorrelated to both IQ and genes. This might not be exactly correct, but it's usually "correct enough" to still give useful results.
I apologize I'm not sure I can give a good explanation of the math without actually writing out the formulas. But from a high-level view, measurement noise will always bring correlation/heritability towards zero, and the exact size of the effect can be calculated if you know enough properties of the noise and data.
Yes, no chance to recover any individual information. But the point here is that noise will smooth out any correlations. I.e., if you have two traits that are perfectly correlated, but each of your measurements is noisy, you'll get a correlation less than one between them based on your noisy data. But if you roughly know how noisy your measurements are (assumptions about noise, mileage might vary etc.) you can calculate what the maximum correlation that you can measure is. If you then divide your observed correlation by the maximum or whatever you'll get something like the actual correlation of the traits, if measured perfectly. Not foolproof, but should work reasonably well if the noise isn't too large.
Let's take a simpler example, because real statistics is too complicated for me. Say you have two binary variables A and B and you measure one of them entirely accurately while the other one is 60% reliable (symmetrically). If your measures agree 60% of the time, you know that A = B, if they agree 40% of the time, you know that A = !B, and between those two points, the true concordance is a linear function of the measured concordance.
I'd always though that, just because of the way developmental biology works, there ought to be a lot of non-linearity in behavioral influences from genetics in ways which would not be fully captured by a GWAS, even one including rare variants.
Why is the assumption that if the “true” heritability was higher, it would be genetic variability instead of missing heritability? It seems that the variance explained by genetics is fixed regardless of interpretation?
Tl;dr:
Unless you actively work in this field, assume 50:50 heredity:nurture and you won’t be too far off the mark
Something that rarely comes up in these discussions is that genetically determined traits affect parts of the environment that then affect outcome. For example, physical attractiveness affects how kids are seen & treated by peers and teachers. Of particular note is the halo effect of physical attractiveness -- attractive kids are seen as smarter, for instance. Various chronic health problems that are heritable, such as asthma, affect school attendance, energy level and attention, and those surely affect life attainments.
To be fair, good looking people do have advantages in life, but when it comes to pure brain horsepower, many top achievers and brainiacs are definitely not very much above the average, if at all. You can easily do a quick check of say the people in the Manhattan Project: no one is hideous, but there are few who could've had an alternative career path in Hollywood.
I have been looking for this type of summary, even if it is not conclusive, and I thank you.
Over the last decade or two there has been increasing emphasis on non-shared environment, which has a random element. It has been give a 50% number. If this random/non-shared element is 30-70% depending on the trait, that would thread the needle on this. That's not evidence in itself, but it gives a place to look for the keys being where they were dropped rather than under the streetlight.
Decent summary. I wouldn't say this settles anything but it's a great start to hopefully more WGS GREML like studies. I hope they include structural variants next study. Maybe look into the reliability of the biomedical measures too, they are not necessarily highly reliable (in retest sense) because many such values can have large short term fluctuations, sometimes daily. I don't know what measures the family studies used.
For blood pressure and the like is there potential for assortative mating based on smoking (and perhaps some other lifestyle traits, though that would be a big one)? Or did they correct in some way for those? Anecdotally, smoking preference is a pretty big filter on mating preferences and is pretty strongly associated with blood pressure and various other bio-medical factors (though I guess how much of smoking preference is genetic?).
Transgenerational epigenetic inheritance and the correlations of separated twins aside (likely to live in same region/country, etc):
People fail to understand how determinant the gestation is, the age of the parents, their metabolic health, the quantities of oligoelements, the duration of the gestation, etc. They impact the total quality of the body that is being generated (nephron number, etc) see e.g. neural tube defects and autism studies on vitamin D showing statistically eradication https://vitamindwiki.com/pages/autism-risk-is-reduced-by-vitamin-d-early-pregnancy-or-chlldhood-umbrella-review/
Transgenerational epigenetic inheritance is, AFAIK, a non-starter, *in re* heritability / genetic-components-of cognitive traits: it's not going to affect anything.
And what makes you believe that.. For starters there are more than 228 imprinted genes in humans and many known transgenerational epigenetic disorders in humans known among other things as imprinting disorders. Albeit rare their mere existence and the fact that partial imprinting exist make it an underresearched api surface. https://en.wikipedia.org/wiki/Genomic_imprinting
>And what makes you believe that[?]<
IIRC, due to the Weissmann barrier, it is very difficult for stable inheritance to result from (environmentally induced) epigenetic changes—although I think it's possible *in principle,* it would, as said, have to be difficult/rare—and there is very little evidence (again, AFAIK; I know a few studies claimed to find some a while back... but the years have not been kind to their conclusions, I think, heh) that anything significant herein occurs in humans.
Imprinting is indeed an epigenetic mechanism (so I was wrong to say that TEI isn't going to affect *anything*); that said, imprints are *not* environmentally acquired, but rather (the "plans" thereof) are genetically hard-coded—sc., variation in imprinting will be captured as broad-sense heritability by twin studies—and I don't think we expect much smooth variation in complex, polygenic traits (such as *g*) to result from variation (read: errors) in imprinting anyway: those seem to *mostly* cause binary, you-have-it-or-you-don't disorders, as you mentioned.
I am a moderate hereditarian. But also, I think twin studies are highly problematic, because twins are rarely "separated at birth" in the relevant sense.
Imagine a pair of identical twins who are given up for adoption in the Midwest. One ends up with a family in, say, Illinois, the other in Wisconsin. Then they're raised independently, right? No, not at all! Notice that one did not end up Burundi. Certainly one did not end up in 10,000 BC. Not only did both remain in the U.S., they likely were placed with families with similar socioeconomic and cultural backgrounds. This is very far from statistically random in the grand scale of human populations over history.
It is accepted that a heritability estimate only pertains to the range of environments sampled in the study (indeed that is necessarily the case). But that is anyhow what we are mostly interested in: what factors matter for the kids in the range of environments that are typical in a given nation at a given time.
But there's an equivocation here, and my claim is that this equivocation leads us into trouble. On the one hand, sure, "heritability" as a statistic is defined only relative to a specified population, not humanity as a whole. On the other hand, this means that "heritability" cannot be straightforwardly interpreted as "genetically caused," which is what people intuitively want it to mean. It only means "genetically correlated in a specified population," which is a quite different thing.
You can compute a heritability statistic for "receiving a PhD," and you'll get a non-zero number. But it's simply obvious from first principles that receiving a PhD is not genetically caused: humans with those exact genes in a different environment (say, in 500 AD) would not receive PhDs. People tend to say "well, of course," and dismiss this consideration, but they haven't really reckoned with how it confounds reasoning about genetic causation.
Nothing ever has “a” single cause, there is always a mixture of multiple causes. But it is still meaningful to talk about the relative influence of genetic factors versus environmental factors in a given place and time.
I agree with that. It doesn't really speak to particular difficulty I'm pointing to, though.
What you’re pointing to is not a “difficulty”, it’s just a well known and inevitable aspect of such studies.
“This is already well-known. It’s part of the definition of heritability.”
Except that doesn’t stop people, including Scott, from talking as if heritability measured genetic causation, when it only measures genetic correlation, resulting in analytic mischief. I call that a "difficulty."
I dunno, man... I don't think anyone thinks that saying that "obtaining a PhD has a heritable component", or "...is partly 'genetically caused'", means or implies /causation in the sense that "even in AD 500, your genes could make you get a PhD"/—"causation" is probably a bad term to use here in any case, but the relevant sense thereof would maybe be more like "propensity" or "ability".
I gave the PhD example because it's so obvious that it's hard to deny. The point is that this is a characteristic of the heritability statistic in general, even for less obvious examples. It's fundamentally a measure of correlation, not of causation. It measures the proportion of phenotypic variation that correlates with genetic variation, but the latter need not be causal.
That's not true though, is it? The interest is in determining what factors matter for global populations. That is why hereditarians are always discussing IQ differences between nations with entirely different geographies and histories.
That's /another/ interest, rather than "*the* interest", I'd say; but any proposed relevance of "heritability as determined for environment A" to those in environments B & C is probably going to be predicated upon the possibility that some portion of the former is robust to the changes in environment (e.g., while different soil will certainly change—lower—the heritability of various traits within some particular crop, relative to the heritability of those traits as assessed in identical soil, teosinte selected for ear-size will nevertheless produce larger ears in either case than will totally wild teosinte, and in either case it is—to some extent—a heritable trait).
That is not how twin studies work. Twin studies compare the similarity of fraternal twins to the similarity of identical twins.
Right. Only rarely are identical twins raised in really different environments, like the two pairs of identical twins that got mixed up in the maternity ward in Colombia, with 2 being raised middle class in the capital and other 2 being raised in a village in the jungle. They came out pretty different on their IQ tests.
Exactly, and this effect would be even greater for some phenotypes other than IQ, such as educational attainment.
Very fair criticism but that's also just the usual WEIRD bias in human sciences (sadly)
Thanks for diving unto this breach again. I'm going to set IQ and EA aside for the many confounding factors you list in the post but I think it's worth expanding on your final question about biomedical traits for which a sizable molecular/twin gap still exists. Let's take the possible explanations one at a time:
1. Assortative mating. Twin studies (and Sibling-Regression and RDR) are deflated by assortative mating and pedigree/GREML studies are inflated by it. So Assortative Mating cannot explain the gap. Also most biometric traits are not under strong assortment (e.g. mate correlation on biomarker traits in Horwitz et al is <0.1 - https://pmc.ncbi.nlm.nih.gov/articles/PMC10967253/).
2. Gene-Gene Interactions. Twin studies are inflated by GxG, Sibling Regression includes GxG, RDR/GREML do not include GxG. The fact that Sibling Regression on average produces estimates still much lower than twin studies on average suggests GxG does not explain the gap (on average).
3. Gene-(Shared) Environment Interactions. Operates the same as [2] except for "fixed" shared environments (e.g. across your entire cohort) which inflate every estimate.
4. Equal Environment Assumption Violation. Twin studies are inflated (or at least biased) when this assumption is violated, whereas the other methods are not. For medical traits, this would include something like one MZ twin finds out she has a lump, so the other goes in for screening and both are diagnosed with breast cancer while DZs are less likely to do so.
5. Measurement Error. Since twins tend to be measured together, this would deflate the non-twin estimates. For biomedical traits, age is typically already a covariate and instrumental measurement error is low. So this would imply that more specific timing (time of day, season, age) of the measurement adds a substantial amount of environmental noise.
6. Publication Bias / QRP / Replication Crisis. Many twin studies were published in the "bad old days" where research practices were less strict.
Take your pick!
For 5, wouldn't that apply to fraternal twins as well?
For 2, in what sense are twin studies inflated by GxG? If there's a high-order GxG reaction that basically only occurs in identical twins, are you counting that as inflating heritability since it's not really relevant to most people?
Finally, how can you quantify how much heritability from rare variants you expect to be missing? If this study closed some of the missing heritability gap with more thorough sequencing, when should you be confident that you've found it all?
Thanks!
Yes, for (5) I do mean that applies to both sets of twins, and therefore reduces the influence of environment in the twin study. For (2), see here for the details (https://theinfinitesimal.substack.com/i/169938925/in-twins-epistasis-makes-the-shared-environment-look-like-genes) but the basic idea is that the twin ACE model assumes a linear decay from MZs to DZs but GxG produces a quadratic decay, which the ACE model treats as *extra* genetic variation. In the example in the post, if you have a trait with 20% additive heritability and 40% epistatic heritability (so the true *broad* sense heritability is 20+40=60%), the twin ACE model will estimate an additive heritability of 80% -- i.e. an overestimate even of the true broad sense h2.
With respect to rare variants, Sibling Regression includes the effect of all rare variants and RDR includes the effect of most of them; both methods produce estimates that are much lower than twin studies *on average*. Separately, various evolutionary models have attempted to estimate the selection parameter on a given trait and, from that, extrapolate the expected rare variant heritability component; these estimates are also generally very low (~10% of total heritability).
Thank you for engaging in the comments here! I hope it is ok to add some questions:
Regarding (2), is there a good comparison of Sibling Regression and RDR/GREML estimates that might allow us to interpret the difference between them as the extent of GxG and GxE interaction effects?
General question: In the second graph shown in this post, why are the categories labeled: Common/Rare/Missing/Non-Genetic? Shouldn't the last one at most be interpreted as "Not directly genetic", because of the possibility of the above mentioned possible interaction effects?
Sure thing. Table 1 from Young et al. 2018 (https://pmc.ncbi.nlm.nih.gov/articles/PMC6130754/table/T1/) investigated these methods in simulation. You can see that when there is (10%) epistasis, Sibling Regression estimates the total broad sense heritability (50%) whereas RDR estimates the total narrow sense heritability (40%). The same principle applies for GxE but is highly dependent on the structure of the environmental interaction, so it is harder to simulate. And yes, depending on how you think about GxE, it could account for some of the so-called "non-genetic" component.
Thank you for your reply.
Is there a specific reason you speak of Gene-(Shared) Environment Interactions in (3)? Are Gene-(Non-Shared) Environment Interactions not possible for some reason?
How do they account for grades (and to an extent educational attainment) being judged on a curve? It's easy enough to compare your height to your neighbor's, that's measured separately and objectively. But if your neighbor gets the top spot in the class and their pick of graduate schools, you don't and you may not.
They don't
The amount of shared environment of identical twins and even fraternal twins is massive. These results don't seem all that surprising given that context. For physiological things the example of blood pressure is directly related to obesity which obviously has a lot of environmental effects. The whole debate seems to be about what extrema things lie on and the truth is about midway and both sides are declaring victory, which is a sort of boring result. The most interesting thing seems to be that neuroticism is a lot lower than twin studies estimate. Maybe that indicates that depression is a lot more tractable than it seems, or at least that there's some kind of childhood intervention which could make it less likely.
One thing that struck me when reading this was that while the sample was large, it was all British people. I'd naively expect the average amount of shared environment between two randomly-selected British people to also be pretty large. They'll both be living in a developed country with universal healthcare and education, live within a relatively small distance from one another in similar climates with a similar set of environmental contaminants. I would naively expect the effect of environment on outcomes of two randomly selected humans from anywhere on the planet to be quite a bit larger.
If you included people in papua new guinea in the sample the degree of heritability of educational attainment would probably look a lot less. But I should have made clear that all types of twins share in utero environment which is extremely difficult to tease out from genetics and that isn't shared with more distant relatives and that specifically may account for a lot of the different results in these studies, indicating that heritability is actually on the lower end of the currently disputed range
Can anyone help me out with the sentence about "kierkegaard's adjusted number?" Is there some stat principal I'm unfamiliar with?
Presumably from the linked "Emil Kirkegaard on Substack" (https://www.emilkirkegaard.com/p/what-did-the-new-wgs-ukbb-study-show)
Oh, God, now I feel dumb, NOT Soren Kirkegaard, "Kirkegaard's adjusted number" just sounded like "Russell's teapot" or some principal I wasn't familiar with and google wasn't helping
"[T]win studies find that most traits are at least 50% genetic, sometimes much more."
What I find interesting about these studies, when applied to social phenomena (ie - "success", intelligence, etc), is how many confounding variables there are that don't get addressed. Which makes sense. It is functionally impossible to properly stratify the data to do this comparison correctly.
The only thing we can do is put our interpretation of necessarily incomplete data.
Interesting that the only trait to be more heritable in this study than twin studies was height, which I'd expect to be the most heritable of those metrics.
I know everyone here knows heritable is not wholly genetically determined, as it was classically known.
And our host was using heritable as shorthand for genetically heritable. Which is fine, we all get it.
My personal belief with gene/culture interaction there is a large error band for say academic achievement. Put someone with high genetic potential for IQ in a boiler room academic environment...that will be expressed if they have any interest in putting one over on their peers. Likewise, where being well read is seen as nerdy, you ain't reading a lot of books.
"But if IQ is >55% heritable and educational attainment is <10% heritable, does this require us to believe that IQ only barely affects success in education? A certain sort of contrarian might relish this conclusion. "
Many high-IQ pupils end up drop-outs because of the immense tedium of being shackled to a classroom of people of less ability.
Certainly many other traits come into play.
When we talk about heritability of IQ, do we correct for measurement error?
A single person tested with different tests repeatedly on different days under different conditions will not get the same score each time, they might get scores that vary by ±10 points or so. So even if the underlying g were 100% heritable, you'd still expect to see some kind of substantial black bar in the actual observed IQ test scores... unless this is already corrected for somehow.
Associative mating on white blood cell count seems very plausible if white blood cell count correlates with any behavioural traits not well captured in the other measurements. My sense was that everything would correlate with something. Has anyone ever tested directly for assortative mating in a weird-seeming trait like this? Who knows, maybe white blood cell count correlates with phlegmatic personality
According to 23andMe and their enormous data set and thousands of questions they ask their users, they find assortative mating on virtually every possible trait: more than 97%. In fact there were only two things they could find where people did the opposite of assortative mating, which was that night owls are more likely to pair up with early birds, and that people with a poor sense of direction tend to pair up with people with a good one. Every other thing tested, married couples are more alike than random chance.
Could you cite or explain your "virtually every possible trait" and "more than 97%" claims? This ( https://blog.23andme.com/articles/23andme-couples-correlated ) seems to be the source of your overall statement, but the specific claims do not appear to be supported.
"Either people are somehow assortative mating on blood pressure, or else these remain the strongest evidence of some deeper problem."
Beauty and health are probably fairly correlated, and people do assortively mate on beauty.
It might be the last refuge of a hereditarian scoundrel, but it's important to note that this is still just short-read sequencing data. If most of the effects are from longer scale inversions then we'll need long-read (Nanopore/PacBio) data to conclusively sort this question out.
Here's my hypothesis for why longer scale inversions are an especially interesting variable to keep an eye on:
https://cbuck.substack.com/p/can-self-cleaving-dna-resurrect-lamarck
"Only 10 - 20% direct causal heritability, which would be a total nurturist victory."
Not sure about that. Anything other than zero is already a bit edgy. If you put a sign that says "in this house, we believe that 20% of the variance in educational attainment is due to genetic factors" in your front yard, what would people assume about you?
Even the nurturist win seems like a hereditarian win to me. 20% of IQ being heretitable is a lot. And if I'm understanding things right, it'd be especially relevant for the right end of the bell curve. If to get a 1st percentile IQ, you need near perfect genetics and near perfect environment, then only people with parents who have the good IQ genes will even be the in the 1st percentile. That's the core of the hereditarian argument most of the time as I usually hear it.
This just isn't what heritability is. To wit: if you believe that racism based on skin color explains most of the variation in intelligence between people (you shouldn't believe this but it's possible), then you'd expect intelligence to be highly heritable, because under this model the genes that code for different levels of melanin would be the IQ genes. Heritability is mechanism-agnostic and cannot typically settle arguments between competing mechanistic explanations.
> Either people are somehow assortative mating on blood pressure
What are strongest predictors of blood pressure? Obesity, diet, physical activity. All of these are affected by assortative mating.
"But the nurturists declared victory (Sasha Gusev on Substack) because the graph, zoomed out, looks like this"
Well I read Gusev's article when it came out (https://theinfinitesimal.substack.com/p/the-missing-heritability-question) and he does not even show that graph. Instead, he does a deep dive into the different methodologies. He also mentions something not mentioned here:
"This approach [GREML] does not have the advantage of using within-family variation, and therefore will include any environmental influences that are correlated with genetics, such as familial factors or stratification. That means GREML-WGS estimates are essentially untethered from narrow- or broad- sense heritability because they can include entirely non-genetic variance."
"This affects biomedical traits like white blood cell count just as much as behavioral traits, because" - infectious disease history, diet, exercise? These are a few obvious possibilities that an extreme non-expert like me can spot instantly.
“does this require us to believe that IQ only barely affects success in education?”
This is unremarkable. No educational attainment data discerns between an education degree from Swamp U and a physics degree from Cal Tech. The modern push for everyone to get “some” degree hides the signal.
I agree that physics likely requires more calculative intelligence than education, but in terms of institution I feel like that's heavily swamped by the other factors which go to who gets accepted. We can imagine someone who is more than intelligent enough to perform well at a high level program but doesn't get in because they don't have the right family connections/political views/high school/wealth/ethnic background/whatever.
I'd imagine capability is a variable, but I'm not sure of the signal to noise ratio
Physics is for sure one of the most cognitively demanding fields. Even very competent verbal ability people( i.e. lawyers) can struggle immensely with relatively simple problems.
Oh boy let's call it ~50/50 and move on. It's both, if 40/60 that doesn't make much difference to me. I'm reading Thomas Sowell's "Black Rednecks and White Liberals" and one of his points is that ghetto black culture makes a difference in life outcome. Let's ID those cultures that help people succeed and encourage those. (Or visa versa, discourage those that lead to failure.)
How is it? I've only read one Sowell book before and I felt it kind of lost the plot halfway through
I'm about 1/2 way through. Culture is important. We knew this but he brings it home. The first Sowell book I read was "A conflict of visions" which I really liked.
That was the one I read too. I can't remember how, exactly (it was a few years ago now) but I remember having the impression it kind of lost its way from being a relatively interesting book of political philosophy and became more of a "and this is why my political views are best". But I wadnt exactly in the best headspace at the time so its plausible it misread it
Yes he does show a preference for the constrained vision. I wasn't bothered too much by this because I thought he did a fair job of explaining the unconstrained vision. (And I also find myself feeling some preference for the constrained vision.)
You're offhandedly dismissing assortative mating based on white blood cell count as absurd, but I think quite a lot of information about those sorts of biochemical factors might actually be available through scent and other sensory impressions available during pre-sex intimacy, which then influences relationship decisions subconsciously. At the very least, otherwise-invisible blood pressure problems can certainly cause erectile dysfunction.
It amazes me that anyone actually takes Kirkegaard seriously in these conversations.
He works for a white nationalist propaganda organization. He's the most obviously compromised source imaginable. Why pretend otherwise?
The entirety of academia is compromised. I don't see why the opposition should be trusted less. Obviously they shouldn't be blindly trusted, but no one should be.
I don't think you get to cancel him for belonging to a white nationalist organization. I am capable of discussing research findings regarding issues I care a lot about without turning into a lying sleazebag. Quite likely you are too. How do you know a someone who has committed to the white nationalist point of view is not also capable of that? Because his beliefs are Wrong and Bad? There are people, not all of them dumb or demonic, who think *your* beliefs are wrong and bad. And I myself have discovered several times that some stuff *I*believed was wrong and bad. I appear not to be made out of a different and purer substance than Kirkegaard is. What, you never made a discovery like that about yourself?
These issues are directly connected to his political agenda, and one side of this argument is dramatically more useful to him. That vested interest would make his words suspect even if his political agenda was not nakedly evil. And it is.
We're not discussing lawnmower brands. On an issue like that it probably doesn't matter how awful his politics are. But this issue is at the heart of his racist project.
We can't listen to people who put forth data and arguments regarding things directly related to their aganda? Scott posts about AI risk, EA and other topics directly connected to his agenda. How come you're reading his blog?
If Kirkegaard is posting lies about research findings or interpreting true findings in a ridiculous way, point that out here. If he does it in the present discussion, point that out in replies to his posts. If you can't rebut him, why are you so sure he's wrong?
And besides, it is evident from the comments here that there are quite a few posting who are quite interested in the subject and know a lot about genetics and statistics. If you (assuming you are knowledgeable and not just having an ick response) can't rebut Kirkegaard, watch to see if others do. Call on them to.
Scott's not a lobbyist; he's just a person who has opinions.
And if you want me to respond to an argument, find a reasonably honest person to make it. You can't have a useful discussion without good faith.
Calling him names and saying he’s arguing in bad faith just seem to be excuses for not having to engage with the evidence.
This isn't name calling. This is a very simple, but very sound, argument backing up the claim that he should not be trusted or listened to.
Do you disagree with that, or do you just think it's terribly impolite to point this sort of thing out? Do you think we're under some sort of moral obligation to pretend a snake is not a snake?
Because I don't. I think that, if you know someone is not to be trusted, you should treat them as such. You should not pretend otherwise. Pretense is an enemy to the truth.
If you want my position on how hereditary various personal traits are, by the way, I have no clue. Not my field, and not a topic I'm terribly interested in. Usually wouldn't comment on a post about it at all, if not for the obvious.
>These issues are directly connected to his political agenda, and one side of this argument is dramatically more useful to him.<
This applies just as much to the modal academic working in this area (population genetics, quantitative genetics, social psych, etc.)—or, if you have a slightly rosier view of the Academy: to at least a 𝘴𝘶𝘣𝘴𝘵𝘢𝘯𝘵𝘪𝘢𝘭 𝘱𝘳𝘰𝘱𝘰𝘳𝘵𝘪𝘰𝘯 thereof—as to Emil, though.
E.g.: I have, more than once, looked up the author(s) of some paper or another, and found statements such as: "...racism is 𝘴𝘰 𝘦𝘷𝘪𝘭 that, if the evidence turned out to undeniably support the existence of some heritable between-race cognitive differential, we should suppress it." (That one in specific is from Turkheimer—but he's not alone; this sort of sentiment doesn't seem to be at all uncommon.)
I think Emil clearly knows his stuff—just look at his actual work;¹ you'll find no raging race-hatred, only solid science (& much better statistical practice than is typical!)—but even if he was manifestly a poor researcher, or an outright propagandist, his work would probably 𝘴𝘵𝘪𝘭𝘭 be useful as an antidote to the prevailing bias.
(But then, I am 𝘢𝘭𝘴𝘰 a terrible person who believes that e.g. "Blank Slatism" is, of a near-certainty, wildly wrong; and—though perhaps my ability to even notice such things has been lost due to my moral turpitude—I've never once seen Emil say anything like "death to all other races!" or "round 'em up in camps, boys!", or the like... so I'm likely to be a bit biased in his favor, myself.)
¹(rather than, esp., his Wikipedia page; I contribute a fair amount on Wiki, and cherish it, and everyone I've interacted with on the site has been swell... but I must admit: there's a 𝘭𝘰𝘵 of left-wing partisanship to be found thereon. it's rather disappointing; I just stay away from any topics likely to be contentious—which is sort of "ceding the field to the enemy", as it were, I know; but.. I just don't have the heart for it, any more, if you know what I mean.)
Yeah, this has Isolated Demand for Rigor written all over it. How should I evaluate economics papers supporting free trade from think tanks that have a broad position of supporting free trade? How about papers showing benefits from universal pre-K from center-left think tanks?
For that matter, how should I evaluate academic papers from professors at universities that have taken a lot of left-wing or progressive public stances on political issues?
Approximately nobody applies these standards to the arguments from their own side.
Sure it is possible racists can be objective about science, this is all good and well, until he becomes involved with something where the racism actually does matter (like, working for a white nationalist organisation that tries to influence public policy), at which point your defence doesn't really hold up.
So, since approximately every university, think tank, and funder in the world is opposed to racism in any form, how should we evaluate the research coming from people receiving funding from those organizations when they discover racial discrimination in policing or employment or education?
"Ad hominem attacks are totally legitimate when they're aimed at people I don't like" and "anyone I don't like is a white supremacist" is a fun combination.
(I don't know what organisation he actually works for, but the term "white supremacist" label is so degraded these days that I don't deem the accusation worth looking into unless it comes with supporting information and from a trustworthy source who goes out of their way to show they're working in good faith.)
No, like, an actual white nationalist. I don't think he'd object to the label; I think he'd wear it with pride.
Here's the Wikipedia page about his organization:
https://en.wikipedia.org/wiki/Human_Diversity_Foundation
Shrug. Even if he's an actual, literal Nazi who has a shrine to Hitler in his bedroom, that may well make him an odious person but it doesn't mean his argument is wrong.
If you have evidence the study was performed poorly or fraudulently, present it. If you have evidence he has substantially misrepresented it, present it. If you have specific rational errors he has committed, point them out and describe why they matter.
His politics and general moral failings are not relevant. I guess you could argue that people of certain beliefs are substantially more likely to lie, but I'm certainly capable of citing specific examples from my own personal experience of academics who describe themselves as anti-racist lying to approximately the same level of severity, so that also seems like a dead-end, logically speaking.
The only reason you'd mention his character unless that is what was being discussed (like say I was saying he's a great guy and he's an actual literal Nazi) is because you are trying to discredit his character. Which is not what is being discussed here.
Man is literally a lobbyist. It's his job. He should not be treated as an equal participant in a scientific debate; he should be treated as a political spokesperson, or an advertiser. This would be the case even if the thing he was advertising for wasn't, well, evil.
Pretending that bad faith is good faith never works.
I notice you have yet to actually point out an actual lie or error or substantial misrepresentation he's actually made on this specific topic, continuing to make ad hominem points.
(I'm happy to stipulate he's a bad person. But the annoying thing is moral odiousness is not the same thing as incorrect on any specific topic.)
Do you have any actual specific errors or lies/misrepresentations? If yes, this whole pointing out his odiousness seems irrelevant. If not, it's ad hominem fallacy.
It's not about whether he's a good person or a bad one. And it's not about the points he made, either.
If I see him talking up a specific brand of lawnmower, fine. And if I hear a lawnmower salesman giving his opinion about genetics, fine. But you can't trust the lawnmower salesman about lawnmowers, and you can't trust the racism salesman about genetics.
Almost every person in academia has been turned into political activist, just for the left-wing. If they speak against left-wing, they risk losing their job. Why should we assume all academics are arguing in good faith?
Eric Turkheimer (1990) wrote "If it is ever documented conclusively, the genetic inferiority of a race on a trait as important as intelligence will rank with the atomic bomb as the most destructive scientific discovery in human history. The correct conclusion is to withhold judgment"
So he straight admits he will lie. And somehow people like Turkheimer are "good faith" to you.
Wikipedia is notoriously non-biased and never resorts to guilt-by-associationism. Heaven forbid someone thinking it’s OK to be white.
Actual white nationalists often tell that people like Kirkegaard are "IQ nationalist".
Did you miss the parts where Kirkegaard claims than Koreans, Japanese, Taiwanese, and Ashkenazi Jews have higher IQs that whites?
This is only your concern, and you have worse problems than Emil.
I appreciate that you put "Counter-Semitism", "Ethno-Nationalism", and "Eco-Fascism" in the little description that shows up when I mouse over your name; lets me know where you're coming from.
Would be nice if Emil and his arguments came with such clear warnings.
I’m shocked to hear you’re so terrified of words.
Yes, I remain baffled that he gets referenced and linked to without even a caveat.
Well, it's because the stuff he writes is generally pretty well-supported & precisely articulated. You won't tend to find anything worthy of a caveat on the modal EK post, IME.
In case nobody has mentioned these popular something-somethng hypotheses, it's all caused by microbiomes and/or breastfeeding and/or childhood exposure to allergens and/or some seemingly innocuous parenting choice that is actually ruining your kid.
Growth mindset
That's the ticket.
The Missing Malleability has been a big subject in American social science since the release of the federal Coleman Report in 1966.
This appears to me to be classic "nobody agreed ahead of time what evidence would be sufficient to upgrade/downgrade their belief on whatever topic is actually being argued about"
A couple of comments:
Firstly, of all the traits to track, IQ as tested in an IQ test is, I would suggest, a poor example because it requires both an innate intelligence (probably inherited) PLUS a childhood that at least did not suppress it. I would suggest that some parents may find precocious children annoying and from a young age,suppress or punish children that display high intelligence. Some of those children will be damaged and fear displaying their intelligence for life.
Secondly, i have always assumed the 'logic of genetics is similar to a circuit board logic; "If this, this and this, and not this, then that." In other words, a dozen genes may be required to be "on" and three more "off" to produce a certain trait. So tracking the "on" or "off" of individual traits would not give you the i formation about the final trait unless the combinations were tracked too.
In short, it is more complex that our science can currently work out!
IMHO best explanation is that both sides are missing an X factor that is currently undetected. I'm going to commit the narrative fallacy because otherwise this comment is too vague, but my loony idea about what is happening is that there's semi-heritable parasites that we can't detect that are influencing development.
Eh. They're both right, and they're both wrong.
Genes are only half the picture of genetics; you can't implant a human fetus into a live-birth shark's womb and expect it to work, because the instructions are, in and of themselves, incomplete. You also need a "compiler" which matches those instructions; pregnancy involves things like adjusting hormone levels at different points of development, which in turn change which parts of the instructions are being read (sort of). Is this process part of "heritable" or part of "nurture"? It's both.
“But if IQ is >55% heritable and educational attainment is <10% heritable, does this require us to believe that IQ only barely affects success in education?”
Over half of Brits attend universities, most of which are accreditation mills for the gullible and illiterate. My uncle is an English literature lecturer at two and complains that he’s forced to pass kids who lack basic KS3 (11-14 year old) writing skills.
Others exist because the government decided to buff the numbers by forcing degree requirements on professions that used to be devoid of them (nursing and policing, for example).
In other words: educational attainment is a very noisy indicator of anything, and we shouldn’t be surprised by low heritability scores, or dramatic generational variance in heritability.
"Emil and Cremieux argue that we know why this study found low heritability of IQ. It’s because you can’t give 347,630 people a full-length IQ test. So they gave these people a short crappy IQ-like test with a lot of random noise. Past studies estimated the reliability of this test at 0.61 (low). It’s easy to statistically correct for this; when you do so, you find that if the test had been better, this study would have estimated the heritability of IQ at 55%."
wait, what? "It's easy to statistically correct for" a short-form IQ test by making a guess about what it would have been if the exam had been more comprehensive? You're doing a conjecture-based ballpark estimate and certifying it as AA?
Data scientist Vinay Tummarakota had a thread about this article at https://x.com/unboxpolitics/status/1996461533241495568 and I asked him what he thought about the 55% claim, he responded at https://x.com/unboxpolitics/status/1996739017564672286 by saying that the correction "assumes that measurement error is uncorrelated with genetic similarity", which may or may not be true (or at least a good approximation) in this case. He also gave an example where it wouldn't be true at https://pmc.ncbi.nlm.nih.gov/articles/PMC8513766/
Thank you! I was wondering the same thing while I was reading Scott's take. How can you make a correction and just assume the data will move in the direction of your hypothesis? It sounds like we can conclude is that the CI could include the 55% claim, but we should be clear that this 'correction' doesn't actually provide evidence of that claim? (I'm still not exactly clear on the correction/assumption being made.)
That seems like a pretty safe assumption.
Did you look at the example he gave where this assumption breaks down? Are there any obvious factors present in that example which should have led us to expect a strong likelihood the assumption to fail even before analyzing the results, factors which clearly wouldn't cause problems for the 55% claim here?
'Just to provide an illustration of what I mean, Willoughby et al estimate the heritability of IQ using two measures: the ICAR-16 and the vocabulary subtest of the WAIS-R. The vocabulary test has a higher reliability (0.93) than the ICAR-16 (0.81), so intuitively, you'd expect its heritability to be higher than that of the ICAR-16.
But that's not actually what they find. The vocabulary heritability was just 12% while the ICAR-16 heritability was 42%. So in this case, the less reliable cognitive assessment had the higher heritability.'
https://pmc.ncbi.nlm.nih.gov/articles/PMC8513766/
Did you see his caveat to that note:
Ah, I should caveat that technically the reliability measurements I cited are a bit apples-and-oranges here.
The vocab reliability was measured using test-retest correlations while the ICAR-16 reliability was measured using item correlations. I wasn't able to find an exact apples-to-apples reliability measure that was available for both tests.
https://x.com/unboxpolitics/status/1996746698513592570
I don't know what's going on with Willoughby but it seems more plausible that there's a systematic flaw in the experimental design than that two independent sources of noise are correlated.
Sure, but in order for this apples and oranges aspect to fully/mostly account for the ICAR-16 test having higher heritability than the vocabulary test, I'd think you'd have to adopt a hypothesis where not only would an apples-to-apples test reverse the relationship in reliability (it would show that ICAR-16 actually has higher reliability in predicting IQ than vocabulary), but would reverse it by just the right amount to account for the 42% heritability of ICAR-16 vs 12% for vocabulary. That certainly could be true, but unless you assign pretty much all your credence to that hypothesis, you should be assigning some significant amount of credence to the idea that there's something else going on.
You seem to be saying in your last comment that any alternative (besides some other systemic flaw different than the apples-to-oranges reliability comparison) would be the unlikely idea that "two independent sources of noise are correlated". But I would imagine there could be plausible explanations other than random noise for why each of the tests are only partially reliable as IQ predictors, which might allow the relative heritability of each test to differ from their relative reliability. For example, some models of IQ suggest it's about nonlinear relationships between a lot of distinct mental abilities (including learned tips and tricks in solving certain kinds of problems), a bit like the network theories of mental disorders discussed in an old SSC post at https://slatestarcodex.com/2016/12/14/ssc-journal-club-mental-disorders-as-networks/ . In this case, it might be that lack of reliability is due to a kind of one-sidedness where the test gets at certain abilities relevant to IQ but not at a sufficiently diverse swath of them. Then two different tests could be one-sided in different ways, but the one-sided abilities in test A might be more heritable than the one-sided abilities in test B even if test B was more correlated with IQ overall.
edit: I realized I was mixing up reliability and accuracy in the last paragraph above--if we just want to know how well a test predicts g, that would be its accuracy as a measure of g. But doing some quick searching, the page at https://www.accuracyresearch.com/blog/accuracy-vs-precision-vs-reliability/ says that reliability "encompasses both accuracy and precision", where precision would just measure how consistent a person's scores would be on a test if they retook it a bunch of times (with the analogy of a bunch of arrows that are tightly clustered in the same region of a target even if it's far from the bullseye). Does anyone know the definition of "reliability" that would have been used here, whether it could be influenced by how accurate the two tests are as a measure of g, as opposed to just their precision?
Some other sources seem to say that reliability is sometimes just used as a synonym for precision. But even in this case, I think it could be that a less reliable/precise test is more accurate as a measure of g--in terms of the arrows/target analogy, one group of arrows could be more tightly clustered but with an average farther from the bullseye, while another group could be less clustered but with an average closer to the bullseye. So this might still allow for something like the idea I talked about above where both tests are one-sided in different ways, but the one-sided abilities in the less precise test are the more heritable ones.
Maybe dumb question - but have they considered mother-child epigenetic influences during pregnancy? This would show up as heritability in twin studies, but would not be captured in GWAS
> But they couldn’t do a twin study, because most people in their sample did not have twins.
Why not? Their sample is 350,000 people. With identical twins at 1 in ~400 births, there should be around 800 people in the sample who have identical twins. With fraternal twins being more common than identical twins... is a 1,500-person twin study just not a viable concept? How many people have been in other twin studies?
The UKBB probably has many sample members who are one of a pair of twins. But it doesn’t sample the other twin (except by chance, and that’s too rare to be useful).
> (except by chance, and that’s too rare to be useful)
What is the role of chance here? I calculated for another comment that the sample includes about 0.6% of the population of England. A bit more than one in every 200 people.
At that level of coverage, if inclusion was random, I wouldn't expect much in the way of family links between sampled people, but that doesn't seem to have been an issue?
If 0.6% of twins in the sample have their fellow twin in the sample, that is much too few to be useful.
Sure, I'm not challenging that. I'm saying that coverage appears to be nonrandom because the study had no troubles finding relatives within the sample base to do their measurements on.
I guess if we assume that everyone has 5 "relatives" and coverage is uniformly random, 3% of people in the study will have a relative in the study and we can break that down into ~5000 pairs of relatives.
How is the sampling actually done?
Definitely non-random - it’s focused geographically around a small number of health centres. But even if the 0.6% figure were say 6% because of this, it would still be too small.
If it's driven by volunteering, I might expect identical twins to be much more overrepresented than that on the grounds that what seems like a good idea to one identical twin is overwhelmingly likely to seem like a good idea to the other one too.
(I'm not saying this applies, but it did seem at least somewhat relevant.)
It's not like they have every trait measured for every of these 350k people. For some traits the number of people is small.
> That is, if I and my neighbor are 50.001% genetically similar, and I and my other neighbor are 49.999% genetically similar, how much more do I resemble my first neighbor than my second neighbor?
Do you live between your dad and your mom?
Not sure what metric of genetic similarity is used here, but I've heard that reptiles are about 50% genetically identical to humans.
Yes, I'm aware in broad strokes of what you're referring to.
Mostly I'm just making a joke. However, in any context where "50.00% similarity" might actually hold between two humans (with "similarity" being contextually defined), I'm pretty sure that those humans must have a parent-child relationship.
(Full siblings are 50% similar (in a sense) on average but not exactly. A parent-child relationship involves exactly 50% similarity (in the same sense, but not in every possible sense).)
Assortative mating on blood pressure doesn't strike me as that crazy if you assume it's actually getting at neuroticism, SES, diet, or some other variable.
What makes us think people do not assortative mate based on white blood cell count and blood pressure? These are health indicators! Many of them flow directly into beauty, family life, economic outcomes. And having a healthy partner makes a huge difference for your quality of life.
"But if IQ is >55% heritable and educational attainment is <10% heritable, does this require us to believe that IQ only barely affects success in education?"
sqrt(0.10/0.55)=0.43, which seems like a pretty healthy effect size?
Very much not any sort of expert on this, but does this account for epigenetics and the possibility that my parents' nurture (as opposed to my direct nurture) might affect my outcomes?
can be mitigated by looking at physicists and mathemathician professors whose children also are same. ttps://pmc.ncbi.nlm.nih.gov/articles/PMC9755046/#:~:text=Combining%20national%2Dlevel%20data%20on,their%20scholarship%20and%20their%20reproduction. https://www.reddit.com/r/science/comments/zt5yok/professors_are_up_to_25_times_more_likely_to_have/#:~:text=Professors%20are%20up%20to%2025,r/science If one needs to look at heritability, we need to look for very specific and hard to game measures. Athletic performance and becoming scientists are the places to go. ~Niksphy9
Tweet from Pinker linking to the article:
"The Good News Is That One Side Has Definitively Won The Missing Heritability Debate, reports Scott Alexander @slatestarcodex. Actually, the real debate was won long ago: heritability of intelligence is substantially above 0. There can't be an exact "correct" estimate, because heritability is a proportion and it mathematically depends on variation in the sample. Also obvious (at leas to me): GWAS has to underestimate heritability compared to adoption & twin studies, since the latter estimate effects of the entire genome with all its interactions, & the former is capped by the number of measurable genes. (Still nice to see the gap closing.)"
https://x.com/sapinker/status/1996282777650839924?t=9aR_I-dqN0-0Ww7DsPmPrg&s=19
The main issue is what are we going to learn / have learned after all this patient work
I'd argue not a lot more than. different traits are differently heritable with the exact numbers still to be nailed down. Or equally different traits are differently non heritability conditioned with the exact numbers still to be nailed down.
But if we did nail the exact numbers down do any of us expect the percentages on most traits to be game changing in practical or research terms.
As expected, Scott got lost in minutiae while missing the Crux of the debate:
- Nurture's observed positive impact on variation is at its MAXIMUM, rich parents are spending everything on anything to boost their kids. If a factor like height is 20% Nurture, that is roughly the MAX which Nurture could ever add to height
- Heritability is generating the MINIMUM possible variation, because of Regression towards the Mean among dozens of genes, numerous alleles interacting; we are basically mutts-descendant-from-a-genetic-bottleneck, which means we do NOT display the upper-bound of heritable variance; our heritable variations are at their minimum.
So, if Nuture can, under ideal circumstances, only generate +10 IQ per standard deviation observed, while rare alleles are at MINIMUM generating +5 IQ per stdev, that leads to the question: "What is the MAXIMUM variance possible, when heritable traits have been selected in plants and animals?" How different is a cow from an auroch, in standard deviations of auroch?
I'm still confused by the reliability adjustment. The test in question is the UK Biobank Fluid Intelligence Test. Based on it's supposed to measure fluid reasoning, not g[1]. In factor analysis, fluid reasoning is a separate factor from g with separate, oblique loadings. It is thought to have its own heritability. This breaks the assumptions underlying the reliability adjustment, namely that all variance is explained by either g or noise. The test likely loads on g and fluid reasoning.
It's also worth noting that the test lasts 2-minutes and contains 14 question. It's entirely plausible that it's loading on many other higher-order and test-specific factors.
Note: My understanding of g and factor analysis mostly comes from a cursory review of John B. Carroll’s Human Cognitive Abilities: A Survey of Factor-Analytic Studies (1993). Happy to be corrected if I've misinterpreted anything.
[1] https://biobank.ndph.ox.ac.uk/ukb/refer.cgi?id=2029
> Twin studies, adoption studies, and pedigree studies overestimate this because of assortative mating and population stratification
And presumably because of partially-controlled environmental variation: families from the Minnesota Twin study families would be more environmentally similar than randomly-selected American adoptive families, American families writ large, or families worldwide
The article sets up a false dichotomy that the main difference between the "hereditarian" and "nurturist" positions is the degree of heritability. The true "nurturist" position is this: Behavioral characteristic differences among people may or may not be influenced by heredity. However, family, twin, and adoption studies are based on false assumptions and other major problem areas, and are therefore unable to detect possible genetic influences. Claims that causal genes have been found at the molecular genetic level are questionable. "Heritability estimates" are highly misleading and should be abandoned in all areas of human behavioral research. Much more, but I will leave it there.
The elephant in the room is that despite nearly all governments in last century being anti-hereditarian yet the gaps are still here.
Why are anti-herederatians are busy arguing? Do they need to attract general population votes so their interventions be made in practice? No, they already had total political victory and had got almost every law they wished and gaps are still here. So they aren't even trying to find interventions that work and concentrate on badmouthing hereditarians.
If anti-hereditarians were right, they would be instead of ridiculing National IQ lists be busy producing lists of which Baby Einstein Toys provide best value for the dollar spent.
"A certain sort of contrarian might relish this conclusion." Yup. Count me in. There are soooo many problems with IQ tests. Starting with the fact that they do not measure intelligence (because we don't have an independent objective definition of what intelligence is).
"The biomedical traits confuse me the most; it’s still hard to square the twin studies with the sib-regression and molecular estimates. Either people are somehow assortative mating on blood pressure, or else these remain the strongest evidence of some deeper problem."
I have a suggestion. All these people are still testing for main effects: the influence of heredity vs. non-heredity. They aren't going to pin this down until they start testing for an interaction. The formal definition of an interaction is "When the impact of one variable on the outcome variable depends on the impact of another variable." Impact is defined here as the correlation of a change in value in the independent variable with a change in the dependent variable. How correlated the DV is with a change in an IV depends on a change in another IV. That's an interaction.
In other words, heredity is more important when you hold environment relatively stable. When it isn't ("I grew up in a war zone") then it declines in importance. This means you have to test across childhood in different extreme environments. I suspect that those long black bars indicate variation across environment (I'm guessing some variety of poverty). The sample was restricted to "British people", did they include immigrants? I predict that if you extend the sample to more diverse regions around the world the black bars would grow. Even Height might fall to somewhere in the 60's range (due to variations in diet).
Note that I am *NOT* claiming that diet is more important than genes in determining height. I'm saying that diet influences how important genes are in determining height (and vice versa). If you are starving, it doesn't matter who your ancestors were, except to the extent that you are the tallest corpse in the mass grave (and your kids are dead too, so no one is inheriting anything).
This implies that genes should be becoming more important as time goes by, because standards of living are improving globally. One day the hereditarians will be right, but it hasn't happened yet.
I'm not gonna be convinced by either side of this debate until someone shows me an IQ test that is universally applicable across language groups, easy to administer, and does not respond to having taken similar tests in the past.
There is almost certainly some sort of general intelligence that is to some degree determined genetically, I just find it hard to believe that any of the IQ tests I've seen measure it. EG, I've participated in some number of studies involving IQ tests because I returned to college as an adult in a place where people care about this and signed up to all the database programs at said college, so I kept getting invited to these studies and: I notice that you can boost your scores significantly by
Being well rested
Having just the right amount of caffeine in your blood
Having taken a similar test before
Having the lights in the room be less distracting
Having the test printed out on a printer with a certain DPI/On an OLED vs. LCD screen
Having enough background noise but not too much background noise
I don't know, Venus being in retrograde or something
These tests are noisy as shit; they probably are not phrenology in 2025, but they aren't anywhere close to eg. a blood panel in terms of predictive power. As it stands, I don't think they are good enough to definitively isolate signal from noise.
Retest correlations for IQ are > 0.9 so those factors can't plausibly matter much. Yes IQ tests don't measure intelligence perfectly but don't pretend like intelligence isn't real and doesn't vary between people. Some people are very clearly much smarter than other people and it's not because they're well rested. There is no set of environmental interventions that could ever ever ever turn either one of us into Einstein or von Neumann. That's just common sense. Squabbling over exactly how accurate the tests are is missing the forest for the trees.
In my view comments like yours are the equivalent of sticking your fingers in your ears. IQ is real, significantly genetic, and predicts many many life outcomes. If you're going to argue against that then you might as well be a creationist.
I am not sure your retest metric is accurate is the thing. I have never seen any truly adversarial work on the subject but I also don't care enough to look deeply. If you have such a study please show me and I promise I'll at least read the abstract and the methods. As it stands, IQ testers testing their tests and getting back an A- doesn't move me.
https://www.sciencedirect.com/science/article/abs/pii/S016028960300062X
https://labs.la.utexas.edu/tucker-drob/files/2024/02/Breit-et-al.-2024-Stability-of-Cognitive-Abilities.pdf
Thanks for putting up instead of shutting up:
I've only gone through the first study so far, and it fails to address my objection: every person chosen in the sample shared and language and cultural group (as far as I can tell, I had to find the study elsewhere and they have not published their demos or raw as far as I can tell), the tests chosen were exactly the tests I objected to so it is unsurprising the results I think are bogus were bogus together, and the administration was shared across each test and each testee (hahaha balls) in the same time unit, their methods were not pre-registered, and they did some statistical fishing that is ok in a soft science ala sociology or economics but I would find fucky in a hard science.
This next bit is unfair and not being considered: Sample size == way too small. I think this is just how it is though, current IQ tests are way too expensive to run well to get a significant size. I am electing to ignore this, but in any other situation I would reject out of hand a sociological study with +-500 participants as insignificant.
This paper is not so bad it prejudices me to the concept, but it fails to move me from my position. I am done with my coffee though, so the next paper will have to wait.
>every person chosen in the sample shared and language and cultural group
Yeah this is a standard bad-faith trope that people who have an ideological opposition to IQ use. Cultural bias is precluded by ensuring that measure invariance isn't violated. If you really want to understand this read "The g factor" by Arthur Jensen. This has been debated by people smarter than you for 100 years. You're not going to come up with an objection that hasn't been thoroughly investigated and dismissed.
> it fails to address my objection
What objection, where? Your initial objection was that retest reliability is low. You said nothing abut cultural fairness, which is addressed with measure invariance. This is clear goalpost moving and confirms my suspicion of bad faith on your part.
Like I said, this is missing the forest for the trees. You can nitpick various things about IQ but that's all you'll be doing. It's the single most well-validated measure in all of psychology and it's very clearly consistent with the large-scale social patterns that one finds in the world. If you don't want to accept that then as far as I'm concerned you can go sit with the creationists. Reality doesn't go away just because you refuse to believe in it.
>Yeah this is a standard bad-faith trope that people who have an ideological opposition to IQ use. Cultural bias is precluded by ensuring that measure invariance isn't violated.
Yeah, and smarter people than you have thought it was a valid objection for just as long.
You can't just say "This effect has been controlled for" and then not explain how, and then expect me to just ignore the obviously visible results of said effect.
Re. the rest of your post: If I pick all the nits of a study and then it blows away in the wind like a ghost, whoops, it turned out those were load bearing nits, pardon me if I don't construct my entire value structure about it.
This is banged out on a 15 so It wont be deep but listen: I am not a heritability denyer, on account of having eyes. I also am not a credulous IQ hyperbooster, also on account of having eyes. if you want me to get on the IQ determinist train, you are gonna have to provide more evidence than one questionable study of less than 500 people and an un-preregistered meta-analysis.
The biggest predictor of success is the practice test. Take many and your score will go up. HOW to take any test is as important as the test Qs.
Hello Catholic exit polls?
I'm struggling to see why there is this qualitative distinction between "hereditarians" and "nurturists" if everyone agrees that most traits are somewhere between 30%-70% hereditary. Like, yeah if we're doing quantitative utilitarian calculations on whether a given environmental intervention will be worthwhile then a difference between a particular trait being 30% environmental and 60% environmental is going to give us a 2x coefficient on the possible effectiveness. But broadly it seems like you some of each trait can be improved with environmental changes, and some of it can't unless you resort to eugenics or gene editing.
Why would people categorize themselves or others using these labels? I would expect a "hereditarian" to be like 80%+ or a "nurturist" to be 20%-. If everyone's in the middle then why is there so much fighting?
Because it's a proxy battle for the culture war. Hereditarians like to say that racial differences are unalterably genetic so all these expensive education programs are a waste of money and blank slaters like to call hereditarians racist so they can justify their redistribution programs. If everything is genetic then progressive arguments about racial oppression fall apart.
Why should one care for racists, if hereditarians are right, its good news, we can just incentivize smarter people from poorer countries/ groups to have more kids over 50 to 100 yrs. Smart people having more kids is the solution and the reason for smartness to increase in human evolution . Breeding works even in animals and plants. More smart, more non zero sum game dynamics and no one will have to care for racists.
Virtually all accusations of racism are made in bad faith. It's become little more than a blanket defense for bad behavior. The concept has no place in productive conversation.
you’re correct that understanding race is real doesn’t mean we have to go full Hitler, but it’s absolutely assbackwards to then find the conclusion: “this means we just need MORE redistributionism!” No, we need eugenics. Conscious, explicit, ethical, and thorough eugenics.
It’s also hilarious that you basically said everything racists say is right, but being racist is still bad (?)
Well there’s the question of whether intelligence is really what we want to optimize for.
There’s a good argument (and is the one most accepted by psychologists who use IQ in diagnosis), that IQ differences are so small that they only really serve for negative diagnosis not positive ones.
So that, the difference between someone with a 110 and a 130 isn’t the same as the difference between a 80 and a 100.
Yeah, eventually everyone will end up back at the "most important things are somewhat heritable but environment is also important."
I really don’t understand this debate. We have separated twins at birth. Despite a wide difference in rearing, they are still found to be nearly identical. The only thing they share in common is their ancestry… which, therefore, is the important factor.
That’s it. There’s your conclusion. Anything else is ridiculous until proven otherwise.
I am autist enough not to care for what racists have to say on this topic. To me, its quite common sense that eugenics works in plants and animals. we call that breeding. We need more numbers of bio computer to solve problems, we can do with more smarter people . Without a single new idea , even 100 yrs ago, if we said, ok, smart parents should have more kids from all poorer communities, would it be bad?. It can be done even now. People want equality among groups, it can be done. Lefts denial of genetics is troublesome as eventually science will catch them with their pants down . And the history that will be written later will be one of discrediting them entirely and of potentially keeping humanity backward for couple of generations due to moral panic. Coming from India, Here the DEI is insane enough and getting worse every day. And no one believes in IQ tests/working memory tests, one doesnt know why, thats easiest test requiring least preparation to identify talent universally across populations.
Huh, I wouldn't be surprised if white blood cell count somehow correlated with how often and for how long you get sick (though I dunno in which direction), and it doesn't sound that absurd to me that people would assortatively mate based on that to some extent; and blood pressure correlates with diet and anticorrelates with physical activity and people pretty definitely do assortatively mate based on that. (I'd expect the author of "Society Is Fixed, Biology Is Mutable" to have figured this out too! Or am I missing something?)
I still think IQ is almost entirely genetic. I've known lots of cats, and not a single one has managed to learn arithmetic.
I remain confused by heritability studies, including this one. How do they account for prenatal influences? Isn’t it a misnomer to call it heritability?
Why not perform this research on dogs? There are hundreds of recognized breeds with distinct characteristics: size, color, coat, conformation, even personality (retrievers, herders, and trackers have very different instinctive behaviors). These characteristics are unquestionably hereditary - the very definition of a breed.
The characteristics assessed in these studies could be measured for dogs (except smoking, education, and number of children). Dog generations are much shorter than human, so inheritance could be measured. Breed-mixing has become popular (e.g. "labradoodle", "bullwhip", "cava-tzu", "schnocker"), which offers lots of experimental data.
While I personally think that would be an interesting study, I'm not sure it would help much to resolve the human question. The genetic effect is only half of heritability (the numerator, in particular). The other half is environmental effects. Even if one can assume dogs' genetic effects are similar to humans', it would be too much to assume the environmental variance is similar.
Dog breeding is a pretty good demonstration of the ways I'd expect effective human eugenics programs to go badly. (None of the ones we've ever had operated at enough scale to do anything but let some bureaucrats mistreat some unfortunate people.). We probably end up with people who are 6'5" and blonde and chiseled and get great SATs, but are missing all kinds of other things that weren't optimized for / were traded off against to get there, the way Dachshunds are *really Dachshundy* in ways that often cause them back and other problems.
Which of the following phrases is more correct?
A) "The heritability of IQ might be about 15%."
B) "The heritability of IQ at a particular concert on November 2, 2024 might be about 15%."
The answer is B: because the stated implication is that the statistic is for a known population at an unrepeatable event, although the idea that someone was running around giving IQ tests is implausible. A, meanwhile, is completely wrong, always, whether written or spoken, because it carries an inference that a rule might be found for all human beings at all times. Heritability is a measure of differences (if there are no differences, then the heritability of that trait is 0%) in a trait's expression for a particular population (and only that population) at a particular moment in time (the time of measurement, and only the time of measurement) that can be attributed to genetics (and no-one knows how much of an IQ score is attributable to genetics, and heritability studies provide zero information on this). It is NEVER generalizable. The misunderstanding of the term "heritability" isn't just a source of confusion, it's the foundation of the entire political project to find out if one group of people is genetically smarter than the other. It's such a badly named concept that it ought to be retired. A good source of clarity on this is the short book The Mirage of a Space Between Nature and Nurture, by Evelyn Fox Keller.
I deeply appreciate you sharing that with me; your input https://www.ez-passde.com is genuinely valuable and helps me immensely.
I haven't particularly followed this debate, but shouldn't the people attempting to find "explanatory genes" (what year is it, 2003?) have come to the extremely obvious conclusion of gene interactions dominating the traits, where no particular collection of QTLs linearly regresses onto height in the same way you can't get a faster civic by replacing your airbox with your mom's lycra lampshade?
Of course people assortative mating on blood pressure. Blood pressure correlates with which activities you prefer, what drinks you like, how good you are at sports, all sorts of things. And it's affected by diet so partners will be even closer then they should be according to their genes.
Scott, is it possible that heterogeneity across researchers—particularly in how “nurture” and “heritability” are defined and operationalized—is contributing to conflicting conclusions?
I remain unconvinced in the absence of clear, reproducible demonstrations of explanatory and predictive power—ideally evidence showing the ability to reliably engineer predictable outcomes under controlled conditions to account for the "non-heritable" factors. To date, I am not aware of such evidence. Without standards of prediction, control, or constructive validation, these claims risk resting more on interpretive frameworks than on rigorously testable theory. In that context, I worry that methodological and philosophy-of-science standards may be weaker than acknowledged, and that disciplinary siloing plays a larger role than is often admitted.
What is the mechanism I ask? It can't possibly be hand-waving.
But this still doesn't answer the important social questions:
1) are the differences between populations in socially relevant traits like IQ due to genetic differences or environmental differences?
2) are the differences between populations in social outcomes like education level or crime rate due to genetic differences or environmental differences?
These two questions are what people really disagree on and this paper does not advance that debate. Many hereditarians will see that individual differences in IQ are ~30-50% genetic and then suggest that differences between groups are explained to similar or same degree by genetics.
This may be either really stupid or really obvious, but has anyone tried doing one of those population simulations (like those wolves vs. sheep vs. grass things) for how much heritability vs. nurture works best? It seems to me like a "tradition vs. progress" type thing, where heritability helps to keep useful traits that were hard to achieve, and nurture helps to obtain new traits to adapt to the current situation. I guess I'm arguing for a "inherited nurturability" type thing.