I think you can just check: intelligence has been positively selected for over evolutionary time scales. See e.g. https://www.nature.com/articles/s41598-018-30387-9 (random study, haven't done a deep dive, but I believe it's representative).
This doesn't mean things haven't reversed in the past few centuries, but that probably hasn't had too much of an effect yet.
Maybe I'm misreading your argument, but neanderthals and other archaic humans almost certainly selected for intellect as well. Neanderthals had rudimentary art, decoration, funerary practices, almost certainly language, etc. which are all things they developed extremely quickly on an evolutionary time scale compared to their ancestors.
It seems likely to me that the conditions for massively accelerating intelligence are an environment and a species toolkit that makes each marginal improvement in intelligence create outsized gains in reproductive probability. The climate and environment of the archaic past was much more hostile than the one we have today, and with even very basic technical advancements (the use of specialized stones rather than general hand axes, for example) the odds of survival shoot up in that environment.
The human species is no longer facing those conditions, so selection for intelligence has slowed down. Our growing biomass is just the outworking of a process that has more or less been pre-determined since we discovered writing and gained the ability to greatly increase information transmission.
You need to think in terms of ecological niches. Organisms (plants and animals) evolve to fill each available ecological niche in a given environment/ecosystem. For some of those niches, like say pack hunting ruminants for protein and fat, intelligence will be an asset. For others, like turning sunlight into plant food, it won't be. This is what determines if it gets selected for.
I agree, it does seem odd that there would be more selection for intelligence among hunter gatherers than in the past few hundred years, when the economic and social returns available to people of above average intelligence seem so obvious. Human social dynamics are complex, though, so it could be any number of factors.
I feel like selection in general is weaker than it used to be, because almost everyone survives to adulthood, and most people have kids. Maybe the sweet spot for low selection was in the decades after 1945; few young deaths, and also few childless weirdos like me.
Come. Are you seriously debating whether horses are smarter than houseflies? The fact that it's hard to judge close calls does not in the least call into question the capability to judge *at all*.
We can easily observe intelligence in animals, because we define it as "solving problems." They don't have to be problems that are interesting to humans, and indeed they are typically not, they are problems interesting to the animals, like finding food or how to get out of or into some space with barriers, et cetera.
Anyone who has owned animals knows that some are clearly smarter than others. Some dogs are smart, some are dumb, and any dog-owner can tell that. Some dogs figure out how to beg or steal food cleverly, some are dumb about it, some never figure it out. You can fool some dogs easily, some are much much harder to trick.
Some horses are smarter than others, e.g. can figure out how a latch works, or when a gate is likely to be left open, and someone who works with horses can see that easily enough. Heck, I've seen someone who kept rats as pets observe that some rats were clearly smarter than others because they could solve problems interesting to rats -- finding food, shelter, mates, whatever.
It feels to me like you're getting lost in mechanistic details, and overlooking the basic crude observational data, which is that we can clearly see that individual members of a species can solve problems important *to that species* at differentiated rates of success, over time. We can also observe that one species can solve problems interesting to that species, on average, better than another, so we can readily conclude one species is smarter than another.
I'm sure it's much harder to distinguish fine gradations of intelligence of dogs than it is of humans, because we know much less about what's important to a dog, and how dogs think. But those are, if you will, merely implementation difficulties, a challenge to experimental or apparatus design; they don't call into question the entire existence of the trait.
Yeah I think you are still favoring the types of problems that people think are impressive. Insects are great at surviving and solving problems in ways that humans are totally uninterested in or actively hate.
Virtually every species that humans consider to be particularly intelligent is one much like us, i.e. a social-group living species that hunts or is at least omnivorous. Dogs, dolphins, whales, elephants, primates, wolves, etc. So either living in a socially cooperative manner confers greater intelligence (which would make sense, because having to consider not just your own interests but also how the group will react to your behavior is a lot of cognitive load), or we just tend to be particularly impressed with forms of intelligence that we can recognize and that serve our interests.
I take your point in that I've had pets that were clearly smarter or dumber than others. But it's also hard to say in some cases. People tend to consider dogs smart when the dog is easily trainable and likely to do what the human wants. I'd say cats are smarter than dogs but people tend to think dogs are smarter because they're submissive to humans and want to please them. One of my dogs learns new tricks at literally about 200 times the speed of the other -- it's a stark difference -- and is much more attuned to people and how to manipulate them. But the stubborn/"dumb" one has far superior survival skills with respect to being appropriately suspicious of danger, knowing when to hide or approach, hunting skills, etc. If they weren't living with and dependent on humans, I would expect the "dumb" one to have far greater survival chances and the smart one to be dead within a month.
(1) Intelligence confers a survival advantage to humans, and natural selection has tended toward optimizing it.
(2) Intelligence does not confer a survival advantage to humans, and the fact that humans -- every one of them -- are about umpty times smarter than dogs or rabbits is just a weird coincidence.
The problem with (2) is that it asks us to believe that *the* most salient characteristic of a species is mere accident. It's like observing that cheetahs are really, really fast in a sprint, but this is just an accident, and their survival as a species actually derives entirely from the clever design of their spots.
In terms of very recent evolution, there's reason to think western Europe was eugenic between the 14th and 20th centuries because so many violent criminals were being executed and the upper classes were having relatively more kids than the lower classes. Not an exact measure of changes in intelligence, but it's something.
This essay is very interesting, and it led me down a fascinating rabbit hole.
However, it honestly seems like his claims about the genetic effects of capital punishment are wildly overstated, by a factor of ten.
I was very shocked by his claim that men in Late Medieval/Early Modern Europe had a 0.5-1% lifetime chance of being executed, due to an annual execution rate of 1 execution per 10,000 people. When I looked up the source he cites for this claim (L'Occident est Nu, by Philip Taccoen) on Google Books, I discovered that he really does appear to have simply misquoted Taccoen: the latter says that Early Modern England and Malines, as well as 18th-c. France, executed one out of every *hundred* thousand people annually.
(The other source this paper cites for the execution rates claim, Paul Savey-Casart, isn't an independent source but simply the one cited by Taccoen as the source of this claim. Based on the lack of a page number given for the Savey-Casart citation, I strongly doubt that the authors of the paper tracked down Savey-Casart's book themselves.)
It's entirely possible that I'm totally wrong and there's some obvious issue I'm missing--but it really does seem like they founded a significant part of their thesis on a claim that could have been disproven by simply double-checking what their source actually said.
I guess I'm confused why generalizability to other species has any relevance to g as it applies to humans? It seems fairly obvious that intelligence has been strongly selected for (how else could we have gotten so smart?).
A point that it seems you're trying to make is that humans aren't any more intelligent than other species; we all fill our niches equally well. A bee could look at a human with a PhD and think them unintelligent because they don't know how to build a beehive. The part that I think this view misses is that bees (and all other non-human animals) only know how to do one thing. Each species of bee is evolved to build one type of hive and produce one type of honey and collect nectar from one ecosystem's flowers. Humans on the other hand have been able to spread over the globe over the last ~40,000 years into every possible ecosystem and have adapted - quickly and with our intelligence, not with slow natural selection - and it is that capability that we use to justify our intelligence relative to other species.
I don't think anyone makes the claim that intellect is always a positive trait; as people have mentioned elsewhere in this thread, it involves a heavy trade-off between general-purpose intelligence and energy usage, so in most niches it is not something that is especially selected for.
Rather, I think that intelligence is something that became useful in humans' particular niche (perhaps due to our ability to learn socially) and then rapidly evolved to fill that niche.
I'm not sure what you mean by the second part. There is no purpose of intelligence; it's that more intelligent people outsurvived and outreproduced less intelligent people, causing the average intelligence of the population to rise. What is left to explain?
I think your word choice is misleading, here, since you're trying to distinguish the "merely bright" from "geniuses"; describing the former as "low intelligence" is true in the relative sense, but the people you're thinking of are still above average intelligence for the general population
I doubt that. In my experience, smart people overestimate how smart the average person is (because smart people generally live and work with smarter-than-average people). Also, I know a few people who failed the qualifying exams in my PhD program in a natural science. They were all, every one of them, hard workers who loved the subject. They were also all much, much smarter than 105 IQ. They just weren't smart enough for the program.
Among college professors, 10th percentile = 97, 25th percentile = 104, 50th percentile = 115.
I'd expect college professors to be a bit higher than PhDs in the same field because there's an additional filter in the job application process. The average IQ of PhDs has probably declined over time as the number minted per year has increased ~20x since the 50s. I've heard elsewhere that the average IQ of PhDs was 125. That might be a much earlier sample. Or it might be the case that the easiest fields (such as grievance studies) have higher rates of PhDs becoming professors to spread the mind-virus instead of going into industry to do something useful, since there's nothing useful to do with bullshit fields. That could make the average of professors lower than the average of PhDs even if each field has above-average professors relative to the PhDs in the same field. (assuming almost all professors have PhDs)
Anyway, that was a long digression, but if the 25th percentile of college professors have an IQ of 104, it's totally plausible that people with very high noncognitive traits but IQ 105 could get PhDs in most fields.
Less agreeable does not mean grumpy, it means literally less likely to agree with others, i.e. go along to get along, concede on matters, etc. One can be quite cheerfully disagreeable. Higher intelligence means more likely to think that one is right about things, come to one's own conclusions, and less likely to go along with what others say just because they say it.
I asked what he meant by agreeable because what you are describing above is not the dictionary definition of agreeable as I understand it. It is generally used either as "agreeable to x" or just "agreeable", where it means pleasant. I think the latter is along the lines of "he is agreeable to be around" (where the agreeable one is the thing being agreed to).
I strongly disagree that higher intelligence equates to "more likely to think one is right about things." Haven't we had at least 1 post describing the opposite? Less-smart people think they are correct. More-smart people understand that things are complicated and they are better able to appreciate that they may not be. Everyone comes to their own conclusions.
I think you are mixing up tendency to update one's priors and adjust to new facts (which would be correlated with intelligence) and what I was talking about in respect of "being right", which meant that one is not going to say that something is correct, when it obviously is not, just because of social pressure. I was talking about thinking one is right as a matter of sociability rather than being open to changing one's mind based on new information.
Example: you are sitting in a room with ten people and everyone is asked to come up with an answer to some question where there is an obviously correct versus incorrect answer on something factual (maybe a math problem). If the other nine people absolutely insist that the wrong answer is correct, and want you to go along with them, and you insist that your correct answer is right, you are probably not very agreeable.
Agreeable is referenced here in relation to the "Big Five" personality traits, which is what is shown in the table above, and it does indeed show that the cognitive genes are inversely related to agreeableness (i.e. more genes for cognition/IQ make you less likely to be agreeable). But the non-cog genes were positively correlated with agreeableness. The literature I've seen shows there is no strong association either way with IQ. Which is probably because the trait of "agreeableness" has a bunch of sub-traits which probably work at cross-purposes. "Compliance with social norms" is one. But so is empathy and compassion. And also sociability. Honestly, these are all pretty different things and perhaps should not be lumped together at all.
In any event, low agreeableness should NOT be read as "grumpy", which would go more towards the neuroticism factor of the Big Five personality traits (i.e. moodiness and emotional instability).
I was not aware of "agreeableness" or the "Big Five." That was why I had asked for clarification. On the "being right" issue, I suspect I just misunderstood you. I read being right as "being certain" and in my experience, one of the hallmarks of very bright people is the opposite.
There is a tendency to hedge: "it feels like", "is it possible that", "this reminds me of." This is perhaps increased awareness of one's innate (non-numerical) Bayesian reasoning. Interestingly, this can also come across as agreeableness (hedging), but at least some of it is a deeper awareness of the complexity of issues and an ability to acquire understanding without confidence in a solution.
Undoubtedly, high-IQ people feel less need to prove themselves and are well aware of the potential benefits of being agreeable, as well.
Yeah, sorry, I wasn't very specific. Though you caused me to go look more at what constitutes "agreeableness" in reference to the Big Five personality traits, and it turns out that there is quite a bit of disagreement on this and it has many sub-components that don't seem very related. Empathy versus being submissive/easily conceding to the group versus stubbornness versus friendliness -- it's not clear to me whether these traits really have anything to do with each other whatsoever. So perhaps it isn't surprising that IQ doesn't show any strong correlation to agreeableness one way or the other. And it's fairly context-dependent anyway. I'm pretty far towards being extremely laid-back and easy-going in 90% of situations, EXCEPT in instances where I'm expected to agree with something I think is factually wrong, in which case I'm horribly obstinate and entirely disagreeable. That's probably not that unusual, and while I think that would measure as low neuroticism and low agreeableness, I'm not sure.
I don't think this is true, most research I've seen shows no relationship of agreeableness with g. The chart from this study above also shows an inverse correlation with cognitive genes (but a positive correlation with non-cog genes).
> Depression is just bad. I strongly recommend not having it. Don’t even have any risk genes, if you can avoid it. All of you people trying to come up with clever evolutionary benefits for depression, I still think you’re wrong, and so does the genetic correlation with cognitive and non-cognitive aspects of educational attainment.
I am not a doctor or biologist, but given that rates of depression are heavily correlated with ethnic background (at least that's the stereotype; I haven't validated this), I would expect that depression is genetic in nature. And if it is genetic, I would assume that there is some kind of compensatory benefit to keep it around.
I don't know what the benefit is, and it seems pretty all-around shitty to me. But why would evolution specifically give you depression genes? There has to be some evolutionary benefit.
Is it actually that correlated with ethnicity? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1199525/ (first study I Googled, I'm not claiming to have looked into this deeply) says it's 8% of whites, 9% of blacks, and 11% of Hispanics. That seems pretty similar - any differences could be accounted for by social deprivation or culture.
Aren't suicide attempts, or even just completed suicides, a better measure for this sort of comparison?
In my own searching (which admittedly was in a journalistic context rather than an academic one) I seem to remember finding that depression rates seem to suffer from a large number of effects that skew the rates of diagnoses, chief among them the fact that merely being the sort of person who deals with those sort of problems by going to get diagnosed instead of simply stewing in them makes someone a non-typical example. Suicide rates bypass this for the same reason murder rates are great for looking at crime: it's tough to ignore when a person is gone.
I don't know to what extent the study you linked can control for that, but looking at suicide data presents a very different picture, with there basically being a hard split between Whites and Amerindians on the one hand, and Blacks and Latinos on the other. The former group has suicide rates about double the latter. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8155821/
This doesn't necessarily show the effect to be genetic, but it certainly leaves it open as a possibility.
Cultural confounders seem to be a pretty big issue here. I'm thinking specifically of Asian acceptance of suicide compared to Western disavowal of it. Breaking that down further, it looks like religion is a significant component of it. Black and Latino populations are known to be more religious (of the type that is anti-suicide) than White and especially Asian.
I don't know that this is necessarily true, but that's been my understanding.
That's a reasonable point to make, though I struggle to think of what exactly the cultural confounders could be, given:
- Asians have the lowest rates of religiosity, but also very low suicide rates, while Blacks have the highest rates of religiosity, and yet also low suicide rates
- The two races with the largest percent of single-parent households are Black and Amerindian, and yet these two lie on opposite ends of suicide rates
- The two races with the highest rate of gun access are White and Black, yet they lie on opposite ends of the spectrum as well.
- The two wealthiest races per capita are Asian and White, yet they are opposite in suicide rates. The two poorest are Amerindian and Black, which are once again polar opposites when it comes to suicide.
There might be some sort of cultural component, though I'm unsure what research has been done on that, but it isn't clearly evident from any factor that I would expect to influence suicidality
I doubt it's the explanation, but I've read that Black children are actually more likely to have both parents than White children. We assume the opposite because Black children are less likely to have married parents, but Blacks often stay together without marrying. (As it happens, I read this the same year that I did taxes, and two of my clients were an unmarried Black couple with a child.)
An interesting perspective that I'd never considered. If this is true, it would make for a fantastic example (for myself at least) of how sometimes basic statistics can mislead you from the reality on the ground.
I'm not sure what you mean by kids being "more likely to have both parents" -- all kids have two parents, unless one or both are dead. But if you meant LIVING with both parents, though unmarried, this is not true. Black children who live with both parents, including both married and unmarried but cohabiting, is still less than 45%. For white kids that's 80% and for Asians, almost 90%. There's a huge racial discrepancy and MOST black children do not live with two parents, while most children of other races do. The portion of unmarried parents who live with their kids is about the same for all races (6-9%) except Asians, where it's unusual.
Strength of social networks comes to mind. A more 'tight-knit' black family may provide pressures to prevent people from suicide if you have a functional-ish family structure.
Some economist should find a natural experiment and study whether or not converting to X religion or being adopted by X culture actually has a protective effect against suicide (or increases income, or increases life expectancy, or increases life satisfaction, or whatever).
Could Japan being counted as part of Asia be confounding the Asian numbers? They have a history and tradition of ritual suicide for various reasons as part of their culture (less often in modern times but still a thing). My intuition is that people with fewer existential problems (hunger, thirst, shelter, power etc) tend to be less able to cope with the smaller problems that remain, which means I would expect whites to be "winning" in suicide rates while everyone else is lower but still around the same order of magnitude.
There are likely different confounders there - people are more likely to commit suicide when they have the easy means to do so, which probably varies a lot by culture, region, wealth, etc.
No because suicide isn't a good proxy for depression. It has lots of non-depression causes (eg alcoholism, schizophrenia, impulsive action after one really bad day) and lots of non-depression preventors (eg religious people are much less likely to commit suicide, maybe because of fear of afterlife)
While I understand that those make for issues with using suicide as a proxy in general, are they truly an issue for this particular case (racial comparison)?
Schizophrenia has decent evidence of being race-neutral, rates of alcoholism aren't different enough between races to account for anywhere near the gap (except perhaps among Amerindians? I am failing to find any decently reliable dataset that includes them in alcoholism data), and unless white people are comparatively incredibly impulsive, impulse control wouldn't be a differentiating factor.
And as far as preventors, see my response to Mr. Doolittle above. Races with similar rates of religiosity fall on opposite sides of the suicide spectrum. The same goes for income, educational attainment, gun access, absentee parents, etc.
As far as I can tell, considering these factors only makes the racial separation even more stark.
Not all religions are the same, and definitely not interchangeable. Western religions (Jewish, Christian, and also Muslim) have a very negative view of suicide, which I don't think exists for Eastern religions. Japan is well known to have very different views of suicide than in much of the rest of the world.
I bet depression and suicides are correlated with more free time (being less busy), more of the basic human needs covered (food, shelter, sex, security) and less close relationships. And it's not only depression but even health in general and life expectancy.
Didn't Émile Durkheim mention how Protestant regions of Europe had a higher rate of suicide than Catholic areas, as well as how it varied with how industrial or urban the area was?
Well, I'm only a layman, but here is a possible benefit:
Don't mildly depressed people have a slightly more realistic view of reality compared to the rest of us?
So let's say you are in a historical environment and you believe something maladaptive that you derive negative utility from, but not so bad that it kills you.
This over time depresses your mood, enables you to see reality better and gives you a chance to change your mind.
Depression acting as a belief/behaviour switching function.
Historically the lack of motivation wouldn't have been too much of an issue, as you are cold, hungry or horny enough to push past it. The lethargy may even be good in giving you slightly less caloric upkeep while you figure out what you are doing wrong.
Of course, if depression gets too deep, that's a failure case. But hey, we know evolution only aims for "barely good enough".
So it's only labelled depression when the results are bad? In that case, by definition, depression is always bad, as Scott said. I always imagined it as a spectrum sort of thing rather than an on/off switch.
You're right, I should have used the term sadness rather than depression, if depression is the technical term for the failure case.
I kinda see them as the same thing though differing by degree?
That something must be bad to be considered a psychiatric disorder is, I believe, true of all of the psychiatric disorders. Pretty much always, one of the DSM requirements is that it interferes with your quality of life.
I think this labeling issue is at the heart of the problem Scott's trying to get at. If you only label bad cases, then all cases are bad. If you label adaptive cases that don't reach the level of bad (we call it "sadness") in the same category, we might be able to say something meaningful beyond "depression is always bad."
As someone with lots of experience with depression, for me it's not about having a realistic view of reality; it's not caring about reality at all. When I'm having an episode, it's impossible to imagine anything being interesting. It's all just boring.
>Don't mildly depressed people have a slightly more realistic view of reality compared to the rest of us?
I think perhaps you mean pessimistic people, not people with clinical depression.
That being said, experts who are optimists tend to make significantly more accurate (20% more) political predictions than pessimists (Expert Political Judgment: How Good Is It? How Can We Know?, Philip E. Tetlock, 2005).
Somebody did a study with a box with lights and buttons, and asked people to estimate how much their button presses influenced the lights. Most people overestimated, but depressed people tended to get it right.
I think maybe I read about this study not replicating? But I've read about so many things not replicating I might have gotten mixed up.
Key question here though: by depressed people do you mean people at that moment suffering from depression or just people at some point diagnosed with it? Because my experience of suffering from depression is that your ability to analyse and observe is markedly different depending on whether you are suffering an episode or not.
Pain is just bad. I recommend not having it. Yet its absence is maladaptive.
Abraham Lincoln seems to have had depression. I suspect that high IQ + depression (to a degree that isn't disabling) might be more suited to navigating times of strife than high IQ + alternative, particularly for leaders/decision makers.
What's the alternative mental state during lengthy, difficult circumstances? Cold indifference? Cheerfulness? Better to err being in a depressed state than one of emotional deadness or irrational glee.
No, there doesn't. Natural selection works on the entire genome at once; it doesn't individually optimize each and every gene. (If nothing else, the space is so high-dimensional it would need a huge amount of time to do that.) It's completely possible for natural selection to optimize foo (which confers a major survival benefit) which alas brings along bar (which confers a mild disability) for the ride, because the two genes happen to be hooked up in the DNA world in any of the various ways that can happen.
You do realise that agricultural societies work throughout winter? It was the only time that was available for non-routine work, although there were annual activities as well. Winter crops, preparation for early spring planting, finding fuel, making repairs, clearing drainage, pickling and brewing, digging out the sheep, taking in new land... Not doing these things (often communal activities) screws up your chances of prospering and therefore surviving/having your children survive.
Also, the positive genetic argument for depression has to deal with the minor issue that it generally removes your sex drive, which is an evolutionary fail.
Are human beings, biologically speaking, more a product of agricultural society or of pre-agricultural society? I would presume the latter, and so to the extent that depression is biological I could see how it could make sense evolutionarily.
Keep in mind, not all evolutionary benefits remain beneficial in such a different context as the modern world. An obvious example is food, we have access to more calories than we know what to do with and adaptations to save calories wouldn't be advantageous today.
It could be that depression used to have an evolutionary benefit but somewhere between hunter-gatherer and flappy bird it lost that advantage and just became purely negative.
Title on the graph seems wrong, "EA FDR correction tries to impute the results from the main timeline, where Roosevelt was an effective altruist and diverted the resources of the Depression-era US into curing all diseases."
But I also initially thought Scott had made a mistake! (Some of us read the text underneath to figure out what on earth we're looking at before reading all the acronyms in the very dense picture!)
"As of last time I checked, the leading hypothesis was that schizophrenia genes were just really bad, evolutionary detritus that we hadn’t quite managed to weed out."
Is it really? I thought the idea that schizophrenia genes/low levels of schizotypy were somewhat positively associated with creativity (when they don't result in full-blown schizophrenia, which does decrease creativity) was rather widespread. I wouldn't necessarily have expected creativity to correlate with educational attainment, but it's clearly a positive trait that is easy to think would be selected for in many environments - not quite what you'd expect from "evolutionary detritus". E.g. https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1601-5223.1970.tb02343.x or https://www.nature.com/articles/nn.4040, but a Google Scholar search would return much more.
I'm not familiar with this type of research at all. When one says "found genes for intelligence" or "found genes for educational attainment", what does that mean? Is the claim that some portion or portions of the human genome have been identified that, when they look like 'x', 'y', or 'z', make a person unintelligent, of average intelligence, or very intelligent, respectively?
Suppose you have 100,000 genetic tests and you know the educational attainment of each individual. Any sufficiently large randomly selected sample should have approximately the same average educational attainment. If instead of selecting randomly you select by presence of a specific gene, and educational attainment differs between the two groups, you have a correlation. Repeat for the next gene... (Where a gene is a DNA sequence that is, in its entirety, passed on by inheritance.)
Google tells me that humans have between 20-25k genes. That seems like a lot of free parameters, especially when you start to consider combinations of different genes. I'm not trying to dismiss a field I know nothing about out of hand, but I worry about chance correlations with a parameter space that large. What's known about function of these genes? Is there any reason to suspect they would influence cognition? Do different studies using different test populations tend to find correlations between intelligence (or whatever trait you are interested in) and roughly the same set of genes?
A typical study might be looking at 2.5 million SNPs. So if they didn't know statistics and just checked for p<.05, we'd expect them to get on the order of 100k false positives.
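To make that multiple-testing arithmetic concrete, here's a rough sketch (the SNP count and naive threshold are just the numbers being tossed around in this exchange, plus the standard genome-wide significance cutoff; nothing here comes from any particular study):

```python
# Toy illustration of the GWAS multiple-testing problem, assuming 2.5 million
# independent tests and (for the sake of argument) no true associations at all.
n_snps = 2_500_000
naive_alpha = 0.05

# Testing every SNP at p < .05 would yield on the order of 100k false positives:
print(n_snps * naive_alpha)   # 125,000

# Hence the usual "genome-wide significance" threshold of ~5e-8, which is
# roughly a Bonferroni-style correction for ~1 million independent tests:
print(n_snps * 5e-8)          # ~0.125 expected false positives
```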
"but I worry about chance correlations with a parameter space that large."
It was a huge problem when using small (a few hundred) sample sizes and a small set of candidate genes, leading to many, many false positive "discoveries" during the nineties and the aughts. But now the studies use sample sizes of several hundred thousand people, leading to much higher reliability, and yes, the SNP (= position within the genome)/trait associations are replicable, e.g.
("First, GWAS findings are highly replicable. This is an unprecedented phenomenon in complex trait genetics, and indeed in many areas of science, which in past decades had been plagued by false positives. ")
Nice summary of the current state of research here in the link below. I really appreciate the fact that they listed the major controversies about genes and intelligence without commenting on them — but just giving links to footnotes. The fact that they admit there's controversy is a big step forward if you ask me. Lol! They also have a section on Parieto-Frontal Integration Theory (P-FIT), which seems to be 21st-Century phrenology with brain scans. I guess I shouldn't be snarky about P-FIT, but when one studies the history of science one sees recurrent meta-theories arise over and over again.
This next paper claims to have identified 187 loci that have a role for neurogenesis and myelination that seem to affect intelligence. Pretty dense (or maybe my myelination quotient isn't high enough to immediately grasp it).
There was another paper I read a few months back that insisted that there were only 11 genes that are strongly correlated with IQ. But now I can't find it. I would have been interested to see if any of the 11 overlap with the 22 paper.
"Both cognitive and non-cognitive skills are correlated with neuroticism, which I guess makes sense - that’s probably what makes you do your homework on time."
They're both negatively correlated, though, right? So this also means that NonCog has correlation in the "good" direction for each of the Big Five, which makes sense.
Also, I'm trying to think of how to recreate this analysis, but without looking at genes. I guess it would mean that for a given educational attainment (say, college grads), people in the bottom 10% of IQ are more likely to be schizophrenic than the average college grad? Or at least have more relatives with schizophrenia?
The negative correlation with neuroticism makes sense to me in that on the Big Five, neuroticism means more like "emotional reactivity" than like "obsessive."
I'd like to see some exploration of the ADHD–IQ link. To me it seems plausible that ADHD genes might affect intelligence test scores without actually affecting underlying intelligence. This could be investigated by looking at the results of the subtests: if ADHD genes have an equal effect on subtests that require attention (such as digit span) as on subtests that do not (such as vocabulary), that would be evidence against my hypothesis.
I think that sort of perspective on intelligence highlights the fundamental problems of this research - the bedrock for “smart” is subjective.
For example, there’s a very famous story about a world class ballerina (use Google for the specifics) who was terrible in school as a child, so her mother took her to see a psychologist. He watched her and asked the mother what she enjoyed most and what she did in her free time. The mother said she likes to dance so the psychologists simply said then that’s what she finds most interesting - if she can’t pay attention to arithmetic, she’ll probably pay attention to dance routines. And she went on to fall in love with dance and become a world class dancer.
In today’s world, she probably would have been given ADHD medication and sent on her way. She might have thought she was dumb all her life, but it’s impossible to deny that she was a genius in a kinesthetic sense. Her underlying “ADHD” was not a cognitive benefit in the classroom, but was in the ballroom. This genetic analysis does not assess that sort of professional assessment in correlation with IQ.
I also seem to recall that Scott has a pet theory that a lot of ED docs have ADHD and thinks that might benefit them in their environment. But I wonder how their genes would fall on this type of table? Are they simply outliers who take medication or somehow overcame their ADHD to achieve a high level of academic and professional success anyway?
My question is why shouldn’t kinesthetic ability or other things like artistic genius or language acquisition ability be included in “intelligence”. It’s somewhat of a subjective umbrella to begin with. That’s my point.
The point is made below that ADHD might manifest as a latent desire for some specific things like music, dance, math, etc. and so that may imbue a certain drive for talent in that domain but also make for sizable distraction from others.
I'd say the reason digit span and vocabulary are both included in IQ tests is that there is evidence that they are affected by a common factor, which we tend to refer to as intelligence. I am not aware of any evidence that ability to dance has any strong link to that same common factor, and, might I add, having seen a fair number of intelligent people dance, I have not noticed such a correlation.
In my experience ADHD mostly hurts your ability to concentrate on subjects or tasks you don’t find intrinsically interesting or when there is a competing topic to think about that is more interesting than your current task. This is why many people with ADHD end up with grades that differ wildly between academic subjects. If someone with ADHD finds every or almost every relevant subject in undergrad and medical school fascinating and doesn’t have a lot of competing interests, they can do very well.
I can attest to this. I was recently diagnosed with ADHD, but haven't taken medication so far. However, I was one of the more obvious cases since childhood.
I fully recognize myself as a completely different caliber of human when I am in focus/attentive vs when I am not. I also frequently take around 30 minutes to enter a focus state (the sensory deprivation of an exam hall helps, but the unfamiliarity of environment takes a few minutes to adjust)
It is incredibly irritating, because I would often be the person that helps friends prepare for examinations and helps them understand hard concepts. But when it came to the exam hall, I felt like my eyes & brain were dilated for the first 30 minutes. Then I'd slowly come back to the earth and rush through things as I started 'getting it', but never ended up reaching the end.
I feel like a cat sometimes. Giving off the appearance of a lazy organism with the intellect of a rock, until you actually need to be productive.
> hurts your ability to concentrate on subjects or tasks you don’t find intrinsically interesting or when there is a competing topic to think about that is more interesting than your current task.
Yep. I might as well use this as my elevator pitch,
> ADHD end up with grades that differ wildly between academic subjects
100%. I used to get Ds in classes that needed rote learning, including biology, but physics, math and english comprehension/essays came to me really easily.
> If someone with ADHD finds every or almost every relevant subject fascinating
Happened to me with Machine Learning. Went from a lazy bum to someone who was reading papers and textbooks for fun. Ofc, this was until hackernews and reddit pulled me back in. :|
"In my experience ADHD mostly hurts your ability to concentrate on subjects or tasks you don’t find intrinsically interesting or when there is a competing topic to think about that is more interesting than your current task. "
The fact is there's very very few jobs you'll get by being a good dancer, and lots of jobs you'll get by being good at mathematical/logical reasoning. So it can perfectly well make sense to force kids to do the regular school routine -- if they can at least convincingly do it they might have a much better shot than if they try to become a dancer/footballer/whatever else.
Right but economic conditions dictate what schools teach, so skill set demand changes. Dancing might not be prized today, but maybe 10,000 years ago it was for shamanic reasons or mating etc. (you get the idea). So on a genetic/evolutionary front people may have skillsets that are expressions of “genius” but in the current moment they don’t have the same economic value that we equate with “intelligence”. It’s just semantics and context.
The problem is once you start separating "intelligence" from "scores well in an IQ test", the whole analysis rather falls apart, since we don't have any other measure. Really this should be called "genes for IQ test score vs genes for educational attainment."
I would define intelligence as g. Intelligence tests (and test items) differ in their g loadings, so I think it should be possible to associate genes with g instead of with test scores.
Unless you have some empirical measurement of "intelligence" that isn't functionally equivalent to "scores well in a test of intelligence" then any effort to do the separation is going to be mere sterile philosophizing, at best.
"ADHD and intelligence" is a horrible clusterfuck, because even by psychiatric standards the "thing ADHD describes" is a number of disparate things, and some of those things are even more explicitly social-determination than for most psychiatric terminology. ADHD can refer to hyperactivity, to idiopathic executive dysfunction, to very non-idiopathic executive dysfunction, to "boy", to "not middle-class". It's the wastebasket diagnosis to end wastebasket diagnosis.
Artificially low IQ results are a fairly common issue in the sufficiently neurodivergent population (their applicability to autism is notorious, see the writings of Donna Williams and Scott Aaronson on their own results; I suspect some of the negative correlation between IQ and schizospec neurotypes is due to a tendency to think in much curlier lines than those tests permit), but they're much more difficult to pattern over something as broad as ADHD. Some people with ADHD definitely have artificially low results as a consequence of their neurotypes, but these may be better seen through a different lens.
I have a kid with ADHD-Inattentive and bipolar. His ultimate level of educational attainment probably will not be high, although he is pretty bright (good writer, bad at math, horrible at history). His docs have told me that many ADHD kids are delayed in terms of maturity, which sometimes affects decision making (not him, thank god) and sometimes affects initiative, self-discipline, and attention to deadlines (definitely him). So, he may in time manage college, but it won't be soon.
By the way, there is nothing good about having bipolar disorder. What's good is the medication that keeps my son out of the hospital or worse.
Robert Sapolsky mentions in this lecture that a *little* bit of controllable oddness on the schizophrenia spectrum is useful for making usefully charismatic shamans, and that could explain its genetic advantage and persistence. The more extreme, uncontrollable end of the spectrum, on the other hand, isn't an advantage.
(Also, this is the one lecture from this series that Sapolsky has removed from his own YouTube channel. He says some highly cancelable shit in this one, which is a shame because it sure does seem to make sense to me.)
I have often wondered if the various "mental illness" diagnosis that get made are only picking up on the people that are so far out of the norm that they register as a problem. Crochety old people, eccentric uncles, the "socially awkward" and so on probably have similar traits, but at more controllable levels. Millions of other people could have the benefits of the underlying cause, without stretching too far into a problem. We don't have the sophistication needed to identify someone who is *mildly* schizophrenic in a very positive way. We would probably label them as something else or not label them at all.
I am pretty sure that some of the things that are mostly harmless nowadays would end with me being dead if I lived, say, 300 years ago, and if "mental illness" diagnostic criteria were applied, I would qualify.
(Though both my mother and I would be dead due to birth-related issues if I had been born 300 years ago.)
"Again: you find that having more mutational load, more deleterious mutations, increased your chance of schizophrenia, or autism, or low IQ: that strongly suggests schizophrenia, autism , and low IQ are not the far edge of some strategy. Note: people talking about shamans and schiz: you’re probably wrong. Same for autism – not a strategy."
Sapolsky also wrote a book saying zebras don't get ulcers due to a lack of stress well after Barry Marshall showed ulcers were caused by a bacterial infection.
> Depression is just bad. I strongly recommend not having it. Don’t even have any risk genes, if you can avoid it.
Couldn’t depression genes be like sickle cell anemia? Eg one depression gene, say, makes you more introspective, two makes you so introspective you fixate on everything bad?
I’m struggling to understand the chart in ways that translate to common English. Is it fair to say that this chart suggests:
1) Noncognitive genes are more correlated with Academic Achievement than cognitive genes in the presence of high Big 5 Personality stats?
2) Schizophrenia and Bipolar disorder correlates to more academic success in people with more NonCog genes? So Cog genes + Schizophrenia = no correlation to academic success?
3) Noncognitive genes and academic success is correlated with significantly higher longevity as compared to Cognitive genes and academic success?
My understanding of the chart is that the position of the dot on the graph says whether the bucket of genes that determine cog or noncog achievement correlates with each trait. To tick off each point:
1) For the personality traits it depends on the specific trait, e.g. for openness noncog correlates stronger than cog but they're about the same strength for agreeableness (though directionally different)
2) The way I would put it is that the noncognitive genes linked to academic success are positively correlated with schizophrenia and cognitive genes linked to academic success are negatively correlated with schizophrenia. The statement you made isn't really something the graph addresses.
3) Best way to put it is that noncog genes linked to academic success are statistically significantly more correlated with longevity than cog genes linked to academic success.
Hmm okay I think I don’t have a good intuition about how to conceive of the 3 variable correlation going on here (ie Schizophrenia, NonCog, and Academic Success all correlated). So in regards to 1) could you say that based on this data a person with more NonCog genes and a high degree of agreeableness is more likely to have higher academic achievements than someone else with Cog genes but low trait agreeableness?
I think I'm explaining this badly. The two objects of study are "noncog genes linked to academic achievement" and "cog genes linked to academic achievement." These are groups of genes analyzed in earlier studies in the literature and found to correlate to academic achievement.
Another thing about this type of research is that it doesn't really work on the individual level. You can say that in a population (which is sufficiently large, has a normal distribution of genes, and satisfies other simplifying assumptions) the people who are most academically successful are likely to have these cog and noncog genes, and that people in the population who have these particular noncog genes are more likely to have schizophrenia.
If you know which genes a particular person has and want to figure out their expected academic achievement, we basically can't do that because we don't know what effect these genes have on individuals or how that mechanism works; we only know how these genes express themselves in populations.
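If it helps, here's a minimal toy sketch of what "works on populations but not individuals" means in practice -- all the genotypes and effect sizes below are invented for illustration, not taken from any real study:

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 10_000, 500

# Hypothetical genotypes (0/1/2 copies of each variant) and tiny per-SNP weights.
genotypes = rng.binomial(2, 0.3, size=(n_people, n_snps))
weights = rng.normal(0, 0.02, size=n_snps)

# Polygenic score = weighted sum of variants; the phenotype is mostly noise.
score = genotypes @ weights
phenotype = score + rng.normal(0, 1.0, size=n_people)

# Across the whole population the correlation is clearly detectable...
print(np.corrcoef(score, phenotype)[0, 1])  # roughly 0.25-0.3 with these numbers
# ...but it explains only a small share of variance, so the score says
# very little about any single person's expected outcome.
```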
Depression is being selected FOR in Sweden. While women with clinical depression average fewer children, their almost depressed sisters have increased fertility, which outweighs the cost to the clinical cases. Thus, evolutionarily speaking, the optimal point for female depression (not male) is higher than the current rate. That is, selection is increasing the sex difference in depression.
Of course, this says nothing of why this might be the case. Considering that women in every place and country are more depressive than men, this pattern must have originated a long time ago. Would be interesting to study mental illness, insofar as it exists (!?), in primates. Are female primates more moody and depressive than males?
Probably. Even among dogs, it is commonly known that female dogs tend to be more moody and sensitive than male dogs. Though the difference isn't large, male dogs are more prone to being generally happy-go-lucky while females are more likely to sulk, act jealous, react badly to punishment, etc. These are attributed to increased situational awareness and sensitivity to the environment and social cues, which would seem to be obviously important traits for the sex that is primarily responsible for making sure the young don't die. I would expect you would see a stronger sex divergence depending on (1) how long the period of maternal care is after birth before juvenile independence, and (2) level of paternal care-giving (which in most mammals is zero). Also, women become much more prone to depression in the years after giving birth. It seems intuitively obvious to me that increased worry, anxiety, and seeing bad stuff everywhere would all be advantageous for protecting offspring and making sure they don't wander off a cliff, eat poison berries, or get eaten.
I second this. I can see a number of reasons why genes for depression and ADHD might be selected for specifically in women.
A slightly grim outlook might indeed keep your offspring safer. Also, if you benefit personally from the group staying put instead of roaming around, it would make sense for evolution to make you “heavier”, in the sense of dragging a weight. It wouldn’t even need to be all that dramatic to buy you a few extra days in camp. I’d go out on a limb and say maybe this has something to do with humans eventually forming permanent settlements.
Also it’s clear to me that mild ADHD can be really helpful if you’re a mom. I’ve thought about more actively managing my mild ADHD for years, but as a mother of small children I’ve decided to hold off because *it’s just so damn handy to have my brain work this way right now.*
Yeah, it's a stereotype, but you know what they say about those. It is *so* common when I'm hanging out with my couple friends with young kids that dad is letting the kid do [fill in the blank] while mom is yelling "slow down! Watch out! Don't let him get too close to the edge!" etc. Mothers seem to obviously perceive more danger and risk of harm than the dads do, and I'm not sure how enhanced sensitivity to potential bad things wouldn't create an increased chance of depression.
In my very small bubble, I’d say the dads are actually more overprotective on the playground. But all the mental toggling required in childcare clearly wears me out less than it wears out my husband. His superior ability to focus makes him great at a lot of stuff, and he’s a great dad, but after an afternoon with the kids he’s usually exhausted in a way I’m not.
I think my ADHD is working in my favor here while it used to mostly work against me. When the kids get older and I go back to work, I might want to try some Ritalin.
It also fits that most people find their kids intrinsically interesting.
There's a theory that the evolutionary reason for suicide is that when a person kills themselves there are more resources for their relatives. If this is true, it would probably make sense for a person with many fertile siblings to be more likely to commit suicide.
Consider a person with no siblings. If they commit suicide there are no siblings to benefit from the extra resources.
On the other hand if a person with many nephews and nieces commits suicide, there are many relatives that would benefit from the extra resources.
Ever since I learned that Albert Einstein’s son Eduard had schizophrenia to a level that required him to be institutionalized, I sort of assumed that schizophrenia must have something to do with genius.
Also because—take it with the usual corrections for self-reporting—I consider myself to be a genius. My family tree on my father’s side is full of schizotypal personalities. I strongly suspect that being Mennonite, chased to the coldest edges of Europe and beyond because of one’s deeply held esoteric beliefs, selects for this trait.
I remember a post from you about this, Scott (found it: reviewing Surfing Uncertainty https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/). You talked about schizophrenia as a hyperaffective reaction disorder to predictive modelling errors, as opposed to autism which produces hypersensitivity in the models themselves. To put it another way, schizophrenia, in the predictive-modelling paradigm, is a disease that makes it difficult to ignore surprises.
If someone lives their life constantly having their models challenged, and having their attention pulled to every little model-error, it creates a lot of pressure to build better models, right? If your models get better, you get fewer errors and your mental life becomes easier to manage. And those are the models that, if you happen to be able to communicate them well, become PhD material.
On the other hand, if your models don’t get better, either because you were unlucky, not cognitively flexible enough, or the inundation of model-errors was just too much, the result is something that looks more like schizophrenia.
It makes sense to me that cranking up the genetic dial labeled “schizophrenia” increases the risk-reward of cognition: either you will produce surprising insights and breakthroughs that greatly simplify your (and maybe others’) predictive models, or you will struggle to cope with the basic demands of life and lurch spasmodically from breakdown to breakdown. Or both.
The hypothesis, then, is that Albert Einstein had the right mix of schizophrenia and cognition-enhancing genes to use his irritation at model-errors effectively; Eduard got the irritation but not enough of the cognitive mechanisms to handle it.
"I have been saying for years that I think some of the genes for some mental illnesses must have compensatory benefits. Everyone else said that was dumb, they’re mostly selected against and decrease IQ."
One obvious answer to the "if a gene is bad, it can't be selected for" argument is that a gene can be good in a heterozygous genotype and bad in a homozygous genotype. Sickle Cell being the classic example. One copy of the gene gives malaria resistance; two copies gives you a deadly blood disease.
I don't know much about exactly how these GWAS studies are done. But when they calculate the raw correlations between genes and phenotypes are they able to separate out the heterozygous and homozygous occurrences of the genes?
It seems pretty important. For example, in theory, a gene's hugely positive heterozygous effects might exactly cancel out its hugely negative homozygous effects, so that its raw statistical correlation with the trait is essentially zero. (This is akin to the man with one foot in boiling water and one foot in freezing water, who is "on average" experiencing a comfortable temperature.)
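To illustrate with invented numbers (not real genetics): if one copy of a variant helps and two copies hurt, and the allele frequency is in the right range, a purely additive per-allele-count analysis sees roughly nothing, even though the genotype matters a lot:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
allele_freq = 0.25  # made-up frequency of the variant allele

# Copies of the variant per person: 0, 1, or 2 (Hardy-Weinberg proportions).
copies = rng.binomial(2, allele_freq, size=n)

# Invented effects: one copy helps (+1), two copies hurt (-2). At this allele
# frequency the per-copy (additive) slope works out to approximately zero.
effect = np.select([copies == 1, copies == 2], [1.0, -2.0], default=0.0)
trait = effect + rng.normal(0, 1, size=n)

print("additive correlation:", np.corrcoef(copies, trait)[0, 1])  # ~0
for g in (0, 1, 2):  # grouping by genotype shows the real pattern
    print(g, round(trait[copies == g].mean(), 2))
```

So an analysis that only fits an additive per-allele effect could in principle miss this kind of gene entirely, which is the sense in which separating heterozygous and homozygous carriers matters.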
Sure. But that doesn't mean that any of the specific genes contributing to IQ can't have a different effect where the person has only one, instead of two, copies of the gene.
Sickle cell is protective against a disease with a very high fitness cost, whose prevalence increased relatively recently when forests got cleared, creating lots of standing water for mosquitoes to breed in. That's unusual. Most deleterious genes don't have those kinds of benefits (though there are genes causing kidney problems in Africans that also protect against sleeping sickness spread by tsetse flies).
That's actually my point. It is "unusual" for a gene that causes disease to persist in the gene pool without being selected out. And that is precisely why the persistence of a disease-causing gene implies that it must also have a separate countervailing positive effect within the overall gene pool.
Thus, if genes for mental illness have not been selected out of the gene pool by now, we can infer that those same genes must also carry some selection advantage. And the difference between heterozygous and homozygous gene expression is a clear mechanism to explain how the same gene can be alternately deleterious or beneficial.
The exact reason a gene is deleterious or beneficial in a given environment (e.g., cleared forests or whatever) is irrelevant. The prevalence of the sometimes-beneficial/sometimes-deleterious gene will just work itself out through the natural selection process.
This isn't one single gene of large effect being more common than one would expect via de novo mutations. These are lots of genes of small effect, which is explainable via mutational load. Natural selection keeps purging them, but more pop up.
Your theory is that mental illness genes are just random negative mutations that coincidentally correlate with higher IQ. That's possible. But that's the exact argument Scott disagrees with, and I think he is probably right.
Like I said in my original post, your hypothesis can be tested by separating out the effects of the genes in their homozygous and heterozygous expressions. My point is that someone should do this.
Mental illness genes don't correlate with higher IQ. The notable finding is that there are genes correlated with educational attainment but NOT IQ, and those tend to be correlated with mental illness.
I feel like describing this as "intelligence" vs educational attainment may be misleading, since it's actually measuring "IQ test scores" vs educational attainment as I understand it. Not to get into the whole general IQ debate, but to the extent that IQ is useful as a large-scale proxy for intelligence, that becomes less meaningful when you are comparing it to another proxy for intelligence. Given the nature of IQ tests, that makes some of the correlations less surprising.
Re: math PhDs and autism, I am assuming it was written as a joke, but when I read it I was initially like "sounds about right" and then I thought about all the math grad students and professors I know and realized that none of them seem actually autistic. There is definitely something atypical about most math people I know, but it seems varied and generally not autism.
Do people know stats on this? It might also be because I mainly see academics, and teaching feels like something that would select against autism, so maybe there are more autistic math PhDs out in industry?
I remember reading that yes, mathematicians do get positive autism tests at higher rates than normal population. But there was also discussion about the validity of the testing for certain subpopulations. In particular, when testing autism it's usually done with some autism spectrum index. The index is a construct: you check all boxes that apply and the doctor says "you have autism". Of course the idea is that the construct correlates well with Actual Autism, but the index is calibrated on the general population and not on highly selective subpopulations such as math PhDs. I think math PhDs are weird people in a lot of ways, and coincidentally some of those weirdnesses overlap with the general population autism spectrum index.
My anecdotal experience as a Physics grad student was that autism-like traits (including a couple of very obvious cases) were more common than gen pop, but that still translates into a low prevalence in absolute terms - it's not that surprising to find a department without any autistic people?
There is a theory that the evolutionary reason for suicide is that when a person kills themselves there are more resources, (like food) for their relatives. So depression might exist to cause suicide.
I have the theory that this is right except that the gene for suicide is not in the suicidal person but in the mother of the suicidal person. That it is not a gene for being suicidal, but a gene for making your child suicidal. The child is affected when it in the mother's womb.
The advantage of my theory over the older theory is that the evolutionary drawback of the suicide is halved, since the person committing suicide carries the mother's gene only with 50% probability. But the evolutionary benefit of the suicide stays the same.
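Just to make the bookkeeping of that argument explicit (a toy illustration of the claim as stated, not a real population-genetics model): if the hypothetical variant sits in the mother and acts on the child in utero, the child who dies carries it only with probability 1/2, so the expected number of copies removed is half of what it would be if the variant sat in the suicidal person themselves, while the resource benefit to the surviving siblings is the same either way.

```python
# Toy comparison of the two hypotheses (illustrative only).
# "Cost" = expected copies of the variant lost when the affected child dies.

cost_variant_in_child = 1.0        # the person who dies definitely carries it

p_child_inherits = 0.5             # child inherits the mother's variant with prob. 1/2
cost_variant_in_mother = p_child_inherits * 1.0

print("expected copies lost, variant acting in the child :", cost_variant_in_child)
print("expected copies lost, variant acting in the mother:", cost_variant_in_mother)
# The benefit side (freed resources for the remaining siblings) is identical
# under both hypotheses, so only the cost term differs.
```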
We need more on verbal tilt + psychopathology + belief system, digit ratio + GNC + bone structures + Life History, or blood type + hyperopia + intelligence + SES.
Very nice post. A note on the schizophrenia result and false discovery. In figure 4 they are graphing 95% confidence intervals (roughly plus or minus 2 standard errors), and if you look at the confidence intervals for schizophrenia they are very far apart. If you tripled the width of the confidence intervals (plus or minus 6 standard errors) they still would not overlap. Assuming everything is normally distributed (which they have already done; their confidence intervals rely on this), we get a loose bound: even if we ran a billion independent comparisons, we would expect fewer than two false positives this extreme. (This number should not be taken literally, because the normal approximation to the data will not have this level of accuracy, and there are other possible errors in the study. However, it accurately expresses that seeing an effect of this size simply due to a false positive from running many comparisons is exceedingly unlikely.)

In the paper they also list a P_{diff_fdr} < .001, which I think is supposed to be the P value of the schizophrenia result after taking into account the risk of false discovery. I include the simple and loose analysis above simply to demonstrate that even if I am misinterpreting their P value, from the visual of the confidence intervals alone, false discovery seems very unlikely here.
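For anyone who wants to check that figure, the single-comparison tail probability under the same normality assumption is a one-liner (this is just the ±6 standard error bound from the comment above, not anything taken from the paper):

```python
from scipy.stats import norm

# Two-sided probability that a true-zero effect lands more than 6 standard
# errors away from zero, assuming the estimate is normally distributed.
p_single = 2 * norm.sf(6)
print(f"P(|Z| > 6) = {p_single:.2e}")   # ~1.97e-09

# Expected number of such false positives across many independent comparisons
for m in (100, 10_000, 1_000_000_000):
    print(f"expected false positives in {m:,} tests: {m * p_single:.2e}")
```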
There is a theory that schizophrenia is a side effect of human self domestication. Which of course means you'd need to buy into the self domestication hypothesis first but it would explain how we got schizophrenia.
"... whether you so desperately seek societal approval that you're willing to throw away your entire twenties on a PhD with no job prospects at the end of it."
I wish this stereotype that PhDs have no job prospects would die or at least get toned down a bit. There are plenty of scientific fields with good job prospects for PhDs outside as well as inside of academia (maybe not straight biology, though).
Science (or STEM more generally) is the exception, I think, and even there I don't think a PhD is helping you compared to a Masters, given how many extra years it takes
Maybe in computer science. In science masters degrees are usually worthless. Some companies may use it as a qualification for a lower tier job than what you get with a PhD, and mayyybe there's potential for upward mobility, but I don't think it's that common.
S - Academia requires a Ph.D for any of the good jobs (tenure-track faculty or researcher at one of the non-university research institutes), but it produces at least five times as many Ph.D.s as there are such positions to fill. *If* your brand of Science has a strong industrial component (Chemistry yes, Astronomy no), then a Ph.D. is likely to get you a decent job.
T - Lots of industry jobs, but they're almost all open to anyone with an MS - if you put in the 3+ extra years for the doctorate, slight chance you get to be e.g. a Comp Sci professor, more likely you're just getting societal approval and/or personal fulfillment.
E- See T
M - There are jobs for which the Ph.D. opens the door, but aside from the occasional math-professor gig they're mostly for e.g. three-letter agencies or financial firms, which is probably not what you had in mind when you decided to become a mathematician.
Just came to the comments to see whether anyone reacted to this. Is this sentence a joke or does Scott really think that about getting a PhD in general? No job prospects at the end of PhD, what?
If functional mental disorders are primarily due to evolutionary mismatch, then there is no need to explain the benefits of "mental illness genes." While some people are genetically more prone to adult onset diabetes than are others, in the environment of evolutionary adaptation such diabetes was rare. We don't try to explain the benefits of diabetes genes. Likewise, while some people are more genetically prone to functional mental disorders than are others, in the environment of evolutionary adaptation it seems likely that such disorders were likewise rare.
Durkheim's research on the higher rates of suicide among Protestants than Catholics was the beginning of a long tradition of empirical research relating to the phenomenon of anomie, as opposed to cultural embeddedness with strong social roles and bonds, leading to adverse mental health outcomes. Here is a recent study (n = 8446) on how increased exposure to US culture increased suicide attempts among youth in the Dominican Republic:
"The increases in the propensity to attempt suicide for DR youths across these US cultural involvement indicators were both robust and large. For example, the propensity to attempt suicide ranged from 6.3% for those at the lowest end of the range of use of US electronic media and language to 13.3% for those at the highest end of the range of use of US electronic media and language. This central finding is congruent with the lower suicide or suicide attempt rates found for first-generation or less acculturated Latinos across multiple national and regional cohorts of Latinos."
Liah Greenfeld's "Mind, Modernity, and Madness" provides a neo-Durkheimian account that provides a coherent explanation for how increasing levels of anomie in modernity lead to increased rates of depression, bipolar, and schizophrenia. In traditional cultures, with humanity in the environment of evolutionary adaptation being the most "traditional" societies, there was neither need nor opportunity to construct a personal identity. A human being was one's roles. There was no "I" in the modern sense (cf. Julian Jaynes). One was unaware of the water in which one was swimming. Now we are all fish out of water, flopping around, gills desperately sucking in air in an attempt to maintain mental stability. For some of us it is easy, for others very difficult. The genetic material of fish works just fine in the water.
While Scott is not sympathetic to this explanation, I've never seen compelling evidence that shows that functional mental disorders were not due to evolutionary mismatch. Yes, whether or not functional mental orders have increased over time and across cultures or not remains contested. Depending on one's priors, the burden of proof shifts.
But studies of suicide provide less contested evidence that culture is a major influence on suicide rates. As far as I know, all such comparative studies are consistent with greater anomie, greater burden on constructing one's own identity (as opposed to a relative lack of the need to create an identity in more traditional cultures) resulting in higher rates of suicide.
Human beings evolved over many millions of years in diverse physical environments. But with respect to social structure, until the dawn of agriculture and empire, almost all adolescents:
1. Lived in a small tribal community of a few dozen to a few hundred with few interactions with other tribal groups.
2. Belonged to a tribe that shared one language, one belief system, one set of norms, one morality, and more generally a social and cultural homogeneity that is unimaginable for us today.
3. They would have been immersed in a community with a full range of ages present, from child to elder.
4. From childhood they would have been engaged in the work of the community, typically hunting and gathering, with full adult responsibilities typically being associated with puberty.
5. Their mating and status competitions would have mostly been within their tribe or occasionally with nearby groups, most of which would have been highly similar to themselves.
Could the dramatic divergence from the environment of evolutionary adaptation in any or all of these socio-cultural features result in increased "mental illness" for a genetic subset of human populations?
" I've never seen compelling evidence that shows that functional mental disorders were not due to evolutionary mismatch."
It seems to me an unlikely general explanation, although it certainly is part of the explanation for, for example, high levels of depression in modern societies.
But it seems to me quite unlikely that sufficiently severe autism or schizophrenia would either not appear at all, or not be deleterious, in the environment of evolutionary adaptation.
"Their genetic measure of non-cognitive skills... was still correlated at r = 0.31 with IQ"
Note that this is also a *genetic* correlation - the genetic influences of NonCog correlate at r = 0.31 with the genetic influences of IQ, not with actual measured IQ. The same is true for the Cog/NonCog relationship you mention with self-reported math ability and highest math class taken. (Also, assortative mating would inflate these correlations.)
Delay discounting is worse than alcoholism? I interpret this not as meaning that delay discounting is bad for success, but that high educational attainment is bad for success. Huge delay discounting = not going to college because college takes a long time (not obviously wrong); no delay discounting = being willing to go to college for 12 years to get a marginally nicer job (obviously wrong).
The way they use correlation assumes that all personality traits have linear effects on whatever they're measuring. If the function relating {number of genes "for" a behavior} to {educational attainment or IQ} has a tall U-shaped (or upside-down-U-shaped) curve, as some might, the correlation results will depend mainly on the outliers. For example, if a little conscientiousness helps you finish college, but a whole lot makes you such a perfectionist that you're likely to fail college, then the "correlation" between conscientiousness and finishing college isn't telling you how strong the effect of conscientiousness on finishing college is; it's telling you something about the skew of the U-shaped function.
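A quick simulation shows how this plays out (the numbers are invented): with an inverted-U relationship between a trait and an outcome, the Pearson correlation mostly reflects where the peak sits, not how strongly the trait matters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(0, 1, n)   # standardized trait, e.g. "conscientiousness" (hypothetical)

def outcome(peak):
    """Inverted-U: outcomes are best near `peak` and worse at both extremes."""
    return -(x - peak) ** 2 + rng.normal(0, 1, n)

for peak in (0.0, 0.5, -0.5):
    r = np.corrcoef(x, outcome(peak))[0, 1]
    print(f"peak at {peak:+.1f}  ->  correlation = {r:+.3f}")

# With the peak centered, r is ~0 even though the trait strongly shapes the
# outcome; shifting the peak to either side flips the sign of r.
```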
For some reason, when people talk about genetics they like to forget that correlation is not causation.
Scott, you are usually vigilant about distrusting "correlation but we adjusted for confounders" studies. This is the same type of study!
What are "genes for intelligence"? In this context, they are genes that are correlated with intelligence. A gene that purely causes black skin (but does literally nothing else) would be counted as if it decreased IQ, because statistically people with black skin do worse on IQ tests.
There is no end to the number of possible confounders here; there is no exogenous source of randomness at all.
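Here's a minimal sketch of that confounding worry (toy numbers, not a claim about any real variant): a SNP with zero effect on the trait, but whose frequency differs between two groups that also differ environmentally, looks "associated" in a naive pooled analysis and shows nothing within either group.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Two populations with different frequencies of a causally irrelevant SNP
# and different environmental means for the trait (all numbers made up).
freq = {"A": 0.7, "B": 0.3}
env_mean = {"A": 0.0, "B": -0.7}

genotypes, trait = {}, {}
for group in ("A", "B"):
    genotypes[group] = rng.binomial(2, freq[group], n)
    trait[group] = env_mean[group] + rng.normal(0, 1, n)   # SNP does nothing

g = np.concatenate([genotypes["A"], genotypes["B"]])
y = np.concatenate([trait["A"], trait["B"]])

print("pooled correlation (looks like a 'trait gene'):",
      round(np.corrcoef(g, y)[0, 1], 3))
for group in ("A", "B"):
    print(f"within-group {group}:",
          round(np.corrcoef(genotypes[group], trait[group])[0, 1], 3))
```

Real GWAS do try to handle this (ancestry principal components, within-family designs), but how well that works is exactly what the disagreement above is about.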
In general: I've often thought that a lot of people go to college and get PhDs just because they are familiar and comfortable with school and scared or unsure about entering the 'real world' (certainly true for me). I've often thought that a lot of the apparently high levels of mental weirdness in PhD programs are largely related to this - who wants to effectively extend their childhood by staying in school, vs who is ready to 'grow up' and enter the real world.
FWIW, I know several people who got Ph.D.s in order to be able to pursue their favorite research. Of course, arguably, the desire to understand esoteric properties of nature is also more childish than the desire to settle down with a family and 2.5 children...
I have some experience living with people diagnosed as schizophrenics. I put it that way because I'm not sure how well understood schizophrenia is and how confident we can be in a clinical diagnosis.
Other than living with diagnosed schizophrenics, I know very little about the condition. My experience is that schizophrenics live in a fantasy world. They are unable to tell the difference between the real world of experience and the world inside their head. I know one in particular who can talk at length about his day-to-day life in Vietnam, when in fact he has never been there. He appears to be unable to distinguish between fantasy and reality.
On the other hand, my impression of highly successful scientists, engineers, mathematicians, linguists, etc... is they can build mental maps that enable them to navigate their special practices. It seems to me they comprehend very complicated systems, such as molecular biology or microelectronics using some type of mapping onto real world experiences.
What I mean to say is what schizophrenics and mathematicians, e.g., have in common is the ability to live in an alternate reality. I would say visualizing the complex folding of a protein molecule is not too far removed from imagining that you are in fact Sgt. Barry Sadler in 1969.
It's amazing how dominant UK Biobank is in genetics right now (at least behavioural genetics). I was at the IGSS conference (https://cupc.colorado.edu/conferences/IGSS_2021/) and more than half the presentations were using that data.
Overall, I'd be worried about confounds here. We have a very noisy measure of "genes for IQ" - noisy because GWAS is noisy, and because the IQ measure itself is noisy (just a quick 12 point test IIRC). Then we deduct that from "genes for educational attainment". What's left? Maybe "genes for non-cognitive skills". But maybe "genes for IQ, that we didn't measure very well". Indeed, "non-cognitive PGS" predicts IQ.... And then there are all the possible environmental confounds. I think I'd rather see a measure of non-cognitive skills and then a GWAS that targets that directly.
However, that is just a lazy first take, and I should stop shooting my mouth off about a coauthor's paper.
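To illustrate the "genes for IQ that we didn't measure very well" worry, here's a cartoon simulation (this is not the actual GWAS-by-subtraction model, just a sketch of the leakage mechanism): if the cognitive effects are estimated with a lot of noise, the residual you label "NonCog" still carries genuine cognitive signal, and a score built from it would predict IQ even if non-cognitive skills contributed nothing.

```python
import numpy as np

rng = np.random.default_rng(3)
m = 50_000   # number of SNPs (toy)

cog = rng.normal(0, 1, m)       # true per-SNP effects on cognition
noncog = rng.normal(0, 1, m)    # true per-SNP effects on everything else
ea = cog + noncog               # per-SNP effects on educational attainment

# The cognitive GWAS is noisy (smaller sample, quick IQ test), so the
# estimated cognitive effects are the true ones plus a lot of error.
cog_hat = cog + rng.normal(0, 1.5, m)

# "Subtract" the measured cognitive signal from EA by residualizing.
b = np.cov(ea, cog_hat)[0, 1] / np.var(cog_hat)
noncog_hat = ea - b * cog_hat

print("corr(NonCog-hat, true cognitive effects):    ",
      round(np.corrcoef(noncog_hat, cog)[0, 1], 2))
print("corr(NonCog-hat, true non-cognitive effects):",
      round(np.corrcoef(noncog_hat, noncog)[0, 1], 2))
# The residual still correlates substantially with the true cognitive effects,
# because the noisy subtraction can only remove part of them.
```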
Counter proposition: EA genes are a superset of IQ genes, so social skills and creative skills would be part of this EA-IQ set. But then we would see Psychosis tilt (lack of autism), Extraversion, Agreeableness, Conscientiousness, Longevity, Feminization (older mothers, tied to GNC) being part of a bundle?
There's a lot of inference to worry about here, but I'm already stuck on this, from the paper: "By construction, NonCog genetic variance was independent of Cog genetic variance (r_g = 0)." What sense of "independence" follows from zero covariance? That's pretty clearly going to create some weird conditional relationships between their SNPs' imputed Cog and NonCog scores to maintain zero correlation. It seems they recognize this in the supplement but they don't really bother to interpret it.
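On the narrow statistical point, zero covariance is a much weaker condition than independence, which a two-line example makes concrete (a toy case, nothing to do with the actual Cog/NonCog construction):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(0, 1, 100_000)
y = x ** 2                      # y is completely determined by x

print("corr(x, y):  ", round(np.corrcoef(x, y)[0, 1], 3))          # ~0
print("corr(|x|, y):", round(np.corrcoef(np.abs(x), y)[0, 1], 3))  # strongly positive
```

So "r_g = 0 by construction" rules out a linear relationship between the two genetic components, not dependence in general.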
I ask this because I follow Charles Murray on Twitter in order to argue with his position that there is a race-IQ causation - has anyone dared to check whether the cognitive genes vary by ethnicity?
Seems like this database would provide quite strong empirical evidence on the unfortunate topic
There have already been something like 3 admixture analyses done... even controlling for skin color and income you get the same result: more European ancestry, better results, and vice versa. Truly horrifying topic to be honest and I was just really disturbed getting into it myself. Hoping to God there's a way out of this mess.
Interesting, does that apply relative to Eurasian and/or East Asian ancestry as well? It wouldn't surprise me if there was something we don't know about the correlations that gets figured out eventually. 'My' theory (Thomas Sowell's theory) has been that historical access to the large east-west landmass of Eurasia and things like density of ports, fertile plains and navigable waterways is the main proximate cause for culturally driven differential IQ results
I understand Murray does not believe that a genetic effect would mean anything about the value of an individual, and certainly not of groups, but I fear the social effect of a widespread belief, whether reality-based or not, that there are population genetic differences. The comments his tweets on the subject get suggest he isn't right to think there's nothing to be concerned about.
Am I reading the chart correctly to note that SNPs associated with higher cognition are *negatively correlated* with conscientiousness (and extraversion, and agreeableness)? That's absolutely fascinating given that educational attainment (and a whole bunch of other traits associated with success, like wealth and income) are so strongly associated with conscientiousness, and suggests at least to me that most of the difference between educational attainment-promoting and cognition-promoting genes should logically have something to do with the trait.
It also seems like a fascinating counterexample to the idea you see brought up often in population genetics that "all good things are correlated". We see a discussion of a g factor, which includes all of cognition, often, and it seems frequently mooted that g itself is part of h, a general health factor. (Also, while I would certainly associate high-cognition-but-low-educational-attainment individuals with low conscientiousness, it boggles the mind to suggest that low conscientiousness, by itself, is associated with higher cognition. Why should that be true? Where's the tradeoff?)
Assuming I am reading the graph correctly, and blue is just 'high cognition', and not something like 'SNPs promoting high cognition but not educational attainment'. I could go back and try to read the chart more carefully, or read the article the chart comes from, but I think I'm too low conscientiousness to go and do so. One read-through is enough.
I've never been tested for ADHD and I don't know if I'll ever bother as my country has a dismal record of treating it, but I strongly suspect I have it. In any case I am extremely hyperactive and have been for a long time.
My life was basically total chaos up to a point. I finished university with a very mediocre result (around 2.4 I think). I aced courses that were interesting to me, and I failed Statistics 101 TWO times until I got angry the third time and got a B+.
Then I was lucky enough to find a technique that works for me - a sort of a Zen meditation where you just sit without moving for a long time. It doesn't remove my ADHD symptoms, but it calms me down a lot and allows me to focus and work.
I am a programmer now and I think there are some advantages, but only because I am treating it in some way - otherwise it would just be a total burden.
The advantages:
- I am better than others at scanning a large amount of code in a search for some obscure problem. In a more general sense, I am just better at scanning: if I have some list in front of me and I need to find something, I do it faster than others.
- When something gets interesting to me, my mind gets totally obsessed and I get genuinely angry if someone distracts me - I have shouted at people for that. This might sound bad, but it allows me to do fast analysis of a large amount of data.
Disadvantages:
- I have a problem when I need to slow down and focus on one thing. I just cannot motivate myself to study, I have tried for years. The only way I learn new information is when I actively write code, because it is interesting and not boring.
- If I start procrastinating even a little, it is very possible that my whole day will be spent on Reddit and YouTube - basically, when my brain gets interested in some bullshit, many hours can pass before I can stop myself.
- I work well when my work is interesting and badly when it is not. For example, it is interesting to write some new feature, less interesting to make a strategy to test it, so that might take much more time than it has to.
Resonates with me but... it's kind of worrying? I have the strengths you mentioned, plus the weaknesses, and I was hoping there'd be a way to move beyond just leaning into my strengths. I suppose I could marry a lady who has the opposite of ADD, whatever that is.... haha
Oh man it really just kind of is this way. Keep searching for productive interesting things to work on that have a lot of payback
PolyGenic Score. It is your predicted phenotype from your genotype according to a genome-wide association study. Because the PGS over the whole sample has an R^2 of something like 0.05 to 0.3, it is a very noisy predictor of phenotype.
Sometimes people use PGRS, where the ‘R’ is for risk, because GWAS was first used for diseases.
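For anyone unfamiliar with the mechanics: a polygenic score is just a weighted sum - for each variant, the number of effect alleles you carry (0, 1, or 2) times that variant's estimated effect size from the GWAS. A schematic version with made-up weights and genotypes:

```python
import numpy as np

# GWAS-estimated per-allele effect sizes for a handful of SNPs (made up)
weights = np.array([0.021, -0.013, 0.008, 0.030, -0.005])

# One person's genotype: count of effect alleles at each of those SNPs
genotype = np.array([2, 1, 0, 1, 2])

pgs = float(weights @ genotype)
print("polygenic score:", round(pgs, 3))

# In practice this runs over hundreds of thousands of SNPs, and the resulting
# score explains only a small fraction of phenotypic variance (the R^2 of
# roughly 0.05-0.3 mentioned above), hence "very noisy predictor".
```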
Hypothesis: schizophrenics have a better connection to the supernatural, which is actually real. They're the most misunderstood mentally ill people, except maybe cluster B folks. I'm sorry to admit this hypothesis is more than a little bit inspired by stereotypes about chosen people.
There's a history of schizophrenia in my family. There's also a history of genius, creativity and outrageous financial success. The money is nice. But I miss my brother.
There is now genetic and epidemiological evidence that autism and mental illness genes overlap (especially with ADHD and Parkinson's). But this is not being communicated to the public. How long has this been covered up? I can't believe that in almost 100 years of autism research this has not been noticed or investigated until now.
It feels to me like you're getting lost in mechanistic details, and overlooking the basic crude observational data, which is that we can clearly see that individual members of a species can solve problems important *to that species* at differentiated rates of success, over time. We can also observe that one species can solve problems interesting to that species, on average, better than another, so we can readily conclude one species is smarter than another.
I'm sure it's much harder to distinguish fine gradations of intelligence of dogs than it is of humans, because we know much less about what's important to a dog, and how dogs think. But that is, if you will, merely implementation difficulties, a challenge of experimental or apparatus design; it doesn't call into question the entire existence of the trait.
Yeah I think you are still favoring the types of problems that people think are impressive. Insects are great at surviving and solving problems in ways that humans are totally uninterested in or actively hate.
Virtually every species that humans consider to be particularly intelligent is one much like us, i.e. a social-group living species that hunts or is at least omnivorous. Dogs, dolphins, whales, elephants, primates, wolves, etc. So either living in a socially cooperative manner confers greater intelligence (which would make sense, because having to consider not just your own interests but also how the group will react to your behavior is a lot of cognitive load), or we just tend to be particularly impressed with forms of intelligence that we can recognize and that serve our interests.
I take your point in that I've had pets that were clearly smarter or dumber than others. But it's also hard to say in some cases. People tend to consider dogs smart when the dog is easily trainable and likely to do what the human wants. I'd say cats are smarter than dogs but people tend to think dogs are smarter because they're submissive to humans and want to please them. One of my dogs learns new tricks at literally about 200 times the speed of the other -- it's a stark difference -- and is much more attuned to people and how to manipulate them. But the stubborn/"dumb" one has far superior survival skills with respect to being appropriately suspicious of danger, knowing when to hide or approach, hunting skills, etc. If they weren't living with and dependent on humans, I would expect the "dumb" one to have far greater survival chances and the smart one to be dead within a month.
Well, there are two possibilities here:
(1) Intelligence confers a survival advantage to humans, and natural selection has tended toward optimizing it.
(2) Intelligence does not confer a survival advantage to humans, and the fact that humans -- every one of them -- are about umpty times smarter than dogs or rabbits is just a weird coincidence.
The problem with (2) is that it asks us to believe that *the* most salient characteristic of a species is mere accident. It's like observing that cheetahs are really, really fast in a sprint, but this is just an accident, and their survival as a species actually derives entirely from the clever design of their spots.
In terms of very recent evolution, there's reason to think Western Europe was eugenic between the 14th and 20th centuries, because so many violent criminals were being executed and the upper classes were having relatively more kids than the lower classes. Not an exact measure of changes in intelligence, but it's something.
https://journals.sagepub.com/doi/full/10.1177/147470491501300114
This essay is very interesting, and it led me down a fascinating rabbit hole.
However, it honestly seems like his claims about the genetic effects of capital punishment are wildly overstated, by a factor of ten.
I was very shocked by his claim that men in Late Medieval/Early Modern Europe had a 0.5-1% lifetime chance of being executed, due to an annual execution rate of 1 execution per 10,000 people. When I looked up the source he cites for this claim (L'Occident est Nu, by Philip Taccoen) on Google Books, I discovered that he really does appear to have simply misquoted Taccoen: the latter says that Early Modern England and Malines, as well as 18th-c. France, executed one out of every *hundred* thousand people annually.
(The other source this paper cites for the execution rates claim, Paul Savey-Casart, isn't an independent source but simply the one cited by Taccoen as the source of this claim. Based on the lack of a page number given for the Savey-Casart citation, I strongly doubt that the authors of the paper tracked down Savey-Casart's book themselves.)
It's entirely possible that I'm totally wrong and there's some obvious issue I'm missing--but it really does seem like they founded a significant part of their thesis on a claim that could have been disproven by simply double-checking what their source actually said.
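For what it's worth, the rough arithmetic behind the factor-of-ten point (assuming, as seems intended, that essentially all of the executed were men, and taking ~40 adult years at risk; both are my assumptions, not figures from either source):

```python
# Lifetime execution risk ≈ annual executions per male capita × years at risk.
# The quoted rates are per total population, so double them to get a per-male
# rate under the assumption that essentially all of the executed were men.

years_at_risk = 40   # assumed adult years of exposure

for label, annual_rate in [("1 per 10,000 (as the essay quotes it)", 1 / 10_000),
                           ("1 per 100,000 (as Taccoen appears to say)", 1 / 100_000)]:
    lifetime = 2 * annual_rate * years_at_risk
    print(f"{label}: ~{lifetime:.2%} lifetime risk for men")
```

The first rate reproduces the essay's 0.5-1% figure; the second lands an order of magnitude lower, which is the commenter's point.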
I guess I’m confused why generalizability to other species has any relevance on g as it applies to humans? It seems fairly obvious that intelligence has been strongly selected for (how else could we get so smart).
A point that it seems you’re trying to make is that humans aren’t any more intelligent than other species; we all fill our niches equally well. A bee could look at a human with a PhD and think them unintelligent because they don’t know how to build a beehive. The part that I think this view misses is that bees (and all other non-human animals) only know how to do one thing. Each species of bee is evolved to build one type of hive and produce one type of honey and collect nectar from one ecosystem’s flowers. Humans, on the other hand, have been able to spread over the globe over the last ~40,000 years into every possible ecosystem and have adapted - quickly and with our intelligence, not with slow natural selection - and it is that capability that we use to justify our intelligence relative to other species.
I don’t think anyone makes the claim that intellect is always a positive trait; as people have mentioned elsewhere in this thread, it involves a heavy trade-off between general-purpose intelligence and energy usage, so in most niches it is not something that is especially selected for.
Rather, I think that intelligence is something that became useful in humans’ particular niche (perhaps due to our ability to learn socially) and then rapidly evolved to fill that niche.
I'm not sure what you mean by the second part; there is no purpose of intelligence. It's that more intelligent people outsurvived and outreproduced less intelligent people, causing the average intelligence of the population to rise. What is left to explain?
I think your word choice is misleading, here, since you're trying to distinguish the "merely bright" from "geniuses"; describing the former as "low intelligence" is true in the relative sense, but the people you're thinking of are still above average intelligence for the general population
An extremely hard working person who is obsessed with their subject but has an IQ of 105 can probably earn a PhD in most fields.
I doubt that. In my experience, smart people overestimate how smart the average person is. (because smart people generally live and work with smarter than average people). Also, I know a few people who failed the qualifying exams in my PhD program in a natural science. They were all, every one of them, hard workers who loved the subject. They were also all much much smarter than 105 IQ. They just weren’t smart enough for the program.
I wouldn't include hard sciences in my statement. There are a lot of easier fields.
see here: https://www.iqcomparisonsite.com/occupations.aspx
Among college professors, 10th percentile = 97, 25th percentile = 104, 50th percentile = 115.
I'd expect college professors to be a bit higher than PhDs in the same field because there's an additional filter in the job application process. The average IQ of PhDs has probably declined over time as the number minted per year has increased ~20x since the 50s. I've heard elsewhere that the average IQ of PhDs was 125. That might be a much earlier sample. Or it might be the case that the easiest fields (such as grievance studies) have higher rates of PhDs becoming professors to spread the mind-virus instead of going into industry to do something useful, since there's nothing useful to do with bullshit fields. That could make the average of professors lower than the average of PhDs even if each field has above-average professors relative to the PhDs in the same field. (assuming almost all professors have PhDs)
Anyway, that was a long digression, but if the 25th percentile of college professors have an IQ of 104, it's totally plausible that people with very high noncognitive traits but IQ 105 could get PhDs in most fields.
Point taken, and I’d agree with you if “most fields” can be taken to exclude STEM.
Please clarify what you mean by less agreeable (grumpy?) and why you think that it would correlate with intelligence.
Less agreeable does not mean grumpy, it means literally less likely to agree with others, i.e. go along to get along, concede on matters, etc. One can be quite cheerfully disagreeable. Higher intelligence means more likely to think that one is right about things, come to one's own conclusions, and less likely to go along with what others say just because they say it.
I asked what he meant by agreeable because what you are describing above is not the dictionary definition of agreeable as I understand it. It is generally used either as "agreeable to x" or just "agreeable", where it means pleasant. I think the latter is along the lines of "he is agreeable to be around" (where the agreeable one is the thing being agreed to).
I strongly disagree that higher intelligence equates to "more likely to think one is right about things." Haven't we had at least 1 post describing the opposite? Less-smart people think they are correct. More-smart people understand that things are complicated and they are better able to appreciate that they may not be. Everyone comes to their own conclusions.
My understanding is that the Big Five Personality definition of the agreeableness trait pretty closely aligns with "goes along to get along."
I think you are mixing up the tendency to update one's priors and adjust to new facts (which would be correlated with intelligence) and what I was talking about in respect of "being right", which meant that one is not going to say that something is correct, when it obviously is not, just because of social pressure. I was talking about thinking one is right as a matter of sociability rather than being open to changing one's mind based on new information.
Example: you are sitting in a room with ten people and everyone is asked to come up with an answer to some question where there is an obviously correct versus incorrect answer on something factual (maybe a math problem). If the other nine people absolutely insist that the wrong answer is correct, and want you to go along with them, and you insist that your correct answer is right, you are probably not very agreeable.
Agreeable is referenced here in relation to the "Big Five" personality traits, which is what is shown in the table above, and it does indeed show that the cognitive genes are inversely related to agreeableness (i.e. more genes for cognition/IQ make you less likely to be agreeable). But the non-cog genes were positively correlated with agreeableness. The literature I've seen shows there is no strong association either way with IQ. Which is probably because the trait of "agreeableness" has a bunch of sub-traits which probably work at cross-purposes. "Compliance with social norms" is one. But so is empathy and compassion. And also sociability. Honestly, these are all pretty different things and perhaps should not be lumped together at all.
In any event, agreeableness should NOT be considered "grumpy", which would go more towards the neuroticism factor of the Big Five personality traits (i.e. moodiness and emotional instability).
I was not aware of "agreeableness" or the "Big Five." That was why I had asked for clarification. On the "being right" issue, I suspect I just misunderstood you. I read being right as "being certain" and in my experience, one of the hallmarks of very bright people is the opposite.
There is a tendency to hedge: "it feels like", "is it possible that", "this reminds me of." This is perhaps an increased awareness of one's innate (non-numerical) Bayesian reasoning. Interestingly, this can also come across as agreeableness (hedging), but at least some of it is a deeper awareness of the complexity of issues and an ability to acquire understanding without confidence in a solution.
Undoubtedly, high-IQ people feel less need to prove themselves and are well aware of the potential benefits of being agreeable, as well.
Yeah, sorry, I wasn't very specific. Though you caused me to go look more at what constitutes "agreeableness" in reference to the Big Five personality traits, and it turns out that there is quite a bit of disagreement on this and it has many sub-components that don't seem very related. Empathy versus being submissive/easily conceding to the group versus stubbornness versus friendliness -- it's not clear to me whether these traits really have anything to do with each other whatsoever. So perhaps it isn't surprising that IQ doesn't show any strong correlation to agreeableness one way or the other. And it's fairly context-dependent anyway. I'm pretty far towards being extremely laid-back and easy-going in 90% of situations, EXCEPT in instances where I'm expected to agree with something I think is factually wrong, in which case I'm horribly obstinate and entirely disagreeable. That's probably not that unusual, and while I think that would measure as low neuroticism low agreeableness, I'm not sure.
This has, naturally, been studied, and IQ has been found to have a modest positive correlation with agreeableness.
I don't think this is true, most research I've seen shows no relationship of agreeableness with g. The chart from this study above also shows an inverse correlation with cognitive genes (but a positive correlation with non-cog genes).
> Depression is just bad. I strongly recommend not having it. Don’t even have any risk genes, if you can avoid it. All of you people trying to come up with clever evolutionary benefits for depression, I still think you’re wrong, and so does the genetic correlation with cognitive and non-cognitive aspects of educational attainment.
I am not a doctor or biologist, but given that rates of depression are heavily correlated with ethnic background (at least that's the stereotype; I haven't validated this), I would expect that depression is genetic in nature. And if it is genetic, I would assume that there is some kind of compensatory benefit to keep it around.
I don't know what the benefit is, and it seems pretty all-around shitty to me. But why would evolution specifically give you depression genes? There has to be some evolutionary benefit
Those who don't want to be a burden on those around them could just leave, so they are no longer around them.
Is it actually that correlated with ethnicity? https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1199525/ (first study I Googled, I'm not claiming to have looked into this deeply) says it's 8% of whites, 9% of blacks, and 11% of Hispanics. That seems pretty similar - any differences could be accounted for by social deprivation or culture.
Aren't suicide attempts, or even just suicides, a better measure for this sort of comparison?
In my own searching (which admittedly was in a journalistic context rather than an academic one) I seem to remember finding that depression rates seem to suffer from a large number of effects that skew the rates of diagnoses, chief among them the fact that merely being the sort of person who deals with those sort of problems by going to get diagnosed instead of simply stewing in them makes someone a non-typical example. Suicide rates bypass this for the same reason murder rates are great for looking at crime: it's tough to ignore when a person is gone.
I don't know to what extent the study you linked can control for that, but looking at suicide data presents a very different picture, with there basically being a hard split between Whites and Amerindians on the one hand, and Blacks and Latinos on the other. The former group has suicide rates about double the latter. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8155821/
This doesn't necessarily show the effect to be genetic, but it certainly leaves it open as a possibility.
Cultural confounders seem to be a pretty big issue here. I'm thinking specifically of Asian acceptance of suicide compared to Western disavowal of it. Breaking that down further, it looks like religion is a significant component of it. Black and Latino populations are known to be more religious (of the type that is anti-suicide) than White and especially Asian.
I don't know that this is necessarily true, but that's been my understanding.
That's a reasonable point to make, though I struggle to think of what exactly the cultural confounders could be, given:
- Asians have the lowest rates of religiosity, but also very low suicide rates, while Blacks have the highest rates of religiosity, and yet also low suicide rates
- The two races with the largest percent of single-parent households are Black and Amerindian, and yet these two lie on opposite ends of suicide rates
- The two races with the highest rate of gun access are White and Black, yet they lie on opposite ends of the spectrum as well.
- The two wealthiest races per capita are Asian and White, yet they are opposite in suicide rates. The two poorest are Amerindian and Black, which are once again polar opposites when it comes to suicide.
There might be some sort of cultural component, though I'm unsure what research has been done on that, but it isn't clearly evident from any factor that I would expect to influence suicidality
I doubt it's the explanation, but I've read that Black children are actually more likely to have both parents than White children. We assume the opposite because Black children are less likely to have married parents, but Blacks often stay together without marrying. (As it happens, I read this the same year that I did taxes, and two of my clients were an unmarried Black couple with a child.)
An interesting perspective that I'd never considered. If this is true, it would make for a fantastic example (for myself at least) of how sometimes basic statistics can mislead you from the reality on the ground.
I'm not sure what you mean by kids being "more likely to have both parents" -- all kids have two parents, unless one or both are dead. But if you meant LIVING with both parents, though unmarried, this is not true. Black children who live with both parents, including both married and unmarried but cohabiting, is still less than 45%. For white kids that's 80% and for Asians, almost 90%. There's a huge racial discrepancy and MOST black children do not live with two parents, while most children of other races do. The portion of unmarried parents who live with their kids is about the same for all races (6-9%) except Asians, where it's unusual.
Uh, no. That's totally false.
https://www.pewresearch.org/social-trends/2018/04/25/the-changing-profile-of-unmarried-parents/#:~:text=Among%20solo%20parents%2C%2042%25%20are,cohabiting%20moms%20(30%25%20vs.
Strength of social networks comes to mind. A more 'tight-knit' black family may provide pressures that prevent people from suicide, if you have a functional-ish family structure.
Some economist should find a natural experiment and study whether or not converting to X religion or being adopted by X culture actually has a protective effect against suicide (or increases income, or increases life expectancy, or increases life satisfaction, or whatever).
Could Japan being counted as part of Asia be confounding the Asian numbers? They have a history and tradition of ritual suicide for various reasons as part of their culture (less often in modern times but still a thing). My intuition is that people with fewer existential problems (hunger, thirst, shelter, power etc) tend to be less able to cope with the smaller problems that remain, which means I would expect whites to be "winning" in suicide rates while everyone else is lower but still around the same order of magnitude.
The rituals and traditions for suicide in Japan were related to behavior in war, though - the banzai charge/gyokusai attacks.
There are likely different confounders there - people are more likely to commit suicide when they have the easy means to do so, which probably varies a lot by culture, region, wealth, etc.
No because suicide isn't a good proxy for depression. It has lots of non-depression causes (eg alcoholism, schizophrenia, impulsive action after one really bad day) and lots of non-depression preventors (eg religious people are much less likely to commit suicide, maybe because of fear of afterlife)
While I understand that those make for issues with using suicide as a proxy in general, are they truly an issue for this particular case (racial comparison)?
Schizophrenia has decent evidence of being race-neutral, rates of alcoholism aren't different enough between races to account for anywhere near the gap (except perhaps among Amerindians? I am failing to find any decently reliable dataset that includes them in alcoholism data), and unless white people are comparatively incredibly impulsive, impulse control wouldn't be a differentiating factor.
And as far as preventors, see my response to Mr. Doolittle above. Races with similar rates of religiosity fall on opposite sides of the suicide spectrum. The same goes for income, educational attainment, gun access, absentee parents, etc.
As far as I can tell, considering these factors only make the racial separation even more stark.
Not all religions are the same, and definitely not interchangeable. Western religions (Jewish, Christian, and also Muslim) have a very negative view of suicide, which I don't think exists for Eastern religions. Japan is well known to have very different views of suicide than in much of the rest of the world.
Just ask Hamlet about suicide...
To die, to sleep;
To sleep, perchance to dream—ay, there's the rub:
For in that sleep of death what dreams may come,
When we have shuffled off this mortal coil,
Must give us pause.
I bet depression and suicides are correlated with more free time (being less busy), more of the basic human needs covered (food, shelter, sex, security) and less close relationships. And it's not only depression but even health in general and life expectancy.
Didn't Emile Durkheim mention how Protestant regions of Europe had a higher rate of suicide than Catholic areas, as well as how industrial or urban the area was?
Well, I'm only a layman, but here is a possible benefit:
Don't mildly depressed people have a slightly more realistic view of reality compared to the rest of us?
So let's say you are in a historical environment and you believe something maladaptive that you derive negative utility from, but not so much that it kills you.
This over time depresses your mood, enables you to see reality better and gives you a chance to change your mind.
Depression acting as a belief/behaviour switching function.
Historically the lack of motivation wouldn't have been too much of an issue, as you are cold, hungry or horny enough to push past it. The lethargy may even be good in giving you a slightly lower caloric upkeep while you figure out what you are doing wrong.
Of course, if depression gets too deep, that's a failure case. But hey, we know evolution only aims for "barely good enough".
Depression is the failure case. Being sad because things are going wrong is just normal behaviour.
So it's only labelled depression when the results are bad? In that case, by definition, depression is always bad, as Scott said. I always imagined it as a spectrum sort of thing rather than an on/off switch.
You're right, I should have used the term sadness rather than depression, if depression is the technical term for the failure case.
I kinda see them as the same thing though differing by degree?
That something must be bad to be considered a psychiatric disorder is, I believe, true of all psychiatric disorders. Pretty much always, one of the DSM requirements is that it interferes with your quality of life.
From nih.gov for "Major Depressive Episode":
- Symptoms must cause significant distress or impairment
I think this labeling issue is at the heart of the problem Scott's trying to get at. If you only label bad cases, then all cases are bad. If you label adaptive cases that don't reach the level of bad (we call it "sadness") in the same category, we might be able to say something meaningful beyond "depression is always bad."
As someone with lots of experience with depression, for me it's not about having a realistic view of reality; it's not caring about reality at all. When I'm having an episode, it's impossible to imagine anything being interesting. It's all just boring.
>Don't mildly depressed people have a slightly more realistic view of reality compared to the rest of us?
I think perhaps you mean pessimistic people, not people with clinical depression.
That being said, experts who are optimists tend to make significantly more accurate (20% more) political predictions than pessimists (Expert Political Judgment: How Good Is It? How Can We Know?, Philip E. Tetlock, 2005).
Somebody did a study with a box with lights and buttons, and asked people to estimate how much their button presses influenced the lights. Most people overestimated, but depressed people tended to get it right.
I think maybe I read about this study not replicating? But I've read about so many things not replicating I might have gotten mixed up.
Key question here though: by depressed people do you mean people at that moment suffering from depression or just people at some point diagnosed with it? Because my experience of suffering from depression is that your ability to analyse and observe is markedly different depending on whether you are suffering an episode or not.
Pain is just bad. I recommend not having it. Yet its absence is maladaptive.
Abraham Lincoln seems to have had depression. I suspect that high IQ + depression (to a degree that isn't disabling) might be more suited to navigating times of strife than high IQ + alternative, particularly for leaders/decision makers.
What's the alternative mental state during lengthy, difficult circumstances? Cold indifference? Cheerfulness? Better to err being in a depressed state than one of emotional deadness or irrational glee.
Sadness under circumstances where it makes sense to be sad is not depression. Depression is constant, hopeless sadness for no reason.
No there doesn't. Natural selection works on the entire genome at once, it doesn't individually optimize each and every gene. (If nothing else, the space is so highly dimensional it would need a huge amount of time to do that.) It's completely possible for natural selection to optimize foo (which confers a major survival benefit) which alas brings along bar (which confers a mild disability) for the ride, because the two genes happen to be hooked up in the DNA world in any of the various ways that can happen.
There seems to be an obvious benefit to seasonal depression at least. In winter:
* there's little food to eat
* it's cold outside
* meeting strangers will get you ill easily
* there's no work to be done on the fields
Therefore, if you're naturally inclined to
* eat little
* stay inside and not meet anybody
* sleep all day
…you're more likely to survive winter, especially if you lived in the Middle Ages or earlier.
Note that none of these conditions will get you a smart brain or a high level of functioning, but that's not what survival is about.
You do realise that agricultural societies work throughout winter? It was the only time that was available for non-routine work, although there were annual activities as well. Winter crops, preparation for early spring planting, finding fuel, making repairs, clearing drainage, pickling and brewing, digging out the sheep, taking in new land... Not doing these things (often communal activities) screws up your chances of prospering and therefore surviving/having your children survive.
Also, the positive genetic argument for depression has to deal with the minor issue that it generally removes your sex drive, which is an evolutionary fail.
Are human beings, biologically speaking, more a product of agricultural society or of pre-agricultural society? I would presume the latter, and so to the extent that depression is biological, I could see how it could make sense evolutionarily.
Keep in mind, not all evolutionary benefits remain beneficial in such a different context as the modern world. An obvious example is food, we have access to more calories than we know what to do with and adaptations to save calories wouldn't be advantageous today.
It could be that depression used to have an evolutionary benefit but somewhere between hunter-gatherer and flappy bird it lost that advantage and just became purely negative.
Title on the graph seems wrong, "EA FDR correction tries to impute the results from the main timeline, where Roosevelt was an effective altruist and diverted the resources of the Depression-era US into curing all diseases."
It's a joke about the key for the grey line.
But I also initially thought Scott had made a mistake! (Some of us read the text underneath to figure out what on earth we're looking at before reading all the acronyms in the very dense picture!)
It’s also a play on the bad leading in their figure legend — EA and lines below it (incl. FDR) are separate labels.
"As of last time I checked, the leading hypothesis was that schizophrenia genes were just really bad, evolutionary detritus that we hadn’t quite managed to weed out."
Is it really? I thought the idea that schizophrenia genes/low levels of schizotypy were somewhat positively associated with creativity (when they don't result into full-blown schizophrenia, which does decrease creativity) was rather widespread. I wouldn't necessarily have expected creativity to correlate with educational attainment, but it's clearly a positive trait that is easy to think would be selected for in many environments - not quite what you'd expect from "evolutionary detritus". Eg https://onlinelibrary.wiley.com/doi/abs/10.1111/j.1601-5223.1970.tb02343.x or https://www.nature.com/articles/nn.4040 but a Google Scholar search would return much more
As I posted in my comment, Robert Sapolsky discusses this idea at length here: https://youtu.be/4WwAQqWUkpI
Awesome for the overview and you can listen while doing something else, too.
I have heard the same thing. I don't have any cool links to share on the subject, though. :(
I'm not familiar with this type of research at all. When one says "found genes for intelligence" or "found genes for educational attainment", what does that mean? Is the claim that some portion or portions of the human genome have been identified that, when they look like 'x', 'y', or 'z', make a person unintelligent, of average intelligence, or very intelligent, respectively?
Suppose you have 100,000 genetic tests and you know the educational attainment of each individual. A sufficiently large, randomly selected sample should have approximately the same average educational attainment as the whole group. If instead of selecting randomly you select by the presence of a specific gene, and educational attainment differs between the two groups, you have a correlation. Repeat for the next gene... (where a gene is a DNA sequence that is, in its entirety, passed on by inheritance).
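To make the logic concrete, here's a toy sketch of that procedure (random data, hypothetical sizes, not a real GWAS pipeline). Because the data are random, any "hits" it prints are false positives, which is exactly the multiple-testing worry raised below.

```python
# Toy per-variant association test: split the sample by whether a person
# carries a given variant and compare mean educational attainment.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_variants = 10_000, 200
genotypes = rng.integers(0, 2, size=(n_people, n_variants))   # 1 = carries the variant
edu_years = rng.normal(14, 2, size=n_people)                   # years of education (independent of genotype here)

for v in range(n_variants):
    carriers = edu_years[genotypes[:, v] == 1]
    non_carriers = edu_years[genotypes[:, v] == 0]
    t, p = stats.ttest_ind(carriers, non_carriers)
    if p < 0.05:   # naive threshold; ~5% of variants will "hit" by chance alone
        print(f"variant {v}: mean difference {carriers.mean() - non_carriers.mean():+.3f} years, p = {p:.3g}")
```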
Google tells me that humans have between 20-25k genes. That seems like a lot of free parameters, especially when you start to consider combinations of different genes. I'm not trying to dismiss a field I know nothing about out of hand, but I worry about chance correlations with a parameter space that large. What's known about function of these genes? Is there any reason to suspect they would influence cognition? Do different studies using different test populations tend to find correlations between intelligence (or whatever trait you are interested in) and roughly the same set of genes?
Only 11 or 22 intelligence-related genes seems *really* low out of a pool of 20-25k.
A typical study might be looking at 2.5 million SNPs. So if they didn't know statistics and just checked for p<.05, we'd expect them to get on the order of 100k false positives.
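Putting rough numbers on that (the SNP count is just the figure above; the 5e-8 genome-wide significance threshold is the standard convention in GWAS):

```python
# Expected false positives if 2.5 million SNPs are each tested at p < .05
# and none of them have a real effect:
n_snps, alpha = 2_500_000, 0.05
print(n_snps * alpha)        # 125,000 -- "on the order of 100k"

# which is why GWAS conventionally require p < 5e-8, roughly a Bonferroni
# correction for ~1 million independent common variants:
print(0.05 / 1_000_000)      # 5e-08
```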
"but I worry about chance correlations with a parameter space that large."
It was a huge problem when using small sample sizes (a few hundred) and a small set of candidate genes, leading to many, many false-positive "discoveries" during the nineties and the aughts. But now the studies use sample sizes of several hundred thousand people, leading to much higher reliability, and yes, the SNP (= position within the genome)/trait associations are replicable, e.g.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6003860/.
("First, GWAS findings are highly replicable. This is an unprecedented phenomenon in complex trait genetics, and indeed in many areas of science, which in past decades had been plagued by false positives. ")
Nice summary of the current state of research in the link below. I really appreciate the fact that they listed the major controversies about genes and intelligence without commenting on them, just giving links in footnotes. The fact that they admit there's controversy is a big step forward if you ask me. Lol! They also have a section on Parieto-Frontal Integration Theory (P-FIT), which seems to be 21st-century phrenology with brain scans. I guess I shouldn't be snarky about P-FIT, but when one studies the history of science one sees recurrent meta-theories arise over and over again.
https://www.nature.com/articles/s41380-021-01027-y#:~:text=A%20positive%20genetic%20correlation%20indicates,likelihood%20of%20developing%20the%20disorder.
This next paper claims to have identified 187 loci that have a role for neurogenesis and myelination that seem to affect intelligence. Pretty dense (or maybe my myelination quotient isn't high enough to immediately grasp it).
https://www.nature.com/articles/s41380-017-0001-5
This third paper finds that there are 22 genes that seem to affect IQ. Can't figure out how to line up their findings with loci described above.
https://www.nature.com/articles/ng.3869.epdf
There was another paper I read a few months back that insisted that there were only 11 genes that are strongly correlated with IQ. But now I can't find it. I would have been interested to see if any of the 11 overlap with the 22 paper.
"Both cognitive and non-cognitive skills are correlated with neuroticism, which I guess makes sense - that’s probably what makes you do your homework on time."
They're both negatively correlated, though, right? So this also means that NonCog has correlation in the "good" direction for each of the Big Five, which makes sense.
Also, I'm trying to think of how to recreate this analysis, but without looking at genes. I guess it would mean that for a given educational attainment (say, college grads), people in the bottom 10% of IQ are more likely to be schizophrenic than the average college grad? Or at least have more relatives with schizophrenia?
You're right, thank you, fixed.
The negative correlation with neuroticism makes sense to me in that on the Big Five, neuroticism means more like "emotional reactivity" than like "obsessive."
I'd like to see some exploration of the ADHD–IQ link. To me it seems plausible that ADHD genes might affect intelligence test scores without actually affecting underlying intelligence. This could be investigated by looking at the results of the subtests: if ADHD genes have an equal effect on subtests that require attention (such as digit span) as on subtests that do not (such as vocabulary), that would be evidence against my hypothesis.
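One rough way to run that check, sketched with placeholder data (variable names and effect sizes are entirely made up): regress each subtest on an ADHD polygenic score and compare the slopes. A markedly larger effect on attention-demanding subtests would support the "scores, not underlying ability" hypothesis.

```python
# Sketch of the proposed subtest comparison (placeholder data, hypothetical effects).
import numpy as np

def slope(x, y):
    """OLS slope of y on x."""
    return np.polyfit(x, y, 1)[0]

rng = np.random.default_rng(1)
adhd_pgs   = rng.normal(size=5_000)                      # stand-in for an ADHD polygenic score
digit_span = -0.30 * adhd_pgs + rng.normal(size=5_000)   # assumed attention-sensitive subtest
vocabulary = -0.05 * adhd_pgs + rng.normal(size=5_000)   # assumed attention-insensitive subtest

print("digit span slope:", round(slope(adhd_pgs, digit_span), 3))
print("vocabulary slope:", round(slope(adhd_pgs, vocabulary), 3))
# A clearly more negative slope for digit span than for vocabulary would be
# the pattern predicted by the hypothesis above.
```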
I think that sort of perspective on intelligence highlights the fundamental problems of this research - the bedrock for “smart” are subjective.
For example, there’s a very famous story about a world class ballerina (use Google for the specifics) who was terrible in school as a child, so her mother took her to see a psychologist. He watched her and asked the mother what she enjoyed most and what she did in her free time. The mother said she likes to dance so the psychologists simply said then that’s what she finds most interesting - if she can’t pay attention to arithmetic, she’ll probably pay attention to dance routines. And she went on to fall in love with dance and become a world class dancer.
In today’s world, she probably would have been given ADHD medication and sent on her way. She might have thought she was dumb all her life, but it’s impossible to deny that she was a genius in a kinesthetic sense. Her underlying “ADHD” was not a cognitive benefit in the classroom, but was in the ballroom. This genetic analysis does not assess that sort of professional assessment in correlation with IQ.
I also seem to recall that Scott has a pet theory that a lot of ED docs have ADHD and thinks that might benefit them in their environment. But I wonder how their genes would fall on this type of table? Are they simply outliers who take medication or somehow overcame their ADHD to achieve a high level of academic and professional success anyway?
My question is why shouldn’t kinesthetic ability or other things like artistic genius or language acquisition ability be included in “intelligence”. It’s somewhat of a subjective umbrella to begin with. That’s my point.
The point is made below that ADHD might manifest as a latent desire for some specific things like music, dance, math, etc. and so that may imbue a certain drive for talent in that domain but also make for sizable distraction from others.
I'd say the reason digit span and vocabulary are both included in IQ tests is that there is evidence that they are affected by a common factor, which we tend to refer to as intelligence. I am not aware of any evidence that ability to dance has any strong link to that same common factor, and, might I add, having seen a fair number of intelligent people dance, I have not noticed such a correlation.
"having seen a fair number of intelligent people dance, I have not noticed such a correlation." nicely said!
I think this read is loosely related, about the seven arts, aka skills to study and educate oneself on: https://www.delanceyplace.com/view-archives.php?4504
In my experience ADHD mostly hurts your ability to concentrate on subjects or tasks you don’t find intrinsically interesting or when there is a competing topic to think about that is more interesting than your current task. This is why many people with ADHD end up with grades that differ wildly between academic subjects. If someone with ADHD finds every or almost every relevant subject in undergrad and medical school fascinating and doesn’t have a lot of competing interests, they can do very well.
Meant to reply to Hippo’s comment. Sorry about that.
Yeah that’s a good point!
Anecdotes are not data, but.....
I can attest to this. I was recently diagnosed with ADHD, but haven't taken medication so far. However, I was one of the more obvious cases since childhood.
I fully recognize myself as a completely different caliber of human when I am in focus/attentive vs when I am not. I also frequently take around 30 minutes to enter a focus state (the sensory deprivation of an exam hall helps, but the unfamiliarity of environment takes a few minutes to adjust)
It is incredibly irritating, because I would often be the person that helps friends prepare for examinations and helps them understand hard concepts. But when it came to the exam hall, I felt like my eyes and brain were dilated for the first 30 minutes. Then I'd slowly come back to earth and rush through things as I started 'getting it', but never ended up reaching the end.
I feel like a cat sometimes. Giving off the appearance of a lazy organism with the intellect of a rock, until you actually need to be productive.
> hurts your ability to concentrate on subjects or tasks you don’t find intrinsically interesting or when there is a competing topic to think about that is more interesting than your current task.
Yep. I might as well use this as my elevator pitch,
> ADHD end up with grades that differ wildly between academic subjects
100%. I used to get Ds in classes that needed rote learning, including biology, but physics, math and English comprehension/essays came to me really easily.
> If someone with ADHD finds every or almost every relevant subject fascinating
Happened to me with Machine Learning. Went from a lazy bum to someone who was reading papers and textbooks for fun. Ofc, this was until hackernews and reddit pulled me back in. :|
"In my experience ADHD mostly hurts your ability to concentrate on subjects or tasks you don’t find intrinsically interesting or when there is a competing topic to think about that is more interesting than your current task. "
Very interesting, thank you!
The fact is there are very, very few jobs you'll get by being a good dancer, and lots of jobs you'll get by being good at mathematical/logical reasoning. So it can perfectly well make sense to force kids to do the regular school routine: if they can at least convincingly do it, they might have a much better shot than if they try to become a dancer/footballer/whatever else.
Right but economic conditions dictate what schools teach, so skill set demand changes. Dancing might not be prized today, but maybe 10,000 years ago it was for shamanic reasons or mating etc. (you get the idea). So on a genetic/evolutionary front people may have skillsets that are expressions of “genius” but in the current moment they don’t have the same economic value that we equate with “intelligence”. It’s just semantics and context.
Even in today's society, I would predict a correlation between being good at dancing and reproductive success...
When I was tested for ADHD they were specifically looking for this sort of subgroup divergence, so I believe that's a known thing.
The problem is once you start separating "intelligence" from "scores well in an IQ test", the whole analysis rather falls apart, since we don't have any other measure. Really this should be called "genes for IQ test score vs genes for educational attainment."
I would define intelligence as g. Intelligence tests (and test items) differ in their g loadings, so I think it should be possible to associate genes with g instead of with test scores.
Unless you have some empirical measurement of "intelligence" that isn't functionally equivalent to "scores well in a test of intelligence" then any effort to do the separation is going to be mere sterile philosophizing, at best.
"ADHD and intelligence" is a horrible clusterfuck, because even by psychiatric standards the "thing ADHD describes" is a number of disparate things, and some of those things are even more explicitly social-determination than for most psychiatric terminology. ADHD can refer to hyperactivity, to idiopathic executive dysfunction, to very non-idiopathic executive dysfunction, to "boy", to "not middle-class". It's the wastebasket diagnosis to end wastebasket diagnosis.
Artificially low IQ results are a fairly common issue in the sufficiently neurodivergent population (their applicability to autism is notorious, see the writings of Donna Williams and Scott Aaronson on their own results; I suspect some of the negative correlation between IQ and schizospec neurotypes is due to a tendency to think in much curlier lines than those tests permit), but they're much more difficult to pattern over something as broad as ADHD. Some people with ADHD definitely have artificially low results as a consequence of their neurotypes, but these may be better seen through a different lens.
I have a kid with ADHD-Inattentive and bipolar. His ultimate level of educational attainment probably will not be high, although he is pretty bright (good writer, bad at math, horrible at history). His docs have told me that many ADHD kids are delayed in terms of maturity, which sometimes affects decision making (not him, thank god) and sometimes affects initiative, self-discipline, and attention to deadlines (definitely him). So, he may in time manage college, but it won't be soon.
By the way, there is nothing good about having bipolar disorder. What's good is the medication that keeps my son out of the hospital or worse.
Robert Sapolsky mentions in this lecture that a *little* bit of controllable oddness on the schizophrenia spectrum is useful for making usefully charismatic shamans, and that could explain its genetic advantage and persistence. The more extreme, uncontrollable end of the spectrum, on the other hand, isn't an advantage.
https://youtu.be/4WwAQqWUkpI
(Also, this is the one lecture from this series that Sapolsky has removed from his own YouTube channel. He says some highly cancelable shit in this one, which is a shame because it sure does seem to make sense to me.)
+1 for Sapolsky
I have often wondered if the various "mental illness" diagnoses that get made are only picking up on the people that are so far out of the norm that they register as a problem. Crotchety old people, eccentric uncles, the "socially awkward" and so on probably have similar traits, but at more controllable levels. Millions of other people could have the benefits of the underlying cause, without stretching too far into a problem. We don't have the sophistication needed to identify someone who is *mildly* schizophrenic in a very positive way. We would probably label them as something else or not label them at all.
“Weird”!
I am pretty sure that some of the things that are mostly harmless nowadays would have ended with me dead if I had lived, say, 300 years ago, and if "mental illness" diagnostic criteria had been applied back then, I would qualify.
(Though both my mother and I would be dead from birth-related issues if I had been born 300 years ago.)
"Again: you find that having more mutational load, more deleterious mutations, increased your chance of schizophrenia, or autism, or low IQ: that strongly suggests schizophrenia, autism , and low IQ are not the far edge of some strategy. Note: people talking about shamans and schiz: you’re probably wrong. Same for autism – not a strategy."
https://westhunt.wordpress.com/2018/07/22/more-theory/
Sapolsky also wrote a book saying zebras don't get ulcers because they lack chronic stress, well after Barry Marshall showed ulcers were caused by a bacterial infection.
> Depression is just bad. I strongly recommend not having it. Don’t even have any risk genes, if you can avoid it.
Couldn’t depression genes be like sickle cell anemia? Eg one depression gene, say, makes you more introspective, two makes you so introspective you fixate on everything bad?
I’m struggling to understand the chart in ways that translate to common English. Is it fair to say that this chart suggests:
1) Noncognitive genes are more correlated with Academic Achievement than cognitive genes in the presence of high Big 5 Personality stats?
2) Schizophrenia and Bipolar disorder correlates to more academic success in people with more NonCog genes? So Cog genes + Schizophrenia = no correlation to academic success?
3) Noncognitive genes and academic success is correlated with significantly higher longevity as compared to Cognitive genes and academic success?
My understanding of the chart is that the position of the dot on the graph says whether the bucket of traits that determine cog or noncog achievement correlates with each trait. To tick off each point:
1) For the personality traits it depends on the specific trait, e.g. for openness noncog correlates stronger than cog but they're about the same strength for agreeableness (though directionally different)
2) The way I would put it is that the noncognitive genes linked to academic success are positively correlated with schizophrenia and the cognitive genes linked to academic success are negatively correlated with schizophrenia. The statement you made isn't really something the graph addresses.
3) Best way to put it is that noncog genes linked to academic success are statistically significantly more correlated with longevity than cog genes linked to academic success.
Hmm okay I think I don’t have a good intuition about how to conceive of the 3 variable correlation going on here (ie Schizophrenia, NonCog, and Academic Success all correlated). So in regards to 1) could you say that based on this data a person with more NonCog genes and a high degree of agreeableness is more likely to have higher academic achievements than someone else with Cog genes but low trait agreeableness?
I think I'm explaining this badly. The two objects of study are "noncog genes linked to academic achievement" and "cog genes linked to academic achievement." These are groups of genes analyzed in earlier studies in the literature and found to correlate to academic achievement.
Another thing about this type of research is that it doesn't really work on the individual level. You can say that in a population (which is sufficiently large, has a normal distribution of genes, and other simplifying assumptions) the people who are most academically successful are likely to have these cog and noncog genes, and that people in the population who have these particular noncog genes are more likely to have schizophrenia.
If you know which genes a particular person has and want to figure out their expected academic achievement, we basically can't do that because we don't know what effect these genes have on individuals or how that mechanism works; we only know how these genes express themselves in populations.
Understood. I think I’m trying to force an explanation that isn’t there to make it more intuitive for me.
Depression is being selected FOR in Sweden. While women with clinical depression average fewer children, their almost depressed sisters have increased fertility, which outweighs the cost to the clinical cases. Thus, evolutionarily speaking, the optimal point for female depression (not male) is higher than the current rate. That is, selection is increasing the sex difference in depression.
Of course, this says nothing of why this might be the case. Considering that women in every place and country are more depressive than men, this pattern must have originated a long time ago. Would be interesting to study mental illness, insofar as it exists (!?), in primates. Are female primates more moody and depressive than males?
https://twitter.com/KirkegaardEmil/status/1071512978695053312
Probably. Even among dogs, it is commonly known that female dogs tend to be more moody and sensitive than male dogs. Though the difference isn't large, male dogs are more prone to being generally happy-go-lucky while females are more likely to sulk, act jealous, react badly to punishment, etc. These are attributed to increased situational awareness and sensitivity to the environment and social cues, which would seem to be obviously important traits for the sex that is primarily responsible for making sure the young don't die. I would expect you would see a stronger sex divergency depending on (1) how long the period of maternal care is after birth before juvenile independence, and (2) level of paternal care-giving (which in most mammals is zero). Also, women become much more prone to depression in the years after giving birth. It seems intuitively obvious to me that increased worry, anxiety, and seeing bad stuff everywhere would all be advantageous for protecting offspring and making sure they don't wander off a cliff, eat poison berries, or get eaten.
I second this. I can see a number of reasons why genes for depression and ADHD might be selected for specifically in women.
A slightly grim outlook might indeed keep your offspring safer. Also, if you benefit personally from the group staying put instead of roaming around, it would make sense for evolution to make you “heavier”, in the sense of dragging a weight. It wouldn’t even need to be all that dramatic to buy you a few extra days in camp. I’d go out on a limb and say maybe this has something to do with humans eventually forming permanent settlements.
Also it’s clear to me that mild ADHD can be really helpful if you’re a mom. I’ve thought about more actively managing my mild ADHD for years, but as a mother of small children I’ve decided to hold off because *it’s just so damn handy to have my brain work this way right now.*
Yeah, it's a stereotype, but you know what they say about those. It is *so* common when I'm hanging out with my couple friends with young kids that dad is letting the kid do [fill in the blank] while mom is yelling "slow down! Watch out! Don't let him get too close to the edge!" etc. Mothers seem to obviously perceive more danger and risk of harm than the dads do, and I'm not sure how enhanced sensitivity to potential bad things more wouldn't create an increased chance of depression.
In my very small bubble, I’d say the dads are actually more overprotective on the playground. But all the mental toggling required in childcare clearly wears me out less than than it wears out my husband. His superior ability to focus makes him great at a lot of stuff, and he’s a great dad, but after an afternoon with the kids he’s usually exhausted in a way I’m not.
I think my ADHD is working in my favor here while it used to mostly work against me. When the kids get older and I go back to work, I might want to try some Ritalin.
It also fits that most people find their kids intrinsically interesting.
There's a theory that the evolutionary reason for suicide is that when a person kills themselves there are more resources for their relatives. If this is true, it would make sense for a person with many fertile siblings to be more likely to commit suicide.
Consider a person with no siblings. If they commit suicide there are no siblings to benefit from the extra resources.
On the other hand if a person with many nephews and nieces commits suicide, there are many relatives that would benefit from the extra resources.
Ever since I learned that Albert Einstein’s son Eduard had schizophrenia to a level that required him to be institutionalized, I sort of assumed that schizophrenia must have something to do with genius.
Also because—take it with the usual corrections for self-reporting—I consider myself to be a genius. My family tree on my father’s side is full of schizotypal personalities. I strongly suspect that being Mennonite, chased to the coldest edges of Europe and beyond because of one’s deeply held esoteric beliefs, selects for this trait.
I remember a post from you about this, Scott (found it: reviewing Surfing Uncertainty https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/). You talked about schizophrenia as a hyperaffective reaction disorder to predictive modelling errors, as opposed to autism which produces hypersensitivity in the models themselves. To put it another way, schizophrenia, in the predictive-modelling paradigm, is a disease that makes it difficult to ignore surprises.
If someone lives their life constantly having their models challenged, and having their attention pulled to every little model error, it creates a lot of pressure to build better models, right? If your models get better, you get fewer errors and your mental life becomes easier to manage. And those are the models that, if you happen to be able to communicate them well, become PhD material.
On the other hand, if your models don’t get better, either because you were unlucky, not cognitively flexible enough, or the inundation of model-errors was just too much, the result is something that looks more like schizophrenia.
It makes sense to me that cranking up the genetic dial labeled “schizophrenia” increases the risk-reward of cognition: either you will produce surprising insights and breakthroughs that greatly simplify your (and maybe others’) predictive models, or you will struggle to cope with the basic demands of life and lurch spasmodically from breakdown to breakdown. Or both.
The hypothesis, then, is that Albert Einstein had the right mix of schizophrenia and cognition-enhancing genes to use his irritation at model-errors effectively; Eduard got the irritation but not enough of the cognitive mechanisms to handle it.
"I have been saying for years that I think some of the genes for some mental illnesses must have compensatory benefits. Everyone else said that was dumb, they’re mostly selected against and decrease IQ."
One obvious answer to the "if a gene is bad, it can't be selected for" argument is that a gene can be good in a heterozygous genotype and bad in a homozygous genotype. Sickle Cell being the classic example. One copy of the gene gives malaria resistance; two copies gives you a deadly blood disease.
I don't know much about exactly how these GWAS studies are done. But when they calculate the raw correlations between genes and phenotypes are they able to separate out the heterozygous and homozygous occurrences of the genes?
It seems pretty important. For example, in theory, a gene's hugely positive heterozygous effects might exactly cancel out its hugely negative homozygous effects, so that its raw statistical correlation with the trait is essentially zero. (This is akin to the man with one foot in boiling water and one foot in freezing water, who is "on average" experiencing a comfortable temperature.)
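For what it's worth, the textbook toy model of heterozygote advantage shows how such an allele can persist indefinitely. A minimal simulation with made-up fitness values (not real sickle-cell numbers):

```python
# Balancing selection via heterozygote advantage (toy fitness values).
# AA = no protection, Aa = protected heterozygote, aa = disease.
w_AA, w_Aa, w_aa = 0.85, 1.00, 0.20
s, t = 1 - w_AA, 1 - w_aa

p = 0.01                                   # starting frequency of the "disease" allele a
for _ in range(500):
    q = 1 - p
    w_bar = p*p*w_aa + 2*p*q*w_Aa + q*q*w_AA
    p = (p*p*w_aa + p*q*w_Aa) / w_bar      # frequency of a after one generation of selection

print(f"simulated equilibrium frequency: {p:.3f}")
print(f"analytic equilibrium s/(s+t):    {s/(s+t):.3f}")   # allele persists despite harming homozygotes
```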
Sure. But that doesn't mean that any of the specific genes contributing to IQ can't have a different effect where the person has only one, instead of two, copies of the gene.
Sickle cell is protective against a disease with a very high fitness cost, whose prevalence increased relatively recently when forests got cleared, creating lots of standing water for mosquitoes to breed in. That's unusual. Most deleterious genes don't have those kinds of benefits (though there are genes causing kidney problems in Africans that also protect against sleeping sickness spread by tsetse flies).
That's actually my point. It is "unusual" for a gene that causes disease to persist in the gene pool without being selected out. And that is precisely why the persistence of a disease-causing gene implies that it must also have a separate countervailing positive effect within the overall gene pool.
Thus, if genes for mental illness have not been selected out of the gene pool by now, we can infer that those same genes must also carry some selection advantage. And the difference between heterozygous and homozygous gene expression is a clear mechanism to explain how the same gene can be alternately deleterious or beneficial.
The exact reason a gene is deleterious or beneficial in a given environment (e.g., cleared forests or whatever) is irrelevant. The prevalence of the sometimes-beneficial/sometimes-deleterious gene will just work itself out through the natural selection process.
This isn't one single gene of large effect being more common than one would expect via de novo mutations. These are lots of genes of small effect, which is explainable via mutational load. Natural selection keeps purging them, but more pop up.
Your theory is that mental illness genes are just random negative mutations that coincidentally correlate with higher IQ. That's possible. But that's the exact argument Scott disagrees with, and I think he is probably right.
Like I said in my original post, your hypothesis can be tested by separating out the effects of the genes in their homozygous and heterozygous expressions. My point is that someone should do this.
Mental illness genes don't correlate with higher IQ. The notable finding is that there are genes correlated with educational attainment but NOT IQ, and those tend to be correlated with mental illness.
I feel like describing this as "intelligence" vs educational attainment may be misleading, since it's actually measuring "IQ test scores" vs educational attainment, as I understand it. Not to get into the whole general IQ debate, but to the extent that IQ is useful as a large-scale proxy for intelligence, that becomes less meaningful when you are comparing it to another proxy for intelligence. Given the nature of IQ tests, that makes some of the correlations less surprising.
Re: Math PhDs and autism, I am assuming it was written as a joke, but when I read it I was initially like "sounds about right", and then I thought about all the math grad students and professors I know and realized that none of them seem actually autistic. There is definitely something atypical about most math people I know, but it seems varied and generally not autism.
Do people know stats on this? It might also be because I mainly see academics, and teaching feels like something that would select against autism, so maybe there are more autistic math PhDs out in industry?
I remember reading that yes, mathematicians do get positive autism tests at higher rates than normal population. But there was also discussion about the validity of the testing for certain subpopulations. In particular, when testing autism it's usually done with some autism spectrum index. The index is a construct: you check all boxes that apply and the doctor says "you have autism". Of course the idea is that the construct correlates well with Actual Autism, but the index is calibrated on the general population and not on highly selective subpopulations such as math PhDs. I think math PhDs are weird people in a lot of ways, and coincidentally some of those weirdnesses overlap with the general population autism spectrum index.
My anecdotal experience as a Physics grad student was that autism-like traits (including a couple of very obvious cases) were more common than gen pop, but that still translates into a low prevalence in absolute terms - it's not that surprising to find a department without any autistic people?
As a chemistry PhD I can say that being on the Autism SPECTRUM is certainly correlated with the math intensity of STEM faculty....
There is a theory that the evolutionary reason for suicide is that when a person kills themselves there are more resources, (like food) for their relatives. So depression might exist to cause suicide.
I have a theory that this is right, except that the gene for suicide is not in the suicidal person but in the mother of the suicidal person. That is, it is not a gene for being suicidal, but a gene for making your child suicidal. The child is affected while it is in the mother's womb.
The advantage of my theory over the older one is that the evolutionary drawback of the suicide is halved, since the person committing suicide carries only half of the mother's genes, while the evolutionary benefit of the suicide stays the same.
I made a comic about it: http://evolutions.thecomicseries.com/
We need more on verbal tilt + psychopathology + belief system, digit ratio + GNC + bone structures + Life History, or blood type + hyperopia + intelligence + SES.
Very nice post. A note on the schizophrenia result and false discovery. In figure 4 they are graphing 95% confidence intervals (roughly plus or minus 2 standard deviations), and if you look at the confidence intervals for schizophrenia they are very far apart. If you tripled the size of the confidence intervals (plus or minus 6 standard deviations) they still would not overlap. Assuming everything is normally distributed (which they have already done; their confidence intervals rely on this), we get a loose bound: an effect this extreme would arise from noise fewer than about 2 times in a billion tries. (This number should not be taken literally, because the normal approximation will not have this level of accuracy and there are other possible errors in the study, but it accurately expresses that seeing an effect of this size simply due to a false positive from running many comparisons is exceedingly unlikely.) In the paper they also list a P_{diff_fdr} < .001, which I think is supposed to be the P value of the schizophrenia result after taking into account the risk of false discovery. I include the simple and loose analysis above to show that, even if I am misinterpreting their P value, the visual of the confidence intervals alone makes false discovery very unlikely here.
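The back-of-the-envelope version of that bound, assuming normality, is just the two-sided tail probability of a standard normal beyond 6 standard deviations:

```python
# Chance of an effect at least 6 standard deviations out arising from noise alone.
from scipy.stats import norm
print(2 * norm.sf(6))   # ~1.97e-09: fewer than 2 in a billion
```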
There is a theory that schizophrenia is a side effect of human self domestication. Which of course means you'd need to buy into the self domestication hypothesis first but it would explain how we got schizophrenia.
Basically, that it's a side effect of selecting for self domestication. This presents the basic theory - https://www.psychologytoday.com/us/blog/the-imprinted-brain/201609/schizophrenics-hyper-domesticated-humans. Not sure I buy it but I found it interesting.
"... whether you so desperately seek societal approval that you're willing to throw away your entire twenties on a PhD with no job prospects at the end of it."
Ouch, man. That hurts.
The payoff is in gaining matches on Tinder. PhD? 80% more matches, for a guy.
I wish this stereotype that PhDs have no job prospects would die or at least get toned down a bit. There are plenty of scientific fields with good job prospects for PhDs outside as well as inside of academia (maybe not straight biology, though).
Science (or STEM more generally) is the exception, I think, and even there I don't think a PhD is helping you compared to a Masters, given how many extra years it takes
Maybe in computer science. In the sciences, master's degrees are usually worthless. Some companies may use one as a qualification for a lower-tier job than what you get with a PhD, and mayyybe there's potential for upward mobility, but I don't think it's that common.
Generalizing quite a bit:
S - Academia requires a Ph.D for any of the good jobs (tenure-track faculty or researcher at one of the non-university research institutes), but it produces at least five times as many Ph.D.s as there are such positions to fill. *If* your brand of Science has a strong industrial component (Chemistry yes, Astronomy no), then a Ph.D. is likely to get you a decent job.
T - Lots of industry jobs, but they're almost all open to anyone with an MS - if you put in the 3+ extra years for the doctorate, slight chance you get to be e.g. a Comp Sci professor, more likely you're just getting societal approval and/or personal fulfillment.
E- See T
M - There are jobs for which the Ph.D. opens the door, but aside from the occasional math-professor gig they're mostly for e.g. three-letter agencies or financial firms, which is probably not what you had in mind when you decided to become a mathematician.
Social Approval is a non-cog and verbal tilt trait, and de-coupled from intelligence.
Just came to the comments to see whether anyone reacted to this. Is this sentence a joke or does Scott really think that about getting a PhD in general? No job prospects at the end of PhD, what?
The lack of job prospects is certainly true for humanities PhD programs, which I was in until I had the good sense to drop out. Hence my “ouch”.
If functional mental disorders are primarily due to evolutionary mismatch, then there is no need to explain the benefits of "mental illness genes." While some people are genetically more prone to adult onset diabetes than are others, in the environment of evolutionary adaptation such diabetes was rare. We don't try to explain the benefits of diabetes genes. Likewise, while some people are more genetically prone to functional mental disorders than are others, in the environment of evolutionary adaptation it seems likely that such disorders were likewise rare.
Durkheim's research on the higher rates of suicide among Protestants than Catholics was the beginning of a long tradition of empirical research relating to the phenomenon of anomie, as opposed to cultural embeddedness with strong social roles and bonds, leading to adverse mental health outcomes. Here is a recent study (n = 8446) on how increased exposure to US culture increased suicide attempts among youth in the Dominican Republic,
"The increases in the propensity to attempt suicide for DR youths across these US cultural involvement indicators were both robust and large. For example, the propensity to attempt suicide ranged from 6.3% for those at the lowest end of the range of use of US electronic media and language to 13.3% for those at the highest end of the range of use of US electronic media and language. This central finding is congruent with the lower suicide or suicide attempt rates found for first-generation or less acculturated Latinos across multiple national and regional cohorts of Latinos."
Liah Greenfeld's "Mind, Modernity, and Madness" provides a neo-Durkheimian account that provides a coherent explanation for how increasing levels of anomie in modernity lead to increased rates of depression, bipolar, and schizophrenia. In traditional cultures, with humanity in the environment of evolutionary adaptation being the most "traditional" societies, there was neither need nor opportunity to construct a personal identity. A human being was one's roles. There was no "I" in the modern sense (cf. Julian Jaynes). One was unaware of the water in which one was swimming. Now we are all fish out of water, flopping around, gills desperately sucking in air in an attempt to maintain mental stability. For some of us it is easy, for others very difficult. The genetic material of fish works just fine in the water.
While Scott is not sympathetic to this explanation, I've never seen compelling evidence that shows that functional mental disorders were not due to evolutionary mismatch. Yes, whether or not functional mental disorders have increased over time and across cultures remains contested. Depending on one's priors, the burden of proof shifts.
But studies of suicide provide less contested evidence that culture is a major influence on suicide rates. As far as I know, all such comparative studies are consistent with greater anomie, greater burden on constructing one's own identity (as opposed to a relative lack of the need to create an identity in more traditional cultures) resulting in higher rates of suicide.
Human beings evolved over many millions of years in diverse physical environments. But with respect to social structure, until the dawn of agriculture and empire, almost all adolescents:
1. Lived in a small tribal community of a few dozen to a few hundred with few interactions with other tribal groups.
2. These tribes would have shared one language, one belief system, one set of norms, one morality, and more generally a social and cultural homogeneity that is unimaginable for us today.
3. They would have been immersed in a community with a full range of ages present, from child to elder.
4. From childhood they would have been engaged in the work of the community, typically hunting and gathering, with full adult responsibilities typically being associated with puberty.
5. Their mating and status competitions would have mostly been within their tribe or occasionally with nearby groups, most of which would have been highly similar to themselves.
Could the dramatic divergence from the environment of evolutionary adaptation in any or all of these socio-cultural features result in increased "mental illness" for a genetic subset of human populations?
https://flowidealism.medium.com/evolutionary-mismatch-as-a-causal-factor-in-adolescent-dysfunction-and-mental-illness-d235cc85584
" I've never seen compelling evidence that shows that functional mental disorders were not due to evolutionary mismatch."
It seems to me an unlikely general explanation, although it certainly is part of the explanation for, for example, the high level of depression in modern societies.
But it seems to me quite unlikely that sufficiently severe autism or schizophrenia either would not appear or would not be deleterious in the environment of evolutionary adaptation.
"Their genetic measure of non-cognitive skills... was still correlated at r = 0.31 with IQ"
Note that this is also a *genetic* correlation - the genetic influences of NonCog correlate at r = 0.31 with the genetic influences of IQ, not with actual measured IQ. The same is true for the Cog/NonCog relationship you mention with self-reported math ability and highest math class taken. (Also, assortative mating would inflate these correlations.)
Oh, that is a very good point. So these are basically "brain is broken, makes you both stupid and lazy/whatever" genes, I suppose.
Delay discounting is worse than alcoholism? I interpret this not as meaning that delay discounting is bad for success, but that high educational attainment is bad for success. Huge delay discounting = not going to college because college takes a long time (not obviously wrong); no delay discounting = being willing to go to college for 12 years to get a marginally nicer job (obviously wrong).
The way they use correlation assumes that all personality traits have linear effects on whatever they're measuring. If the function relating {number of genes "for" a behavior} to {educational attainment or IQ} has a tall U-shaped (or upside-down-U-shaped) curve, as some might, the correlation results will depend mainly on the outliers. For example, if a little conscientiousness helps you finish college, but a whole lot makes you such a perfectionist that you're likely to fail college, then the "correlation" between conscientiousness and finishing college isn't telling you how strong the effect of conscientiousness on finishing college is; it's telling you something about the skew of the U-shaped function.
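A quick simulated illustration of that point (hypothetical traits, made-up functional forms): a strong but U-shaped relationship can yield a near-zero Pearson correlation, and shifting the peak off-center makes the correlation mostly reflect the skew rather than the strength of the effect.

```python
# Non-monotonic relationships vs. Pearson correlation (simulated data).
import numpy as np

rng = np.random.default_rng(0)
trait = rng.normal(size=100_000)                      # e.g. a standardized personality trait
noise = rng.normal(scale=0.5, size=100_000)

symmetric_outcome = -(trait - 0.0)**2 + noise         # best at moderate levels of the trait
skewed_outcome    = -(trait - 1.0)**2 + noise         # same shape, peak shifted off-center

print(round(np.corrcoef(trait, symmetric_outcome)[0, 1], 3))   # ~0 despite a real, strong effect
print(round(np.corrcoef(trait, skewed_outcome)[0, 1], 3))      # clearly nonzero; reflects the skew
```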
For some reason, when people talk about genetics they like to forget that correlation is not causation.
Scott, you are usually vigilant for distrusting "correlation but we adjusted for confounders" studies. This is the same type of study!
What are "genes for intelligence"? In this context, they are genes that are correlated with intelligence. A gene that purely causes black skin (but does literally nothing else) would be counted as if it decreased IQ, because statistically people with black skin do worse on IQ tests.
There is no end to the number of possible confounders here; there is no exogenous source of randomness at all.
In general: I've often thought that a lot of people go to college and get phds just because they are familiar and comfortable with school and scared or unsure about entering the 'real world' (certainly true for me). I've often thought that a lot of the apparently high levels of mental weirdness in phd programs are largely related to this - who wants to effectively extend their childhood by staying in school, vs who is ready to 'grow up' and enter the real world.
FWIW, I know several people who got Ph.D.s in order to be able to pursue their favorite research. Of course, arguably, the desire to understand esoteric properties of nature is also more childish than the desire to settle down with a family and 2.5 children...
I have some experience living with people diagnosed as schizophrenics. I put it that way because I'm not sure how well understood schizophrenia is and how confident we can be in a clinical diagnosis.
Other than living with diagnosed schizophrenics, I know very little about the condition. My experience is that schizophrenics live in a fantasy world. They are unable to tell the difference between the real world of experience and the world inside their head. I know one in particular who can talk at length of his day to day life in Vietnam, when in fact he has never been there. He appears to be unable to distinguish between fantasy and reality.
On the other hand, my impression of highly successful scientists, engineers, mathematicians, linguists, etc... is they can build mental maps that enable them to navigate their special practices. It seems to me they comprehend very complicated systems, such as molecular biology or microelectronics using some type of mapping onto real world experiences.
What I mean to say is what schizophrenics and mathematicians, e.g., have in common is the ability to live in an alternate reality. I would say visualizing the complex folding of a protein molecule is not too far removed from imagining that you are in fact Sgt. Barry Sadler in 1969.
As a high-IQ person who withdrew from college for mental health reasons, this is really interesting personally
> you can't make a six digit number of people sit down and take IQ tests.
Are you kidding? This happens multiple times a year.
> you can't make a six digit number of people sit down and take IQ tests.
Actually: https://biobank.ndph.ox.ac.uk/ukb/field.cgi?id=20016
But in general, you're right.
It's amazing how dominant UK Biobank is in genetics right now (at least behavioural genetics). I was at the IGSS conference (https://cupc.colorado.edu/conferences/IGSS_2021/) and more than half the presentations were using that data.
+1 I'm sure there are better ways of measuring intelligence out there, but they take more than 45 minutes per subject and a scantron.
ICAR16. 15 minutes and done.
Overall, I'd be worried about confounds here. We have a very noisy measure of "genes for IQ" - noisy because GWAS is noisy, and because the IQ measure itself is noisy (just a quick 12 point test IIRC). Then we deduct that from "genes for educational attainment". What's left? Maybe "genes for non-cognitive skills". But maybe "genes for IQ, that we didn't measure very well". Indeed, "non-cognitive PGS" predicts IQ.... And then there are all the possible environmental confounds. I think I'd rather see a measure of non-cognitive skills and then a GWAS that targets that directly.
However, that is just a lazy first take, and I should stop shooting my mouth off about a coauthor's paper.
Counter proposition: EA genes are a superset of IQ genes, so social skills and creative skills would be part of this EA-IQ set. But then we would see Psychosis tilt (lack of autism), Extraversion, Agreeableness, Conscientiousness, Longevity, Feminization (older mothers, tied to GNC) being part of a bundle?
Not sure if you care, but autism and ADHD are not considered "mental illnesses".
If it's in the DSM, doesn't that mean it's a mental illness? ...
Therefore, getting your condition voted out of the DSM means that you were "cured"?
If schizophrenia genes increase education, that would also be fitness reducing.
There's a lot of inference to worry about here, but I'm already stuck on this, from the paper: "By construction, NonCog genetic variance was independent of Cog genetic variance (r_g = 0)." What sense of "independence" follows from zero covariance? That's pretty clearly going to create some weird conditional relationships between their SNPs' imputed Cog and NonCog scores to maintain zero correlation. It seems they recognize this in the supplement but they don't really bother to interpret it.
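Not the paper's actual genomic SEM machinery, but the generic statistical point behind the worry is easy to show: zero covariance is a much weaker condition than independence.

```python
# Zero correlation does not imply independence (generic illustration, not the paper's model):
# here y is a deterministic function of x, yet their Pearson correlation is ~0.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)
y = x**2                                    # completely determined by x
print(round(np.corrcoef(x, y)[0, 1], 4))    # ~0.00: "uncorrelated by construction" yet fully dependent
```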
I ask this because I follow Charles Murray on twitter in order to argue with his position that there is a race-iq causation - has anyone dared to see if there's a variance of the cognitive genes vs ethnicity?
Seems like this database would provide quite strong empirical evidence on the unfortunate topic
There have already been something like 3 admixture analyses done... even controlling for skin color and income you get the same result: more European ancestry, better results, and vice versa. Truly horrifying topic to be honest, and I was just really disturbed getting into it myself. Hoping to God there's a way out of this mess.
Interesting, does that apply relative to Eurasian and/or East Asian ancestry as well? It wouldn't surprise me if there was something we don't know about the correlations that gets figured out eventually. 'My' theory (Thomas Sowell's theory) has been that historical access to the large east-west landmass of Eurasia and things like density of ports, fertile plains and navigable waterways is the main proximate cause for culturally driven differential IQ results
I understand Murray does not believe that a genetic effect would mean anything about the value of an individual and certainly not groups but I fear the social effect of a widespread belief, whether reality based or not, of there being population genetic differences. The comments his tweets on the subject get suggest he isn't right to think there's nothing to be concerned about
Am I reading the chart correctly to note that SNPs associated with higher cognition are *negatively correlated* with conscientiousness (and extraversion, and agreeableness)? That's absolutely fascinating given that educational attainment (and a whole bunch of other traits associated with success, like wealth and income) are so strongly associated with conscientiousness, and suggests at least to me that most of the difference between educational attainment-promoting and cognition-promoting genes should logically have something to do with the trait.
It also seems like a fascinating counterexample to the idea you see brought up often in population genetics that "all good things are correlated". We see a discussion of a g factor, which includes all of cognition, often, and it seems frequently mooted that g itself is part of h, a general health factor. (Also, while I would certainly associate high-cognition-but-low-educational-attainment individuals with low conscientiousness, it boggles the mind to suggest that low conscientiousness, by itself, is associated with higher cognition. Why should that be true? Where's the tradeoff?)
Assuming I am reading the graph correctly, and blue is just 'high cognition', and not something like 'SNPs promoting high cognition but not educational attainment'. I could go back and try to read the chart more carefully, or read the article the chart comes from, but I think I'm too low conscientiousness to go and do so. One read-through is enough.
I've never been tested for ADHD and I don't know if I'll ever bother as my country has a dismal record of treating it, but I strongly suspect I have it. In any case I am extremely hyperactive and have been for a long time.
My life was basically total chaos up to a point. I finished university with a very mediocre result (around 2.4 I think). I aced courses that were interesting to me, and I failed Statistics 101 TWO times until I got angry the third time and got a B+.
Then I was lucky enough to find a technique that works for me - a sort of a Zen meditation where you just sit without moving for a long time. It doesn't remove my ADHD symptoms, but it calms me down a lot and allows me to focus and work.
I am a programmer now and I think there are some advantages, but only because I am treating it in some way - otherwise it would just be a total burden.
The advantages:
- I am better than others at scanning a large amount of code in search of some obscure problem. In a more general sense, I am just better at scanning: if I have some list in front of me and I need to find something, I do it faster than others.
- When something gets interesting to me, I feel like my mind gets totally obsessed, and I actually get angry if someone distracts me (I have shouted at people for that). This might sound bad, but it allows me to do fast analysis of a large amount of data.
Disadvantages:
- I have a problem when I need to slow down and focus on one thing. I just cannot motivate myself to study, I have tried for years. The only way I learn new information is when I actively write code, because it is interesting and not boring.
- If I start procrastinating even a little it is very possible that my whole day will be spent in Reddit and YouTube - basically when my brain gets interested in some bullshit many hours can pass before I can stop myself.
- I work well when my work is interesting and badly when it is not. For example, it is interesting to write some new feature, less interesting to make a strategy to test it, so that might take much more time than it has to.
Hope this was interesting if not useful.
Resonates with me, but... it's kind of worrying? I have the strengths you mentioned, plus the weaknesses, and I was hoping there'd be a way to move beyond just leaning into my strengths. I suppose I could marry a lady who has the opposite of ADD, whatever that is... haha
Oh man it really just kind of is this way. Keep searching for productive interesting things to work on that have a lot of payback
Can someone explain what "PGS analysis" means in the second graph? Maybe this is a dumb question but a Google search didn't help me at all.
PolyGenic Score. It is your predicted phenotype from your genotype according to a genome-wide association study. Because the PGS over the whole sample has an R^2 of something like 0.05 to 0.3, it is a very noisy predictor of phenotype.
Sometimes people use PGRS, where the ‘R’ is for risk, because GWAS was first used for diseases.
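For anyone wanting the mechanics: a polygenic score is just a weighted sum of allele counts, with the weights taken from GWAS effect-size estimates. A minimal sketch with made-up numbers:

```python
# Minimal polygenic score computation (toy effect sizes and genotypes).
import numpy as np

gwas_betas    = np.array([0.02, -0.01, 0.03, 0.005])   # per-allele effect estimates (hypothetical)
allele_counts = np.array([2, 0, 1, 1])                  # this person's counts (0, 1, or 2) at those SNPs

pgs = float(np.dot(gwas_betas, allele_counts))
print(pgs)   # 0.075 here; across a sample such scores explain only a modest share of variance
```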
Hypothesis: schizophrenics have a better connection to the supernatural, which is actually real. They're the most misunderstood mentally ill people, except maybe cluster B folks. I'm sorry to admit this hypothesis is more than a little bit inspired by stereotypes about chosen people.
There's a history of schizophrenia in my family. There's also a history of genius, creativity and outrageous financial success. The money is nice. But I miss my brother.
There is now genetic and epidemiological evidence that autism and mental illness genes overlap (especially with ADHD and Parkinson's). But this is not being communicated to the public. How long has this been covered up? I can't believe that in almost 100 years of autism research this has not been noticed or investigated until now.
Kirkegaard: "Told you about Verbal Tilt yo!" (/s)
But notice how verbal vs math is similar to non-cog vs cog or verbal tilt vs IQ, that could be elaborated upon?