I'm not convinced that "just world fallacy" is a genuine cognitive bias. While people may sometimes overestimate the justness of the world, I don't think it's a systematic error; they are just as likely (or more likely) to underestimate the justness of the world.
When people get what they want they certainly have a tendency to overestimate the justice of the procedure which produced that outcome. But when people _don't_ get what they want, they tend to underestimate the justice in that procedure. And since people tend to spend more time thinking about the things that they don't have than rejoicing in the things that they do, I think that people tend, if anything, to underestimate rather than overestimate the justice of the world.
I'd agree that the just world fallacy doesn't really seem like a cognitive bias in the normal sense - but it does lead to mistakes in a predictable direction, like blaming the victim of a crime, even if not always in the same direction. Similarly, anchoring bias can make estimates wrong in either direction - but in a predictable way. The reason I don't think it's the same as most biases is that it's a mistaken assumption about the world with consequences, not a bias in estimation that might be due to neural systems using approximations and shortcuts.
When I think of the just world fallacy I'm thinking of something different. You have a tall, smart, good-looking guy who grew up in an affluent, well-adjusted family. He married a lovely woman and had 2 well-adjusted children. He ended up working for a startup and made $20 million when he was 35, but continued to work because he enjoyed the challenge, even though he no longer had any financial worries. All four grandparents were still alive and healthy in their 90s etc. etc. etc.
Many people would desperately like to believe that such people don't exist. Surely there is some deep dark secret lurking somewhere. But, in many cases these people not only exist but there is no deep dark secret either. People really don't like that. They desperately want to believe that life is fair and if things have gone well for someone it only means tragedy is about to strike.
This is just a special case of the gambler's fallacy and doesn't need a new name. People believe streaks of good luck have to be balanced by streaks of bad luck. We don't accept that randomness is actually random unless it's also uniform.
I don't think that is quite right. I don't think anyone applies the gambler's fallacy to be like: well, rich people were lucky to be born in a great situation, so we should expect them to be less lucky in their life. We seem to only apply that fallacy when we see a run of similar-type events. Yet it absolutely applies to the case described above.
It's what I call the RPG character generation fallacy. Lots of people didn't like the part where, in Dungeons and Dragons, you could roll crappy scores for all your attributes. So most post-1980s RPGs (and later editions of D&D as an option) used a point system for buying attributes and abilities. Everybody gets the same number of points, so if a character is better at one thing, they must be worse at something else.
This is fair. All PCs are created equal. If we live in a just world, the same principle must apply.
"Many people would desperately like to believe that such people don't exist. Surely there is some deep dark secret lurking somewhere. But, in many cases these people not only exist but there is no deep dark secret either. People really don't like that. They desperately want to believe that life is fair and if things have gone well for someone it only means tragedy is about to strike."
One of the big flaws of some woo woo personality typologies is that they presume people are by default broken. Not the case.
I suspect you're over-focusing on perceptions of justice in situations where you are directly helped/harmed. These are likely to be skewed by a separate bias towards overestimating your own desert.
What about perceptions of justice in situations where you are a neutral bystander?
I think one plausible mechanism for systematic just-world bias is generalization from fictional evidence. Just outcomes are vastly overrepresented in popular fiction. (Although I also think it's possible I am mixing up cause and effect here.)
I think of just-world as being about people getting their deserts. Not "if you're tall, handsome, healthy, rich, and smart, you'll have offsetting flaws," but rather "if you're successful and have a great life, you must have done something to deserve it, likewise if you're unsuccessful and miserable."
IIRC, there's research showing that humans are, statistically speaking, biased towards unrealistic optimism, and the best accuracy on this dimension is seen in those who are mildly depressed.
But the just world idea and optimism are kinda orthogonal. Just world fallacy is more often applied retrospectively (that happened so it must be just). I mean it's usually easier to level down than up, and general optimism about the future is often about the pie getting bigger or you personally getting a bigger slice. I think you would need separate research on the connection between optimism and the just world fallacy.
In particular, depressed people were found to be better at judging to what extent they have control over a partly-random situation; normals are overly optimistic by comparison. Martin Seligman at UPenn did some work along those lines and cites more of it in his book "Learned Optimism". So yes, to the extent seeing things accurately is good, a little depression could have advantages.
I've often thought of depression as a sort of hibernation adaptation; you certainly don't waste many calories on unnecessary activities. But I don't know if there's any support for this idea.
Crespi and Badcock are charlatans with no experience in the relevant fields, who cobbled together some things that sounded intuitively plausible at first blush and defended them despite overwhelming evidence to the contrary from just about everything adjacent to the relevant neurotypes. The sole reason their pseudoscience lived so long is that in 2011 the Wikipedia article for the hypothesis was written as a glowing endorsement by a supporter (violating a whole pile of site policies and guidelines with it) and stayed more or less the same for a decade, being read by about as many people as you'd expect a pop-psych summary on the fifth most visited website to be read.
I've rewritten the Wikipedia article from scratch, if you want the full form :P Quick overview:
>Autism and schizotypy consistently have similarities in the aspects where the imprinted brain hypothesis predicts they should be different; it's based around the assumption they should have radically different empathy and mentalizing deficits ('deficits' to use the word of the proponents, not necessarily the word I would personally use), when in fact they consistently show similar (e.g. specific deficits in cognitive empathy with preserved affective empathy)
>The surrounding evidence involving genetic disorders, and especially imprinting disorders, is *wildly* misinterpreted by C&B to the point of outright lying -- their claims about Prader-Willi syndrome and Angelman syndrome are almost admirable in terms of how bald-faced they are, in that while they are correct that those disorders are sometimes caused by overimprinting and have notably different behavioural phenotypes, they *brush over* (to say the least) the way those phenotypes line up with their autism/schizotypy claims, while their claims about non-imprinting genetic disorders are as a whole similarly questionable
>The comorbidities (to use a word I might not, again) just don't even begin to support the ideas rolling around here -- while there's reasonable room and all to discuss whether everyone diagnosed as autistic is, about 8-10% of autistic adults are full blown SZ, and the point where "schizo-autism" becomes a reasonable neurotype descriptor comes much earlier in the latter spectrum; the counter-claims of "actually, these diagnoses are spurious because..." tend to rely on things we now know not to be true (e.g. that autism and schizophrenia have opposite IQ profiles) and ultimately look like moving the goalposts
In the vein of "interesting things about autism that cause it to not be a failure", besides the obvious engineer and intelligence cases, is one about inability to distinguish fantasy from reality! Speaking from experience here as mildly autistic, I often have very, very strong emotional reactions to fiction in a way that is similar to if those events happened in real life, and this is apparently quite common for aspies. This offers the interesting benefit that if a fictional story tries to relay an important message or lesson or moral, I'm unable to simply disregard it as "just a story", because it feels as emotionally important to me as if it happened irl. Thus, this aspie-specific ability allows us to not just learn things, but truly experience things in ways that otherwise would only be possible through actual lived experiences. Of course, this comes with the tradeoff that you may learn the wrong lessons from fiction, but it's an interesting power nonetheless.
Yes, but what are the effects of this in a world in which most people have far more "experiences" through movies and television than they do with actual people? And particularly experiences of the emotional variety? And particularly if those narratives are increasingly guided not by independent artists but by corporations, inevitably eager to reinforce messages that help them and the ruling class?
> I often have very, very strong emotional reactions to fiction in a way that is similar to if those events happened in real life, and this is apparently quite common for aspies. This offers the interesting benefit that if a fictional story tries to relay an important message or lesson or moral, I'm unable to simply disregard it as "just a story", because it feels as emotionally important to me as if it happened irl.
Seems like it's worth considering the possibility that normal people react to fiction just as strongly as autistic people do, but have even stronger reactions to real life.
Some autistic people exhibit a tendency toward deeper immersion in fantasy in a way that normal people don't experience in either fantasy or reality. I'm reminded of a TED talk from an autistic girl who was describing the lucidity of her fantasies, to the point that she felt she needed to scream.
An inability to distinguish fantasy from reality would be more related to imaginative resistance: the cause of people demanding censorship of violent videogames and being unable to tolerate deviations from expected norms - people who can't accept 2B being a combat android that looks like and dresses like a fashion model, or those who demand excessive realism in their fiction even if it's a science fiction fantasy world.
This is speculation and probably not original, but it might be interesting to consider how e.g. autism fares now compared to in the past. Your mildly autistic Google engineer is probably doing quite well for themselves now, but might not have had the same opportunities before engineering was a thing. So certain genes that evolution was in the process of weeding out might no longer be selected against, or selected against as heavily, as societal incentives change, which also has implications as to which genes are considered failures by your previous definition.
That nerds have been evolutionarily useful at least since the beginning of agriculture was a thesis of my 1998 essay on "Nerdishness: The Unexplored Cornerstone of the Modern World:"
"As Mike Waller points out, cave-nerds probably made the stone axes for early cave-Big Men to hunt with. I suspect that nerdishness has been symbiotically related to the prosperity of communities. (Howard Bloom makes a similar point.) In nomadic hunter-gatherer tribes, nerds' object-orientation would not be very useful since objects tend to be heavy to carry. Similarly, in tribes that need just about every man to hunt, nerds' ineptness at making correct split-second decisions would tend to get them eaten by wild beasts, or at least shunned by women who want men who bring home meat. On the other hand, sedentary communities that have been able to free some men up from food provisioning or war-making, make greater degrees of specialization possible, allowing nerds to flourish as craftsmen. In turn, these nerdy technologists make the tools that allow even more men to stop hunting and farming and turn to nerd-work. Thus begins a virtuous cycle of economic growth."
> So certain genes that evolution was in the process of weeding out might no longer be selected against, or selected against as heavily, as societal incentives change, which also has implications as to which genes are considered failures by your previous definition.
Indeed. I had a similar reaction when I read the phrase "evolution has been trying hard to get rid of them". It's the *environment* (material and social) that does the selection, and environments vary.
Probably as quants in hedge funds, which is the stereotypical job that scoops up math graduates, then physics graduates, and only when it exhausts both pools does it start looking for people with a degree in finance.
See Scott's old article speculating about selection for intelligence-boosting genes tied to genetic diseases in Ashkenazi Jews compared to the rest of the European population?
The average google engineer, for whatever pleasures his life may afford him, is not particularly fertile, and his children are presumably not dramatically more likely to survive than those of others. He is from an evolutionary point of view doing quite poorly.
I've searched my family tree for others with ASD and thought a lot on the implications. If you think back to small town / farming life, the requirements to make the rapid, complex social adjustments that we are bad at are lower. Social relationships are simpler and slower changing. You are thought of more as just "Joe, John and Mary's son" and less as a generic person. My experience is that this makes our social deficiencies less important. Being a steady, detail-oriented person who pays too much attention to the straightness of their plow furrows is not a bad trade off...
As a statistical claim it doesn't need the caveat, as the vast majority of births are in the Northern Hemisphere, enough to drown out the effect of the much rarer Southern Hemisphere births.
It doesn't *need* the caveat, but failing to add a variance-reducing parameter to your model when you know what the variance-reducing parameter is and could easily add it, is silly.
Now I'm confused about "it means you were in an especially vulnerable developmental stage during flu season" - if someone was conceived in March, isn't it not flu season during nearly all the time they are in utero? Or is specifically birth and right after birth the critical period?
"Children conceived in December (OR = 1.09 [95% CI = 1.02 – 1.17]), January (1.08 [1.00 –1.17]), February (1.12 [1.04– 1.20]), or March (1.16 [1.08 – 1.24]) had higher risk of developing autism compared with those conceived in July. Conception in the winter season (December, January, and February) was associated with a 6% (OR = 1.06, 95% CI = 1.02 – 1.10) increased risk compared with summer."
So it's really any winter month (where apparently winter includes March). The authors don't confidently attribute the results to flu in particular:
"Time of conception can provide clues about environmental factors that could be associated with autism. Environmental agents that predominate in California during December – March include virus infections and agricultural applications of certain pesticides."
Maybe it is neither a failure nor a tradeoff; maybe it is an evolutionarily beneficial trait of evolution itself (let's call this meta-evolution).
Let's imagine that you're a drug researcher trying to determine the best dosage of a particular drug. You'd probably run an AB test with a range of different dosages and pick the one that makes the best tradeoff in terms of desired effect vs undesirable side-effects. But what happens if you expect environmental changes to alter the sensitivity to this drug over time and you want to keep your drug performing optimally? One thing that you could do is give a narrow spectrum of different dosages to different patients and continuously monitor differences in outcomes to tune the mean dosage that you use as your reference. You trade off optimal performance now against adaptability to future changes.
It's entirely possible that evolvability is itself an important evolutionary trait and what you're seeing here is the result of Mother Nature's (extremely unethical) AB testing framework. You see a spectrum of different levels of psychiatric disorder because without such a spectrum, evolution becomes unresponsive to changing environmental conditions and we are highly evolved in favor of being adaptive.
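A minimal toy simulation of that tradeoff, assuming a single numeric trait, truncation selection, and a fitness peak that may drift each generation (all names and numbers here are made up for illustration, not taken from any real model of evolution or from the post):

```python
import random
import statistics

def simulate(trait_sd, drift_per_gen=0.0, generations=200, pop_size=200):
    """Track how closely a population follows a (possibly moving) optimum.

    trait_sd controls within-population diversity: offspring trait =
    parent trait + Gaussian noise with this spread. Returns the mean
    squared distance of the population from the optimum over the run
    (lower = better adapted on average).
    """
    optimum = 0.0
    population = [random.gauss(0.0, 1.0) for _ in range(pop_size)]
    errors = []
    for _ in range(generations):
        optimum += drift_per_gen  # the environment's fitness peak moves
        # truncation selection: the half closest to the optimum survives
        survivors = sorted(population, key=lambda t: abs(t - optimum))[: pop_size // 2]
        # each survivor leaves two offspring; diversity is set by trait_sd
        population = [random.gauss(parent, trait_sd)
                      for parent in survivors for _ in range(2)]
        errors.append(statistics.mean((t - optimum) ** 2 for t in population))
    return statistics.mean(errors)

if __name__ == "__main__":
    random.seed(0)
    for drift in (0.0, 0.2):
        low = simulate(trait_sd=0.05, drift_per_gen=drift)
        high = simulate(trait_sd=0.5, drift_per_gen=drift)
        print(f"drift {drift}: low-diversity error {low:.2f}, high-diversity error {high:.2f}")
```

With a static optimum the low-diversity population wins (it sits tighter on the peak); once the peak drifts faster than the low-diversity population can chase it, the high-diversity population does better - which is the adaptability-vs-current-optimality tradeoff being described.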
The problem is that evolution doesn't have foresight. All that matters is the reproductive ability of an organism right now, not what might be better for the population in the future. If there is one optimal solution, the population will converge on that, even if it means they'll be less fit in some future scenario.
For evolution to happen at all, you need population diversity. Without diversity, you have bananas, which have no ability to adapt at all.
This matters little if the environment is stable, since you will be converging on the same equilibrium, just at a different rate. If the environment is unstable, lack of population diversity becomes an evolutionary liability, because your entire population can be wiped out by a single fungus.
What I am saying here is that population diversity is itself an evolutionary trait, that will be heavily selected for when the environment is not static.
Do you really think that the human environment (particularly the social environment in which mental illness is most relevant) has been static over the latest period in our evolutionary history?
Mental illness may be a byproduct of an evolved trait that gives population diversity to allow us to adapt better to our constantly changing environment. It is selected for because better adaptability improves population survivability.
Is this comment in any way making a claim different from 'mutation rate is itself a trait subject to evolutionary pressures'? You are right that it is probably deleterious for most species for mutation rates to fall below a certain rate (for example, most bacteria have a common mutation rate, but some -- Paramecium being Wikipedia's example -- have anomalously low mutation rates), but that doesn't suggest that specific deleterious mutations should be common. For some of these traits, as Scott discusses, it might be the case that many different specific errors have predictable negative consequences.
Mutation rate is one factor but likely not the most important. We have, after all, evolved a dual chromosome structure with different characteristics of dominant and recessive traits. We have evolved to reproduce sexually, with all of the implications of that for how traits are transmitted between generations.
If there are two ways that a particular evolutionary beneficial characteristic could potentially manifest, one as a dominant trait, the other as a recessive trait, what ends up happening and why?
Suppose you have a gene that raises your mutation rate: what happens? Generally, you suck slightly more than usual, so that gene goes away.
From a population point of view, a non-zero mutation rate is ultimately preferable, but for any organism involved, it isn’t, so within a population there would be selection for as low a mutation rate as possible, even if that was a poor longterm strategy.
> For evolution to happen at all, you need population diversity. Without diversity, you have bananas, which have no ability to adapt at all.
I see no reason why bananas can't adapt. All you need for that is for your offspring to be different from you in some way.
> If the environment is unstable, lack of population diversity becomes an evolutionary liability, because your entire population can be wiped out by a single fungus.
There is no such thing as an evolutionary liability. If some trait makes you susceptible to being wiped out by a fungus, but means you have 1% more offspring, that gene will reach fixation.
> What I am saying here is that population diversity is itself an evolutionary trait, that will be heavily selected for when the environment is not static.
This seems to be the crux of your argument, that genetic variation itself can be selected for, which is the biggest part I disagree with, at least for the cases discussed here.
The messy, complex process of sexual reproduction and the sensitivity of the organism to the environment would appear to be the original means by which diversity is selected for and basically guaranteed.
But in many cases, particularly of rapid change, this may be insufficient to keep the group alive. That's where altruism comes in - a force towards treating others with various degrees of relatedness as kin. Thus increasing diversity, and sometimes survival.
As a force - not against individual fitness as I wrongly stated earlier - but working with it, this seems to me compelling.
That's true, except that part of the optimal solution is likely to have some degree of randomness, since it allows past organisms to have gotten past some local maxima into greater nearby maxima.
You're assuming a model of a static or temporally piecewise static environment where an organism gets to a "local maximum" (by which I guess you mean perfect adaptation to the environment) and then evolves away its randomness. As far as I'm aware, organisms that do this tend to die out in mass extinctions and are thereby selected against in the evolutionary race. I'm not a big expert on human prehistory, but this static environment model also doesn't really sound to me very much like the situation of the human race over the past 100k years or so.
For an organism that is in a constantly changing environment and nowhere near either a local or global evolutionary maximum at any point in its existence, the speed at which the organism can evolve is going to be a big evolutionary deal.
It may be true that species with less randomness die out more easily. I think this is what you mean by meta-evolution? But the pressure within each species is still going to be towards some maximum fitness. The only thing evolution selects for is your ability to have more kids (and for them to have more kids, etc).
I think part of the reason that randomness isn't selected for is that more randomness is bad. Thus, adding more of it is almost always going to decrease you or your kids' fitness. It's possible to propose a scenario where most random changes are good (e.g. you're somehow at a local minimum), in which case it's very possible that randomness will be selected for. But in most actual situations, more randomness just leads to more cancer. It isn't necessary for the environment to be static, just that wherever you happen to be, most randomness leads to less reproduction.
Of course, you then have to answer the question of why some species seem to have more randomness than others, given that we all started from the same place. It could be that it's a result of some other tradeoff. Or maybe there is some fitness benefit to it. But I don't think you can just say it's a result of group selection.
You're still talking in static terms. In evolutionary competition, the winner will be the organism closer to maximum fitness. If the maximum fitness is moving, how close you can get will depend on how fast you can move. Organisms that can evolve faster will end up with advantages over those that evolve slower.
When humans migrated from Africa to cold Northern climes, there was an evolutionary pressure towards lighter skin. However, there was also an evolutionary pressure towards higher skin-tone variance, because without that, there is no lighter skin. Quite possibly, there is an equilibrium condition that will then select against that variance once human beings are in a stable environment.
But here, you're kind of saying, "we'd expect to see cognitive variance diminishing after we hit a stable environment". I'm asking, "when did that happen?"
Evolution does not need foresight for this to be true. The speed at which an organism is capable of evolving is an important evolutionary trait right now. Organisms which have been subjected to rapid environmental changes in their past will have selected for those which can evolve quickly vs those which cannot evolve and die out as a result.
That's quite a, uh, controversial statement. Short of some examples of kin selection, it's generally *not* the case that individual maladaptive genes reach fixation.
There are some particular kind of super-long-term processes in which you can have group selection, which are analogous to the distinction between selection of genes versus the selection of individuals (because it is entirely possible for a maladaptive gene to reach fixation, assuming it is maladaptive only for the individual, and not for the gene itself).
For instance, imagine for a moment a world in which a larger size is always more adaptive for the individual, but larger size always eventually results in the extinction of the species as a whole. Over extremely long time frames, you will observe a form of group selection, in which the more predisposed a group is to grow larger, the faster and more reliably that group goes extinct, gradually selecting for groups that are not predisposed to grow larger.
(Not taking a stance on the broader topic involved here, just noting that group selection isn't entirely incorrect, just mostly incorrect.)
I don't have any particular specialization in this and I have zero interest in dying on this hill, but you're talking like no serious work has been and is being done on things like multilevel selection and cultural group selection (always amusing to me that folks that react scoffingly to group selection will suddenly change their tune when you add the magical word "cultural" to it).
I just piped up because these overly clean sanitized versions of evolutionary processes always strike me as off. To be that jackass: "There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy."
Was this the best angle of attack on that? Dunno. But we know it happened, and it along with epigenetics undermines convention by introducing all sorts of mushy questions and facts into their precious and generally oversimplified "hard" models.
"Evolvability" is actually known to be strongly selected for in nature, which is why almost all complex organisms reproduce sexually, and why sex is so common in life in general.
The idea of AB testing like this is actually a reproductive strategy, but humans don't engage in it - it is something more frequently seen in creatures that produce vast quantities of offspring, as that allows the better ones to be selected for. Humans invest too much energy into their offspring, which makes this sort of thing deleterious for them.
Isn’t that kind of what Scott is saying here? Some parts of the spectrum are mostly trade off, and adaptive (in certain environments). Other parts are mostly failures, and maladaptive in essentially all environments.
If anything, both groups seem to mistake noise for signal. An autistic person hears a faucet dripping or feels a tag on a shirt, both of which are noise, and mistakes it for signal and gets distracted. A schizophrenic person sees a person walking on the street behind them, or hears someone talking on the radio, both of which are noise, and mistakes it for signal and thinks that someone is trying to kill them.
Your English is correct. I think the examples of autistic people mistaking signal for noise would be like the case where he mentioned that having a different shading on a face could make someone not see it as a face. Alternatively, someone who's schizophrenic might hear the wind through the trees and think they hear voices.
But, if Autie mistakes 2 faces of the same person as being different faces just because one had a shadow, I'd say that's because he took something that should be irrelevant (noise: the shading) and tried too hard to make it fit in his model (he treated it as signal). And if as you say my English is correct, that would be an example of Autie mistaking noise for signal... ?
Yeah, that sounds like mistaking noise for signal. I'm not sure if that's a common mistake autistic people make, but I don't really know enough to comment on how true Scott's characterization really is.
Let's phrase it differently: "matching with a too-sharp pattern" and "matching with a too-wide pattern." If you are matching with a pattern that is too sharp, you will perceive some input that is signal, ie. should activate the pattern, but gets read as meaningless - noise. Conversely, with a pattern that is too wide, you will perceive input that is noise, ie. should not activate the pattern, but read it as falling into the pattern - signal. Admittedly this phrasing is very confusing.
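To make the too-sharp/too-wide distinction concrete, here is a minimal signal-detection sketch, assuming a made-up threshold detector with arbitrary numbers (nothing here comes from the post): a threshold set too high misses real signals (signal read as noise), and one set too low fires on pure noise (noise read as signal).

```python
import random

def run_trial(threshold, n=10_000, signal_strength=1.0):
    """Count the two error types for a simple 'pattern present?' detector."""
    rng = random.Random(1)
    misses = false_alarms = 0
    for _ in range(n):
        has_signal = rng.random() < 0.5
        # observation = background noise, plus the signal when it's really there
        observation = rng.gauss(0.0, 1.0) + (signal_strength if has_signal else 0.0)
        flagged = observation > threshold   # "the pattern matched"
        if has_signal and not flagged:
            misses += 1        # too-sharp failure: real signal read as noise
        if not has_signal and flagged:
            false_alarms += 1  # too-wide failure: noise read as signal
    return misses / n, false_alarms / n

if __name__ == "__main__":
    for threshold in (2.0, 0.5, -1.0):  # too sharp, middling, too wide
        m, fa = run_trial(threshold)
        print(f"threshold {threshold:+.1f}: missed signals {m:.1%}, false alarms {fa:.1%}")
```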
Oh, I wasn't seeing it that way. This probably explains the misunderstanding!
With autists, we are in the case of matching with a too-sharp pattern. You describe such a case as: "you will perceive some input that is signal, ie. should activate the pattern, but gets read as meaningless - noise". I see that. Now, the way I see it:
It's not that, because it doesn't activate the pattern, it gets read as meaningless, ignorable noise. On the contrary! Since it doesn't activate the pattern, it draws attention. Those deviations from your predictions register as useful information that needs to be explained (signal), as opposed to irrelevant deviations that don't matter (noise). Then, after paying attention, they probably can tell what's going on, but they can't help getting bombarded with the feeling that every little deviation is signal worth checking out.
I know almost nothing about schizophrenics, but from what I get from Scott's description of the model, the situation you describe would only happen if the schizophrenic was already expecting something bad, or if the situation in context is, at first order, reasonably scary for anyone. In which case his failure at that moment would have been to ignore the detailed info and go with his prior, while a normal person would have had her attention drawn and quickly learned what was really going on. But it's not as if a normal person (with the same prior) in that situation would have just treated it as noise and ignored it.
Stereotypically at least, autistics overreact to real signals - the tag on their shirt, an irritating sound, a face with a new shadow. These things aren’t “noise”, just “irrelevant signals”. They tend to the over-literal. But this might make them good at tasks that require literal thinking and attention to minor details and subtle changes. Schizophrenics really do seem to see signals in noise, tending toward the over-abstract. But this might make them good at tasks requiring creativity.
So sort of opposites, but I agree that simply flipping the noise/signal error is a bit too pat.
Eh, we’re arguing semantics for a metaphor. When you are talking about “signal to noise” ratio, noise does not usually refer to “other valid signals you want to ignore” but errors degrading the signal and random/pseudorandom background stuff, like static. A radio with a bad signal to noise ratio would always play static. A schizophrenic (in this metaphor, not necessarily in reality) would hear music in the static that wasn’t there. An autistic would be hearing all the channels at once, unable to tune into the channel they really want to.
Well, this is my main point of contention with the article. I think noise/signal is the wrong idea here.
The way I see it, the examples (schizophrenics connecting everything to everything and concluding there is a major conspiracy against them, autistics focusing on minutiae of shadows on a face and being unable to recognize their friend) show how the two modes focus on different resolutions of world view.
If my resolution is too low (schizophrenia) everything is an overlapping blur, everything is connected via cosmic conspiracy and tries to track me via satellite and CIA operatives.
If my resolution is too high (autism) I cannot recognize my friend as the intricate detail of shadows in my high res picture doesn't match the picture I have stored of him.
Maybe it is just semantics, but it does not feel that way to me. In particular, I think the comments up the thread seem to match my issue with the piece (though worded differently).
I think you did a disservice here with the explanation of tradeoffs involved in autism. For instance, I know some autistic people (particularly autistic AFAB people) who are definitely autistic, and definitely very able to function in society, but also are very creative and good at gestalt thinking. There is a tradeoff involved, but simplifying it to "people person" versus "average person" versus "stereotypical aspie engineer" reduces that tradeoff to the point of meaninglessness, in addition to making it seem more like a binary (trinary?).
In addition, what role does the "autism spectrum" play in this explanation? Does it match with the triangle up and right from the "stereotypical aspie engineer" dot?
I wanted to add on to this that the post was quite informative! It definitely helped install some gears for me of why there's a seemingly equally-good tradeoff between autism and allism, while there are also autistic people who need significant help to function in modern society.
A prime feature of autism is rigidity and repetitiveness, which is the opposite of creativity. Not to mention avoidance of the kinds of feelings and experiences that typically generate intense creativity.
Not saying it's impossible, but more than likely they were ADHD (or perhaps co-occurring). The two are often mixed up, even by professionals.
While I 100% agree that's admirable and rare in the modern world (and needed), I don't know that I've ever encountered a definition of creativity that includes that.
Many artists have this quality, and it helps for sticking with one's vision after the fact. But that doesn't involve the actual generation of that vision which is what creativity is about.
Rather it's resoluteness and integrity and for those pushing back against it "pig-headedness". Often a good thing, but not the same thing.
A prime feature of artistry is rigidity and repetitiveness. It is most assuredly not the opposite of creativity. Creativity under arbitrary constraints is exactly what drives most art, and no art is any good until the artist goes through an obsessive period. I can't count how many examples there are of prickly artists with rigid schedules, which is exactly the demand for rigidity and repetitiveness.
But if you're genuinely curious, there have been actual studies done on this. They aren't promising.
Autism is a disorder which particularly requires acceptance of one's inherent strengths and limitations to be able to move forward. No one is helped, and many harmed, by denials and conflations of this type.
This is a really good read, with lots of "oh, that's me!" moments. Just making this comment because I wanted to offer some more concrete appreciation than a like.
This is almost certainly just...motivated reasoning. I want this to be true, so I'm going to ask.
My mother has schizophrenia -- the very bad, refusing-medicine, wanting-to-kill-people, writing-screeds-to-the-CIA kind -- and so did my great-aunt on the other side of the family, less severely. My husband's dad had a psychotic episode about a decade ago that he seemed to white-knuckle himself out of, somehow. I'm terrified that my daughter or my future children will be schizophrenic, since the genetic dice seem loaded against them.
BUT! My husband and I aren't like this: "creative, attuned to interesting patterns, and charismatic, but also a bit odd and superstitious." If anything, I'm "responsible and perfectionist, but has trouble letting things go," and he is chill and was a math major.
Soooo does that mean we maybe don't have those pesky psychosis-causing genes?
P.S. I first found SSC when I was trying to understand my mom's illness. Scott was incredibly kind and generous with his expertise when I, a total stranger, sent him a frantic email asking "will my children be nuts??"
I don't know if being a math major is indicative of much. John Nash did game theory, which is officially economics but really a branch of math. His son also got a math PhD, but seems to have handled his schizophrenia more poorly and was still being taken care of by his parents up to the time of their deaths.
In this case, it's indicative of my husband's Bayesian soul. He's not a creative or paranoid type. Tangentially, I used to see John Nash wandering around school!
In case you're worried, even if you yourself were schizophrenic, the risk of your child having it is apparently only about 10%. And since you're not (as far as I know), you should expect it to be less.
JonathanD, I'm so sorry if you're stressed about this -- it is a terrible thing to worry about! And as far as I can tell, there are no such tests available to the public. Just in labs where researchers are still trying to figure it out.
It is what it is. The risk isn't that high, and we knew the risks when we had the kids. Thanks for looping back to answer me on a more or less dead thread.
It's not that I'm constantly worrying about it, it's just something we all have to be aware of. My oldest is ten, and as we talk to her about teenage things, we plan on telling her that whatever the case may be with other kids, she can't smoke pot. Not out of moral opprobrium, but because pot is known to trigger schizophrenia, and it's a risk she can't afford to take.
Likewise, we'll be telling all our kids at an appropriate age about delusions, and that if they ever see or hear something that doesn't seem very likely to be real, it probably isn't and they need to tell an adult right away.
Just, you know, if there was a test that gave you a heads up about risk level, that would be nice. Anyway, thanks again.
> I think most psychiatric disorders exist on a spectrum from mostly-tradeoff to mostly-failure (what we might call "high-functioning" and "low-functioning" versions of the same phenotype).
For completeness's sake, there's a hypothesis that anorexia (and maybe other eating disorders) fits this category too. Guisinger 2003 (https://pubmed.ncbi.nlm.nih.gov/14599241/) argues that there's a tradeoff between "being able to tolerate starvation in order to flee famine" and the pathological "self-induced starvation and hyperactivity" behaviors that constitute anorexia. (This came up in a comment thread or two on some SSC articles)
I think Guisinger's formulation is a little "just-so," but I think there's something to this. The same behaviors can be induced in laboratory animals (see this article, which was recently cited in the Subreddit https://pubmed.ncbi.nlm.nih.gov/22231828/).
I only see the abstracts. Is there a way to get to the articles?
I can see how hyperactivity might help you escape an area that doesn't have enough food, but how would that explain a person with access to food refusing to eat?
> I can see how hyperactivity might help you escape an area that doesn't have enough food, but how would that explain a person with access to food refusing to eat?
The Guisinger paper has a few ideas, some more plausible than others. Think something like "extreme desire to conserve food during migration."
I haven't had a chance to read the whole thing, but I do now think the adaptation-to-famine idea has merit. I don't think it should be associated with hunter-gatherers, however. I think it should be associated with an ancestor that wasn't smart enough to make an informed, conscious decision to relocate.
Why is anorexia so positively correlated with higher social class? I once stayed in an apartment building that had a floor dedicated to in-patient anorexia treatment and the anorexics radiated so much superciliousness when they walked by that I always felt like I must look to them like a peasant with cow droppings on my shoes.
Something has occurred to me: In order for anorexia to be adaptive, there has to be some way to turn it off; otherwise it's just slow suicide. The Guisinger paper suggests that what turned it off in the ancestral environment was social pressure to eat after the famine was over. But how would that work with nonhumans?
Perhaps not competing as much with other people for food during a famine has some advantages. Assume it's possible to get a little food without fighting for it, and not fighting saves energy and makes alliances more stable.
While of course being wary of evolutionary explanations of anything, this seems like a more intuitive explanation of it than what Scott said about perfectionism. Given how anorexia presents itself differently in different cultures (e.g. historically it was stomach pains or digestive issues in Asia), something more underlying seems more likely.
On the other hand, it was also much rarer in Asia before the spread of Western culture, and there's no reason why one factor can't influence the other. It could be the intersection of two or more different tradeoffs, along with a mess of other possible failure modes.
I think "failure modes" is the right way to think about this. Some people, for some reason, enter a "starve yourself and exercise incessantly" mode and can't get out of it. Multiple things might trigger it, but the same biological program is being activated.
Yeah, the 'genetics' behind psychiatric disorders are a mess, for the simple reason that said psychiatric disorders aren't well-defined (psychiatrists soberly use the word 'spectrum'). With that in mind it's hard to say whatever about the genetics behind them beyond 'this variant correlates with this trait', apart from the absolute lowest-hanging fruit (rare mendelian disorders and the like).
Also, evolution doesn't necessarily 'eliminate' a deleterious variant, especially if said variant is recessive - it may reduce that variant's frequency somewhat.
Also, lots of reasoning by association: X is associated with Y, and Y is associated with Z, isn't that strange? Well, correlation isn't transitive to begin with, so you could just be looking at nothing.
Scott is interesting to read as a psychiatrist, but his grasp (and that of most of SSC/ACX) on genomics is tenuous to say the least. I feel like actual geneticists should go and engage him seriously, but I'm already procrastinating on actual work by posting this, so whatever.
Anorexia is not something you can talk about meaningfully without discussing culture, a subject Scott's avoided thus far (though maybe that's coming? Still it's weird for such a foundational factor to be almost completely ignored in setting his grounding - particularly when his profession as a whole has a long history of this, and for transparently self-serving reasons).
Here's a question: Is anorexia still as common among adolescent girls as it was a decade or two ago, or has it been displaced by the rise of transgenderism?
I read somewhere that there is a strong correlation between the two, perhaps pointing to some more general psychiatric condition of body dysmorphia that expresses itself in culturally specific/socially acceptable reasons for feeling discomfort in one's body?
Interesting. Being able to tolerate starvation in order to flee famine would also correlate with being able to tolerate starvation in order to avoid obesity. At the high-functioning level that just makes you healthy, and maybe a supermodel. Throw in some failure to process just how not-fat you are supposed to be and how important it is to be really really not-fat, and you get unhealthy anorexia.
As with anything involving evolutionary biology, I'm worried that lacking expertise in the field I'm vulnerable to conjuring just-so stories, but this one is just so good at explaining my observations.
High functioning isn't necessarily the same as not disordered. Per the DSM-5 (and quoted from Wikipedia), a mental disorder is "a syndrome characterized by clinically significant disturbance in an individual's cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning." A person can have a significant disturbance from something but also be able to function in society and achieve societal standards (the definition of high functioning). If a person were struck by sudden intense pain every day for an hour, they definitely have a disorder, but they could be perfectly normal in the other 23 hours of their day.
I understand there can be disorders but -- and maybe I'm evincing a disorder here :) -- that paragraph didn't make it clear that he actually clinically evaluated those people and confirmed their DSM status. I think it's useful to emphasize this point because it's possible there may be over-diagnosis of people "having disorders" simply by outward behaviors.
That's certainly possible, but I think it would be socially useful for Scott to be a bit more pedantic, to reduce the chances of a Zeitgeist forming of normal people diagnosing others with disorders based on certain outward behaviors.
Which we all do and this isn't a totally wrong thing. An informed non expert can be better at identifying something like ASD than a psychiatrist who is expert in general but has backward ideas about what ASD is. The Spectrum communities are rife with stories of psychiatrists who strongly state a patient has no ASD because it doesn't fit their pet categories or what they were taught a decade ago. I know many Aspies or family or friends of such whose judgement on whether someone has ASD is a lot more reliable than a random mental health expert. Expert diagnosis is needed for insurance and HR reasons but not to help people better understand themselves or those close to them...
My non-clinical definition is that it's a disorder if you don't like it and a feature if you are neutral to or like it. The stereotypical tortured intellectual might have a disorder, while their mirror image that ended up puttering about a research lab surrounded by support staff and a caring spouse that makes sure they remember to put on new clothing every day does not.
>In fact, just being conceived in March raises your autism risk a bit - it means you were in an especially vulnerable developmental stage during flu season!
So, is it plausible that flu shots (for parents-to-be) actually reduce autism?
They say that March comes in like a lion and goes out like a lamb. March 1969 had been more like one of those Biblical angels with four lion heads and four lamb heads and a couple dragon heads for good measure, all spinning within a gigantic flaming wheel, and if you met its gaze for too long then you died.
Entire weeks repeated themselves, or skipped around, or moved backwards. There was a week when the weather stopped, and it was an even twenty-two degrees Celsius across the entire planet. The heavens turned gray and were replaced by a message saying “sky.smh not found, please repent transgressions and try again”. All animal cries were replaced by discordant buzzing sounds.
Nobody knew how long it lasted. Probably had been different lengths of time for each person, each shunted on their own separate timelines into weird calendrical eddies and whorls. Some people who had started off middle-aged ended the month with gray hair and soft, infinitely sorrowful voices. Others seemed to have grown younger. Most people looked about the same, but you could tell things had happened they didn’t want to talk about, days repeated dozens of times or weeks that happened backwards, or moments when timelessness had briefly encroached on time and for an infinitely long moment they had touched Eternity.
>Overall genes that increase risk for ADHD decrease risk for OCD, and vice versa, suggesting that at least one advantage of ADHD is lowering OCD risk.
With this assumption how would one account for people who have both ADHD-like tendencies (forgetfulness, distractibility, impulsivity) and OCD tendencies (obsessive checking and the like)? Would these contradict the "opposite" model, or could one explain the obsessive behaviors as a coping method for ADHD symptoms ("I have to check to make sure I turned the stove off because I know how easily I forget!")?
Both are failures of your brain to determine what it should prioritize.
Depends what you mean by obsessive checking. In my experience, ADHD-style obsessions with things are more an inability to stop thinking about something you want to stop thinking about, rather than the flat out compulsions you'll see from OCD people.
That's the problem with attempting to generally ground talk of these syndromes in genetics. It's based on a kind of biomania, the implicit belief that if you just drill down far enough all variances in behavior will be explained by genetics and biology.
Those are obviously always involved, but sometimes biology is simply the "proximal pathway" through which cultural or personal contextual factors affect the individual.
In this case, the common sense conclusion would be that past forgetfulness led to overcompensation in the form of obsessiveness. Sometimes the worst and most damaging effects of "mental illness" are not the "actual" symptoms but rather the strategies the individual employs to compensate or combat those symptoms.
This is fascinating. I was diagnosed with ADHD as an adult but I always had some mild OCD-like traits involving obsessive checking: turning around halfway to work to go home and make sure the fridge is actually closed, checking my alarm clock ten times before going to bed, always turning on the oven light when I turn the oven on so I have a visual cue to remember to shut off the heat, pulling over on the side of the road to make sure I actually put the gas cap back on, going over my work fifty times before turning it in to make sure I didn't make a silly mistake. Now that I'm on medication for ADHD I don't do any of that anymore! I know some people with actual OCD so I never thought that I had a full blown disorder but I also knew that my obsessive behavior was not normal. I didn't realize until reading your comment just now that my obsessive behavior is pretty much gone. It seems very likely that all of that was just overcompensation for past forgetfulness. I've always devoted an inordinate amount of time to "idiot-proofing" my life because I know I have a tendency to be, well, an idiot!
It doesn't make sense to say a trait X is Y% genetic. And if you're talking about heritability, the measure is population dependent and has little to do with genetic determinism.
Why doesn't it make sense? Even if it is population dependent, quantifying it can still be useful. Obviously there are things that are 0% genetic (first language) and things that are 100% genetic (eye color), but if schizophrenia rates are correlated with parental rates, even if it's not 100%, that's still useful to know.
I think the issue Onslp is getting at is that "heritability" is a technical term with a precise definition that is very different from what we think of when we say something is "genetic," but educated people still fall into the trap of using them interchangeably.
First language is 0% genetic in terms of physical causes, but it's got high heritability.
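For what it's worth, the textbook definition being gestured at here is just a variance ratio, which is why it is tied to a particular population and environment and says nothing about mechanism (broad-sense form shown; narrow-sense h² puts only the additive genetic variance in the numerator, and the simple decomposition below ignores gene-environment covariance and interaction):

```latex
% Heritability is defined over a population's variances, not over individuals:
H^2 = \frac{\operatorname{Var}(G)}{\operatorname{Var}(P)},
\qquad \operatorname{Var}(P) = \operatorname{Var}(G) + \operatorname{Var}(E)
% Change the population or the environment and Var(G) and Var(E) both change,
% so the same trait can have different heritabilities in different populations.
```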
One of the things I struggle with is the notion that schizotypy correlates with charisma, especially because schizotypal personality disorder is also characterized by strange communication patterns, inappropriate responses to social situations, and inappropriate dress and presentation. Is there a study correlating schizotypy with charisma?
Sometimes I find your writing wonderfully nuanced and thought-provoking, other times the oversimplifications and unexamined assumptions perplex me.
In the first place I don't know whether to admire or be appalled at the attempt to tackle this complicated subject in these tiny snippets, but this is our culture and I guess at least someone is talking about it.
But treating mental illness as a homogeneous entity as you sometimes do is problematic. There is zero evidence that they all share anything but some degree of inconvenience to the sufferer. And as you rightly pointed out in your last post, far different etiologies can lead to the exact same symptoms (and similar etiologies to different symptoms). Sometimes the "health" in mental health is what appears to be central. Other times it appears to be an understandable (if unchosen) reaction to problems of living. Other times based on continuing once adaptive behaviors that are no longer so. And so on.
And it presumes that those who do not fit current DSM diagnoses are healthy, that there are no illnesses disguised by their social acceptability. I don't know how anyone with any experience of human beings can deny that something akin to the pathology of normalcy affects a significant minority, and is far worse and more sickly a thing than many DSM diagnoses. The foundation of your analysis embraces, ad hoc, psychiatric conventions of thought and "mental illness" when the discipline doesn't even define what is meant by mental or mind, much less any other foundational term.
And the conclusion that this heterogeneous concept of "mental illness" across the board is "mostly" a sign of failure genes is on its face questionable. Over half the population has a mental illness at some point. So... 40% or so of the population is a genetic maladaptation?
That a significant number of syndromes are a result of mostly maladaptive genes and flat out bad environment factors is incontestable. But a) these are overcounted due to methodological biases towards genetic explanations and b) that they look so similar to "trade-off" based syndromes is a sign of the deep problems of the current system.
And your discussion doesn't mention the most obvious source of trade-offs: sensitivity. The same exact "bad genes" implicated in some psychiatric symptoms are also responsible for thriving. In other words, the presence of some gene variants mean that, depending on the environment, they will lead to either sickness or its opposite [1].
And this corresponds precisely with everything we know about basic biology. You can have all the gene variants for something and not get it. We are distinct in the animal kingdom in the degree to which we are "programmed" to be highly biologically attuned to culture, such that it becomes sedimented in the body. That nature is nurture and vice versa, to a significant extent.
But your overwhelming focus seems to be on a simplistic treatment of genes and obviously unhealthy very early experiences at the expense of all else. I don't even know where to begin responding to a phrase like "it looks like evolution has been trying hard to get rid of them". I'm hoping that's just lazy phrasing.
The problem with the overemphasis on the "failures" is that a) individuals amenable to turning their "syndrome" into an advantageous trade-off are under-emphasized and b) the degree to which any psychiatric symptoms can be damaging, regardless of whether it meets a DSM threshold, is brushed under the table.
For an example of the latter, the vast majority of individuals with any ADHD symptoms to any extent are impaired, and impaired to the extent to which they display those symptoms [2]. The common sense conclusion from this is that ADHD is not an illness; its symptoms are signs that they are already sick. And full-fledged ADHD is simply an extreme form of that illness (one that often emerges as a consequence of disadvantage [3]).
Successful individuals are simply those who go from being ADHD - having no ability to distance themselves from their worst excesses - to having ADHD. It's a developmental achievement in those without the worst genes/environment combo, and involves calming/channeling its co-occurring cyclothymic temperament [4] into an asset.
But does every syndrome operate like this? Of course not. But when you lump all these random syndromes together, it obscures more than it reveals.
It's a term coined by Erich Fromm in the 50's, but it's been so long I can't do the concept justice based purely on memory. But it's bound up in a criticism of the cowardice of psychiatry in refusing to do anything but reinforce social norms.
I'd forgotten he actually produced a whole book on it based on a lecture series, and a quick Google scholar search shows there's been a few recent attempts to revitalize the concept.
His views on it stem partially from his experiences in Nazi Germany. He and others in the Frankfurt School who had lived in both countries often spoke of the similarities - that the propaganda here was the same if not worse, just in a different direction.
When you're talking about hundreds of thousands of variants, that's not a meaningful statement.
Particularly when you notice that many disorders are extremes of a trait that share strong genetic overlap with those within "normal" variation. If you included these folks in the overall analysis, you'd find a similar overlap with DSM diagnoses. What does that indicate? Next to nothing.
In my experience, a lot of lawyers have ADHD (especially in the field of criminal law). This makes sense for similar reasons to why ER doctors would have ADHD - if you have trouble doing things before the last minute, struggle with long-term follow-up, and already have a whole bunch of coping strategies for organizing chaos, those fields are a much better fit than other areas of the same professions would be.
The problem, of course, is that the difficulties of the disorder are still very much there. They just get in the way less.
One tradeoff is suggested by how minorities usually fare in a society: most of them repressed, but with one or a small group rising to the top by the very virtue of their unique perspective. The outlier among the outliers might be king for having something like two eyes.
It beautifully ties together all the different symptoms of autism into the predictive processing model, postulating that autistic phenotypes arise from weak inputs from top-down predictions, at different time scales and layers.* I really recommend this read.
It also fits the observation that many of these risk genes are in synaptic proteins, and their phenotypes often include abnormal dendritic spine physiology.
I think it fits very well into what you say. To the extent that autism is a tradeoff, it's a tradeoff of stronger bottom-up input (and weaker top-down priors) which makes your thinking less biased, lets you see things others ignore, and lets you forgo the intuitive wrong answers and actually do the math. The price is that you can't adjust yourself to sensory input, have trouble predicting where the flying ball will go, and find it hard to not take someone's words literally.
To the extent that autism is a failure, it's a failure of what Clark calls "precision-based weighing" - i.e. deciding accurately how much weight to give to bottom-up input vs top-down priors. And I suspect that schizophrenia shares much of that in common, and may well be a similar failure and the opposite tradeoff.
*At one level, it makes a person unable to adjust away a constant sensory input, so they stay highly sensitive to it and can't ignore it. It also makes them really want a predictable, self-caused sensory input, i.e. stimming. At even shorter timescales, it leads to the very idiosyncratic motor problems. At longer timescales, it makes a person unable to use priors on human behavior when interpreting other people.
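To make the precision-weighting idea above concrete, here's a minimal sketch (my own toy numbers, not anything taken from the paper): with Gaussian beliefs, the estimate is a precision-weighted average of the top-down prediction and the bottom-up input, so weakening the prior makes the estimate hug the raw data.

```python
# A minimal sketch of precision-weighted integration of a top-down prior with
# bottom-up sensory input. All numbers are illustrative.

def integrate(prior_mean, prior_var, obs, obs_var):
    """Combine a prior and an observation; return posterior mean and variance."""
    prior_precision = 1.0 / prior_var
    obs_precision = 1.0 / obs_var
    post_precision = prior_precision + obs_precision
    post_mean = (prior_precision * prior_mean + obs_precision * obs) / post_precision
    return post_mean, 1.0 / post_precision

observation = 2.0   # what the senses report
prediction = 0.0    # what experience predicts

# Weak top-down prior (high prior variance): the estimate follows the data closely.
print(integrate(prediction, prior_var=10.0, obs=observation, obs_var=1.0))

# Confident top-down prior: the estimate barely budges toward the data.
print(integrate(prediction, prior_var=0.1, obs=observation, obs_var=1.0))
```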
It appears a good fit with Markram's intense world theory, the "magical world" experience being one part of what overwhelms.
And autism seems to match well with other syndromes in which the essential trade-off is: you may have a higher ceiling - particularly in unique, very specialized fields - but in order to reach that peak greater stringency is required.
It seems most people's floor and ceiling are difficult to move much, so long as their experiences stay within some normal range. Autistics and some others seem much more variable and dependent on the environment.
In other words, their path to salvation is narrower and more treacherous than that of others. But if they can stick with it, come to terms with their strengths and weaknesses and form a way of life around it, their mountain-top will likely be particularly high. And undoubtedly more unique.
When u write a long comment after reading the first part of the post thinking it's going off in another direction only to find out that's the point Scott was making :-( :-)
But I would add that there is another kind of trade-off to be considered: the general performance/tolerance-for-errors trade-off that happens when you overclock a computer. It might not be that autistic-y genes are ever helpful, but that the overclocked brains that tend to make the brightest engineers just have the tolerance for that kind of screwup turned way down.
So you don't need to even suppose that autism like traits are somehow the same thing making one a better engineer.
Another possibility is that good engineers are the result of genes that put very few points into some kind of redundancy and that limited redundancy means that even a few aspie genes result in aspie behavior.
--
I don't think this is the most likely hypo for autism or ADHD, just because of the way the failure seems so close to behavior that's good in other circumstances, but it's a hypo worth considering.
"Overclocking" is Henry Harpending & Greg Cochran's explanation for the higher frequency of genetic disorders affecting a certain metabolic pathway in the brain for Ashkenazi Jews, and their higher IQ. Similarly, they point out that sickle cell alleles seem like a quick fix for a recent rise in malaria, whereas evolution would have picked something less harmful if it had a longer amount of time to find it.
It should be noted that Greg hypothesized this before the advent of GWAS and polygenic scores. None of these have validated his theory so far.
I think it's reasonable to say that Ashkenazi IQ is in some way based on genes. But Greg's proposed mechanism (rare genetic disorders = smarts) probably isn't true.
I think that Greg's theory was that, like with sickle cell anemia, it was advantageous to be a carrier of an allele but not necessarily to have two copies (and the disorder).
You made me really uncomfortable with your justice system analogies. They seem so inapt that it's hard for me to accept the surrounding discussion.
> In a country biased towards finding people innocent, it only takes a tiny failure to let a murderer go free. In a country biased towards finding people guilty, it would take a huge failure, or many different failures in parallel.
For this to be correct, it would require a bizarre measurement of failure size.
The problem is this: the simplest way to let a murderer go free is to convict an innocent person instead. Problem solved! Nobody gets murdered twice; as soon as you convict one murderer, you're done. But the "we find everyone guilty" system will do this *all the time*. They do it on purpose! That's the whole point of a "we find everyone guilty" policy.
In order for this mistake to require a huge failure, suspecting the wrong person would have to be considered a huge failure. But that can't be right. For just one example, if a married woman dies, her husband is automatically suspect. I don't think that's a mistake. I don't think many people at all think that's a mistake. But it would certainly be a mistake if her husband was automatically guilty!
> Go back to the two ways a justice system can go wrong. First, it sets too many guilty people free. Second, it convicts too many innocent people. If you ask which tradeoffs cause each problem, you'll find they're opposites.
Well, no. There are more than two ways the justice system can go wrong. This perspective only makes sense if you consider "the justice system" to consist of nothing but court trials, into which suspects are deposited by some ineffable force. And in the larger perspective, we see that these two problems are not opposites at all! Every time you convict an innocent person, you automatically let a guilty person go free! Increasing the one metric necessarily increases the other! (Though not vice versa.)
There's a mathematical formalism called "detection theory" which separates bias (what is called "tradeoff" here) and sensitivity (the inverse of "failure" here), starting only from success and error rates for the two opposite problem types. Its results are usually represented on a ROC curve, i.e. a graph with false alarm rate on the x axis and hit rate on the y axis; this can easily be adapted to put "rate of using precise cognition when the situation doesn't call for it" on the x axis and "rate of using precise cognition when the situation does call for it" on the y axis. Let me refer to this image I've drawn in the usual style of the blog: https://imgur.com/a/7ucjzUA
There are a few differences of interpretation between this formulation and Scott's: detection theory says you can have super-high bias even with high sensitivity, unlike Scott's model, which tightens the possibilities for "tradeoff" as we approach no failure. It's just that bias matters less and less as you approach infinite sensitivity (no failure). At infinite sensitivity, bias is indeterminate, not zero. (At infinite bias on either side, sensitivity is also indeterminate.)
Interestingly, though Scott applies the model primarily to mental health here, he briefly touches on its applicability to social issues such as crime. One thing I've noticed is that an enormous amount of energy, political will, debate, advocacy, resources and thought are spent on where we set the bias term, and discussions that touch on the sensitivity term (anti-failure) tend to be rare and difficult to find. Presumably this is because moving the bias term is relatively easy and improving the sensitivity of anything can be very difficult, but this still tends to annoy me at times.
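To make the sensitivity/bias split concrete, here's a short sketch of standard equal-variance signal detection theory (my own example, not anything specific to the post; it assumes SciPy is available): from just a hit rate and a false-alarm rate you can recover a sensitivity term (d') and a bias term (the criterion c).

```python
from scipy.stats import norm  # assumes SciPy is installed

def sdt(hit_rate, false_alarm_rate):
    z_hit = norm.ppf(hit_rate)
    z_fa = norm.ppf(false_alarm_rate)
    d_prime = z_hit - z_fa              # sensitivity: how separable the two cases are
    criterion = -0.5 * (z_hit + z_fa)   # bias: which kind of error you prefer to make
    return d_prime, criterion

# Each (false alarm rate, hit rate) pair is one point on the ROC curve.
print(sdt(0.80, 0.40))   # modest sensitivity, slightly liberal bias
print(sdt(0.99, 0.02))   # high sensitivity -- and bias is still a free parameter
```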
Ahh - for some reason, when I loaded the page an hour and a half ago, your comment wasn't showing up. (So I wrote effectively the same comment, albeit far shorter, below.)
On the politics side of things, my impression is that this is because you can't have an argument about whether you should make a system more sensitive - everyone agrees yes. But you can very easily have an argument about where the bias should be placed. Worse, whichever direction the proposed sensitivity improvement points in, people who think the bias should be pushed in the other direction interpret it as an argument to push the bias in that direction and fight back against it accordingly. So sensitivity proposals get turned into bias arguments and you get nowhere (for example, a sensitivity improvement that decreases false arrests would be interpreted by tough-on-crime advocates as an ideological push to be soft on crime).
A caveat is that people can and do argue about whether the cost of sensitivity increases justifies them.
I think you drew the picture wrong according to your own description - "appropriate" should be on the y axis and "inappropriate" on the x axis, in line with "false alarm rate on the x axis".
And I think it's worth adding that the neurodiversity movement often conflates several things: is x a desirable trait or not, should we search for a cure for x, and should society avoid regarding x as a disability or negative, so that it's not viewed with pity and we don't treat those with x less seriously. I mean, it's really hard for any trait to be totally neutral.
I mean, consider being very short. Every person I've met under 5'1" wishes they were taller; being that short is almost purely a negative in today's society (even if it just keeps you from reaching high objects), yet we don't view being short as a disease or medical deficiency, and it might not be worth looking for a cure for being only 5'. In other cases, like ADHD, it's probably good to find a treatment (even if on net mild ADHD is beneficial, it's even better to be able to turn it off), but we don't view those who have it with much pity or treat their opinions as less important. But that's only possible because they (we) seem mostly like everyone else.
Rhetorically and psychologically it's really hard to say: yes, all things considered, severe autism makes life worse and a cure would be good, but I don't want to tag sufferers with all the negative attitudes we associate with mental disability.
I mean hell, even if you're a high-functioning autistic person, surely it would be great to have at least a temporary treatment letting you switch on the way normals see the world. Even if you don't think it's better, surely it would be useful to experience it for, say, 6 months.
Have you considered the possibility that for a high functioning individual, their way of viewing the world is inseparable from the way their brain functions? There is no "normal" in that sense, because we all view the world in a particular way, based on our personality, experiences, and brain functions.
Sure, I bet a lot of people who have tics and trouble with certain situations would enjoy taking a break from it once in a while. But that applies to just about everyone, since nobody goes through life without worry or fear of certain situations.
I think it's important to differentiate between alternate modes of approaching life, and straight up negatives. I would say that's probably Scott's overall point with this post. Someone who is so autistic they are non-verbal and need daily care their whole lives would benefit from treatment or a cure. Someone who is good at computer programming or accounting because they have autistic-bordering traits may not. If you cure them of their overly literal thinking, maybe you cure them of their ability to do math?
Actually, sometimes being very short is a sign of a medical problem.
Also, I've heard of parents trying to get treatment for kids who are just short, but not unhealthy. On the other hand, I haven't found anything backing this up.
There's an operation to make adults taller-- break the thigh bones, then apply careful traction while they're healing. This can be worth up to 5" (12.7 cm).
Yes, that's why I was careful to set the height in my example at 5', which is well above the threshold at which anyone treats it as a medical condition calling for treatment. But, rereading my comment, I realize I wasn't nearly as clear on that point as I should have been.
A neighbor of mine sought hormone treatments for her short son. 20 years later he’s still 5’ tall and also 5’ wide. Not sure if the treatment is to blame for the latter, but it certainly didn’t work.
Being able to “switch” freely to view the world through different frames is a good point. “Useful” for the purposes of understanding others as well as becoming more naturally aware of your blind spots.
Certain types of meditation intend to go into lower level sensory information, which correlates with “autism as a higher prior in low-level information”.
Personally, I’ve been able to see low-level phenomena (ie breathing walls, white noise in visual stimuli, pixies in the blue sky) which are just normal for some people. I’ve asked others and these stimuli are foreign like they used to be for me!
It’s also possible to “reverse the stack” and go to higher level processing in meditation, though I’m not familiar with the effects.
My grandfather was rejected from WWII for being too short. His slightly taller brothers served in the war, and he was always bitter about it. By Vietnam the height requirement was supposedly relaxed and my uncle, also quite short, became a Marine. He enlisted partly because he knew it would be a kind of vindication for my grandfather.
I find it interestingly ironic that it's pretty stereotypically high-functioning autistic to ignore all the social implications and overtones of saying we should try to cure autism and/or avoid carrying fetuses with a high risk of autism to term because autistic individuals have lower expected utility in our existing society.
Not saying that is true; I just find it ironic that the kind of claim which often provokes the most negative response from the autism neurodiversity folks is something I've found to be frequently voiced by individuals with a degree of Asperger's.
I find it fascinating to apply the tradeoff/failure framework to hiring, and was just having a debate with my household about this - the similarities are striking (innocent : guilty :: bad hire : good hire, where you want to catch the good hires and let the bad hires go, but it's unclear whether a hiring process that is failing to hire as many engineers as it wants is failing or just very far along the spectrum).
We know, or at least strongly suspect, that a minority of people experience weird immune reactions to viral infection, some of which include psychiatric effects. This is getting much more attention now, with covid, but it has long been associated with viral infection in general, because it is more about the defective immune response than a feature of a specific virus.
I read a lot of 19c medical history, so I can compare today with a time in which everyone got a lot of infections, which they had to fight off without help. Obviously, we'd expect to see a lot more of this, and while I can't prove it, there seems to have been an accepted connection between viral illness and psychiatric symptoms in adults and teens, and many probable cases.
But there are surprisingly few mentions of anything resembling low-functioning childhood autism, and even fewer relating such a condition to a recent illness or illness during pregnancy. Both early childhood illness and illness during pregnancy were quite common at the time. This puzzles me. If it is at all related to immune function, which the March birth thing suggests, we should have seen more of it even well into the 20th century--the 1918 flu caused a lot of weird psychiatric reactions, and other diseases were prevalent. Early 20th century records do have many more cases, under the name childhood schizophrenia, but few seem to have made any connection to viral infection. Maybe it's just not properly documented, but it strikes me as a real puzzle. Everything attributed to covid I'm used to reading about in historical accounts, and it seems to be a well-established immune response. I think we've massively underrated the role of viruses in triggering certain medical conditions, and the wide variety of possible symptoms. But I don't see many descriptions matching childhood autism. Just something I wonder about, as there are other reasons to think there is a link between autism and some sort of immune dysfunction.
I think that is one of the better theories...that it correlates with a specific kind of immune dysfunction that made surviving infancy (or even a fetus making it to full-term) impossible in "pre-modern" disease conditions. And fully "modern" conditions only trace back to the late 1950s, with childhood vaccines and other advances. Even then, most adults living would have been survivors of a pre-modern era. As the "modern" era progressed, it doesn't seem terribly surprising that certain conditions seemed to come out of nowhere or get much more prevalent. There are a lot of other variables that must play a role, but we seem exclusively focused on new things that could have *induced* these changes. It is possible that removing past selection pressures has quite a bit to do with it.
It seems that this is an evolutionary version of a bias-variance tradeoff, which makes sense, since in any optimization system you'll find a tradeoff like this. And as with the examples here, the system can in theory minimize either, but typically there will be some of both.
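For anyone who hasn't run into the bias-variance tradeoff before, here's a standard toy illustration (my own arbitrary setup, nothing from the post): fit noisy samples of the same curve with a too-rigid model and a too-flexible one, and score both on fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n=30):
    x = rng.uniform(-1, 1, n)
    y = np.sin(3 * x) + rng.normal(0, 0.3, n)    # true curve plus noise
    return x, y

x_test, y_test = sample(500)
for degree in (1, 4, 10):                        # underfit, reasonable, overfit
    errors = []
    for _ in range(50):                          # average over many training sets
        x_train, y_train = sample()
        coeffs = np.polyfit(x_train, y_train, degree)
        errors.append(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    print(degree, round(float(np.mean(errors)), 3))
# Degree 1 fails through bias, degree 10 through variance; pushing either to zero
# costs you on the other side.
```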
Two comments... first, anyone here read Marco del Giudice's "Evolutionary Psychopathology: A Unified Approach"? He tackles the problem of the ontology of psychiatric conditions head on, with an attempt at a global hypothesis involving life strategies (fast and slow) and many other things. I skimmed through it, but it's way over my head! Any opinions?
Second, about depression, I've read a few articles lately converging on the idea that depression is a response to a distress. Quote from Johann Hari: "This pain you are feeling is not a pathology. It’s not crazy. It is a signal that your natural psychological needs are not being met. It is a form of grief – for yourself, and for the culture you live in going so wrong." Any opinions?
The incidence of mental disorders is higher in homosexuals than their straight counterparts. I don't know whether the correlation has been found to be genetically mediated.
a. Super complicated system requiring lots of stuff to go right.
b. Lots and lots of genetic or environmental problems that can mess that system up somehow.
c. Also some tradeoffs made that might be subject to balancing selection or something.
You could certainly imagine this for homosexuality, lack of interest in sex, weird sexual tastes, etc. How interested you are in sex and how flexible your interests and how masculine/feminine you are by default are all probably tradeoffs, with plusses and minuses. But also, in general, a gene that leads to lack of interest in sex with people you could make babies with is almost guaranteed to decrease the fitness of that gene.
>which I’m tempted to cynically attribute to their being less likely to remember to use contraception
Would it be considered out of line (or just more holistic medical practice than anyone can currently be arsed to provide) to talk to your ADHD patients about long-acting reversible contraception? That feels like a no-brainer.
I feel like it would be out of line to just bring up this one point out of the blue, but maybe it could be usefully camouflaged in the middle of a long list of 'Lifestyle suggestions for people with ADHD,' presented in a nice official-looking brochure.
I guess one potential advantage of depression is that you might be less susceptible to certain cognitive biases, e.g. just world fallacy.
I'm not convinced that "just world fallacy" is a genuine cognitive bias. While people may sometimes overestimate the justness of the world, I don't think it's a systematic error, they are just as likely (or more likely) to underestimate the justness of the world.
When people get what they want they certainly have a tendency to overestimate the justice of the procedure which produced that outcome. But when people _don't_ get what they want, they tend to underestimate the justice in that procedure. And since people tend to spend more time thinking about the things that they don't have than rejoicing in the things that they do, I think that people tend, if anything, to underestimate rather than overestimate the justice of the world.
Yes, the idea that we are disposed to use our intellect and argumentative skill to maximize the resources we get seems like a better explanation here.
I'd agree that the just world fallacy doesn't really seem like a cognitive bias in the normal sense - but it does lead to mistakes in a predictable direction like blaming the victim of a crime, rather than always in the same direction. Similarly, anchoring bias can make estimates wrong in either direction - but it is in a predictable way. The reason I don't think it's the same as most biases is that it's a mistaken assumption about the world with consequences, not a bias in estimation that might be due to neural systems using approximations and shortcuts.
The just world fallacy has very good experimental evidence when it comes to evaluating other people.
When I think of the just world fallacy I'm thinking of something different. You have a tall, smart, good looking guy who grew up in an affluent well adjusted family. He married a lovely women and had 2 well adjusted children. He ended up working for a startup and made $20 million when he was 35 but continued to work as he enjoyed the challenged but no longer had any financial worries. All four grandparents were still alive and healthy in their 90s etc. etc. etc.
Many people would desperately like to believe that such people don't exist. Surely there is some deep dark secret lurking somewhere. But, in many cases these people not only exist but there is no deep dark secret either. People really don't like that. They desperately want to believe that life is fair and if things have gone well for someone it only means tragedy is about to strike.
This is just a special case of the gambler's fallacy and doesn't need a new name. People believe streaks of good luck have to be balanced by streaks of bad luck. We don't accept that randomness is actually random unless it's also uniform.
I don't think that is quite right. I don't think anyone applies the gambler's fallacy to be like: well rich ppl were lucky to be born in a great situation so we should expect them to be less lucky in their life. We seem to only apply that fallacy when we see a run of similar type events. Yet the case described above absolutely applies their.
It's what I call the RPG character generation fallacy. Lots of people didn't like the part where, in Dungeons and Dragons, you could roll crappy scores for all your attributes. So most post-1980s RPGs (and later editions of D&D as an option) used a point system for buying attributes and abilities. Everybody gets the same number of points, so if a character is better at one thing, they must be worse at something else.
This is fair. All PCs are created equal. If we live in a just world, the same principle must apply.
"Many people would desperately like to believe that such people don't exist. Surely there is some deep dark secret lurking somewhere. But, in many cases these people not only exist but there is no deep dark secret either. People really don't like that. They desperately want to believe that life is fair and if things have gone well for someone it only means tragedy is about to strike."
One of the big flaws of some woo woo personality typologies is that they presume people are by default broken. Not the case.
I suspect you're over-focusing on perceptions of justice in situations where you are directly helped/harmed. These are likely to be skewed by a separate bias towards overestimating your own desert.
What about perceptions of justice in situations where you are a neutral bystander?
I think one plausible mechanism for systematic just-world bias is generalization from fictional evidence. Just outcomes are vastly overrepresented in popular fiction. (Although I also think it's possible I am mixing up cause and effect here.)
I think of just-world as being about people getting their deserts. Not "if you're tall, handsome, healthy, rich, and smart, you'll have offsetting flaws," but rather "if you're successful and have a great life, you must have done something to deserve it, likewise if you're unsuccessful and miserable."
IIRC, there's research showing that humans are, statistically speaking, biased towards unrealistic optimism, and the best accuracy on this dimension is seen by those mildly depressed.
But the just-world idea and optimism are kinda orthogonal. The just world fallacy is more often applied retrospectively (that happened, so it must be just). I mean, it's usually easier to level down than up, and general optimism about the future is often about the pie getting bigger or you personally getting a bigger slice. I think you would need separate research on the connection between optimism and the just world fallacy.
In particular, depressed people were found to be better at judging to what extent they have control over a partly-random situation; normals are overly optimistic by comparison. Martin Seligman at UPenn did some work along those lines and cites more of it in his book "Learned Optimism". So yes, to the extent seeing things accurately is good, a little depression could have advantages.
I've often thought of depression as a sort of hibernation adaptation; you certainly don't waste many calories on unnecessary activities. But I don't know if there's any support for this idea.
I am afraid it's the other way around. Depression (unipolar and bipolar) and anxiety make you more susceptible to cognitive distortions.
"If, as Badcock and Crespi claim..."
Crespi and Badcock are charlatans with no experience in the relevant fields, who cobbled together some things that sounded intuitively plausible at first blush and defended them despite overwhelming evidence to the contrary from just about everything adjacent to the relevant neurotypes. The sole reason their pseudoscience lived so long is that in 2011 the Wikipedia article for the hypothesis was written as a glowing endorsement by a supporter (violating a whole pile of site policies and guidelines with it) and stayed more or less the same for a decade, being read by about as many people as you'd expect a pop-psych summary on the fifth most visited website to be read.
Would you mind citing some of the evidence you reference? I’m curious to read more
I've rewritten the Wikipedia article from scratch, if you want the full form :P Quick overview:
>Autism and schizotypy consistently have similarities in the aspects where the imprinted brain hypothesis predicts they should be different; it's based around the assumption they should have radically different empathy and mentalizing deficits ('deficits' to use the word of the proponents, not necessarily the word I would personally use), when in fact they consistently show similar (e.g. specific deficits in cognitive empathy with preserved affective empathy)
>The surrounding evidence involving genetic disorders, and especially imprinting disorders, is *wildly* misinterpreted by C&B to the point of outright lying -- their claims about Prader-Willi syndrome and Angelman syndrome are almost admirable in terms of how bald-faced they are, in that while they are correct that those disorders are sometimes caused by overimprinting and have notably different behavioural phenotypes, they *brush over* (to say the least) the way those phenotypes line up with their autism/schizotypy claims, while their claims about non-imprinting genetic disorders are as a whole similarly questionable
>The comorbidities (to use a word I might not, again) just don't even begin to support the ideas rolling around here -- while there's reasonable room and all to discuss whether everyone diagnosed as autistic is, about 8-10% of autistic adults are full blown SZ, and the point where "schizo-autism" becomes a reasonable neurotype descriptor comes much earlier in the latter spectrum; the counter-claims of "actually, these diagnoses are spurious because..." tend to rely on things we now know not to be true (e.g. that autism and schizophrenia have opposite IQ profiles) and ultimately look like moving the goalposts
In the vein of "interesting things about autism that cause it to not be a failure", besides the obvious engineer and intelligence cases, is one about inability to distinguish fantasy from reality! Speaking from experience here as mildly autistic, I often have very, very strong emotional reactions to fiction in a way that is similar to if those events happened in real life, and this is apparently quite common for aspies. This offers the interesting benefit that if a fictional story tries to relay an important message or lesson or moral, I'm unable to simply disregard it as "just a story", because it feels as emotionally important to me as if it happened irl. Thus, this aspie-specific ability allows us to not just learn things, but truly experience things in ways that otherwise would only be possible through actual lived experiences. Of course, this comes with the tradeoff that you may learn the wrong lessons from fiction, but it's an interesting power nonetheless.
Yes, but what are the effects of this in a world in which most people have far more "experiences" through movies and television than they do with actual people? And particularly experiences of the emotional variety? And particularly if those narratives are increasingly guided not by independent artists but by corporations, inevitably eager to reinforce messages that help them and the ruling class?
> I often have very very strong emotional reactions to fiction in a way that is similar to if those events happened in real life, and this is apparently quite common for aspies. This offers the interesting benefit that if a fictional story tries to relay an important message or lesson or moral, I'm unable to simply disregard it as "just a story", because it feels as emotionally important to me as if it happened irl.
Seems like it's worth considering the possibility that normal people react to fiction just as strongly as autistic people do, but have even stronger reactions to real life.
Some autistic people exhibit a tendency toward deeper immersion in fantasy in a way that normal people don't experience in either fantasy or reality. I'm reminded of a Ted talk from an autistic girl who was describing the lucidity of her fantasies to the point she felt she needed to scream.
I have somewhat similar experiences. Do you have exceptionally vivid dreams as I do?
I think there's a difference between distinguishing fantasy from reality and having a strong emotional reaction to fantasy and fiction.
What you describe is not so much a matter of fantasy/reality distinction as it is about suspension of disbelief and alief https://en.wikipedia.org/wiki/Alief_(mental_state)
An inability to distinguish fantasy from reality would be more related to imaginative resistance, and is the cause of people demanding censorship of violent videogames and being unable to tolerate deviations from expected norms - like people who can't accept 2B being a combat android that looks and dresses like a fashion model, or those who demand excessive realism in their fiction even if it's a science fiction fantasy world.
This is speculation and probably not original, but it might be interesting to consider how e.g. autism fares now compared to in the past. Your mildly autistic Google engineer is probably doing quite well for themselves now, but might not have had the same opportunities before engineering was a thing. So certain genes that evolution was in the process of weeding out might not be selected against, or selected against as heavily, as societal incentives change, which also has implications as to which genes are considered failures by your previous definition.
That nerds have been evolutionarily useful at least since the beginning of agriculture was a thesis of my 1998 essay on "Nerdishness: The Unexplored Cornerstone of the Modern World:"
"As Mike Waller points out, cave-nerds probably made the stone axes for early cave-Big Men to hunt with. I suspect that nerdishness has been symbiotically related to the prosperity of communities. (Howard Bloom makes a similar point.) In nomadic hunter-gatherer tribes, nerds' object-orientation would not be very useful since objects tend to be heavy to carry. Similarly, in tribes that need just about every man to hunt, nerds' ineptness at making correct split-second decisions would tend to get them eaten by wild beasts, or at least shunned by women who want men who bring home meat. On the other hand, sedentary communities that have been able to free some men up from food provisioning or war-making, make greater degrees of specialization possible, allowing nerds to flourish as craftsmen. In turn, these nerdy technologists make the tools that allow even more men to stop hunting and farming and turn to nerd-work. Thus begins a virtuous cycle of economic growth."
http://web.archive.org/web/20010204014000/http://isteve.com/nerds.htm
A recent article ties autism to the beginning of technological innovation 70k-100k years ago: https://archive.is/dil8Z
> So certain genes that evolution was in the process of weeding out might not be selected against, or selected against as heavily, as societal incentives change, which also has implications as to which genes are considered failures by your previous definition.
Indeed. I had a similar reaction when I read the phrase "evolution has been trying hard to get rid of them". It's the *environment* (material and social) that does the selection, and environments vary.
I've heard that autistic people can also shine in finance, though I forget the exact role. Finance has been a thing for quite some time.
Probably as quants in hedge funds, which is the stereotypical job that scoops up math graduates, then physics graduates, and only when it exhausts both pools does it start looking for people with a degree in finance.
See Scott's old article speculating about selection for intelligence-boosting genes tied to genetic diseases in Ashkenazi Jews compared to the rest of the European population?
The average google engineer, for whatever pleasures his life may afford him, is not particularly fertile, and his children are presumably not dramatically more likely to survive than those of others. He is from an evolutionary point of view doing quite poorly.
Rich men marry beautiful women
I've searched my family tree for others with ASD and thought a lot about the implications. If you think back to small-town / farming life, the requirements to make the rapid, complex social adjustments that we are bad at are lower. Social relationships are simpler and slower-changing. You are thought of more as just "Joe, John and Mary's son" and less as a generic person. My experience is that this makes our social deficiencies less important. Being a steady, detail-oriented person who pays too much attention to the straightness of their plow furrows is not a bad trade-off...
"In fact, just being born in March raises your autism risk a bit"
The linked article seems to refer to being *conceived* in March as higher risk.
-Someone born in March
Thanks, fixed.
(for schizophrenia I think it really is born in March - yours, someone born in November)
While we're complaining, I feel like this statement should specify the Northern Hemisphere.
As a statistical claim it doesn't need the caveat, as the vast majority of births are in the Northern Hemisphere, enough to drown out the effect of the much rarer Southern Hemisphere births.
It doesn't *need* the caveat, but failing to add a variance-reducing parameter to your model when you know what the variance-reducing parameter is and could easily add it, is silly.
Now I'm confused about "it means you were in an especially vulnerable developmental stage during flu season" - if someone was conceived in March, isn't it not flu season during nearly all the time they are in utero? Or is specifically birth and right after birth the critical period?
Right, I can look at the article myself:
"Children conceived in December (OR = 1.09 [95% CI = 1.02 – 1.17]), January (1.08 [1.00 –1.17]), February (1.12 [1.04– 1.20]), or March (1.16 [1.08 – 1.24]) had higher risk of developing autism compared with those conceived in July. Conception in the winter season (December, January, and February) was associated with a 6% (OR = 1.06, 95% CI = 1.02 – 1.10) increased risk compared with summer."
So it's really any winter month (where apparently winter includes March). The authors don't confidently attribute the results to flu in particular:
"Time of conception can provide clues about environmental factors that could be associated with autism. Environmental agents that predominate in California during December – March include virus infections and agricultural applications of certain pesticides."
Maybe it is neither a failure nor a tradeoff, maybe it is an evolutionary beneficial trait of evolution itself (let's call this meta-evolution).
Let's imagine that you're a drug researcher trying to determine the best dosage of a particular drug. You'd probably run an AB test with a range of different dosages and pick the one that makes the best tradeoff in terms of desired effect vs undesirable side-effects. But what happens if you expect environmental changes to alter the sensitivity to this drug over time and you want to keep your drug performing optimally. One thing that you could do is give a narrow spectrum of different dosages to different patients and continuously monitor differences in outcomes to tune the mean dosage that you use as your reference. You trade off optimal performance now against adaptability to future changes.
It's entirely possible that evolvability is itself an important evolutionary trait and what you're seeing here is the result of Mother Nature's (extremely unethical) AB testing framework. You see a spectrum of different levels of psychiatric disorder because without such a spectrum, evolution becomes unresponsive to changing environmental conditions and we are highly evolved in favor of being adaptive.
I.e., you call it a bug, I call it a feature ...
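To make the dosage-tuning mechanism a couple of paragraphs up concrete, here's a toy sketch (entirely made-up numbers of my own): hand out a narrow spread of doses around a reference value, compare outcomes, and keep nudging the reference so it tracks an optimum that drifts with the environment.

```python
import random

random.seed(0)

true_optimum = 50.0   # the unknown best dose; drifts over time
reference = 40.0      # our current working dose
spread = 2.0          # the narrow band of doses we actually administer

def outcome(dose):
    # higher is better; peaks at the (moving) true optimum, plus measurement noise
    return -(dose - true_optimum) ** 2 + random.gauss(0, 5)

for step in range(200):
    true_optimum += 0.05                     # slow environmental change
    if outcome(reference + spread) > outcome(reference - spread):
        reference += 0.1                     # the higher arm did better
    else:
        reference -= 0.1

print(round(reference, 1), "vs true optimum", round(true_optimum, 1))
# The reference dose sacrifices a little performance now (some patients get
# slightly off-optimal doses) in exchange for tracking the moving optimum.
```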
The problem is that evolution doesn't have foresight. All that matters is the reproductive ability of an organism right now, not what might be better for the population in the future. If there is one optimal solution, the population will converge on that, even if it means they'll be less fit in some future scenario.
I agree with Pycea's take. See https://slatestarcodex.com/2020/05/12/studies-on-slack/
For evolution to happen at all, you need population diversity. Without diversity, you have bananas, which have no ability to adapt at all.
This matters little if the environment is stable, since you will be converging on the same equilibrium, just at a different rate. If the environment is unstable, lack of population diversity becomes an evolutionary liability, because your entire population can be wiped out by a single fungus.
What I am saying here is that population diversity is itself an evolutionary trait, that will be heavily selected for when the environment is not static.
Do you really think that the human environment (particularly the social environment in which mental illness is most relevant) has been static over the latest period in our evolutionary history?
Mental illness may be a byproduct of an evolved trait that gives population diversity to allow us to adapt better to our constantly changing environment. It is selected for because better adaptability improves population survivability.
Is this comment in any way making a claim different from 'mutation rate is itself a trait subject to evolutionary pressures'? You are right that it is probably deleterious for most species for mutation rates to fall below a certain rate (for example, most bacteria have a common mutation rate, but some -- Paramecium being Wikipedia's example -- have anomalously very low mutation rates), but that doesn't suggest that specific deleterious mutations should be common. For some of these traits, as Scott discusses, it might be the case that many different specific errors have predictable negative consequences.
Mutation rate is one factor but likely not the most important. We have, after all, evolved a dual chromosome structure with different characteristics of dominant and recessive traits. We have evolved to reproduce sexually, with all of the implications of that for how traits are transmitted between generations.
If there are two ways that a particular evolutionary beneficial characteristic could potentially manifest, one as a dominant trait, the other as a recessive trait, what ends up happening and why?
Suppose you have a gene that raises your mutation rate: what happens? Generally, you suck slightly more than usual, so that gene goes away.
From a population point of view, a non-zero mutation rate is ultimately preferable, but for any organism involved, it isn’t, so within a population there would be selection for as low a mutation rate as possible, even if that was a poor longterm strategy.
> For evolution to happen at all, you need population diversity. Without diversity, you have bananas, which have no ability to adapt at all.
I see no reason why bananas can't adapt. All you need for that is for your offspring to be different from you in some way.
> If the environment is unstable, lack of population diversity becomes an evolutionary liability, because your entire population can be wiped out by a single fungus.
There is no such thing as an evolutionary liability. If some trait causes you to be susceptible to being wiped out by a fungus, but means you have 1% more offspring, that gene will reach fixation.
> What I am saying here is that population diversity is itself an evolutionary trait, that will be heavily selected for when the environment is not static.
This seems to be the crux of your argument, that genetic variation itself can be selected for, which is the biggest part I disagree with, at least for the cases discussed here.
> I see no reason why bananas can't adapt. All you need for that is for you offspring to be different from you in some way.
They're not different though, right? At least not if we're talking about the Cavendish; they're all genetically identical.
The messy, complex process of sexual reproduction and the sensitivity of the organism to the environment would appear to be the original means by which diversity is selected for and basically guaranteed.
But in many cases, particularly of rapid change, this may be insufficient to keep the group alive. That's where altruism comes in - a force towards treating others with various degrees of relatedness as kin. Thus increasing diversity, and sometimes survival.
As a force - not against individual fitness as I wrongly stated earlier - but working with it, this seems to me compelling.
That's true, except that part of the optimal solution is likely to have some degree of randomness, since it allows past organisms to have gotten past some local maxima into greater nearby maxima.
Once you get to a new local maximum though, the randomness will start to be selected out. Though maybe that's exactly what we're seeing.
You're assuming a model of a static or temporally piecewise static environment where an organism gets to a "local maximum" (by which I guess you mean perfect adaptation to the environment) and then evolves away its randomness. As far as I'm aware, organisms that do this tend to die out in mass extinctions and are thereby selected against in the evolutionary race. I'm not a big expert on human prehistory, but this static environment model also doesn't really sound to me very much like the situation of the human race over the past 100k years or so.
For an organism that is in a constantly changing environment and nowhere near either a local or global evolutionary maximum at any point in its existence, the speed at which the organism can evolve is going to be a big evolutionary deal.
It may be true that species with less randomness die out more easily. I think this is what you mean by meta-evolution? But the pressure within each species is still going to be towards some maximum fitness. The only thing evolution selects for is your ability to have more kids (and for them to have more kids, etc).
I think part of the reason that randomness isn't selected for is that more randomness is bad. Thus, adding more of it is almost always going to decrease you or your kids' fitness. It's possible to propose a scenario where most random changes are good (e.g. you're somehow at a local minimum), in which case it's very possible that randomness will be selected for. But in most actual situations, more randomness just leads to more cancer. It isn't necessary for the environment to be static, just that wherever you happen to be, most randomness leads to less reproduction.
Of course, you then have to answer the question of why some species seem to have more randomness than others, given that we all started from the same place. It could be that it's a result of some other tradeoff. Or maybe there is some fitness benefit to it. But I don't think you can just say it's a result of group selection.
You're still talking in static terms. In evolutionary competition, the winner will be the organism closer to maximum fitness. If the maximum fitness is moving, how close you can get will depend on how fast you can move. Organisms that can evolve faster will end up with advantages over those that evolve slower.
When humans migrated from Africa to cold Northern climes, there was an evolutionary pressure towards lighter skin. However, there was also an evolutionary pressure towards higher skin-tone variance, because without that, there is no lighter skin. Quite possibly, there is an equilibrium condition that will then select against that variance once human beings are in a stable environment.
But here, you're kind of saying, "we'd expect to see cognitive variance diminishing after we hit a stable environment". I'm asking, "when did that happen?"
Evolution does not need foresight for this to be true. The speed at which an organism is capable of evolving is an important evolutionary trait right now. Organisms which have been subjected to rapid environmental changes in their past will have selected for those which can evolve quickly vs those which cannot evolve and die out as a result.
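A toy sketch of this argument (my own simplification, with arbitrary parameters, not a claim about real genetics): make the mutation rate itself heritable, and compare a static fitness optimum with a drifting one. The drifting case should favour lineages that keep the rate higher.

```python
import math
import random

random.seed(1)

def mean_mutation_rate(drift_per_gen, generations=300, pop_size=200):
    pop = [(0.0, 0.05) for _ in range(pop_size)]   # (trait value, mutation rate)
    optimum = 0.0
    for _ in range(generations):
        optimum += drift_per_gen
        # fitness falls off with distance from the current optimum
        weights = [math.exp(-(trait - optimum) ** 2) for trait, _ in pop]
        parents = random.choices(pop, weights=weights, k=pop_size)
        pop = []
        for trait, mu in parents:
            child_mu = max(0.001, mu + random.gauss(0, 0.01))  # the rate itself mutates
            pop.append((trait + random.gauss(0, child_mu), child_mu))
    return sum(mu for _, mu in pop) / pop_size

print("static optimum:", round(mean_mutation_rate(0.0), 3))
print("moving optimum:", round(mean_mutation_rate(0.05), 3))
# In runs like this the moving-optimum population tends to end up with the higher
# mean mutation rate, though the effect is noisy and parameter-dependent.
```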
Evolution doesn't just occur through individuals, but through groups. Sometimes traits are maladaptive for the individual but adaptive for the group.
That's quite a, uh, controversial statement. Short of some examples of kin selection, it's generally *not* the case that individual maladaptive genes reach fixation.
There are some particular kind of super-long-term processes in which you can have group selection, which are analogous to the distinction between selection of genes versus the selection of individuals (because it is entirely possible for a maladaptive gene to reach fixation, assuming it is maladaptive only for the individual, and not for the gene itself).
For instance, imagine for a moment a world in which a larger size is always more adaptive for the individual, but larger size always eventually results in the extinction of the species as a whole. Over extremely long time frames, you will observe a form of group selection, in which the more predisposed a group is to grow larger, the faster and more reliably that group goes extinct, gradually selecting for groups that are not predisposed to grow larger.
(Not taking a stance on the broader topic involved here, just noting that group selection isn't entirely incorrect, just mostly incorrect.)
I don't have any particular specialization in this and I have zero interest in dying on this hill, but you're talking like no serious work has been and is being done on things like multilevel selection and cultural group selection (always amusing to me that folks that react scoffingly to group selection will suddenly change their tune when you add the magical word "cultural" to it).
I just piped up because these overly clean sanitized versions of evolutionary processes always strike me as off. To be that jackass: "There are more things in heaven and earth, Horatio, Than are dreamt of in your philosophy."
Was this the best angle of attack on that? Dunno. But we know it happened, and it along with epigenetics undermines convention by introducing all sorts of mushy questions and facts into their precious and generally oversimplified "hard" models.
Shamans, priests, rabbis, ministers, mullas, imams... religion.
Meta-evolution isn't a thing outside the dreams of adaptationists
"Evolvability" is actually known to be strongly selected for in nature, which is why almost all complex organisms reproduce sexually, and why sex is so common in life in general.
The idea of AB testing like this is actually a reproductive strategy, but humans don't engage in it - it is something more frequently seen in creatures that produce vast quantities of offspring, as that allows the better ones to be selected for. Humans invest too much energy into their offspring, which makes this sort of thing deleterious for them.
Autism is a spectrum disorder, right? It seems to me that ought to be an important part of considering whether or not it's maladaptive.
Isn’t that kind of what Scott is saying here? Some parts of the spectrum are mostly trade off, and adaptive (in certain environments). Other parts are mostly failures, and maladaptive in essentially all environments.
"If autistics are too quick to mistake signal for noise, schizophrenics are quick to mistake noise for signal."
Shouldn't this be the other way around? (Autistics see noise and think it's signal, and vice versa.)
If anything, both groups seem to mistake noise for signal. An autistic person hears a faucet dripping or feels a tag on a shirt, both of which are noise, and mistakes it for signal and gets distracted. A schizophrenic person sees a person walking on the street behind them, or hears someone talking on the radio, both of which are noise, and mistakes it for signal and thinks that someone is trying to kill them.
My English is betraying me then. I interpreted "mistake X for Y" as X being the reality and Y being the wrong assumption.
Your English is correct. I think the examples of autistic people mistaking signal for noise would be like the case where he mentioned that having a different shading on a face could make someone not see it as a face. Alternatively, someone who's schizophrenic might hear the wind through the trees and think they hear voices.
But, if Autie mistakes 2 faces of the same person as being different faces just because one had a shadow, I'd say that's because he took something that should be irrelevant (noise: the shading) and tried too hard to make it fit in his model (he treated it as signal). And if as you say my English is correct, that would be an example of Autie mistaking noise for signal... ?
Yeah, that sounds like mistaking noise for signal. I'm not sure if that's a common mistake autistic people make, but I don't really know enough to comment on how true Scott's characterization really is.
Let's phrase it differently: "matching with a too-sharp pattern" and "matching with a too-wide pattern." If you are matching with a pattern that is too sharp, you will perceive some input that is signal, ie. should activate the pattern, but gets read as meaningless - noise. Conversely, with a pattern that is too wide, you will perceive input that is noise, ie. should not activate the pattern, but read it as falling into the pattern - signal. Admittedly this phrasing is very confusing.
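Maybe a toy example makes the "too-sharp vs too-wide" framing less confusing (arbitrary "template" and numbers of my own): each real stimulus is the template plus irrelevant variation (shadows, angle...), while junk stimuli have no template behind them at all. A matching tolerance that's too tight misses real instances; one that's too loose finds the pattern in junk.

```python
import random

random.seed(2)

TEMPLATE = [1.0, 0.0, 1.0, 0.5]

def stimulus(is_real):
    base = TEMPLATE if is_real else [random.uniform(0, 1) for _ in TEMPLATE]
    return [x + random.gauss(0, 0.2) for x in base]    # irrelevant variation

def matches(stim, tolerance):
    distance = sum((a - b) ** 2 for a, b in zip(stim, TEMPLATE)) ** 0.5
    return distance < tolerance

real = [stimulus(True) for _ in range(1000)]
junk = [stimulus(False) for _ in range(1000)]

for tolerance in (0.2, 0.6, 2.0):                      # too sharp, moderate, too wide
    hits = sum(matches(s, tolerance) for s in real) / 1000
    false_alarms = sum(matches(s, tolerance) for s in junk) / 1000
    print(tolerance, hits, false_alarms)
# Too-sharp matching misses real instances (signal read as noise); too-wide matching
# "recognizes" the pattern in junk (noise read as signal).
```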
Oh, I wasn't seeing it that way. This probably explains the misunderstanding!
With autists, we are in the case of matching with a too-sharp pattern. You describe such case as: "you will perceive some input that is signal, ie. should activate the pattern, but gets read as meaningless - noise". I see that. Now, the way I see it:
It's not that, because it doesn't activate the pattern, it gets read as meaningless, ignorable noise. On the contrary! Since it doesn't activate the pattern, it draws attention. Those deviations from your predictions register as useful information that needs to be explained (signal), as opposed to irrelevant deviations that don't matter (noise). Then, after paying attention, they probably can tell what's going on, but they can't help getting bombarded with the feeling that every little deviation is signal worth checking out.
I know almost nothing about schizophrenics, but from what I get from Scott's description of the model, the situation you describe would only happen if the schizophrenic was already expecting something bad, or if the situation in context is, at first order, reasonably scary for anyone. In which case his failure at that moment would have been to ignore the detailed info and go with his prior, while a normal person would have had her attention drawn and quickly learned the reality. But it's not as if a normal person (with the same prior) in that situation would have just treated it as noise and ignored it.
This lines up with that other commenter who claimed that mild autism makes them respond to fiction more directly
Stereotypically at least, autistics overreact to real signals - the tag on their shirt, an irritating sound, a face with a new shadow. These things aren’t “noise”, just “irrelevant signals”. They tend to the over-literal. But this might make them good at tasks that require literal thinking and attention to minor details and subtle changes. Schizophrenics really do seem to see signals in noise, tending toward the over-abstract. But this might make them good at tasks requiring creativity.
So sort of opposites, but I agree that simply flipping the noise/signal error is a bit too pat.
Irrelevant signal is the very definition of noise though.
If my goal is to recognize my friend, a shadow on his face is just noise.
If I want to know whether I am outside or inside, paying attention to the lighting and shadows (on my friend's face) is the signal I am looking for.
Eh, we’re arguing semantics for a metaphor. When you are talking about “signal to noise” ratio, noise does not usually refer to “other valid signals you want to ignore” but errors degrading the signal and random/pseudorandom background stuff, like static. A radio with a bad signal to noise ratio would always play static. A schizophrenic (in this metaphor, not necessarily in reality) would hear music in the static that wasn’t there. An autistic would be hearing all the channels at once, unable to tune into the channel they really want to.
Well, this is my main point of contention with the article. I think noise/signal is the wrong idea here.
The way I see it, the examples (schizophrenics connecting everything to everything and concluding there is a major conspiracy against them, autistics focusing on the minutiae of shadows on a face and failing to recognize their friend) show how the two modes operate at different resolutions of worldview.
If my resolution is too low (schizophrenia), everything is an overlapping blur: everything is connected via a cosmic conspiracy that tracks me with satellites and CIA operatives.
If my resolution is too high (autism), I cannot recognize my friend because the intricate detail of shadows in my high-res picture doesn't match the picture I have stored of him.
Maybe it is just semantics, but it does not feel that way to me. In particular, I think the comments up the thread seem to match my issue with the piece (though worded differently).
Noise in SNR can also refer to identifiable stuff that you happen to not care about.
This entire writeup reminds me of Clans of the Alphane Moon by Philip K. Dick. (Who, as far as I can tell, really hated people in mania.)
I think you did a disservice here with the explanation of tradeoffs involved in autism. For instance, I know some autistic people (particularly autistic AFAB people) who are definitely autistic, and definitely very able to function in society, but also are very creative and good at gestalt thinking. There is a tradeoff involved, but simplifying it to "people person" versus "average person" versus "stereotypical aspie engineer" reduces that tradeoff to the point of meaninglessness, in addition to making it seem more like a binary (trinary?).
In addition, what role does the "autism spectrum" play in this explanation? Does it match with the triangle up and right from the "stereotypical aspie engineer" dot?
I wanted to add on to this that the post was quite informative! It definitely helped install some gears for me about why there can be a seemingly equally good tradeoff between autism and allism while there are also autistic people who need significant help to function in modern society.
A prime feature of autism is rigidity and repetitiveness, which is the opposite of creativity. Not to mention avoidance of the kinds of feelings and experiences that typically generate intense creativity.
Not saying it's impossible but more than likely they were ADHD (or perhaps co-occurring). The two are often mixed up, even by professionals.
While I 100% agree that's admirable and rare in the modern world (and needed), I don't know that I've ever encountered a definition of creativity that includes that.
Many artists have this quality, and it helps for sticking with one's vision after the fact. But that doesn't involve the actual generation of that vision which is what creativity is about.
Rather it's resoluteness and integrity and for those pushing back against it "pig-headedness". Often a good thing, but not the same thing.
A prime feature of artistry is rigidity and repetitiveness. It is most assuredly not the opposite of creativity. Creativity under arbitrary constraints is exactly what drives most art, and no art is any good until the artist goes through an obsessive period. I can't count how many examples there are of prickly artists with rigid schedules, which is exactly the demand for rigidity and repetitiveness.
You're conflating "artistry" with creativity.
But if you're genuinely curious, there have been actual studies done on this. They aren't promising.
Autism is a disorder which particularly requires acceptance of one's inherent strengths and limitations to be able to move forward. No one is helped, and many harmed, by denials and conflations of this type.
This is a really good read, with lots of "oh, that's me!" moments. Just making this comment because I wanted to offer some more concrete appreciation than a like.
This is almost certainly just...motivated reasoning. I want this to be true, so I'm going to ask.
My mother has schizophrenia -- the very bad, refusing-medicine, wanting-to-kill-people, writing-screeds-to-the-CIA kind -- and so did my great-aunt on the other side of the family, less severely. My husband's dad had a psychotic episode about a decade ago that he seemed to white-knuckle himself out of, somehow. I'm terrified that my daughter or my future children will be schizophrenic, since the genetic dice seem loaded against them.
BUT! My husband and I aren't like this: "creative, attuned to interesting patterns, and charismatic, but also a bit odd and superstitious." If anything, I'm "responsible and perfectionist, but has trouble letting things go," and he is chill and was a math major.
Soooo does that mean we maybe don't have those pesky psychosis-causing genes?
P.S. I first found SSC when I was trying to understand my mom's illness. Scott was incredibly kind and generous with his expertise when I, a total stranger, sent him a frantic email asking "will my children be nuts??"
I don't know if being a math major is indicative of much. John Nash did game theory, which is officially economics but really a branch of math. His son also got a math PhD, but seems to have handled his schizophrenia more poorly and was still being taken care of by his parents up to the time of their deaths.
In this case, it's indicative of my husband's Bayesian soul. He's not a creative or paranoid type. Tangentially, I used to see John Nash wandering around school!
In case you're worried, even if you yourself were schizophrenic, the risk of your child having it is apparently only about 10%. And since you're not (as far as I know), you should expect it to be less.
https://www.youtube.com/watch?v=568VFpwYOGs
I'm guessing Scott already told you that, but I figured I'd note it anyway.
You'll probably only know for sure if you get tested for risk genes.
Yep -- no such test exists outside labs.
Wait. Meaning no such test exists unless you know where to go, or no such test exists except the experimental, not-available-to-the-public type?
My brother in law is schizophrenic and I have three kids, so I'd love to be able to get a better handle on the risk level.
JonathanD, I'm so sorry if you're stressed about this -- it is a terrible thing to worry about! And as far as I can tell, there are no such tests available to the public. Just in labs where researchers are still trying to figure it out.
It is what it is. The risk isn't that high, and we knew the risks when we had the kids. Thanks for looping back to answer me on a more or less dead thread.
It's not that I'm constantly worrying about it, it's just something we all have to be aware of. My oldest is ten, and as we talk to her about teenage things, we plan on telling her that whatever the case may be with other kids, she can't smoke pot. Not out of moral opprobrium, but because pot is known to trigger schizophrenia, and it's a risk she can't afford to take.
Likewise, we'll be telling all our kids at an appropriate age about delusions, and that if they ever see or hear something that doesn't seem very likely to be real, it probably isn't and they need to tell an adult right away.
Just, you know, if there was a test that gave you a heads up about risk level, that would be nice. Anyway, thanks again.
> I think most psychiatric disorders exist on a spectrum from mostly-tradeoff to mostly-failure (what we might call "high-functioning" and "low-functioning" versions of the same phenotype).
For completeness's sake, there's a hypothesis that anorexia (and maybe other eating disorders) fits this category too. Guisinger 2003 (https://pubmed.ncbi.nlm.nih.gov/14599241/) argues that there's a tradeoff between "being able to tolerate starvation in order to flee famine" and the pathological "self-induced starvation and hyperactivity" behaviors that constitute anorexia. (This came up in a comment thread or two on some SSC articles)
I think Guisinger's formulation is a little "just-so," but I think there's something to this. The same behaviors can be induced in laboratory animals (see this article, which was recently cited in the Subreddit https://pubmed.ncbi.nlm.nih.gov/22231828/).
I only see the abstracts. Is there a way to get to the articles?
I can see how hyperactivity might help you escape an area that doesn't have enough food, but how would that explain a person with access to food refusing to eat?
> I can see how hyperactivity might help you escape an area that doesn't have enough food, but how would that explain a person with access to food refusing to eat?
The Guisinger paper has a few ideas, some more plausible than others. Think something like "extreme desire to conserve food during migration."
---
Here is the Guisinger paper: http://www.adaptedtofamine.com/wp-content/uploads/2015/01/guisinger-an-pr-2003.pdf
Here is the Klenotich and Dulawa chapter: https://www.researchgate.net/profile/Stephanie_Klenotich/publication/220016913_The_Activity-Based_Anorexia_Model/links/0a85e52d80de043a13000000.pdf
You may enjoy learning about Sci-Hub!
Thanks for the links!
I haven't had a chance to read the whole thing, but I do now think the adaptation-to-famine idea has merit. I don't think it should be associated with hunter-gatherers, however. I think it should be associated with an ancestor that wasn't smart enough to make an informed, conscious decision to relocate.
Why is anorexia so positively correlated with higher social class? I once stayed in an apartment building that had a floor dedicated to in-patient anorexia treatment and the anorexics radiated so much superciliousness when they walked by that I always felt like I must look to them like a peasant with cow droppings on my shoes.
Maybe the personality type that increases risk of anorexia correlates to class.
Fairly massive confounder that these were the people who could afford those treatments
Something has occurred to me: In order for anorexia to be adaptive, there has to be some way to turn it off; otherwise it's just slow suicide. The Guisinger paper suggests that what turned it off in the ancestral environment was social pressure to eat after the famine was over. But how would that work with nonhumans?
Perhaps not competing as much with other people for food during a famine has some advantages. Assume it's possible to get a little food without fighting for it, and not fighting saves energy and makes alliances more stable.
While of course being wary of evolutionary explanations of anything, this seems like a more intuitive explanation of it than what Scott said about perfectionism. Given how anorexia presents itself differently in different cultures (e.g. historically it was stomach pains or digestive issues in Asia), something more underlying seems more likely.
On the other hand, it was also much rarer in Asia before the spread of Western culture, and there's no reason why one factor can't influence the other. It could be the intersection of two or more different tradeoffs, along with a mess of other possible failure modes.
I think "failure modes" is the right way to think about this. Some people, for some reason, enter a "starve yourself and exercise incessantly" mode and can't get out of it. Multiple things might trigger it, but the same biological program is being activated.
Yeah the 'genetics' behind psychiatric disorders are a mess, for the simple reason that said psychiatric disorders aren't well-defined (psychiatrists soberly use the word 'spectrum'). With that in mind it's hard to say much of anything about the genetics behind them beyond 'this variant correlates with this trait', apart from the absolute lowest-hanging fruit (rare Mendelian disorders and the like).
Also, evolution doesn't necessarily 'eliminate' a deleterious variant, especially if said variant is recessive - it may reduce that variant's frequency somewhat.
Also, lots of reasoning by association: X is associated with Y, and Y is associated with Z, isn't that strange? Well, correlation isn't transitive to begin with, so you could just be looking at nothing.
Scott is interesting to read as a psychiatrist, but his grasp (and that of most of SSC/ACX) on genomics is tenuous to say the least. I feel like actual geneticists should go and engage him seriously, but I'm already procrastinating on actual work by posting this, so whatever.
Anorexia is not something you can talk about meaningfully without discussing culture, a subject Scott's avoided thus far (though maybe that's coming? Still it's weird for such a foundational factor to be almost completely ignored in setting his grounding - particularly when his profession as a whole has a long history of this, and for transparently self-serving reasons).
Here's a question: Is anorexia still as common among adolescent girls as it was a decade or two ago, or has it been displaced by the rise of transgenderism?
I read somewhere that there is a strong correlation between the two, perhaps pointing to some more general psychiatric condition of body dysmorphia that expresses itself in culturally specific/socially acceptable reasons for feeling discomfort in one's body?
Interesting. Being able to tolerate starvation in order to flee famine would also correlate with being able to tolerate starvation in order to avoid obesity. At the high-functioning level that just makes you healthy, and maybe a supermodel. Throw in some failure to process just how not-fat you are supposed to be and how important it is to be really really not-fat, and you get unhealthy anorexia.
As with anything involving evolutionary biology, I'm worried that lacking expertise in the field I'm vulnerable to conjuring just-so stories, but this one is just so good at explaining my observations.
> I used to work in the business district of San Francisco, meaning I got to see a lot of very high-functioning people with mental disorders.
Why is it a disorder then?
High functioning isn't necessarily the same as not disordered. Per the DSM-5 (and quoted from Wikipedia), a mental disorder is "a syndrome characterized by clinically significant disturbance in an individual's cognition, emotion regulation, or behavior that reflects a dysfunction in the psychological, biological, or developmental processes underlying mental functioning." A person can have a significant disturbance from something but also be able to function in society and achieve societal standards (the definition of high functioning). If a person were struck by sudden intense pain every day for an hour, they definitely have a disorder, but they could be perfectly normal in the other 23 hours of their day.
I understand there can be disorders but -- and maybe I'm evincing a disorder here :) -- that paragraph didn't make it clear that he actually clinically evaluated those people and confirmed their DSM status. I think it's useful to emphasize this point because it's possible there may be over-diagnosis of people "having disorders" simply by outward behaviors.
Scott being a psychiatrist, I took "see" here to mean "see as a patient", not "see on the street".
That's certainly possible but I think it would be socially useful for Scott to be a bit more pedantic to reduce the chances of a Zeitgeist forming of normal people diagnosing others with disorders with certain outward behaviors.
Which we all do and this isn't a totally wrong thing. An informed non expert can be better at identifying something like ASD than a psychiatrist who is expert in general but has backward ideas about what ASD is. The Spectrum communities are rife with stories of psychiatrists who strongly state a patient has no ASD because it doesn't fit their pet categories or what they were taught a decade ago. I know many Aspies or family or friends of such whose judgement on whether someone has ASD is a lot more reliable than a random mental health expert. Expert diagnosis is needed for insurance and HR reasons but not to help people better understand themselves or those close to them...
My non-clinical definition is that its a disorder if you don't like it and a feature if you are neutral to or like it. The stereotypical tortured intellectual might have a disorder while their mirror image that ended up puttering about a research lab surrounded by support staff and a caring spouse that makes sure they remember to put on new clothing every day does not.
This is not a bad summary of the main point of the ASD neurodiversity movement!
>In fact, just being conceived in March raises your autism risk a bit - it means you were in an especially vulnerable developmental stage during flu season!
So, is it plausible that flu shots (for parents-to-be) actually reduce autism?
I think you have it the wrong way. Clearly, autism causes flu shots.
Autistic people can be quite skilled at science and technology jobs, so that checks out!
Maybe autism and flu shots both cause March?
Well, COVID-19 caused an anomalously long month of March, vaccines are diluted virus, seems reasonable to me.
What is it now? March 350th 2020 I think. It's the longest march since 1969.
I think the reason is that march, auto and flew are all ways of traveling a long distance
They say that March comes in like a lion and goes out like a lamb. March 1969 had been more like one of those Biblical angels with four lion heads and four lamb heads and a couple dragon heads for good measure, all spinning within a gigantic flaming wheel, and if you met its gaze for too long then you died.
Entire weeks repeated themselves, or skipped around, or moved backwards. There was a week when the weather stopped, and it was an even twenty-two degrees Celsius across the entire planet. The heavens turned gray and were replaced by a message saying “sky.smh not found, please repent transgressions and try again”. All animal cries were replaced by discordant buzzing sounds.
Nobody knew how long it lasted. Probably had been different lengths of time for each person, each shunted on their own separate timelines into weird calendrical eddies and whorls. Some people who had started off middle-aged ended the month with gray hair and soft, infinitely sorrowful voices. Others seemed to have grown younger. Most people looked about the same, but you could tell things had happened they didn’t want to talk about, days repeated dozens of times or weeks that happened backwards, or moments when timelessness had briefly encroached on time and for an infinitely long moment they had touched Eternity.
Uh-oh… What will await us at the end of that distance? April is the cruellest month.
>Overall genes that increase risk for ADHD decrease risk for OCD, and vice versa, suggesting that at least one advantage of ADHD is lowering OCD risk.
With this assumption how would one account for people who have both ADHD-like tendencies (forgetfulness, distractibility, impulsivity) and OCD tendencies (obsessive checking and the like)? Would these contradict the "opposite" model, or could one explain the obsessive behaviors as a coping method for ADHD symptoms ("I have to check to make sure I turned the stove off because I know how easily I forget!")?
Specifically, if this model is perfectly true, no one should be diagnosed with both ADHD and OCD, which I presume is false.
And there is comorbidity between the two conditions: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6700219/
Both are failures of your brain to determine what it should prioritize.
Depends what you mean by obsessive checking. In my experience, ADHD-style obsessions with things are more an inability to stop thinking about something you want to stop thinking about, rather than the flat out compulsions you'll see from OCD people.
That's the problem with attempting to generally ground talk of these syndromes in genetics. It's based on a kind of biomania, the implicit belief that if you just drill down far enough all variances in behavior will be explained by genetics and biology.
Those are obviously always involved, but sometimes biology is simply the "proximal pathway" through which cultural or personal contextual factors affect the individual.
In this case, the common sense conclusion would be that past forgetfulness led to overcompensation in the form of obsessiveness. Sometimes the worst and most damaging effects of "mental illness" are not the "actual" symptoms but rather the strategies the individual employs to compensate or combat those symptoms.
This is fascinating. I was diagnosed with ADHD as an adult but I always had some mild OCD-like traits involving obsessive checking: turning around halfway to work to go home and make sure the fridge is actually closed, checking my alarm clock ten times before going to bed, always turning on the oven light when I turn the oven on so I have a visual cue to remember to shut off the heat, pulling over on the side of the road to make sure I actually put the gas cap back on, going over my work fifty times before turning it in to make sure I didn't make a silly mistake. Now that I'm on medication for ADHD I don't do any of that anymore! I know some people with actual OCD so I never thought that I had a full blown disorder but I also knew that my obsessive behavior was not normal. I didn't realize until reading your comment just now that my obsessive behavior is pretty much gone. It seems very likely that all of that was just overcompensation for past forgetfulness. I've always devoted an inordinate amount of time to "idiot-proofing" my life because I know I have a tendency to be, well, an idiot!
It doesn't make sense to say a trait X is Y% genetic. And if you're talking about heritability, the measure is population dependent and has little to do with genetic determinism.
Why doesn't it make sense? Even if it is population dependent, quantifying it can still be useful. Obviously there are things that are 0% genetic (first language) and things that are 100% genetic (eye color), but if schizophrenia rates are correlated with parental rates, even if it's not 100%, that's still useful to know.
I think the issue Onslp is getting at is that "heritability" is a technical term with a precise definition that is very different from what we think of when we say something is "genetic," but educated people still fall into the trap of using them interchangeably.
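For reference, the textbook narrow-sense definition (standard quantitative-genetics notation, nothing specific to this thread): heritability is the share of phenotypic variance in a particular population attributable to additive genetic variance, and the classical twin-study shortcut estimates it from identical vs. fraternal twin correlations.

```latex
% Narrow-sense heritability: additive genetic variance over total
% phenotypic variance (additive + dominance + environmental) in one population.
h^2 = \frac{V_A}{V_P} = \frac{V_A}{V_A + V_D + V_E}
% Rough classical twin-study estimate (Falconer's formula):
h^2 \approx 2\,(r_{MZ} - r_{DZ})
```

The denominator is why the number is population dependent: change the environmental variance and h^2 changes, even if the genes involved stay exactly the same.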
First language is 0% genetic in terms of physical causes, but it's got high heritability.
That's probably a poor example because adoption studies would destroy it instantly.
The classic number of fingers is a good illustration of the limits of heritability, I think.
uhhhh section numbering go I,II,IV, whaaaat?
One of the things I struggle with is the notion that schizotypy correlates with charisma, especially because schizotypal personality disorder is also characterized by strange communication patterns, inappropriate responses to social situations, and inappropriate dress and presentation. Is there a study correlating schizotypy with charisma?
Sometimes I find your writing wonderfully nuanced and thought-provoking, other times the oversimplifications and unexamined assumptions perplex me.
In the first place I don't know whether to admire or be appalled at the attempt to tackle this complicated subject in these tiny snippets, but this is our culture and I guess at least someone is talking about it.
But treating mental illness as a homogeneous entity as you sometimes do is problematic. There is zero evidence that they all share anything but some degree of inconvenience to the sufferer. And as you rightly pointed out in your last post, far different etiologies can lead to the exact same symptoms (and similar etiologies to different symptoms). Sometimes the "health" in mental health is what appears to be central. Other times it appears to be an understandable (if unchosen) reaction to problems of living. Other times based on continuing once adaptive behaviors that are no longer so. And so on.
And it presumes that those who do not fit current DSM diagnoses are healthy, that there are no illnesses disguised by their social acceptability. I don't know how anyone with any experience of human beings can deny that something akin to the pathology of normalcy affects a significant minority, and is far worse and more sickly a thing than many DSM diagnoses. The foundation of your analysis ad hoc embraces psychiatric conventions of thought and "mental illness" when the discipline doesn't even define what is meant by mental or mind, much less any other foundational term.
And the conclusion that this heterogeneous concept of "mental illness" across the board is "mostly" a sign of failure genes is on its face questionable. Over half the population has a mental illness at some point. So... 40% or so of the population is a genetic maladaptation?
That a significant number of syndromes are a result of mostly maladaptive genes and flat out bad environment factors is incontestable. But a) these are overcounted due to methodological biases towards genetic explanations and b) that they look so similar to "trade-off" based syndromes is a sign of the deep problems of the current system.
And your discussion doesn't mention the most obvious source of trade-offs: sensitivity. The same exact "bad genes" implicated in some psychiatric symptoms are also responsible for thriving. In other words, the presence of some gene variants mean that, depending on the environment, they will lead to either sickness or its opposite [1].
And this corresponds precisely with everything we know about basic biology. You can have all the gene variants for something and not get it. We are distinct in the animal kingdom in the degree to which we are "programmed" to be highly biologically attuned to culture, such that it becomes sedimented in the body. That nature is nurture and vice versa, to a significant extent.
But your overwhelming focus seems to be on a simplistic treatment of genes and obviously unhealthy very early experiences, to the exclusion of all else. I don't even know where to begin responding to a phrase like "it looks like evolution has been trying hard to get rid of them". I'm hoping that's just lazy phrasing.
The problem with the overemphasis on the "failures" is that a) individuals amenable to turning their "syndrome" into an advantageous trade-off are under-emphasized and b) the degree to which any psychiatric symptoms can be damaging, regardless of whether it meets a DSM threshold, is brushed under the table.
For an example of the latter, the vast majority of individuals with any ADHD symptoms to any extent are impaired, and impaired to the extent to which they display those symptoms [2]. The common sense conclusion from this is that ADHD is not itself the illness; its symptoms are signs that the person is already sick. And full-fledged ADHD is simply an extreme form of that illness (one that often emerges as a consequence of disadvantage [3]).
Successful individuals are simply those who go from being ADHD - having no ability to distance themselves from their worst excesses - to having ADHD. It's a developmental achievement in those without the worst genes/environment combo, and involves calming/channeling its co-occurring cyclothymic temperament [4] into an asset.
But does every syndrome operate like this? Of course not. But when you lump all these random syndromes together, it obscures more than it reveals.
1. http://dx.doi.org/10.1038/mp.2016.114 & https://doi.org/10.3390/genes11111248
2. https://doi.org/10.1016/j.psychres.2019.02.003 & https://doi.org/10.1111/j.1469-7610.2011.02467.x
3. https://doi.org/10.1016/j.ssmph.2020.100548
4. https://doi.org/10.1016/j.jad.2012.04.034
Pathology of normalcy? Tell me more. Does it manifest differently in different cultures?
It's a term coined by Erich Fromm in the 50's, but it's been so long I can't do the concept justice based purely on memory. But it's bound up in a criticism of the cowardice of psychiatry in refusing to do anything but reinforce social norms.
I'd forgotten he actually produced a whole book on it based on a lecture series, and a quick Google scholar search shows there's been a few recent attempts to revitalize the concept.
His views on it stem partially from his experiences in Nazi Germany. He and others in the Frankfurt School who lived in both countries often spoke of the similarities - that the propaganda here was the same if not worse, just in a different direction.
So it goes. Here at least is a quote of his on it: https://www.goodreads.com/quotes/7434279-the-pathology-of-normalcy-rarely-deteriorates-to-graver-forms-of
Thank you.
Essay about a "normal" mother squelching her daughter's capacity for delight: https://medium.com/the-bts-effect/on-pleasure-510c9795bff4
Discussion: https://www.metafilter.com/190396/Other-people-are-going-to-think-its-weird#8064253
> There is zero evidence that they all share anything but some degree of inconvenience to the sufferer.
And a plethora of genetic risk factors. As pointed out.
When you're talking about hundreds of thousands of variants, that's not a meaningful statement.
Particularly when you notice that many disorders are extremes of a trait that share strong genetic overlap with those within "normal" variation. If you included these folks in the overall analysis, you'd find a similar overlap with DSM diagnoses. What does that indicate? Next to nothing.
In my experience, a lot of lawyers have ADHD (especially in the field of criminal law). This makes sense for similar reasons to why ER doctors would have ADHD - if you have trouble doing things before the last minute, doing long-term followup, and have a whole bunch of coping strategies around organizing chaos already, those fields are a much better fit than other areas of the appropriate profession would be.
The problem, of course, is that the difficulties of the disorder are still very much there. They just get in the way less.
One tradeoff is suggested by how minorities usually fare in a society: most of them repressed, but one or a small group rising to the top by virtue of their unique perspective. The outlier of the outlier might be king for having something like two eyes.
Re: Autism, I've been very persuaded by this review: https://www.pnas.org/content/111/42/15220
"Autism as a disorder of prediction"
It beautifully ties together all the different symptoms of autism into the predictive processing model, postulating that autistic phenotypes arise from weak inputs from top-down predictions, at different time scales and layers.* I really recommend this read.
It also fits the observation that many of these risk genes are in synaptic proteins, and their phenotypes often include abnormal dendritic spine physiology.
I think it fits very well into what you say. To the extent that autism is a tradeoff, it's a tradeoff of stronger bottom-up input (and weaker top-down priors) which makes your thinking less biased, lets you see things others ignore, and lets you forgo the intuitive wrong answers and actually do the math. The price is that you can't adjust yourself to sensory input, have trouble predicting where the flying ball will go, and find it hard to not take someone's words literally.
To the extent that autism is a failure, it's a failure of what Clark calls "precision-based weighing" - i.e. deciding accurately how much weight to give to bottom-up input vs top-down priors. And I suspect that schizophrenia shares much of that in common, and may well be a similar failure and the opposite tradeoff.
*At one level, it makes a person unable to adjust away a constant sensory input, so they stay highly sensitive to it and can't ignore. It also makes them really want a predictable, self-caused sensory input, i.e. stimming. At even shorter timescales, it leads to the very idiosyncratic motor problems. At longer timescales, it makes a person unable to use priors on human behavior when interpreting other people.
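If it helps to see the "precision-based weighting" point concretely, here's a toy Gaussian sketch (my own illustration with made-up numbers, not the Lawson/Rees/Friston model itself): perception as a precision-weighted average of a top-down prior and bottom-up sensory evidence.

```python
# Toy sketch of precision-weighted combination of a top-down prior and
# bottom-up sensory evidence (Gaussian case). Numbers are invented for
# illustration only.

def combine(prior_mean, prior_precision, obs_mean, obs_precision):
    """Posterior mean is a precision-weighted average of prior and observation."""
    posterior_precision = prior_precision + obs_precision
    posterior_mean = (prior_precision * prior_mean
                      + obs_precision * obs_mean) / posterior_precision
    return posterior_mean, posterior_precision

prior, observation = 0.0, 1.0   # the prior expects 0, the senses report 1

# Strong top-down prior: perception stays close to what was expected.
print(combine(prior, 4.0, observation, 1.0))    # posterior mean = 0.2

# Weak top-down prior (the autism-as-tradeoff reading above): perception
# hugs the raw sensory input, so details and surprises dominate.
print(combine(prior, 0.25, observation, 1.0))   # posterior mean = 0.8
```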
It appears to be a good fit with Markram's intense world theory, the "magical world" experience being one part of what overwhelms.
And autism seems to match well with other syndromes in which the essential trade-off is: you may have a higher ceiling - particularly in unique, very specialized fields - but in order to reach that peak greater stringency is required.
It seems most people's floor and ceiling is difficult to move much, so long as their experiences stay within some normal range. Autistics and some others seem much more variant and dependent on the environment.
In other words, their path to salvation is narrower and more treacherous than that of others. But if they can stick with it, come to terms with their strengths and weaknesses and form a way of life around it, their mountain-top will likely be particularly high. And undoubtedly more unique.
I think that's more or less the same theory as the Lawson/Rees/Friston paper I linked.
When you write a long comment after reading the first part of the post, thinking it's going off in another direction, only to find out that's the point Scott was making :-( :-)
But I would add that there is another kind of trade-off to be considered: the general performance/tolerance-for-errors trade-off that happens when you overclock a computer. It might not be that autism-y genes are ever helpful, but that the overclocked brains that tend to make the brightest engineers just have the tolerance for that kind of screwup turned way down.
So you don't need to even suppose that autism like traits are somehow the same thing making one a better engineer.
Another possibility is that good engineers are the result of genes that put very few points into some kind of redundancy and that limited redundancy means that even a few aspie genes result in aspie behavior.
--
I don't think this is the most likely hypothesis for autism or ADHD, just because the failure seems so close to behavior that's good in other circumstances, but it's a hypothesis to consider.
"Overclocking" is Henry Harpending & Greg Cochran's explanation for the higher frequency of genetic disorders affecting a certain metabolic pathway in the brain for Ashkenazi Jews, and their higher IQ. Similarly, they point out that sickle cell alleles seem like a quick fix for a recent rise in malaria, whereas evolution would have picked something less harmful if it had a longer amount of time to find it.
It should be noted that Greg hypothesized this before the advent of GWAS and polygenic scores. None of these have validated his theory so far.
I think it's reasonable to say that Asheknazi IQ is in some way based on genes. But Greg's proposed mechanism (rare genetic disorders = smarts) probably isn't true.
I think that Greg's theory was that, like with sickle cell anemia, it was advantageous to be a carrier of an allele but not necessarily to have two copies (and the disorder).
You made me really uncomfortable with your justice system analogies. They seem so inapt that it's hard for me to accept the surrounding discussion.
> In a country biased towards finding people innocent, it only takes a tiny failure to let a murderer go free. In a country biased towards finding people guilty, it would take a huge failure, or many different failures in parallel.
For this to be correct, it would require a bizarre measurement of failure size.
The problem is this: the simplest way to let a murderer go free is to convict an innocent person instead. Problem solved! Nobody gets murdered twice; as soon as you convict one murderer, you're done. But the "we find everyone guilty" system will do this *all the time*. They do it on purpose! That's the whole point of a "we find everyone guilty" policy.
In order for this mistake to require a huge failure, suspecting the wrong person would have to be considered a huge failure. But that can't be right. For just one example, if a married woman dies, her husband is automatically suspect. I don't think that's a mistake. I don't think many people at all think that's a mistake. But it would certainly be a mistake if her husband was automatically guilty!
> Go back to the two ways a justice system can go wrong. First, it sets too many guilty people free. Second, it convicts too many innocent people. If you ask which tradeoffs cause each problem, you'll find they're opposites.
Well, no. There are more than two ways the justice system can go wrong. This perspective only makes sense if you consider "the justice system" to consist of nothing but court trials, into which suspects are deposited by some ineffable force. And in the larger perspective, we see that these two problems are not opposites at all! Every time you convict an innocent person, you automatically let a guilty person go free! Increasing the one metric necessarily increases the other! (Though not vice versa.)
Convicting an innocent person doesn't mean you let a guilty person go free. There may have been no crime in the first place.
Though you're right that convicting innocents and freeing guilty persons are not mutually exclusive, so the analogy maybe wasn't the greatest choice.
There's a mathematical formalism called "detection theory" which separates bias (what is called "tradeoff" here) and sensitivity (the inverse of "failure" here), starting only from the success and error rates for two opposite problem types. Its results are usually represented on a ROC curve, i.e. a graph with false alarm rate on the x axis and hit rate on the y axis; this can easily be adapted to put "rate of using precise cognition when the situation doesn't call for it" on the x axis and "rate of using precise cognition when the situation does call for it" on the y axis. Let me refer to this image I've drawn in the usual style of the blog: https://imgur.com/a/7ucjzUA
There's a few differences of interpretation between this formulation and Scott's: Detection theory says you can have super-high bias even with high sensitivity, unlike Scott's model which tightens the possibilities for "tradeoff" as we approach no failure. It's just that bias matters less and less as you reach towards infinite sensitivity (no failure). At infinite sensitivity, bias is indeterminate, not zero. (At infinite bias on either side, sensitivity is also indeterminate.)
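For anyone who wants to play with the formalism, here's a minimal sketch of the standard equal-variance Gaussian version (the hit and false-alarm rates below are arbitrary examples; scipy's norm.ppf is just the inverse normal CDF):

```python
# Minimal equal-variance Gaussian signal detection sketch: recover
# sensitivity (d') and bias (criterion c) from a hit rate and a
# false-alarm rate. The rates below are made-up examples.
from scipy.stats import norm

def dprime_and_bias(hit_rate: float, fa_rate: float):
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa             # sensitivity: how separable signal and noise are
    criterion = -0.5 * (z_hit + z_fa)  # bias: where the decision threshold sits
    return d_prime, criterion

# Roughly the same sensitivity, very different bias: both pairs give d' of
# about 2, but the second observer is far more willing to say "signal"
# (negative criterion = liberal bias).
print(dprime_and_bias(0.84, 0.16))
print(dprime_and_bias(0.98, 0.50))
```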
Interestingly, though Scott applies the model primarily to mental health here, he briefly touches on its applicability to social issues such as crime. One thing I've noticed is that an enormous amount of energy, political will, debate, advocacy, resources and thought are spent on where we set the bias term, and discussions that touch on the sensitivity term (anti-failure) tend to be rare and difficult to find. Presumably this is because moving the bias term is relatively easy and improving the sensitivity of anything can be very difficult, but this still tends to annoy me at times.
Ahh - For some reason, when I loaded the page an hour and a half ago, your comment wasn't showing up. (So I wrote effectively the same comment, albeit far shorter, below.)
On the politics side of things, my impression is that this is because you can't have an argument about whether you should make a system more sensitive, everyone agrees yes. But you can very easily have an argument about where the bias should be placed. Worse than that, whichever direction the proposed sensitivity points in, people who think the bias should be pushed in the other direction interpret it as an argument to push the bias in that direction and fight back against it in that way. So, sensitivity proposals actually get turned into bias arguments and you get nowhere (for example, a sensitivity improvement that decreases false arrests would be interpreted by tough-on-crime advocates as an ideological push to be soft on crime).
A caveat is that people can and do argue about whether the cost of sensitivity increases justifies them.
I think you drew the picture wrong according to your own description - "appropriate" should be on the Y axis and "inappropriate" on the X axis, to match "false alarm rate on the x axis".
And I think it's worth adding that the neurodiversity movement often conflates several things: is x a desirable trait or not, should we search for a cure for x, and should society avoid regarding x as a disability or a negative so it's not viewed with pity and we don't treat those with x less seriously. I mean, it's really hard for any trait to be totally neutral.
I mean, consider being very short. Every person I've met under 5'1" wishes they were taller; being that short is almost purely a negative in today's society (even if it just keeps you from reaching high objects), yet we don't view being short as a disease or medical deficiency, and it might not be worth looking for a cure for being only 5'. In other cases, like ADHD, it's probably good to find a treatment (even if mild ADHD is on net beneficial, it's even better to be able to turn it off), but we don't view those who have it with much pity or treat their opinions as less important. But that's only possible because they (we) seem mostly like everyone else.
Rhetorically and psychologically it's really hard to say yes, all things considered, severe autism makes life worse and a cure would be good, but I don't want you to tag sufferers with all the negative attitudes we associate with mental disability.
I mean hell, even if you're a high-functioning autistic person, surely it would be great to have at least a temporary treatment letting you switch on the way normals see the world. Even if you don't think it's better, surely it would be useful to experience it for, say, 6 months.
"don't think want" should be "don't want"
Have you considered the possibility that for a high functioning individual, their way of viewing the world is inseparable from the way their brain functions? There is no "normal" in that sense, because we all view the world in a particular way, based on our personality, experiences, and brain functions.
Sure, I bet a lot of people who have tics and trouble with certain situations would enjoy taking a break from it once in a while. But that applies to just about everyone, since nobody goes through life without worry or fear of certain situations.
I think it's important to differentiate between alternate modes of approaching life, and straight up negatives. I would say that's probably Scott's overall point with this post. Someone who is so autistic they are non-verbal and need daily care their whole lives would benefit from treatment or a cure. Someone who is good at computer programming or accounting because they have autistic-bordering traits may not. If you cure them of their overly literal thinking, maybe you cure them of their ability to do math?
Actually, sometimes being very short is a sign of a medical problem.
Also, I've heard of parents trying to get treatment for kids who are just short, but not unhealthy. On the other hand, I haven't found anything backing this up.
There's an operation to make adults taller-- break the thigh bones, then apply careful traction while they're healing. This can be worth up to 5" (12.7 cm).
Yes, that's why I was careful to pick the height in my example as 5', which is well above the threshold at which anyone treats it as a medical condition warranting treatment. But, rereading my comment, I realize I wasn't nearly as clear on that point as I should have been.
A neighbor of mine sought hormone treatments for her short son. 20 years later he’s still 5’ tall and also 5’ wide. Not sure if the treatment is to blame for the latter, but it certainly didn’t work.
Being able to “switch” freely to view the world through different frames is a good point. “Useful” for the purposes of understanding others as well as becoming more naturally aware of your blind spots.
Certain types of meditation intend to go into lower level sensory information, which correlates with “autism as a higher prior in low-level information”.
Personally, I’ve been able to see low-level phenomena (e.g. breathing walls, white noise in visual stimuli, pixies in the blue sky) which are just normal for some people. I’ve asked others and these stimuli are foreign to them, like they used to be for me!
It’s also possible to “reverse the stack” and go to higher level processing in meditation, though I’m not familiar with the effects.
The comparison to shortness... I think I'm going to start introducing my wife as "she's a high-functioning short person, she owns a step-stool".
It’s another tradeoff spectrum. E.g. I can’t see out the peephole of my front door, but I have plenty of legroom on planes.
Aren't there height limits for some fighter pilots? And maybe for people operating tanks?
My grandfather was rejected from WWII for being too short. His slightly taller brothers served in the war, and he was always bitter about it. By Vietnam the height requirement was supposedly relaxed and my uncle, also quite short, became a Marine. He enlisted partly because he knew it would be a kind of vindication for my grandfather.
I find it ironic that it's pretty stereotypically high-functioning-autistic to ignore all the social implications and overtones of saying we should try to cure autism and/or avoid carrying fetuses with a high risk of autism to term because autistic individuals have lower expected utility in our existing society.
Not saying that is true, I just find it ironic that the kind of claim which often provokes the most negative response from the autism neurodiversity folks is something that I've found to be frequently voiced by individuals with a degree of Asperger's.
I find it fascinating to apply the tradeoff/failure framework to hiring, and was just having a debate with my household about this - the similarities are striking (innocent : guilty :: bad hire : good hire, where you want to catch the good hires and let the bad hires go, but it's unclear if a hiring process that is failing to hire as many engineers as it wants to get is failing or just very far on the spectrum).
We know, or at least strongly suspect, that a minority of people experience weird immune reactions to viral infection, some of which include psychiatric effects. This is getting much more attention now, with covid, but it has long been associated with viral infection in general, because it is more about the defective immune response than a feature of a specific virus.
I read a lot of 19c medical history, so I can compare today with a time in which everyone got a lot of infections, which they had to fight off without help. Obviously, we'd expect to see a lot more of this, and while I can't prove it, there seems to have been an accepted connection between viral illness and psychiatric symptoms in adults and teens, and many probable cases.
But there are surprisingly few mentions of anything resembling low-functioning childhood autism, and even fewer relating such a condition to a recent illness or illness during pregnancy. Both early childhood illness and illness during pregnancy were quite common at the time. This puzzles me. If it is at all related to immune function, which the March birth thing suggests, we should have seen more of it even well in to the 20th century--the 1918 flu caused a lot of weird psychiatric reactions, and other diseases were prevalent. Early 20th century records do have many more cases, under the name childhood schizophrenia, but few seem to have made any connection to viral infection. Maybe it's just not properly documented, but it strikes me as a real puzzle. Everything attributed to covid I'm used to reading about in historical accounts, and seem to be well-established immune responses. I think we've massively underrated the role of viruses in triggering certain medical conditions, and the wide variety of possible symptoms. But I don't see many descriptions matching childhood autism. Just something I wonder about, as there are other reasons to think there is a link between autism and some sort of immune dysfunction.
Maybe the children who would have been born autistic in the present day just died in infancy back then?
I think that is one of the better theories...that it correlates with a specific kind of immune dysfunction that made surviving infancy (or even a fetus making it to full-term) impossible in "pre-modern" disease conditions. And fully "modern" conditions only trace back to the late 1950s, with childhood vaccines and other advances. Even then, most adults living would have been survivors of a pre-modern era. As the "modern" era progressed, it doesn't seem terribly surprising that certain conditions seemed to come out of nowhere or get much more prevalent. There are a lot of other variables that must play a role, but we seem exclusively focused on new things that could have *induced* these changes. It is possible that removing past selection pressures has quite a bit to do with it.
It seems that this is an evolutionary version of a bias-variance tradeoff, which makes sense, since in any optimization system you'll find a tradeoff like this. And as with the examples here, the system can in theory minimize either, but typically there will be some of both.
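To make the analogy concrete, here's a toy sketch (my own example with synthetic data, nothing from the comment above): fit noisy samples of a smooth curve with models of different flexibility. The rigid model is consistently wrong in the same way (bias); the over-flexible one chases the noise and is wrong differently every time (variance).

```python
# Toy bias-variance illustration with synthetic data: underfit models have
# high bias, overfit models have high variance, and held-out error is
# minimized somewhere in between.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 15)
x_test = np.linspace(0, 1, 100)

def true_fn(t):
    return np.sin(2 * np.pi * t)

def avg_test_error(degree, trials=200):
    """Average squared error against the true curve over many noisy resamples."""
    err = 0.0
    for _ in range(trials):
        y = true_fn(x) + rng.normal(0, 0.3, size=x.size)  # fresh noisy sample
        coeffs = np.polyfit(x, y, degree)
        err += np.mean((np.polyval(coeffs, x_test) - true_fn(x_test)) ** 2)
    return err / trials

for degree in (1, 3, 9):   # too rigid / about right / too flexible
    print(degree, round(avg_test_error(degree), 3))
```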
Two comments... first, anyone here read Marco del Giudice's "Evolutionary Psychopathology: A Unified Approach"? He tackles the problem of the ontology of psychiatric conditions head on, with an attempt at a global hypothesis involving life strategies (fast and slow) and many other things. I skimmed through it, but it's way over my head! Any opinions?
Second, about depression, I've read a few articles lately converging on the idea that depression is a response to a distress. Quote from Johann Hari: "This pain you are feeling is not a pathology. It’s not crazy. It is a signal that your natural psychological needs are not being met. It is a form of grief – for yourself, and for the culture you live in going so wrong." Any opinions?
Duh, thanks, that's where I originally found it... :)
"They seem to be general genes for having mental disorders, with a wide variety of negative effects"
I've been told that there is a correlation between these "general genes for having mental disorders" and homosexuality. Is that true?
The incidence of mental disorders is higher in homosexuals than their straight counterparts. I don't know whether the correlation has been found to be genetically mediated.
The rate of diagnoses is higher. Not sure how you'd compensate for right wingers being less likely to be gay and less likely to go to a psychiatrist.
It seems like the general pattern here is:
a. Super complicated system requiring lots of stuff to go right.
b. Lots and lots of genetic or environmental problems that can mess that system up somehow.
c. Also some tradeoffs made that might be subject to balancing selection or something.
You could certainly imagine this for homosexuality, lack of interest in sex, weird sexual tastes, etc. How interested you are in sex and how flexible your interests and how masculine/feminine you are by default are all probably tradeoffs, with plusses and minuses. But also, in general, a gene that leads to lack of interest in sex with people you could make babies with is almost guaranteed to decrease the fitness of that gene.
>which I’m tempted to cynically attribute to their being less likely to remember to use contraception
Would it be considered out of line (or just more holistic medical practice than anyone can currently be arsed to provide) to talk to your ADHD patients about long-acting reversible contraception? That feels like a no-brainer.
I feel like it would be out of line to just bring up this one point out of the blue, but maybe it could be usefully camouflaged in the middle of a long list of 'Lifestyle suggestions for people with ADHD,' presented in a nice official-looking brochure.