Some of this terminology seems useful for talking about consciousness (which I suppose is a sort of psychiatric condition); there’s obviously a useful distinction between humans and algae, but trying to pin down an exact dividing line proves difficult, especially when you entertain thought experiments like removing one neuron at a time from a brain. Lots of debates about e.g. animal consciousness seem possible to phrase in terms of the shape of this graph. See also the Sorites Paradox:

1,000,000 grains is a heap.
If 1,000,000 grains is a heap then 999,999 grains is a heap.
So 999,999 grains is a heap.
If 999,999 grains is a heap then 999,998 grains is a heap.
So 999,998 grains is a heap.
If ...
... So 1 grain is a heap.
My take on that has always been that while categories such as "heap" are not *necessarily* defined with objective cutoff points, if it makes sense to define one anywhere, it'd be the point where the grains/particles of the object are stacked in a pyramidal or conical configuration. So the fewest possible grains of sand that could constitute a "heap" would be four: three arranged in a triangle and one on top. In practice, unless you arrange them very carefully, with tweezers or such, four grains are unlikely to constitute a heap, because if you're pouring them onto a surface, you should expect it to take a lot more than four grains before any of them achieve a configuration more than one layer deep.
How many layers does your pyramidal configuration need to count as a heap?
I feel like this misses the point. The common parlance of 'heap' does not include any definitions about pyramidal configurations or anything else. I believe the truest definition of "heap" is anything that pattern-matches to a heap for most people.
The Sorites paradox is pointing out that there's no 'correct' cutoff for these categories along a spectrum. Which is exactly what Scott was pointing out.
> The common parlance of 'heap' does not include any definitions about pyramidal configurations or anything else.
Probably not pyramids per se... however it is not unreasonable to expect a 'heap' to involve things heaped on top of each other, which do look like that in minimal form. As their final sentence suggests, the point is that a single layer is certainly not a heap.
But you're also right, the point is a reference to a paradox about a Greek word, and it has no obligation to work anything like the English word 'heap' we translate it with; if that's a property of 'σωρός' that's lost in translation, we can still understand the idea being represented.

Two grains can be a heap if one grain is laid with its pointy side on top of the broad side of the other grain.

(If you squish them together a bit, two grains can even be a stack!)

Does anyone ever have to distinguish between a heap and a non-heap?

No, but they may have to distinguish between diagnosing someone with hypertension and not diagnosing someone with hypertension. :)
Category boundaries exist but are fuzzy. No, there's no fixed number of grains we'll all agree on as a cutoff for using the word, "heap," but you could probably define a numerical metric of "heapness" and we'd mostly agree on its shape. There would be a minimum of 0 heapness at 0 grains, and maybe another 0 heapness at "enough grains to form a black hole, or at least enough to become spherical under its own gravity." There would be a maximum, determined by combining many people's judgments, maybe somewhere near human height.
This also applies to many "alignment charts" or discussions of "what is a sandwich, really?"
I think this arises when you take a dimension and apply it to a category. Heaps of sand aren't really an issue, since you don't treat a heap of sand any differently from one grain less than a heap of sand, but in medicine there are dimensional disorders with categorical treatment options: gets the drugs/diagnosis or not, gets hospitalised/outpatient/nothing as treatment. It really annoys me when this is done for no reason, as with tax and government transfers. There is no reason you need the bands; since computers were invented we don't need the simplification of the math. You could just have a formula where the tax you pay goes up faster, percentage-wise, than your total income does. It could asymptote or have an s-shape or whatever, and it would need high-school maths only. But with the bands, people keep hitting welfare cliffs and getting worked up when their income pops into a new band or the bands shift.
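To illustrate, here's a minimal sketch of what a band-free schedule could look like (all rates and constants are invented for the example): the effective rate climbs smoothly with income, so take-home pay never drops when you earn one more dollar.

```python
import math

def smooth_tax(income, bottom=0.0, top=0.45, midpoint=60_000, width=30_000):
    """Hypothetical band-free tax: the effective rate follows an
    s-curve from `bottom` to `top` as income rises, so the tax you
    pay grows faster, percentage-wise, than income does, and there
    are no cliffs where earning more leaves you with less."""
    rate = bottom + (top - bottom) / (1 + math.exp(-(income - midpoint) / width))
    return income * rate

for income in [20_000, 60_000, 100_000, 250_000]:
    print(income, round(smooth_tax(income)))
```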
The simplest way to define the semantics of "heap" is to treat it as some function mapping entities in the world to (True, False): some things are heaps and some things aren't (you could add a third category called "Underspecified" or something, but this doesn't help much because you run into the same paradox at the boundary between True and Underspecified).
Given this background, there are a few approaches to "solving" the paradox:
1) Allow for uncertainty: even if "X is a heap" is treated as being something that "is" True or False for each pile of grains, language users who are asked "is it a heap?" will, as Bayesians, have different degrees of confidence that the answer is "yes", from 0% (definitely not a heap) to 100% (definitely a heap). It is maintained that objects are either heaps or not heaps, but knowledge of which category they fall into can only be known approximately. In this formulation, an observer could rationally be 80% confident that a particular pile of grains is a heap--perhaps even once they have all relevant information available to them.
2) Reject that "heap" is a function that maps entities to (True, False), and treat it as a function that maps from entities to the range [0, 1], just as you might for scalar adjectives like "full" or "dark". Piles of grains range from being 0% heapish to 100% heapish. In this formulation, an observer might be 100% confident that a particular pile of grains is 80% heapish.
3) Both of the above at the same time. Heapishness is scalar as in (2) AND knowledge about heapishness of a particular pile is only ever approximate. In this formulation, an observer might be 80% sure that a particular pile is at least 90% heapish, and simultaneously 90% sure that a particular pile is at least 80% heapish.
4) Define meaning entirely in terms of usage, instead of the other way around: "is a heap" is literally the same thing as "speakers tend to declare that it is a heap when talking about it (assuming cooperative, sane, competent speech)." It is meaningless to decide whether things "really are" heaps: instead, we only ever decide--a real-world practical decision--whether in a particular moment we would like to declare (to ourselves or others, and possibly in the context of a philosophical debate) that something is a heap, or to declare that it is not a heap, or to not make any statement at all as to its heaphood (which is by far the most common decision). Our choice of behavior (and thus whether or not it "is" a "heap") will be determined in some complex way by the number of grains along with many other contextual and psychosocial factors. In this formulation, if across history nobody ever spoke or thought about whether a particular pile is a heap or not, the question of whether it is a heap or not is literally meaningless.
For all of these, the relationship between number of grains of sand and heapishness, or certainty of heaphood, or likelihood of someone calling it a "heap", is probably something like a sigmoid function or "s-curve". You could if you wanted to find the number of grains where the y value is exactly 50%, but there's nothing particularly interesting about that number of grains, and the exact number will be affected by other factors like the color of the sand, who the observer is, how much coffee they had that day, and so on.
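As a toy illustration of that s-curve (the midpoint and steepness are made-up numbers, not claims about real heaps):

```python
import math

def heapishness(n_grains, midpoint=60, steepness=0.1):
    """Toy logistic curve: near 0 for a few grains, near 1 for many,
    with a smooth transition rather than a sharp cutoff."""
    return 1 / (1 + math.exp(-steepness * (n_grains - midpoint)))

for n in [1, 4, 30, 60, 90, 200]:
    print(f"{n} grains -> heapishness {heapishness(n):.3f}")
```

The grain count where the curve crosses 50% is just wherever the midpoint parameter happens to sit; moving it around doesn't change the shape of the story.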
Personally I find (4) the most interesting, as it implies that the "meanings" of words are not dictionary-like definitions, but are real physical processes extended in space and time, encompassing the grains of sand themselves, light, our sensory organs and nervous systems, our cognition and memory, children's learning of language, and our moment-to-moment motivations for wanting to use language at all. Whether or not something will (or "should") be called a "heap" becomes the same sort of question as whether or not you will (or "should") put ketchup on your burrito, or drive to Seattle, or commit suicide--and just as complicated to answer.
With consciousness in particular, an obvious-to-me conclusion is that some people are more conscious than others. Since that *sounds* so bad, you might also say that the same person can be more or less conscious at different times (excluding the obvious: sleep).
Consciousness could very well be categorical. What if we define it, for example, as "are you thinking about yourself?" Then, you either are or are not conscious at any particular point in time.
(I think my father has some sort of strong opinion that this change was wrong and might have written an article about it somewhere, but I shouldn't be giving false information just out of family loyalty)
Oh, it's very arbitrary and probably will just result in more people using antihypertensives without increasing survival rates, but I guess the AHA sees value in scaring more people into taking care of their cardiovascular health. European guidelines still use the 140 SBP cutoff.
I suspect that the Number-Needed-to-Harm drops significantly with these aggressive hypertension definitions, especially as I see them being adhered to in cohorts where judgement should *definitely* be made on a person-centred basis, i.e. elderly care (without knowing the exact prevalence, my priors lean towards more elderly dying from orthostatic syncopes and subsequent neck-of-femur fractures due to aggressive antihypertensive treatment than from an SBP of 140-150...)
That was Freud's radical insight long ago, which the DSM initially rejected but is now slowly backing away from.
The problem is that the arbitrary cutoff sweeps under the rug the suffering and impairment of those with subclinical manifestations - which is often similar to that of those over the threshold (1).
Once you admit dimensions, retaining simple categories is problematic. Particularly if one takes into account the absurd levels of mental illness co-occurrence. If most people with the flu also have a cold (removing the virus from the equation) - and a few with the flu have mad cow disease or strep throat or something else - and no one "in the wild" has just the flu - is the flu a separate meaningful disease?
After a while everything begins to blur and you're left with something like a p factor or some version of "problems of living". Which is likely for the best. We live in a world deeply hostile to any biologically sane way of life. Until we deal with that, we're just hacking at a never-ending stream of branches.
Does this article really signal proficiency in math? It mostly seems like it's arguing for a more useful way to think about categories vs spectrums. The most "mathy" parts are the statistical tests but Scott doesn't really explain how they work or what exactly they are measuring.
Understanding that sometimes numbers represent a measure of discrete categories and sometimes they represent a point on a spectrum is an important part of moving past a layman's concept of math, and seeing as MDs generally do a bare minimum of math (it's nurses who transform a dose into an actual injection or rate), IMO this is raising the bar.
It's interesting that the most abstract form of math is category theory, and category theory is, well, all categorical. Something either is or isn't an object of a category, and two objects either are or are not linked by an arrow. Category theory subsumes all of math. Bartosz Milewski in his YouTube course asks whether this reveals something ontological about the universe, or merely something about the way our brains work (he thinks it's the latter). If we are constrained by the structure of the brain to think categorically, rather than dimensionally, this sure does reveal why the human race is so plagued with tribalism.
Hey, aspiring category theorist and amateur mathematician here. I know this is entirely tangential to the discussion, but I don't agree with you on this. Certainly the definitions of what makes something "a category" are categorical (in the sense of the word used elsewhere in this thread), but that's true of literally every definition in math. If you don't specify exactly what your terms mean, then you can't expect other mathematicians to understand them in the same sense you do, and you can't write any formal proofs using those terms.
Tangentially, I'd also argue that category theory as it is practiced is not nearly as "categorical" as you are making it out to be; nobody ever asks questions like "is such-and-such an object in C?", and rarely do they ask questions like "is there an arrow X -> Y in C?" or even "is C a category?" -- they're much less interesting than questions like:
- "what properties does the category C have?" (e.g. does it have finite limits, finite colimits, infinite limits/colimits, a monoidal structure, a cartesian closed structure, is it a topos?)
- "what do these properties tell us about the internal structure of C?" (e.g. if C is cartesian closed, we can think of C as a C-enriched category)
- "can we infer properties of C from how C is related to other categories?" (e.g. adjoint functors preserving limits/colimits, adjoint functor theorems)
- "if we know what C is like, what does that tell us about other categories built out of C?" (e.g. slice category of a topos is a topos)
etc. I don't really know how to classify these kinds of results on a categorical/dimensional axis, they feel more like "building up a picture of what the field looks like" than "deciding whether something does or does not fit into Bucket X".
I'll leave the questions about the structure of the human brain or why humans are plagued with tribalism to someone who is more of an expert on those matters.
(I'm a long time reader of SSC, this is my first time posting! Apologies if this post is deemed unnecessary; but I hope it is both truthful and kind.)
I would call it mathematical (or statistical) literacy more than proficiency. Tragically even basic literacy is vanishingly rare.
My (non medical) PhD program long ago seems in retrospect to have been 3-4 years of dissecting peer-reviewed! articles in important! journals to diagnose where the authors violated assumptions, waved their hands, misrepresented their own data, and cherry-picked findings.
I work professionally in empirical studies now, and struggle constantly with colleagues and decision-makers who read the abstract (or only the title) and think so-and-so has been proven beyond dispute. (See also: problems with "systematic reviews" and "meta-analyses".) It is a huge problem with respect to policy & societal health/wellbeing (cf. the 'rona).
I don't think the article shows advanced maths skills on Scott's part, but it does (as well as being extremely well written) show Scott's ability to communicate conceptual ideas - including mathematical ones - with clarity and wit.
Scott himself has claimed numerous times he is bad at math. Of course, bad by his standards is still above average, mostly indicating that his math SAT section was below 600. He just worked his ass off to be statistically literate enough to understand scientific studies, even if he can't do any complicated math. This shows that nearly any doctor who took the time and effort to learn statistics could understand this. There is just no incentive for an average doctor to put this effort in.
Well, if my doctor recommends something and I find a study that contradicts his advice, then it is not easy to have a conversation about the data with him. This has been true with all doctors I have seen through life, for myself or family members. Recent example, a discussion on hormone replacement therapy and blood pressure. They simply reassert their original recommendation. It just seemed like Scott here would be different and welcome a deep discussion.
In my experience, most doctors are working under intense time pressure, and are basing their recommendations on whatever it says on UpToDate. They may or may not be able to engage with you on the substance of the article you bring to the table. However, making a recommendation that is not supported by UpToDate (or whatever set of guidelines informs their practice) opens them to liability when you suffer a harm.
Matching a diagnosis with a treatment is definitely rule-based. I'm not asserting that this is a desirable feature of the current system, just trying to provide some context for why you might be having frustrating experiences with doctors.
"My guess is most professionals, and an overwhelming majority of laymen, are actually confused on this point, and this messes them up in a lot of ways."--how much of this is just reification as a crutch for computational efficiency given limited time/energy/attention?
The thought processes you are moving away from are simpler than the ones you are moving toward, and docs are pressed for time. Likewise, laypeople are mostly probably trying to make decisions about their lives with framings like 'I think that man may be a narcissist so I probably won't go out on a date with him'
The reifications have costs but also benefits and I'm glad you're digging deeper in your practice and inquiry. I worry somewhat about unintended consequences when categorical terms for dimensional characteristics escape the clinical setting and make their way into the ambient construct stew
As a side note, there are common categories in use for wealth like HNW, VHNW, UHNW, but they certainly don't capture the upper-end variation around someone like Bezos (for whom even 'billionaire' is off by orders of magnitude...)
I agree on a purely computational level categorical processes are similar, but practically they can be much harder. I know doctors who really obsess over whether a patient meets the criteria for bipolar, because they really need treatment, so they try to ask questions in a bunch of ways to figure out if a full manic episode that meets the necessary number of criteria really lasted three whole days or not. Whereas I just kind of get a gestalt impression that they were obviously bipolar, then treat them.
I would suspect it has less to do with computational complexity and more to do with communicational complexity, ie categorical diagnoses are easier to talk about and explain and teach, and therefore most people hear about these things in a categorical paradigm most of the time.
The next time someone asks me about my mental health I’ll feel tempted to tell them that I’m “the Jeff Bezos of absentmindedness”. For a while now I’ve intuitively been moving away from the taxometric definitions of mental illness, so it is nice to read something that really goes into the math of it. Even in the cases where there is a definitive “reason” for the mental illnesses listed in the study, like the flu virus for the flu, it usually seems like it’s either some kind of traumatic brain injury (difficult to reverse for now) or something genetic (impossible to reverse for now), so holding out for a magic bullet is implausible... It’s all symptom management. But symptom management is often what can help someone get over the arbitrary-feeling “line” between “Person doing poorly” and “Person doing well”!
Thank you for this excellent article. I found the concept of a "resident of Extremistan" to be an excellent phrase for describing someone like myself. I fall into the 2% or less in seven major categories, including autism spectrum and bipolar disorder.
I just signed up here as a founding member after Jonathan V. Last at The Bulwark gave a recommendation.
Statistical question / hypothesis regarding substance use disorder:
Many analytic methods don't deal well with massing at 0 and/or 1 (whatever the boundary condition is). There is a large proportion of the population who are teetotalers, and a small proportion of drinkers drives a significant amount of alcohol consumption. I assume (probably incorrectly) that many substances have similar distributions of use per capita/time.
Question - wouldn't this bimodal distribution drive taxonicity in statistical analyses? Despite this, substance use is not cleanly divided in the clinical context where substance use is 1) comorbid, 2) often a coping mechanism, 3) not necessarily problematic.
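A quick simulation of the kind of distribution I mean (the 30% teetotaler share and the lognormal parameters are invented): a point mass at exactly zero plus a heavy right tail among drinkers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Invented parameters: ~30% of people at exactly zero consumption,
# the rest spread over a heavy-tailed lognormal.
teetotaler = rng.random(n) < 0.30
drinks = np.where(teetotaler, 0.0, rng.lognormal(mean=1.5, sigma=1.0, size=n))

print(f"share at exactly 0: {np.mean(drinks == 0):.2f}")
print(f"median vs mean: {np.median(drinks):.1f} vs {drinks.mean():.1f}")

# The top decile of the population accounts for a large share of
# total consumption:
cutoff = np.quantile(drinks, 0.9)
print(f"top decile's share: {drinks[drinks >= cutoff].sum() / drinks.sum():.2f}")
```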
It seems like language is biased towards categorical thinking - either you choose to use a word or you don't. You need to know from context that it's not what people really mean, that tallness is dimensional. Sometimes it's tricky to hint at dimensionality without being overly vague.
Yeah. I once tried to make a conlang, and one of the things I planned to include was a variety of "intensity modifiers" that let you turn adjectives like "red" into more nuanced constructions like "very red", "kind of red", "arguably red", "reminiscent of red things", "more red than you would think, given what it is", and so on, in order to address this very issue.
It's a decent test for an AI, to see if it understands that. Like, if you give a GPT-3 chatbot a height and ask it to tell you "tall or not tall", I wonder how it does.
Yes, isn't greater and greater specificity what language has always aspired to achieve? This is OT, but I've been trying to figure out if I'm turning into a cranky old maven who can't stand to see language evolve, or if we are indeed witnessing a rather remarkable period in history in which we are collectively demanding less and less specificity from it.
Social media is the most obvious catalyst for new expressions, but it also heartily embraces sloppiness of execution, asking consumers to intuit emotional meaning from fuzzy group reactions to imprecision. Is there any consensus about whether everyone who enjoys over-using the word "literally" is doing so with tongue in cheek or out of ignorance? Regardless, what is one to do when one really needs a word like literally? Do we have to say "literally literally," since "literally" now has a new cheeky/ignorant definition?
Scott calls the term IED an "acronym," while I always thought it was an "initialism"-- but now I see the dictionaries are allowing the broader definition, which I guess simply means not enough people care about maintaining specificity in this particular category. But I'm getting the feeling the loss of specificity is snowballing these days, like the melting of the polar ice caps. And we're already suffering some of the political consequences of this cultural evolution. Or maybe I'm just becoming a cranky old fart.
This would make an interesting new topic in DataSecretsLox. Here, I'm afraid, it has hit an already almost-dead thread, and you won't get much of a response. There are some smart (sometimes grumpy, but on solid grounds one can learn from!) old men over there (apologies to everyone misrepresented!), as could be seen in the discussions around computer interfaces, so a crosspost should be worth it.
I think language isn't geared toward specificity. English especially seems to offer vagueness as a benefit. I think of "love" and its thousand plus meanings, or all the other words that have different meanings (another example being hot meaning temperature or spiciness).
Vagueness allows for tentative communication with plausible deniability (there was a discussion of this on SSC once, re flirtation, IIRC), and the everyday diplomacy based on it.
Hence the existence of a spectrum from scientific use of language (where every term comes with a well-referenced definition, in extremis: calculus) to fuzzy … let's call it 'sounding out of an emotional or affective resonance'.
Language does all that. We are the ones to be aware of what to expect/utter in what situation; and a misinterpretation of the situation tends to backfire in every way, from comical to meh to tragedy.
Is “person with autism” really misguiding? I mean, we already have and use terms such as “people of color”, or “people of size” and I don't think anyone really assumes those are binary categories.
For what it's worth, a lot of autistic people reject "person-first" language, though for different reasons than Scott suggests in the post. The problem with "person with autism" (they argue) is not so much that it's reifying autism as a clear category, but that it's presenting autism as something separable from oneself. At least that's how I understand it; my experience in the field is fairly limited.
"people of X" are in practice used to draw sharp categorical differences - color at least is. of size is probably more fat activists adopting general critical social justice lingo to get legitimacy for their campaign of destigmatizing slow murder.
“People of color” is a special case IMO. It’s a way to describe a non-white person without implying that white is some kind of default or ideal. And while “colored person” might seem to be a reasonable analog of “white person”, it has too much cultural baggage.
I think it does still imply that white is the default. It's also sort of like taking Lothrop Stoddard's view of race and concluding "That's basically right, just flip the normative rankings".
1. Why is CCFI of 0.5 chosen as the border between categorical and dimensional? In other words, it seems "being categorical" or "being dimensional" is itself dimensional - is this true?
2. Question about your practice (ignore if inappropriate) - comparing the post here and on your site, I see you've kept the personal tone, which surprised me. Did you get any feedback from patients/non-SSC-readers about the content there? It might be too soon to know, of course.
This is a great point (Scott Lawrence makes the same observation below, and his comments are also worth reading) but I think it's worth being careful to delineate the map and the territory. CCFI is a model. Even though a cutoff of 0.5 is arbitrary (so the model suffers from the problem of the heap) it doesn't follow that the distinction *out there in the world* is dimensional. It could be that "being categorical" or "being dimensional" is categorical, but given a particular case, we have to make a guess based on a dimensional scale because we don't have enough dimensions of data to fully separate the categories. Presumably the 0.5 hyper-parameter has worked reasonably well in the past for delineating cases like the flu.
I wonder if the positive feedback loop of addiction makes it categorical. "Propensity to become addicted to cigarettes" could be dimensional, but "is currently addicted to cigarettes" (or "has ever been addicted to cigarettes") could be more like separate categories, because it means "consumed enough nicotine for the positive feedback loop to kick in." How much that positive feedback loop affects you varies, but that's like variance in the strength of the flu which makes the "has flu" lump wider than the "doesn't have flu" lump.
Maybe also relevant that "how much nicotine would it take to set off the positive feedback loop" and "how strongly will the positive feedback loop affect you if it gets triggered" are strongly correlated (I imagine) as two aspects of "propensity to become addicted to cigarettes." So the people who are more strongly affected by that positive feedback loop are also more likely to have triggered it (since that takes less nicotine for them).
My personal impression of autism is that it's both. Aspergers seems to me to be more dimensional, whereas low-functioning Autism (for lack of a better term) seems more taxonic.
I don't have any particular study to back this up; it's just more of a result of all the stuff I have read about autism. It's probably also influenced by my belief that it might be wrong (for social reasons) to lump the two together into one diagnosis.
I think the major problem is that Aspergers is a syndrome, the causes are barely understood, and people whose symptoms are right in the middle do exist.
But still, to me it feels like having an unable-to-walk syndrome which includes everything from sprained ankles to loss of legs.
They are unable to come up with criteria that distinguish high-functioning autism from Asperger, but also high-functioning autism from low-functioning autism. That's why they gave it one label in the newest DSM.
In the past, the criteria for distinguishing HFA from Aspergers was based on history – if there was a delay in developing functional language, then HFA; if there was no delay in developing functional language, then Aspergers. That criteria worked about as well as any other criteria in the DSM does – sure, there were practical difficulties (history of childhood language development isn't always available for adults, the parents may no longer be around, and even if they are, their memories of decades ago may be imperfect), and as always there are problems with unclear boundaries (timing of functional language development is a smooth continuum), but nothing that rises to the level of "unable to come up with criteria". Honestly, I don't think the DSM-5 authors actually had a very good justification for many of their decisions, and I think those who criticise their decisions are in the right.
And if difficulties reliably distinguishing Aspergers from HFA justify merging the two conditions, well there are also difficulties reliably distinguishing ADHD from ASD. (A number of studies demonstrate the substantial overlap between the two conditions, where to draw the line between them varies from clinician to clinician, and the line has moved over time.) So if it is difficult to reliably distinguish ASD from ADHD, should we merge them?
It can also be difficult to reliably distinguish autism spectrum disorders from schizophrenia symptom disorders. In adults, they can present with quite similar symptoms. The most important way to distinguish them is look at history – if the symptoms were present in childhood, that suggests ASD; if they weren't, that suggests SSD. So, the distinction is based on history – fundamentally the same as the HFA vs Aspergers distinction was. If that's a justification for merging HFA with Aspergers, maybe it is about time we merged the autism spectrum and the schizophrenia spectrum too?
One of the things I love about this blog is how you pull meaningful conclusions that have fascinating real-world implications out of math that goes over my head (but is still interesting to read an analysis of). I went into this one taking a gamble that it would be interesting to me, and hit pay dirt at the end. Cheers :) and again, welcome back!!
Has someone done the analysis to determine whether psychiatric disorders themselves split into two taxa of taxonic and non-taxonic, or if they lie on a spectrum between the two?
This was exactly my thought. If you look at the CCFI figure, it really doesn't look like the distinction between taxonic and non-taxonic is itself taxonic --- it's more of a continuum.
Which... it /can't/ be, right? Either there's a binary hidden variable or there isn't. That's the strong intuition that goes with this distinction. So one of two things is true. Either that's wrong, and the distinction between taxonic and non-taxonic is itself pretty worthless, and we should acknowledge that everything has aspects of both. Or, the fact that CCFI fails to pick up on the obviously taxonic distinction between taxonic and non-taxonic is a hint that hey, maybe this isn't a very reliable measure.
To rephrase: the thing CCFI measures appears to be non-taxonic. Is that a really bad proxy for a taxonic thing, or is taxonicness itself a continuum?
I think taxonicity is going to be measured along a continuum. Warning: this is hyper-simplified.
You could create a bimodal distribution with no overlap, to represent two distinct taxons, and then use a distance function to quantify the average distance between a point in peak A and a point in peak B.
If you move those two peaks closer to each other, the distance function would decrease along a continuum even if you never actually intersected the distributions at all.
Now let's say you actually begin to combine the two distributions on the plot. At first, there are still two obvious different peaks (with some overlap as Scott pointed out). At some point these distributions would be largely indistinguishable and there would be some arbitrary "gray area" around which it's unclear whether the two distributions are categorically different or merely dimensionally different.
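A concrete version of this (a 50/50 mixture of two unit-variance normals, parameters chosen just for illustration): below roughly two standard deviations of separation, the two "taxa" stop producing two visible peaks at all, even though the latent category is still perfectly binary.

```python
import math

def mixture_pdf(x, sep):
    """Density of a 50/50 mixture of N(0, 1) and N(sep, 1)."""
    phi = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)
    return 0.5 * phi(x) + 0.5 * phi(x - sep)

# The mixture only dips between the two means (i.e. shows two peaks)
# once the means are more than ~2 SD apart; below that, it's one
# unimodal hump despite the binary latent variable.
for sep in [1.0, 1.5, 2.0, 2.5, 3.0, 4.0]:
    at_midpoint = mixture_pdf(sep / 2, sep)
    near_peak = mixture_pdf(0.0, sep)
    print(f"sep={sep}: two visible peaks? {at_midpoint < near_peak}")
```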
I agree that the "bimodalness" of a distribution is a continuous thing. But, "bimodalness" itself is at best a crude proxy for "taxonicness". Take the flu example above --- as Scott mentions toward the end of that example, if you actually look at a plot of flu symptoms, you don't see two peaks, so that "having the flu" appears to be a non-taxonic thing. Of course, "having the flu" almost certainly /is/ taxonic, and we'd see that if we plotted "density of influenza virus particles" or something like that instead. There really is some nearly-binary latent variable.
And in your constructed example, the same is true. You can make the distribution unimodal, but it's still the case that there is a binary latent variable. It just so happens that you've failed to see it.
So... I worry that that's what's happening with CCFI. For schizophrenia, for instance, people seem to believe (I know nothing) that it's taxonic. That means that the CCFI measure was just "looking at the wrong plot" (either literally or metaphorically, since I dunno how CCFI works), and that on a better plot, a beautiful bimodal distribution would become visible.
While I don't have time to read the whole thing now, I think this is a helpful part from the introduction of the paper Scott linked to, to begin to build an intuition for how this process works:
> To reduce the subjectivity of taxometric analysis, and thereby to address each of the problems this entailed, Ruscio, Ruscio, and Meron (2007) introduced a technique to produce comparison graphs using parallel analyses of artificial categorical and dimensional data. These artificial data reproduced important characteristics of the empirical data (e.g., sample size, number of variables, ...)
If you're statistically inclined, the CCFI just uses root-mean-squared residuals (RMSR) to determine the "error" between the observed distribution and each of these two theoretical distributions.
"These two fit values are then combined into the CCFI:
CCFI RMSRd ⁄ (RMSRd RMSRc) (2)
CCFI values range from 0 (strongest support for dimensional structure, obtained when RMSRd 0 and RMSRc 0) to 1 (strongest
support for categorical structure, obtained when RMSRc 0 and
RMSRd 0). A value of .50 is ambiguous (obtained when
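In code, the combination step is just this ratio (the two RMSR inputs would come from fitting the observed taxometric curves against the simulated dimensional and categorical comparison data):

```python
def ccfi(rmsr_dim, rmsr_cat):
    """Comparison Curve Fit Index: 0 = purely dimensional fit,
    1 = purely categorical fit, 0.5 = ambiguous."""
    return rmsr_dim / (rmsr_dim + rmsr_cat)

# If the observed curves sit much closer to the simulated categorical
# comparison data (small rmsr_cat), the CCFI moves toward 1:
print(ccfi(rmsr_dim=0.08, rmsr_cat=0.02))  # 0.8 -> leans categorical
print(ccfi(rmsr_dim=0.05, rmsr_cat=0.05))  # 0.5 -> ambiguous
```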
The actual binary here is what Scott mentioned about the difference between flu and height. A person with flu is infected with influenza virus. That's a binary hidden variable. If we can't measure it directly or don't know whether it exists, all of these statistical tests on (possibly) correlated attributes we do know exist and can measure are attempts to infer the presence of the actual hidden binary variable.
To some extent, it does just come down to whether this is a useful way of thinking, though. Presumably a lot of people have at least one active influenza virus particle inside of whatever skin/world boundary we consider to be within their body, but only at some level of viral load sufficient to overwhelm the immune response long enough to manifest symptoms do we consider that person to "have the flu."
Remembering Scott's classic categories are made for man, this only really matters to the extent it changes treatment plans. Knowing infectious diseases are caused by germs means we can treat them by performing some intervention that either kills the germs directly or aids the immune system itself in doing so. If we don't know the underlying cause or whether there even is one, we can only treat the symptoms.
I immediately noted that there seemed to be a discontinuity in the CCFI figure - while it is continuous over certain ranges, there seemed to be a jump in the middle. Is this just an artifact of what was represented? Or, is there some meaning behind it?
Psychological constructs in general are just low-dimensional approximations of complex, high-dimensional processes. The relevant question is not "are they real?", but "are they useful?".
For a similar discussion, see section "Realist intuitions impede progress in psychology" on page 4 here: https://psyarxiv.com/xj5uq
This is a conventional take which doesn't even attempt to grapple with the true complexities of the issue. The question is not whether current diagnoses are dimensional or categorical but rather whether they exist as meaningful separate diseases at all. And most data points suggest the answer is no, that the current system is a joke. A nosology useful to clinicians for efficient collegial communication, but with no benefit - and often harm - to the patient.
Mentioning riches as an example of a dimensional variable is apt because it appears to be as useful a variable as depression in understanding human beings (i.e. not). That's because a) it's unreliable - most people's wealth fluctuates, often quite drastically, over the course of their lives. And of course b) it doesn't actually exist - money is a social construct.
When we talk about wealth, what are we really attempting to get at? The variable that scientists - and particularly epidemiologists - have found most approximates that is socioeconomic status, which takes into account education, profession, and economic resources.
Depression more than likely is but one pillar of a larger, actually useful category - perhaps "internalizing disorder". Or a more accurate system might blow up the whole thing and not even use the heterogeneous DSM concept of depression at all.
Any system must deal seriously not only with the currently in-vogue biological factors, but must also account for culture- and time-bound disorders and the drastically different rates of recovery from mental illness depending on where one is (i.e. westernized vs non-westernized regions). When outcomes for schizophrenia are better in sub-Saharan Africa than in the US, something is wrong.
Perhaps depression is comparable to pain. 'Pain' is also not a valid diagnosis, but merely a symptom, even though you can treat that symptom with pain killers. A person who is in pain because his leg is broken, needs treatment for a broken leg, not merely treatment for pain, although pain treatment can be temporarily helpful.
Yet a person who is in pain due to cancer, isn't helped with a splint...
Some people may be depressed because their regulation system works poorly. Others may have shitty lives and see no prospect of changing that. These are as similar as a broken bone and cancer.
---
And I agree that context matters a lot too. In fact, disorders are often defined based on dysfunction, so societal inclusion and acceptance can be the difference between something being regarded as a disorder or not. If the government likes to torture people and a decent subset of society celebrates torturers as heroes, a pathological desire to torture can be fully acceptable, if it is channeled in the right way.
Your first point is apt. Much work in the sociology of mental disorder argues just that. Once the DSM in its infinite wisdom decided to go etiologically agnostic, it introduced this issue. In DSM-5 it does try to limit this: "An expectable or culturally approved response to a common stressor or loss, such as the death of a loved one, is not a mental disorder."
"Expected or culturally approved response" is a notable qualifier and seems to more reflect capitalism's need for an infinitely malleable population without inconveniently deep desires or attachments than an underlying reality. So, in other words, a form of intellectual tokenism to cultural/contextual factors.
On the second point, again DSM 5 attempts to avoid this, particularly in light of past disasters like the classification of homosexuality as a mental illness. It added the qualification that the syndrome must cause the individual distress, or in old school language be "ego-dystonic". Although there are exceptions to this, notably OCD and many of the personality disorders. Not to mention cases where individuals are committed to mental institutions against their will. So, contradictions abound, there is no unified framework or understanding, just a patchwork quilt marrying the economic interests of the "healers", the legitimation of suffering and moral absolution needs of the population, and the social control interests of those in power.
Don’t forget that someone can be in acute distress but not agree, either fairly reasonably (as judged by a group of ten average bystanders) or unreasonably (because it will be easier for the aliens to find him there, so they can keep beaming mind control rays at him), that being in a psychiatric hospital offers a good possible solution to their distress.
Also, at least here in Canada, it’s extremely difficult to get someone committed to a psychiatric hospital. They basically have to be sitting in Emerg with a loaded gun in their lap, explaining clearly who they will kill before killing themselves, or one day away from a heart attack because they’ve starved themselves so badly. The involuntary commitment orders are also short and hard to renew.
And like all attempts to deal with complex problems using the legal system, there are big advantages to this, as well as big disadvantages. It does, though, greatly reduce the use of such measures as a method of social control. Valium for miserable middle-class housewives in the 50s was much more effective.
Isn't OCD an anxiety disorder and expressly ego dystonic? There's a separate, basically "Conscientiousness up to 11" condition they call obsessive-compulsive personality disorder, OCPD.
Weirdly, the latest DSM revisions put OCD (along with hoarding) into their own section, took them out of anxiety disorders. Doesn't make any sense to me, as while OCD clearly has some of its own brain weirdness going on, the commonality of using behaviour to avoid/try to reduce anxiety (while actually feeding it, sigh) is there. In the case of OCD and hoarding, the anxiety is around the obsessive thoughts, and the avoidance is avoidance of DOUBT (did I actually turn off the stove properly? might I need this one day?)
My understanding is that those with OCD symptoms are particularly likely, as compared to those with, say, hoarding or agoraphobia, to not have a problem with it. Even if the latter two don't seek treatment, upon being asked they will generally admit it's problematic. But there are case studies of OCD where individuals will design their entire lives around these rituals in highly disruptive and costly (time or otherwise) ways that any observer would consider batshit crazy, but they are fine with it. The case studies generally emerge from them being seen for something else, and this will come up.
Having said all of that, I don't have any particularly deep knowledge of OCD, so I may be missing something. Personality disorders are the more hallmark examples of ego-syntonic, so it may be safer to stick with that.
Either way it's a problematic area. Telling someone with every flu symptom they don't have the flu if they don't believe it and/or if they don't have a problem with puking all day is problematic, but so is the nonconsensual shoving of stigmatic labels by powerful "helpers" onto the deviant.
The APA wants desperately to maintain the appearance of neutrality by medicalizing everything, but the truth is there is no escaping the moral dimensions of human behavior. Such questions can only be avoided to our detriment.
I've been under the impression that there's two obsessive-compulsive disorders - OCD which is an ego-dystonic anxiety disorder characterized by intrusive thoughts, and Obsessive-Compulsive Personality Disorder, which is ego-syntonic and more like an orderly, conscientious personality driven beyond 11.
Yeah, one thing I wish was more discussed in the mental illness discussion is cultural context.
I have Seasonal Affective Disorder. I get sleepy and cranky and low energy in winter, and don't want to do anything. Like most with SAD, I am of Scandinavian descent. It seems likely to me that SAD is less a "disorder" in the traditional sense than it is a reasonable evolutionary defense against harsh northern winters: in ancient times, people who stayed near the fire probably did better than people who went out and played in the snow. SAD isn't something that's "wrong" with me, per se--it's a mismatch between a reasonable biological defense mechanism and the modern-day reality of having to work 9-to-5, even in winter.
This sort of mismatch, or having disorders that only occur in some cultures, or whose symptoms vary depending on culture, seems to point to a good deal of societal influence.
Very much so. There's evidence (beyond common sense) for this on multiple disorders.
Also, recent work suggests that disorders are exaggerations of natural temperaments. And a variety of temperaments is good for group survival. For example a study found that groups that contain neurotics are much more likely to survive than those without because the neurotic is always on the lookout for danger and notices it right away while more positive or laid back people would ignore or not even notice early signs of danger (i.e. smoke in the case of a fire).
But some temperaments are maladaptive to their current environment. ADHD is a prime example. This (1) is a great overview of the evo psych of how it used to be individually useful, and this (2) recent study found that indeed ADHD gene variants are declining in modern populations. One wonders what effect this will have on group survival.
Indeed. It seems pretty obvious to me that society is not set up to satisfy our physical and mental needs, which evolved around a very different way of life, but to maximize other things.
That doesn't mean that we should go back to a hunter-gatherer lifestyle, but I think that we have to recognize that fairly normal human behavior can be dysfunctional in today's society.
Of course, a good therapy for SAD is light therapy, which deceives us into thinking it is not winter.
This is helpful. I do think it strange, however, that you go from a nuanced, complex model of causality to using an oversimplified variable on the other end. As you're aware, there's no such thing as depression. There's depression-anxiety, depression-addiction, depression-heart disease, depression-addiction-anxiety-heart disease - but alone it may as well be a bigfoot sighting.
Why does that matter? Because it renders some of your conclusions either false or at best misleading. For example, you say that bipolar is dynamic but ADHD is not. But ADHD has a high rate of co-occurrence with bipolar, cyclothymia and cyclothymic temperament. So are you only talking about the 2 times in history bipolar and ADHD have appeared alone, or are they somehow both dynamic and not dynamic when they join forces?
I made an oopsie - it's borderline and cyclothymia, not bipolar. But the question stands as there is significant overlap in many of the listed conflicting conditions.
The taxon-dimension distinction seems important in discussions of talent; while talent varies greatly, "genius" as a truly separate category doesn't seem to exist. But many people, including Freddie there, conflate the question of whether genius is real with the question of whether talent is real.
The "as a truly separate category" caveat is important, there. I'll never be as athletic as Wilt Chamberlain, but that's true in the sense that I'll never be as rich as Jeff Bezos. Not in the sense that I'll never be as rabbit-y as a rabbit.
This has significant implications for how we should handle extreme talent. If you go digging through the links in the link I posted, you'll see various people fretting over the social implications of saying that some people are geniuses and other people are not. They're right to fret; it's healthier socially, as well as more accurate, to avoid drawing that dividing line.
"You are more talented than I am" is a better way to approach the brilliant than "you are a genius and I am not".
"By their fruits you shall know them". The outcomes achieved by some individuals is so far above others that it is silly to democratize it. Just because they are categorically different does not mean they are another species - that's a straw man.
And the minute one gives an inch in the process of thinking to frets about social implications is the minute one stops thinking with any clarity or honesty. To think is to exaggerate and to exaggerate is to polarize and to polarize has negative social implications.
But they're not so far above all others. For every incredible supergenius there are a bunch of people almost as good, and for each of those people there are a bunch of people almost as good, and so on till you reach unexceptional people.
As for the social implications, well, the facts are pretty clear so it's a good time to think about what those facts mean.
By the same reasoning, intellectual disability would not exist, although it is obvious that below a certain level of intellectual ability, you aren't really playing the same game anymore.
Some intellectual disabilities are clearly identifiable as particular things. Down's syndrome is also called trisomy 21 because it results from trisomy on the 21st chromosome. You either have that or you don't.
But yes, there are disabled people who occupy the same smooth curve as the people we call geniuses.
If our measure is IQ, then I think there will be people with distinct conditions like Down's with the same score as people who are just on the low end of the bell curve without any such condition. Both people will be expected to have comparable academic performance, but I think those with Down's would be expected to have less ability to live independently.
Nope; the ability to live independently is not different for two people who have the same IQ, when one has Down’s and the other doesn’t. Lots of other factors are at play (education and supports, demands of the surrounding society, whether there are also major physical health issues...)
My understanding was that Arthur Jensen found that what might be called "familial" cases of low IQ to be more capable (outside the classroom) than "organic" cases. I can't remember where I originally read that, but in this interview he says most people with IQs above 40 or 50 are "biologically normal" and just part of the bell curve's range. He also says that even with a very low IQ "showing generally good judgment in the ordinary affairs of life should rule out a diagnosis of mental retardation".
Very, very low IQ, which we classify as intellectual disability (lowest 2% of the population, with significant functional impairment), pretty much never occurs randomly or even by normal genetic variation.
There has to be an exceptional cause; genetic abnormalities like Down’s, fetal alcohol syndrome and other teratogens, severe neglect and isolation.... Even most people of very low IQ that’s in the ‘normal’ range have usually experienced something that actively pushes intellectual ability down. Long periods of very poor nutrition....
But the other end of the curve doesn’t seem to be about anything extraordinary besides luck; more a combo of lots of luck within normal genetic variation, lack of factors that could seriously impair that ability, and opportunity to both develop ability and to show it. Plus whatever cultural/historical moment that this person’s ability fits into well.
That lack of down-pushing factors resembles the income–happiness relation. Increased income does not make you happy, it only removes causes for unhappiness. And (broadly!) around 60k/y the marginal happiness vanishes, and money shifts into a different value for life ("keeping score" in one's (aspired-to?) peer group, for example).
That sounds doubtful to me. If there are an enormous number of genes (plus random developmental factors) of small effect and they add up to produce a bell curve, then there will be some people at the very low end without any of those large effect factors (same with at the high end). The main reason for that not to be the case is if there are enough people with those large effect factors to fill up that 2%, and those factors are indeed so large that even getting a tails on every binomial coinflip wouldn't go down that far. My thinking had been informed by what I'd heard about people classified as retarded, with IQs at least two standard deviations below the median. In a normal distribution that would be about 2.5% of the population.
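For reference, the tail mass two standard deviations below the mean of a normal distribution is about 2.3%, so the "about 2.5%" figure checks out:

```python
from statistics import NormalDist

# P(Z <= -2) for a standard normal: ~0.0228, roughly 2.3%.
print(f"{NormalDist().cdf(-2):.4f}")
```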
An argument I've encountered that ADHD "really is" a specific medical condition, and not just a fancy name for below-average concentration skills, is that apparently when you give ADHD medicine to a person without ADHD, it actually makes them more jittery and less able to focus -- IOW the medicine has opposite effects on people with and without ADHD. Is that true, and does the argument prove what it sets out to prove?
That would still work in a dimensional model. Like, imagine everyone has an "ADHDness" score from -10 to 10. And ADHD diagnoses are clustered in 0-10, but medication moves you 10 points in the other direction, and past -10 you get negative side effects. So someone at 5 takes medication and goes to -5, someone who starts at -7 goes to -17 which is outside the normal distribution and gives negative side effects.
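That toy model in code (all numbers are the made-up ones from the comment above, not clinical claims):

```python
def jittery_after_medication(adhd_score, shift=-10, floor=-10):
    """Everyone sits on a hypothetical 'ADHDness' axis; medication
    shifts the score by `shift`; landing below `floor` (outside the
    normal distribution) produces the negative side effects."""
    return adhd_score + shift < floor

print(jittery_after_medication(5))    # False: 5 -> -5, still in range
print(jittery_after_medication(-7))   # True: -7 -> -17, out of range
```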
This doesn't seem true. Aside from the popular trope of speed users being famously productive while high, which from my limited experience seems true, amphetamine use to get through finals week in very difficult or competitive college majors is definitely real. It may be making people more jittery, but it doesn't decrease ability to focus.
Nope, stimulants make everyone more focussed. That’s one of the reasons we love our coffee, tea and nicotine so much.
People with ADD/ADHD just need more stimulants to get the same effect (with huge differences in how much of what type how often, unrelated to symptom severity, weight, age etc). That supports the dimensional claim.
Oh, forgot the second part; anyone, ADHD or not, who takes too high a dose of stimulants will feel jittery, less focussed etc. Too high just has to be higher for people with attention problems.
This is a smart, entertaining post and I learned a lot. I was also reminded of something that I am reminded of a lot, which is that a lot of very smart people don’t really know much about addiction or at least don’t think enough about the power of words in its weird realm.
The exclamation point after “gambling addiction” I found particularly troubling, and a little snarky. Spend some time at a GA meeting and there will be no question in your mind that it’s an extremistanic, not dimensional, phenomenon. There is no meaningful line that connects my thrice-a-decade purchase of lottery tickets and gambling addiction.
My own experience and long-term observations of folks in recovery tell the same story about alcohol. There are cats and dogs and to an alcoholic alcohol is catnip, which lots of dogs might have now and again but does not lead them to massively destructive behavior. Also, sadly, while Jeff Bezos could buy enough (I suppose) ivory back scratchers to exit his personal extremistanic state, the addict has a one way ticket. Pickles cannot revert to cucumbers, etc.
Why do I bother to write this? Because, again in my experience, one of the defining features of addiction is that there is a constant internal dialogue that amounts to “abstinence is an overreaction, I can [gamble, drink, have the occasional benzo, etc.] just like everybody else.” It’s a disease that works to convince the afflicted that they are healthy. So in my view it is really dangerous to casually propagate dimensionality theories about addiction. Down that path lies an enormous chasm of pain for the addict and those around them.
I have no idea how you could analyze this mathematically without any ability to estimate parameters of the actual generating process, but there certainly is a formalism in recurrent dynamical systems whereby some diverge and some don't. The actual sensitivity to each parameter is real-valued and on a spectrum, but "diverges or not" is a strict binary nonetheless.
That seems to be what is happening with addiction. Whatever feedback loop causes behavior to either reinforce and become more extreme or rewards to deaden and repetition to get boring tends toward extremism to the extent that it spirals completely out of control in some people, while converging to some steady state short of that in others.
What is maybe missing in this kind of purely statistical analysis is that dimensional traits can produce binary outcomes, because "disease" is really a recurrence relation. It's a function not only of the traits we're measuring, but of the state of your body at all previous points in time.
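A minimal sketch of that formalism (the dynamics and gain values are invented for illustration): the sensitivity parameter varies continuously, but "diverges or not" comes out strictly binary.

```python
def diverges(gain, x0=1.0, steps=2000, blowup=1e6):
    """Toy reinforcement loop x_t = gain * x_(t-1): for gain > 1 the
    state eventually blows up past any threshold; for gain < 1 it
    decays toward zero. The parameter is dimensional; the outcome
    is binary."""
    x = x0
    for _ in range(steps):
        x *= gain
        if x > blowup:
            return True
    return False

# A continuum of gains, a binary outcome:
for gain in [0.90, 0.99, 1.01, 1.10]:
    print(f"gain={gain}: diverges? {diverges(gain)}")
```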
Consider professional gamblers. Consider people who participate in every available lottery. Is there ‘no meaningful line’ between their behaviour and that of addicts? You compare only your extreme to the extreme of addicts and conclude it has to be taxonic. That’s just using anecdata as evidence.
Sorites Paradox
1,000,000 grains is a heap.
If 1,000,000 grains is a heap then 999,999 grains is a heap.
So 999,999 grains is a heap.
If 999,999 grains is a heap then 999,998 grains is a heap.
So 999,998 grains is a heap.
If ...
... So 1 grain is a heap.
Two grains can be a heap if one grain is laid with its pointy side on top of the broad side of the other grain.
(If you squish them together a bit, two grains can even be a stack!)
Does anyone ever have to distinguish between a heap and a non-heap?
No, but they may have to distinguish between diagnosing someone with hypertension and not diagnosing someone with hypertension. :)
Linguist's perspective here.
The simplest way to define the semantics of "heap" is to treat it as some function mapping entities in the world to (True, False): some things are heaps and some things aren't (you could add a third category called "Underspecified" or something, but this doesn't help much because you run into the same paradox at the boundary between True and Underspecified).
Given this background, there are a few approaches to "solving" the paradox:
1) Allow for uncertainty: even if "X is a heap" is treated as being something that "is" True or False for each pile of grains, language users who are asked "is it a heap?" will, as Bayesians, have different degrees of confidence that the answer is "yes", from 0% (definitely not a heap) to 100% (definitely a heap). It is maintained that objects are either heaps or not heaps, but which category they fall into can only be known approximately. In this formulation, an observer could rationally be 80% confident that a particular pile of grains is a heap--perhaps even once they have all relevant information available to them.
2) Reject the idea that "heap" is a function that maps entities to (True, False), and treat it as a function that maps from entities to the range [0:1], just as you might for scalar adjectives like "full" or "dark". Piles of grains range from being 0% heapish to 100% heapish. In this formulation, an observer might be 100% confident that a particular pile of grains is 80% heapish.
3) Both of the above at the same time. Heapishness is scalar as in (2) AND knowledge about heapishness of a particular pile is only ever approximate. In this formulation, an observer might be 80% sure that a particular pile is at least 90% heapish, and simultaneously 90% sure that a particular pile is at least 80% heapish.
4) Define meaning entirely in terms of usage, instead of the other way around: "is a heap" is literally the same thing as "speakers tend to declare that it is a heap when talking about it (assuming cooperative, sane, competent speech)." It is meaningless to decide whether things "really are" heaps: instead, we only ever decide--a real-world practical decision--whether in a particular moment we would like to declare (to ourselves or others, and possibly in the context of a philosophical debate) that something is a heap, or to declare that it is not a heap, or to not make any statement at all as to its heaphood (which is by far the most common decision). Our choice of behavior (and thus whether or not it "is" a "heap") will be determined in some complex way by the number of grains along with many other contextual and psychosocial factors. In this formulation, if across history nobody ever spoke or thought about whether a particular pile is a heap or not, the question of whether it is a heap or not is literally meaningless.
For all of these, the relationship between number of grains of sand and heapishness, or certainty of heaphood, or likelihood of someone calling it a "heap", is probably something like a sigmoid function or "s-curve". You could if you wanted to find the number of grains where the y value is exactly 50%, but there's nothing particularly interesting about that number of grains, and the exact number will be affected by other factors like the color of the sand, who the observer is, how much coffee they had that day, and so on.
Personally I find (4) the most interesting, as it implies that the "meanings" of words are not dictionary-like definitions, but are real physical processes extended in space and time, encompassing the grains of sand themselves, light, our sensory organs and nervous systems, our cognition and memory, children's learning of language, and our moment-to-moment motivations for wanting to use language at all. Whether or not something will (or "should") be called a "heap" becomes the same sort of question as whether or not you will (or "should") put ketchup on your burrito, or drive to Seattle, or commit suicide--and just as complicated to answer.
With consciousness in particular, an obvious-to-me conclusion is that some people are more conscious than others. Since that *sounds* so bad, you might also say that the same person can be more or less conscious at different times (excluding the obvious: sleep).
Consciousness could very well be categorical. What if we define it, for example, as "are you thinking about yourself?" Then, you either are or are not conscious at any particular point in time.
If that's your definition, there are definitely some people out there who are waaaay too conscious for my taste.
Very minor nitpick: hypertension guidelines have changed, and the AHA defines stage 1 hypertension as SBP of 130-139 mm Hg or DBP of 80-89 mm Hg
Thanks, fixed.
(I think my father has some sort of strong opinion that this change was wrong and might have written an article about it somewhere, but I shouldn't be giving false information just out of family loyalty)
Oh, it's very arbitrary and probably will just result in more people using antihypertensives without increasing survival rates, but I guess the AHA sees value in scaring more people to take care of their cardiovascular health. European guidelines still use the 140 SBP cutoff
I suspect that the Number-Needed-to-Harm drops significantly with these aggressive hypertension definitions, especially as I see them being adhered to in cohorts where judgement should *definitely* be on a person-centred basis, i.e. elderly care (without knowing the exact prevalence, my priors lean towards more elderly dying from orthostatic syncope and subsequent neck-of-femur fractures due to aggressive antihypertensive treatment than from an SBP of 140-150...)
Scott, would it be oversimplifying to say mental illness is on a spectrum and diagnoses are at best a way to communicate a cluster of symptoms?
I would say that diagnoses are on a spectrum too.
That was Freud's radical insight long ago, which the DSM initially rejected but is now slowly backing away from.
The problem is that the arbitrary cutoff sweeps under the rug the suffering and impairment of those with subclinical manifestations - which is often similar to that of those over the threshold (1).
Once you admit dimensions, retaining simple categories is problematic. Particularly if one takes into account the absurd levels of mental illness co-occurrence. If most with the flu also have a cold (removing the virus from the equation) - and a few with the flu have mad cow disease or strep throat or something else - and no one "in the wild" has just the flu - is the flu a separate meaningful disease?
After a while everything begins to blur and you're left with something like a p factor or some version of "problems of living". Which is likely for the best. We live in a world deeply hostile to any biologically sane way of life. Until we deal with that, we're just hacking at a never-ending stream of branches.
(1) https://www.sciencedirect.com/science/article/abs/pii/S0165178118305171
This was such a well-written article. I wish all doctors were this good at math. It would really help them make better decisions for their patients.
Does this article really signal proficiency in math? It mostly seems like it's arguing for a more useful way to think about categories vs spectrums. The most "mathy" parts are the statistical tests but Scott doesn't really explain how they work or what exactly they are measuring.
Understanding that sometimes numbers represent a measure of discrete categories and sometimes they represent a point on a spectrum is an important part of moving past a layman's concept of math, and seeing as MDs generally do a bare minimum of math (it's nurses who transform a dose into an actual injection or rate), IMO this is raising the bar.
It's interesting that the most abstract form of math is category theory and category theory is, well, all categorical. Something either is or isn't an object of a category and two objects either are or are not linked by an arrow. Category theory subsumes all of math. Bartosz Milewski in his YouTube course asks the question whether this reveals something ontological about the universe, or whether this merely reveals something about the way our brains work (he thinks it's the latter). If we are constrained by the structure of the brain to think categorically, rather than dimensionally, this sure does reveal why the human race is so plagued with tribalism.
Hey, aspiring category theorist and amateur mathematician here. I know this is entirely tangential to the discussion, but I don't agree with you on this. Certainly the definitions of what makes something "a category" are categorical (in the sense of the word used elsewhere in this thread), but that's true of literally every definition in math. If you don't specify exactly what your terms mean, then you can't expect other mathematicians to understand them in the same sense you do, and you can't write any formal proofs using those terms.
Tangentially, I'd also argue that category theory as it is practiced is not nearly as "categorical" as you are making it out to be; nobody ever asks questions like "is such-and-such an object in C?", and rarely do they ask questions like "is there an arrow X -> Y in C?" or even "is C a category?" -- they're much less interesting than questions like:
- "what properties does the category C have?" (e.g. does it have finite limits, finite colimits, infinite limits/colimits, a monoidal structure, a cartesian closed structure, is it a topos?)
- "what do these properties tell us about the internal structure of C?" (e.g. if C is cartesian closed, we can think of C as a C-enriched category)
- "can we infer properties of C from how C is related to other categories?" (e.g. adjoint functors preserving limits/colimits, adjoint functor theorems)
- "if we know what C is like, what does that tell us about other categories built out of C?" (e.g. slice category of a topos is a topos)
etc. I don't really know how to classify these kinds of results on a categorical/dimensional axis, they feel more like "building up a picture of what the field looks like" than "deciding whether something does or does not fit into Bucket X".
I'll leave the questions about the structure of the human brain or why humans are plagued with tribalism to someone who is more of an expert on those matters.
(I'm a long time reader of SSC, this is my first time posting! Apologies if this post is deemed unnecessary; but I hope it is both truthful and kind.)
I would call it mathematical (or statistical) literacy more than proficiency. Tragically even basic literacy is vanishingly rare.
My (non medical) PhD program long ago seems in retrospect to have been 3-4 years of dissecting peer-reviewed! articles in important! journals to diagnose where the authors violated assumptions, waved their hands, misrepresented their own data, and cherry-picked findings.
I work professionally in empirical studies now, and struggle constantly with colleagues and decision-makers who read the abstract (or only the title) and think so-and-so has been proven beyond dispute. (See also: problems with "systematic reviews" and "meta-analyses".) It is a huge problem with respect to policy & societal health/wellbeing (cf. the 'rona).
I don't think the article shows advanced Maths skills on Scott's part, but it does (as well as being extremely well written) show Scott's ability to communicate conceptual ideas - including mathematical ones - with clarity and wit.
Scott himself has claimed numerous times he is bad at math. Of course bad by his standard is still above average, and more indicating that his math SAT section was below 600. He just worked his ass off to be statistically literate enough to understand scientific studies, even if he can't do any complicated math. This shows that nearly any doctor that took the time and effort to learn statistics, could understand this. There is just no incentive for an average doctor to put this effort in.
Well, if my doctor recommends something and I find a study that contradicts his advice, then it is not easy to have a conversation about the data with him. This has been true with all doctors I have seen through life, for myself or family members. Recent example, a discussion on hormone replacement therapy and blood pressure. They simply reassert their original recommendation. It just seemed like Scott here would be different and welcome a deep discussion.
In my experience, most doctors are working under intense time pressure, and are basing their recommendations on whatever it says on UpToDate. They may or may not be able to engage with you on the substance of the article you bring to the table. However, making a recommendation that is not supported by UpToDate (or whatever set of guidelines informs their practice) opens them to liability when you suffer a harm.
Well, but if it is so rule-based, a doctor could be replaced by expert-systems software, right?
Matching a diagnosis with a treatment is definitely rule-based. I'm not asserting that this is a desirable feature of the current system, just trying to provide some context for why you might be having frustrating experiences with doctors.
"My guess is most professionals, and an overwhelming majority of laymen, are actually confused on this point, and this messes them up in a lot of ways."--how much of this is just reification as a crutch for computational efficiency given limited time/energy/attention?
The thought processes you are moving away from are simpler than the ones you are moving toward, and docs are pressed for time. Likewise, laypeople are mostly probably trying to make decisions about their lives with framings like 'I think that man may be a narcissist so I probably won't go out on a date with him'
The reifications have costs but also benefits and I'm glad you're digging deeper in your practice and inquiry. I worry somewhat about unintended consequences when categorical terms for dimensional characteristics escape the clinical setting and make their way into the ambient construct stew
As a side note, there are common categories in use for wealth like HNW, VHNW, UHNW, but they certainly don't capture the upper-end variation around someone like Bezos (for whom even 'billionaire' is off by orders of magnitude...)
I agree on a purely computational level categorical processes are similar, but practically they can be much harder. I know doctors who really obsess over whether a patient meets the criteria for bipolar, because they really need treatment, so they try to ask questions in a bunch of ways to figure out if a full manic episode that meets the necessary number of criteria really lasted three whole days or not. Whereas I just kind of get a gestalt impression that they were obviously bipolar, then treat them.
I would suspect it has less to do with computational complexity and more to do with communicational complexity, ie categorical diagnoses are easier to talk about and explain and teach, and therefore most people hear about these things in a categorical paradigm most of the time.
The next time someone asks me about my mental health I’ll feel tempted to tell them that I’m “the Jeff Bezos of absentmindedness”. For a while now I’ve intuitively been moving away from the taxometric definitions of mental illness so it is nice to read something that really goes into the math of it. Even in the cases where there is a definitive “reason” for the mental illnesses listed in the study, like the flu virus for the flu, it usually seems like it’s either some kind of traumatic brain injury (difficult to reverse for now) or something genetic (impossible to reverse for now) so holding out for a magic bullet is implausible... It’s all symptom management. But symptom management is often what can help someone get over the arbitrary-feeling “line” between “Person doing poorly” and “Person doing well”!
Thank you for this excellent article. I found the concept of a "resident of Extremistan" to be an excellent phrase for describing someone like myself. I fall into the 2% or less in seven major categories, including autism spectrum and bipolar disorder.
I just signed up here as a founding member after Jonathan V. Last at The Bulwark gave a recommendation.
Statistical question / hypothesis regarding substance use disorder:
Many analytic methods don't deal well with massing at 0 and/or 1 (whatever the boundary condition is). There is a large proportion of the population who are teetotalers, and a small proportion of drinkers drives a significant share of alcohol consumption. I assume (probably incorrectly) that many substances have similar distributions of use per capita/time.
Question - wouldn't this bimodal distribution drive taxonicity in statistical analyses? Despite this, substance use is not cleanly divided in the clinical context where substance use is 1) comorbid, 2) often a coping mechanism, 3) not necessarily problematic.
It seems like language is biased towards categorical thinking - either you choose to use a word or you don't. You need to know from context that it's not what people really mean, that tallness is dimensional. Sometimes it's tricky to hint at dimensionality without being overly vague.
Yeah. I once tried to make a conlang, and one of the things I planned to include was a variety of "intensity modifiers" that let you turn adjectives like "red" into more nuanced constructions like "very red", "kind of red", "arguably red", "reminiscent of red things", "more red than you would think, given what it is", and so on, in order to address this very issue.
It's a decent test for an AI, to see if it understands that. Like if you give a GPT-3 chatbot a height and ask it to tell you "tall or not tall", I wonder how it does.
Yes, isn't greater and greater specificity what language has always aspired to achieve? This is OT, but I've been trying to figure out if I'm turning into a cranky old maven who can't stand to see language evolve, or if we are indeed witnessing a rather remarkable period in history in which we are collectively demanding less and less specificity from it.
Social media is the most obvious catalyst for new expressions, but it also heartily embraces sloppiness of execution, asking consumers to intuit emotional meaning from fuzzy group reactions to imprecision. Is there any consensus about whether everyone who enjoys over-using the word "literally" is doing so with tongue in cheek or out of ignorance? Regardless, what is one to do when one really needs a word like literally? Do we have to say "literally literally," since "literally" now has a new cheeky/ignorant definition?
Scott calls the term IED an "acronym," while I always thought it was an "initialism"-- but now I see the dictionaries are allowing the broader definition, which I guess simply means not enough people care about maintaining specificity in this particular category. But I'm getting the feeling the loss of specificity is snowballing these days, like the melting of the polar ice caps. And we're already suffering some of the political consequences of this cultural evolution. Or maybe I'm just becoming a cranky old fart.
This would make an interesting new topic in DataSecretsLox. Here, I'm afraid, it has hit an already almost dead thread, and you'll get not much of a response. There are some smart (sometimes grumpy, but on solid grounds one can learn from!) old men over there (apology to everyone misrepresented!), as could be seen in the discussions around computer interfaces, so a crosspost should be worth it.
*a* crosspost. Why, oh, why is there no way to edit?!??
I think language isn't geared toward specificity. English especially seems to offer vagueness as a benefit. I think of "love" and its thousand plus meanings, or all the other words that have different meanings (another example being hot meaning temperature or spiciness).
Vagueness allows for tentative communication with plausible deniability (there was a discussion of this on SSC once, re flirtation, IIRC), and the everyday diplomacy based on it.
Hence the existence of a spectrum from scientific use of language (where every term comes with a well-referenced definition, in extremis: calculus) to fuzzy … let's call it 'sounding out of an emotional or affective resonance'.
Language does all that. We are the ones to be aware of what to expect/utter in what situation; and a misinterpretation of the situation tends to backfire in every way, from comical to meh to tragedy.
And yes, there is some human-rabbit overlap in sizes.
https://www.catersnews.com/stories/animals/meet-the-worlds-biggest-easter-bunny-and-his-brood/
Answering the important questions here.
my takeaway is that if I catch a flu then I will be telling everyone to stay away from me, for I am a Flu Person
A Person of Flu.
Lol
Is “person with autism” really misguiding? I mean, we already have and use terms such as “people of color”, or “people of size” and I don't think anyone really assumes those are binary categories.
I think the preposition is important here: "of" vs "with".
If you say "person with color" it actually sounds kind of offensive, which I think reinforces Scott's point
I'm a person with mostly pink color, especially if I've been in the sun recently.
For what it's worth, a lot of autistic people reject "person-first" language, though for different reasons than Scott suggests in the post. The problem with "person with autism" (they argue) is not so much that it's reifying autism as a clear category, but that it's presenting autism as something separable from oneself. At least that's how I understand it; my experience in the field is fairly limited.
"People of size" is, at least, quite contentious, and I think this is one of the reasons for that.
"people of X" are in practice used to draw sharp categorical differences - color at least is. of size is probably more fat activists adopting general critical social justice lingo to get legitimacy for their campaign of destigmatizing slow murder.
“People of color” is a special case IMO. It’s a way to describe a non-white person without implying that white is some kind of default or ideal. And while “colored person” might seem to be a reasonable analog of “white person”, it has too much cultural baggage.
I think it does still imply that white is the default. It's also sort of like taking Lothrop Stoddard's view of race and concluding "That's basically right, just flip the normative rankings".
Babies are basically bunnies. Small, young, dumb, cute.
1. Why is CCFI of 0.5 chosen as the border between categorical and dimensional? In other words, it seems "being categorical" or "being dimensional" is itself dimensional - is this true?
2. Question about your practice (ignore if inappropriate) - comparing the post here and on your site, I see you've kept the personal tone, which surprised me. Did you get any feedback from patients/non-SSC-readers about the content there? It might be too soon to know, of course.
You're right, I don't understand CCFI well but I think the 0.5 is arbitrary.
I laughed out loud at this (#1). Fantastic point.
This is a great point (Scott Lawrence makes the same observation below, and his comments are also worth reading) but I think it's worth being careful to delineate the map and the territory. CCFI is a model. Even though a cutoff of 0.5 is arbitrary (so the model suffers from the problem of the heap) it doesn't follow that the distinction *out there in the world* is dimensional. It could be that "being categorical" or "being dimensional" is categorical, but given a particular case, we have to make a guess based on a dimensional scale because we don't have enough dimensions of data to fully separate the categories. Presumably the 0.5 hyper-parameter has worked reasonably well in the past for delineating cases like the flu.
Ah, that's a good clarification, thanks.
I wonder if the positive feedback loop of addiction makes it categorical. "Propensity to become addicted to cigarettes" could be dimensional, but "is currently addicted to cigarettes" (or "has ever been addicted to cigarettes") could be more like separate categories, because it means "consumed enough nicotine for the positive feedback loop to kick in." How much that positive feedback loop affects you varies, but that's like variance in the strength of the flu which makes the "has flu" lump wider than the "doesn't have flu" lump.
Maybe also relevant that "how much nicotine would it take to set off the positive feedback loop" and "how strongly will the positive feedback loop affect you if it gets triggered" are strongly correlated (I imagine) as two aspects of "propensity to become addicted to cigarettes." So the people who are more strongly affected by that positive feedback loop are also more likely to have triggered it (since that takes less nicotine for them).
My personal impression of autism is that it's both. Aspergers seems to me to be more dimensional, whereas low-functioning Autism (for lack of a better term) seems more taxonic.
I don't have any particular study to back this up, it's just more of a result of all the stuff I have read about autism. It's probably also influenced by my belief that it might be wrong (for social reasons) to lump the two together into one diagnosis.
Why do psychologists think that autism and Asperger's syndrome are etiologically related anyway?
I think the major problem is that Aspergers is a syndrome, the causes are barely understood, and people whose symptoms are right in the middle do exist.
But still, to me it feels like having an unable-to-walk syndrome which includes everything from sprained ankles to loss of legs.
They are unable to come up with criteria that distinguish high-functioning autism from Asperger, but also high-functioning autism from low-functioning autism. That's why they gave it one label in the newest DSM.
In the past, the criterion for distinguishing HFA from Aspergers was based on history – if there was a delay in developing functional language, then HFA; if there was no delay in developing functional language, then Aspergers. That criterion worked about as well as any other criterion in the DSM does – sure, there were practical difficulties (history of childhood language development isn't always available for adults, the parents may no longer be around, and even if they are, their memories of decades ago may be imperfect), and as always there are problems with unclear boundaries (timing of functional language development is a smooth continuum), but nothing that rises to the level of "unable to come up with criteria". Honestly, I don't think the DSM-5 authors actually had a very good justification for many of their decisions, and I think those who criticise their decisions are in the right.
And if difficulties reliably distinguishing Aspergers from HFA justify merging the two conditions, well there are also difficulties reliably distinguishing ADHD from ASD. (A number of studies demonstrate the substantial overlap between the two conditions, where to draw the line between them varies from clinician to clinician, and the line has moved over time.) So if it is difficult to reliably distinguish ASD from ADHD, should we merge them?
It can also be difficult to reliably distinguish autism spectrum disorders from schizophrenia symptom disorders. In adults, they can present with quite similar symptoms. The most important way to distinguish them is look at history – if the symptoms were present in childhood, that suggests ASD; if they weren't, that suggests SSD. So, the distinction is based on history – fundamentally the same as the HFA vs Aspergers distinction was. If that's a justification for merging HFA with Aspergers, maybe it is about time we merged the autism spectrum and the schizophrenia spectrum too?
One of the things I love about this blog is how you pull meaningful conclusions that have fascinating real-world implications out of math that goes over my head (but is still interesting to read an analysis of). I went into this one taking a gamble that it would be interesting to me, and hit pay dirt at the end. Cheers :) and again, welcome back!!
At this point I just assume anything Scott writes will end up being interesting.
Has someone done the analysis to determine whether psychiatric disorders themselves split into two taxa of taxonic and non-taxonic, or if they lie on a spectrum between the two?
This was exactly my thought. If you look at the CCFI figure, it really doesn't look like the distinction between taxonic and non-taxonic is itself taxonic --- it's more of a continuum.
Which... it /can't/ be, right? Either there's a binary hidden variable or there isn't. That's the strong intuition that goes with this distinction. So one of two things is true. Either that's wrong, and the distinction between taxonic and non-taxonic is itself pretty worthless, and we should acknowledge that everything has aspects of both. Or, the fact that CCFI fails to pick up on the obviously taxonic distinction between taxonic and non-taxonic is a hint that hey, maybe this isn't a very reliable measure.
To rephrase: the thing CCFI measures appears to be non-taxonic. Is that a really bad proxy for a taxonic thing, or is taxonicness itself a continuum?
I think taxonicity is going to be measured along a continuum. Warning: this is hyper simplified.
You could create a bimodal distribution with no overlap, to represent two distinct taxons, and then use a distance function to quantify the average distance between a point in peak A and a point in peak B.
If you move those two peaks closer to each other, the distance function would decrease along a continuum even if you never actually intersected the distributions at all.
Now let's say you actually begin to combine the two distributions on the plot. At first, there are still two obvious different peaks (with some overlap as Scott pointed out). At some point these distributions would be largely indistinguishable and there would be some arbitrary "gray area" around which it's unclear whether the two distributions are categorically different or merely dimensionally different.
I agree that the "bimodalness" of a distribution is a continuous thing. But, "bimodalness" itself is at best a crude proxy for "taxonicness". Take the flu example above --- as Scott mentions toward the end of that example, if you actually look at a plot of flu symptoms, you don't see two peaks, so that "having the flu" appears to be a non-taxonic thing. Of course, "having the flu" almost certainly /is/ taxonic, and we'd see that if we plotted "density of influenza virus particles" or something like that instead. There really is some nearly-binary latent variable.
And in your constructed example, the same is true. You can make the distribution unimodal, but it's still the case that there is a binary latent variable. It just so happens that you've failed to see it.
So... I worry that that's what's happening with CCFI. For schizophrenia, for instance, people seem to believe (I know nothing) that it's taxonic. That means that the CCFI measure was just "looking at the wrong plot" (either literally or metaphorically, since I dunno how CCFI works), and that on a better plot, a beautiful bimodal distribution would become visible.
While I don't have time to read the whole thing now, I think this is a helpful part from the introduction of the paper Scott linked to, to begin to build an intuition for how this process works:
"To reduce the subjectivity of taxometric analysis, and thereby to address each of the problems this entailed, Ruscio, Ruscio, and Meron (2007) introduced a technique to produce comparison graphs using parallel analyses of artificial categorical and dimensional data. These artificial data reproduced important characteristics of the empirical data (e.g., sample size, number of variables, marginal distributions, correlation matrices; Ruscio & Kaczetow, 2008), and they could be analyzed using the same procedural implementation as the empirical data. This yields taxometric graphs for data of known categorical and dimensional structure, holding everything else constant. Rather than relying on general-purpose prototypes, investigators could obtain comparison graphs tailored to the data and analysis plan in a particular study. This circumvented the first two problems, using a small number of prototypes for idealized data conditions that had been analyzed using just one procedural implementation.
The other two problems were addressed by developing the comparison curve fit index (CCFI). The CCFI is an objective measure of the extent to which the results for the empirical data are a closer match to those for the artificial categorical or dimensional comparison data."
If you're statistically inclined, CCFI just uses root-mean-squared residuals (RMSR) to determine the "error" between the observed distribution and these two theoretical distributions.
"These two fit values are then combined into the CCFI:
CCFI RMSRd ⁄ (RMSRd RMSRc) (2)
CCFI values range from 0 (strongest support for dimensional structure, obtained when RMSRd 0 and RMSRc 0) to 1 (strongest
support for categorical structure, obtained when RMSRc 0 and
RMSRd 0). A value of .50 is ambiguous (obtained when
RMSRc RMSRd)."
The actual binary here is what Scott mentioned about the difference between flu and height. A person with flu is infected with influenza virus. That's a binary hidden variable. If we can't measure it directly or don't know whether it exists, all of these statistical tests on (possibly) correlated attributes we do know exist and can measure are attempts to infer the presence of the actual hidden binary variable.
To some extent, it does just come down to whether this is a useful way of thinking, though. Presumably a lot of people have at least one active influenza virus particle inside of whatever skin/world boundary we consider to be within their body, but only at some level of viral load sufficient to overwhelm immune response long enough to manifest symptoms do we consider that person to "have the flu."
Remembering Scott's classic "the categories were made for man", this only really matters to the extent it changes treatment plans. Knowing infectious diseases are caused by germs means we can treat them by performing some intervention that either kills the germs directly or aids the immune system itself in doing so. If we don't know the underlying cause or whether there even is one, we can only treat the symptoms.
I immediately noted that there seemed to be a discontinuity in the CCFI figure - while it is continuous over certain ranges, there seemed to be a jump in the middle. Is this just an artifact of what was represented? Or, is there some meaning behind it?
For those like me who read the version of this post on Lorien: the only major addition for ACX is section III.
Psychological constructs in general are just low-dimensional approximations of complex, high-dimensional processes. The relevant question is not "are they real?", but "are they useful?".
For a similar discussion, see section "Realist intuitions impede progress in psychology" on page 4 here: https://psyarxiv.com/xj5uq
This is a conventional take which doesn't even attempt to grapple with the true complexities of the issue. The question is not whether current diagnoses are dimensional or categorical but rather whether they exist as meaningful separate diseases at all. And most data points suggest the answer is no, that the current system is a joke. A nosology useful to clinicians for efficient collegial communication but with no benefit - and often harm - to the patient.
Mentioning riches as an example of a dimensional variable is apt because it appears to be as useful a variable as depression in understanding human beings (i.e., not). That's because a) it's unreliable - most people's wealth fluctuates, often quite drastically, over the course of their lives. And of course b) it doesn't actually exist - money is a social construct.
When we talk about wealth what are we really attempting to get at? The variable that scientists - and particularly epidemiologists - have found most approximates that is socioeconomic status, which takes into account education, profession, and economic resources.
Depression more than likely is but one pillar of a larger, actually useful category - perhaps "internalizing disorder". Or a more accurate system might blow up the whole thing and not even use the heterogeneous DSM concept of depression at all.
Any system must deal seriously not only with the current in-vogue biological factors, but also must account for culture- and time-bound disorders and the drastically different rates of recovery from mental illnesses depending on where one is (i.e. westernized vs non-westernized regions). When outcomes for schizophrenia are better in sub-Saharan Africa than in the US, something is wrong.
See:
https://www.nature.com/articles/d41586-020-00922-8
https://www.researchgate.net/publication/267383152_Counterflows_for_mental_well-being_What_high-income_countries_can_learn_from_Low_and_middle-income_countries
https://wchh.onlinelibrary.wiley.com/doi/pdf/10.1002/pnp.461
Perhaps depression is comparable to pain. 'Pain' is also not a valid diagnosis, but merely a symptom, even though you can treat that symptom with pain killers. A person who is in pain because his leg is broken, needs treatment for a broken leg, not merely treatment for pain, although pain treatment can be temporarily helpful.
Yet a person who is in pain due to cancer, isn't helped with a splint...
Some people may be depressed because their regulation system works poorly. Others may have shitty lives and see no prospect of changing that. These are as similar as a broken bone and cancer.
---
And I agree that context matters a lot too. In fact, disorders are often defined based on dysfunction, so societal inclusion and acceptance can be the difference between something being regarded as a disorder or not. If the government likes to torture people and a decent subset of society celebrates torturers as heroes, a pathological desire to torture can be fully acceptable, if it is channeled in the right way.
Your first point is apt. Much work in the sociology of mental disorder argues just that. Once the DSM in its infinite wisdom decided to go etiologically agnostic it introduced this issue. In DSM 5 it does try to limit this: "An expectable or culturally approved response to a common stressor or loss, such as the death of a loved one, is not a mental disorder."
"Expected or culturally approved response" is a notable qualifier and seems to more reflect capitalism's need for an infinitely malleable population without inconveniently deep desires or attachments than an underlying reality. So, in other words, a form of intellectual tokenism to cultural/contextual factors.
On the second point, again DSM 5 attempts to avoid this, particularly in light of past disasters like the classification of homosexuality as a mental illness. It added the qualification that the syndrome must cause the individual distress, or in old-school language be "ego-dystonic". Although there are exceptions to this, notably OCD and many of the personality disorders. Not to mention cases where individuals are committed to mental institutions against their will. So contradictions abound; there is no unified framework or understanding, just a patchwork quilt marrying the economic interests of the "healers", the legitimation-of-suffering and moral-absolution needs of the population, and the social control interests of those in power.
Don’t forget that someone can be in acute distress but not agree, either fairly reasonably (as judged by a group of ten average bystanders) or unreasonably (because it will be easier for the aliens to find them there, so they can keep beaming mind control rays at them), that being in a psychiatric hospital offers a good possible solution to their distress.
Also, at least here in Canada, it’s extremely difficult to get someone committed to a psychiatric hospital. They basically have to be sitting in Emerg with a loaded gun in their lap, explaining clearly who they will kill before killing themselves, or one day away from a heart attack because they’ve starved themselves so badly. The involuntary commitment orders are also short and hard to renew.
And like all attempts to deal with complex problems using the legal system, there are big advantages to this, as well as big disadvantages. It does, though, greatly reduce the use of such measures as a method of social control. Valium for miserable middle-class housewives in the 50s was much more effective.
Isn't OCD an anxiety disorder and expressly ego dystonic? There's a separate, basically "Conscientiousness up to 11" condition they call obsessive-compulsive personality disorder, OCPD.
Weirdly, the latest DSM revisions put OCD (along with hoarding) into their own section and took them out of anxiety disorders. Doesn't make any sense to me, as while OCD clearly has some of its own brain weirdness going on, the commonality of using behaviour to avoid/try to reduce anxiety (while actually feeding it, sigh) is there. In the case of OCD and hoarding, the anxiety is around the obsessive thoughts, and the avoidance is avoidance of DOUBT (did I actually turn off the stove properly? might I need this one day?)
Yet another argument for dimensionality.
My understanding is that those with OCD symptoms are particularly likely, as compared to those with, say, hoarding or agoraphobia, to not have a problem with it. Even if the latter 2 don't seek treatment, upon being asked they will generally admit it's problematic. But there are case studies of OCD where individuals will design their entire lives around these rituals in highly disruptive and costly (time or otherwise) ways that any observer will consider batshit crazy, but they are fine with it. The case studies generally emerge from them being seen for something else, and this will come up.
Having said all of that, I don't have any particularly deep knowledge of OCD, so I may be missing something. Personality disorders are the more hallmark examples of ego-syntonic, so it may be safer to stick with that.
Either way it's a problematic area. Telling someone with every flu symptom they don't have the flu if they don't believe it and/or if they don't have a problem with puking all day is problematic, but so is the nonconsensual shoving of stigmatic labels by powerful "helpers" onto the deviant.
The APA wants desperately to maintain the appearance of neutrality by medicalizing everything, but the truth is there is no escaping the moral dimensions of human behavior. Such questions can only be avoided to our detriment.
I've been under the impression that there's two obsessive-compulsive disorders - OCD which is an ego-dystonic anxiety disorder characterized by intrusive thoughts, and Obsessive-Compulsive Personality Disorder, which is ego-syntonic and more like an orderly, conscientious personality driven beyond 11.
Yeah, one thing I wish was more discussed in the mental illness discussion is cultural context.
I have Seasonal Affective Disorder. I get sleepy and cranky and low energy in winter, and don't want to do anything. Like most with SAD, I am of Scandinavian descent. It seems likely to me that SAD is less a "disorder" in the traditional sense than it is a reasonable evolutionary defense against harsh northern winters: in ancient times, people who stayed near the fire probably did better than people who went out and played in the snow. SAD isn't something that's "wrong" with me, per se--it's a mismatch between a reasonable biological defense mechanism and the modern-day reality of having to work 9-to-5, even in winter.
This sort of mismatch, or having disorders that only occur in some cultures, or whose symptoms vary depending on culture, seems to point to a good deal of societal influence.
Very much so. There's evidence (beyond common sense) for this on multiple disorders.
Also, recent work suggests that disorders are exaggerations of natural temperaments. And a variety of temperaments is good for group survival. For example a study found that groups that contain neurotics are much more likely to survive than those without because the neurotic is always on the lookout for danger and notices it right away while more positive or laid back people would ignore or not even notice early signs of danger (i.e. smoke in the case of a fire).
But some temperaments are maladaptive to their current environment. ADHD is a prime example. This (1) is a great overview of the evo psych of how it used to be individually useful, and this (2) recent study found that indeed ADHD gene variants are declining in modern populations. One wonders what effect this will have on group survival.
(1) https://pubmed.ncbi.nlm.nih.gov/9401328/
(2) https://www.nature.com/articles/s41598-020-65322-4
Indeed. It seems pretty obvious to me that society is not set up to satisfy our physical and mental needs, which evolved around a very different way of life, but to maximize other things.
That doesn't mean that we should go back to a hunter-gatherer lifestyle, but I think that we have to recognize that fairly normal human behavior can be dysfunctional in today's society.
Of course, a good therapy for SAD is light therapy, which deceives us into thinking it is not winter.
I might not have made it clear enough that this is post 1 in a series. You can find an early version of post 2 at https://lorienpsych.com/2020/11/11/ontology-of-psychiatric-conditions-dynamic-systems/ and see if it addresses some of your concerns.
This is helpful. I do think it strange however that you go from a nuanced complex model of causality to using an oversimplified variable on the other end. As you're aware, there's no such thing as depression. There's depression-anxiety, depression-addiction, depression-heart disease, depression-addiction-anxiety-heart disease - but alone it may as well be a bigfoot sighting.
Why does that matter? Because it renders either false or at best misleading some of your conclusions. For example, you say that bipolar is dynamic but ADHD is not. But ADHD has a high rate of co-occurrence with bipolar, cyclothymia and cyclothymic temperament. So are you only talking about the 2 times in history bipolar and ADHD have appeared alone, or are they somehow both dynamic and not dynamic when they join forces?
I made an oopsie - it's borderline and cyclothymia, not bipolar. But the question stands as there is significant overlap in many of the listed conflicting conditions.
This reminds me of a semi-recent post from Freddie DeBoer, on genius.
https://fredrikdeboer.com/2020/11/29/2049/
The taxon-dimension distinction seems important in discussions of talent; while talent varies greatly, "genius" as a truly separate category doesn't seem to exist. But many people, including Freddie there, conflate the question of whether genius is real with the question of whether talent is real.
"doesn't seem to exist".
Can't measure ≠ doesn't exist.
And “doesn’t correlate very strongly with easily-measured things”
The "as a truly separate category" caveat is important, there. I'll never be as athletic as Wilt Chamberlain, but that's true in the sense that I'll never be as rich as Jeff Bezos. Not in the sense that I'll never be as rabbit-y as a rabbit.
This has significant implications for how we should handle extreme talent. If you go digging through the links in the link I posted, you'll see various people fretting over the social implications of saying that some people are geniuses and other people are not. They're right to fret; it's healthier socially, as well as more accurate, to avoid drawing that dividing line.
"You are more talented than I am" is a better way to approach the brilliant than "you are a genius and I am not".
"By their fruits you shall know them". The outcomes achieved by some individuals is so far above others that it is silly to democratize it. Just because they are categorically different does not mean they are another species - that's a straw man.
And the minute one gives an inch in the process of thinking to frets about social implications is the minute one stops thinking with any clarity or honesty. To think is to exaggerate and to exaggerate is to polarize and to polarize has negative social implications.
But they're not so far above all others. For every incredible supergenius there are a bunch of people almost as good, and for each of those people there are a bunch of people almost as good, and so on till you reach unexceptional people.
As for the social implications, well, the facts are pretty clear so it's a good time to think about what those facts mean.
By the same reasoning, intellectual disability would not exist, although it is obvious that below a certain level of intellectual ability, you aren't really playing the same game anymore.
Some intellectual disabilities are clearly identifiable as particular things. Down's syndrome is also called trisomy 21 because it results from trisomy on the 21st chromosome. You either have that or you don't.
But yes, there are disabled people who occupy the same smooth curve as the people we call geniuses.
I was just referring to low IQ, without using any of the naughty words.
If our measure is IQ, then I think there will be people with identifiable conditions like Down's with the same score as people who are just on the low end of the bell curve without any distinct condition. Both people will be expected to have comparable academic performance, but I think those with Down's would be expected to have less ability to live independently.
Nope; the ability to live independently is not different for two people who have the same IQ, when one has Down's and the other doesn't. Lots of other factors at play (education and supports, demands of the surrounding society, whether there are also major physical health issues...)
My understanding was that Arthur Jensen found that what might be called "familial" cases of low IQ to be more capable (outside the classroom) than "organic" cases. I can't remember where I originally read that, but in this interview he says most people with IQs above 40 or 50 are "biologically normal" and just part of the bell curve's range. He also says that even with a very low IQ "showing generally good judgment in the ordinary affairs of life should rule out a diagnosis of mental retardation".
https://www.unz.com/isteve/here-s-my-upi-article-on-that-supreme-court
Very, very low IQ, which we classify as intellectual disability (lowest 2% of the population, with significant functional impairment), pretty much never occurs randomly or even by normal genetic variation.
There has to be an exceptional cause: genetic abnormalities like Down’s, fetal alcohol syndrome and other teratogens, severe neglect and isolation.... Even most people with a very low IQ that’s in the ‘normal’ range have usually experienced something that actively pushes intellectual ability down. Long periods of very poor nutrition....
But the other end of the curve doesn’t seem to be about anything extraordinary besides luck; more a combo of lots of luck within normal genetic variation, lack of factors that could seriously impair that ability, and opportunity to both develop ability and to show it. Plus whatever cultural/historical moment that this person’s ability fits into well.
That lack of down-pushing factors resembles the income–happiness relation. Increased income does not make you happy, it only removes causes for unhappiness. And (broadly!) around 60k/y the marginal happiness vanishes, and money shifts into a different value for life ("keeping score" in one's (aspired-to?) peer group, for example).
That sounds doubtful to me. If there are an enormous number of genes (plus random developmental factors) of small effect and they add up to produce a bell curve, then there will be some people at the very low end without any of those large-effect factors (same at the high end). The main reason for that not to be the case is if there are enough people with those large-effect factors to fill up that 2%, and those factors are indeed so large that even getting tails on every binomial coin flip wouldn't go down that far. My thinking had been informed by what I'd heard about people classified as retarded, with IQs at least two standard deviations below the median. In a normal distribution that would be about 2.5% of the population.
You might find these interesting.
"Complexity perspectives on behaviour change interventions"
https://mattiheino.com/2020/10/19/besp/
Youtube Channel: "Complex systems in behavioural sciences"
https://www.youtube.com/channel/UCR9nYEjzOCzQLjxDgKo0EZA
An argument I've encountered that ADHD "really is" a specific medical condition, and not just a fancy name for below-average concentration skills, is that apparently when you give ADHD medicine to a person without ADHD, it actually makes them more jittery and less able to focus -- IOW the medicine has opposite effects on people with and without ADHD. Is that true, and does the argument prove what it sets out to prove?
That would still work in a dimensional model. Like, imagine everyone has an "ADHDness" score from -10 to 10. And ADHD diagnoses are clustered in 0-10, but medication moves you 10 points in the other direction, and past -10 you get negative side effects. So someone at 5 takes medication and goes to -5, someone who starts at -7 goes to -17 which is outside the normal distribution and gives negative side effects.
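That model is easy to make concrete; here's a toy sketch (all numbers hypothetical, taken straight from the comment above):

```python
# Toy dimensional model: everyone has an "ADHDness" score in [-10, 10];
# stimulant medication shifts the score by -10; scores pushed outside
# the normal range produce negative side effects.

MED_SHIFT = -10
NORMAL_RANGE = (-10, 10)

def after_medication(adhdness: int) -> int:
    return adhdness + MED_SHIFT

def side_effects(score: int) -> bool:
    lo, hi = NORMAL_RANGE
    return not (lo <= score <= hi)

for start in (5, -7):
    end = after_medication(start)
    print(f"starts at {start:+d}, ends at {end:+d}, "
          f"side effects: {side_effects(end)}")
# starts at +5, ends at -5, side effects: False
# starts at -7, ends at -17, side effects: True
```

So opposite reactions to the same drug fall out of a purely dimensional trait plus a uniform shift; no taxon required.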
This doesn't seem true. Beyond the popular trope of speed users being famously productive while high (which, from my limited experience, rings true), amphetamine use to get through finals week in very difficult or competitive college majors is definitely real. It may make people more jittery, but it doesn't decrease their ability to focus.
Nope, stimulants make everyone more focussed. That’s one of the reasons we love our coffee, tea and nicotine so much.
People with ADD/ADHD just need more stimulants to get the same effect (with huge differences in how much of what type how often, unrelated to symptom severity, weight, age etc). That supports the dimensional claim.
Oh, forgot the second part: anyone, ADHD or not, who takes too high a dose of stimulants will feel jittery, less focussed, etc. "Too high" just has to be higher for people with attention problems.
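One way to picture that claim is an inverted-U dose-response curve where only the location of the peak differs between people. A hypothetical sketch (the curve shape and the optima are made up, purely illustrative):

```python
from math import exp

def focus(dose: float, optimal_dose: float, width: float = 1.0) -> float:
    """Hypothetical inverted-U dose-response: focus peaks at an
    individual-specific optimal dose and falls off past it."""
    return exp(-((dose - optimal_dose) / width) ** 2)

# Made-up optima: higher for someone with attention problems.
typical_opt, adhd_opt = 1.0, 3.0
for dose in (0.5, 1.0, 3.0, 5.0):
    print(f"dose {dose}: typical {focus(dose, typical_opt):.2f}, "
          f"ADHD {focus(dose, adhd_opt):.2f}")
# A dose past one person's peak ("too high" for them) can still sit
# below the other person's optimum.
```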
Clearly we need a meta-meta-analysis to determine whether taxonicity is a categorical or a dimensional concept.
If you use the same methodology to determine if amputation is dimensional, how does it fare?
This is a smart, entertaining post and I learned a lot. I was also reminded of something that I am reminded of a lot, which is that a lot of very smart people don’t really know much about addiction or at least don’t think enough about the power of words in its weird realm.
The exclamation point after "gambling addiction" I found particularly troubling, and a little snarky. Spend some time at a GA meeting and there will be no question in your mind that it's an extremistanic, not a dimensional, phenomenon. There is no meaningful line connecting my thrice-a-decade purchase of lottery tickets to gambling addiction.
My own experience and long-term observations of folks in recovery tell the same story about alcohol. There are cats and dogs, and to an alcoholic alcohol is catnip, which lots of dogs might have now and again without it leading them to massively destructive behavior. Also, sadly, while Jeff Bezos could buy enough (I suppose) ivory back scratchers to exit his personal extremistanic state, the addict has a one-way ticket. Pickles cannot revert to cucumbers, etc.
Why do I bother to write this? Because, again in my experience, one of the defining features of addiction is that there is a constant internal dialogue that amounts to “abstinence is an overreaction, I can [gamble, drink, have the occasional benzo, etc.] just like everybody else.” It’s a disease that works to convince the afflicted that they are healthy. So in my view it is really dangerous to casually propagate dimensionality theories about addiction. Down that path lies an enormous chasm of pain for the addict and those around them.
I have no idea how you could analyze this mathematically without any ability to estimate parameters of the actual generating process, but there certainly is a formalism in recurrent dynamical systems whereby some diverge and some don't. The actual sensitivity to each parameter is real-valued and on a spectrum, but "diverges or not" is a strict binary nonetheless.
That seems to be what is happening with addiction. Whatever feedback loop causes behavior to either reinforce and become more extreme or rewards to deaden and repetition to get boring tends toward extremism to the extent that it spirals completely out of control in some people, while converging to some steady state short of that in others.
What is maybe missing in this kind of purely statistical analysis is that dimensional traits can produce binary outcomes, because "disease" is really a recurrence relation: it's a function not only of the traits we're measuring, but of the state of your body at all previous points in time.
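A minimal sketch of that formalism (a toy linear feedback rule, not a model of addiction): the sensitivity parameter varies continuously, but whether the trajectory diverges is strictly binary, flipping at |r| = 1.

```python
def diverges(r: float, x0: float = 1.0, steps: int = 2000,
             bound: float = 1e6) -> bool:
    """Iterate x_{t+1} = r * x_t and report whether it blows up."""
    x = x0
    for _ in range(steps):
        x = r * x
        if abs(x) > bound:
            return True
    return False

# r varies smoothly; the outcome flips sharply at r = 1.
for r in (0.9, 0.99, 1.01, 1.1):
    print(f"r = {r}: diverges = {diverges(r)}")
```

A real-valued input, a strictly binary output: exactly the shape of "some spiral out of control, others settle into a steady state".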
Agree entirely.
Me too!
I agree, and had the same reaction to the exclamation point after gambling addiction.
Consider professional gamblers. Consider people who participate in every available lottery. Is there ‘no meaningful line’ between their behaviour and that of addicts? You compare only your own extreme to the addicts’ extreme and conclude it has to be taxonic. That’s just using anecdata as evidence.