“Literal Banana” on Carcinization writes Against Automaticity, which they describe as:
An explanation of why tricks like priming, nudge, the placebo effect, social contagion, the “emotional inception” model of advertising, most “cognitive biases,” and any field with “behavioral” in its name are not real.
My summary (as always, read the real thing to keep me honest): for a lot of the ‘90s and ‘00s, social scientists were engaged in the project of proving “automaticity”, the claim that most human decisions are unconscious/unreasoned/automatic and therefore bad. Cognitive biases, social priming, advertising science, social contagion research, “nudges”, etc, were all part of this grand agenda.
For example, consider John Bargh’s famous (and now debunked) social priming studies: an experimenter would make subjects solve word games related to elderly people (eg WRINKLE, OLD, CANE). These subjects would then walk out of the laboratory more slowly than control subjects, because they’d been “primed” with the thought of old people, who move slowly. Again, this has since been debunked. But for a while, it seemed like half of all psych experiments were something along these lines. And they all sent the same message: “you” are not in command. You are like a leaf, being blown about by environmental factors beyond your control - how people phrase things, what your peers are doing, and which words you’ve encountered recently.
A third time: all of this has since been debunked. So Banana recommends (reading between the lines) that we go back, figure out how the automaticity paradigm affected our thinking, and un-propagate all of those updates. They suggest something like replacing causal explanations with phenomenology, a proposal which I am forced to admit I don’t understand in any way whatsoever.
And they end with a challenge:
I invite anyone to be the Lakatos to my Feyerabend, and present Here’s Why Automaticity Is Real Actually, as mine is an extreme case and does not pretend to be a measured, balanced examination of the subject.
Sure, let’s go.
The Core Of The Cognitive Biases Literature Replicates And Is Real
Suppose there is a good idea. People will be attracted to it. It will gather momentum. Eventually it will have made all the true claims it can make, but it won’t have used up its hype. Its momentum will carry it forward into making false claims and doing bad things.
After a while, it will get a reputation as “that idea which makes false claims and does bad things”. People will rush to dissociate themselves from it. The dissociation will itself gather momentum. Supporting the idea will look naive at best; more likely it will signal that you’re a predatory scammer. There will be a virtue signaling cascade to compete over how much you hate the idea.
Some examples:
Racism exists and is bad. But wokeness has become so annoying that lots of people have antibodies to talking about racism or acknowledging it. Now it’s hard to call out race-related problems without looking like a woke grifter.
Cryptocurrency has become an important part of poor countries’ financial infrastructure, so much so that I think it should objectively be considered a huge tech success story. But there have been so many scams and so much hype that people refuse to believe this, and continue to insist it has no possible use cases.
IQ is one of the most explanatory and best-replicated concepts in psychology. But everyone is so afraid of being “that guy” who drones on and on about his high IQ that they countersignal by saying IQ doesn’t exist or is meaningless or is just test-taking skills or whatever.
Likewise, cognitive biases are real, well-replicated, and have strong explanatory value. Grifters went on to argue that they controlled every facet of our lives, which made lots of people allergic to the whole field. But that’s an over-reaction, and we should go back to “merely” believing them to be real, well-replicated, and with strong explanatory value. Some examples, mostly taken from here:
The conjunction fallacy is not only well-replicated, but easy to viscerally notice in your own reasoning when you look at classic examples. I think it’s mostly survived critiques by Gigerenzer et al, with replications showing it happens even among smart people, even when there’s money on the line, etc. But even the Gigerenzer critique was that it’s artificial and has no real-world relevance, not that it doesn’t exist.
Loss aversion has survived many replication attempts and can also be viscerally appreciated. The most intelligent critiques, like Gal & Rucker’s, argue that it’s an epiphenomenon of other cognitive biases, not that it doesn’t exist or doesn’t replicate.
The big prospect theory replication paper concluded that “the empirical foundations for prospect theory replicate beyond any reasonable thresholds.”
These aren’t minor points; prospect theory won Kahneman the Nobel Prize. When people talk about “cognitive biases”, these are the kinds of things they’re talking about.
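For concreteness, here is the object at the center of prospect theory: the value function, in the standard Tversky and Kahneman (1992) form with their oft-cited median parameter estimates. The numbers are quoted for illustration only; they are not taken from the replication paper above.

$$ v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda\,(-x)^{\beta} & x < 0 \end{cases} \qquad \alpha \approx \beta \approx 0.88, \quad \lambda \approx 2.25 $$

Loss aversion is the λ > 1 part (a loss hurts roughly twice as much as an equal gain helps); diminishing sensitivity is the exponents-below-one part.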
Most Priming Replicates And Is Real
Psychologists have been researching priming since the 1950s. Most forms of priming replicate just fine, but since nobody believes studies anymore, here are a few experiments you can try at home to see if they work on you:
1: Word Scrambles
Quick, unscramble these as fast as you can!
1. CHCURH
2. CIHRTISNA
3. REGIILON
4. PREITS
5. UJSES CRISHT
6. ANLEG
7. OGD
8. HEANEV
9. HLLE
10. PRGAROTOREY
11. PAELRY GTAES
[empty space to prevent you from accidentally seeing the answers before you want to]
[…]
[…]
[…]
[…]
[…]
Most people unscramble 6 as ANGEL and 7 as GOD, ignoring the more mundane words ANGLE and DOG. Although it’s reasonable to assume that entries in a list of religious words will be religious, most people don’t report thinking “I realized it could be either DOG or GOD, but I decided to go with GOD because of the context”. Their brain just hands them the word “GOD”.
Likewise, many people unscramble 10 as PURGATORY, even though there’s no U; fewer people get the correct answer (PREROGATORY). Although PURGATORY is a slightly (?) more common word than PREROGATORY, I propose very few people would make this mistake in a neutral (ie non-religious) word list; at worst they would say they couldn’t think of the answer.
2: The Stroop Effect - try to read the color (not the text) of each of the following words as fast as you can:
Most people find the first set easy, because the text is positively priming the color, and the second set hard, because the text is negatively priming the color.
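If you would rather generate your own stimuli than take my word for it, here is a minimal sketch of a terminal Stroop test in Python. Everything in it is my own illustration - the color list, the trial count, the crude type-the-answer timing - not a protocol from any actual study:

```python
# Minimal terminal Stroop demo (an illustrative sketch, not any study's protocol).
# Prints color words in congruent or incongruent ink using ANSI escape codes
# and crudely times how long you take to type the ink color.
import random
import time

ANSI = {"red": "\033[31m", "green": "\033[32m", "blue": "\033[34m", "yellow": "\033[33m"}
RESET = "\033[0m"
COLORS = list(ANSI)

def run_block(congruent, trials=8):
    """Show `trials` words; return mean seconds per correct response."""
    times = []
    for _ in range(trials):
        ink = random.choice(COLORS)
        word = ink if congruent else random.choice([c for c in COLORS if c != ink])
        start = time.time()
        answer = input(f"{ANSI[ink]}{word.upper()}{RESET}  type the ink color: ")
        if answer.strip().lower() == ink:
            times.append(time.time() - start)
    return sum(times) / len(times) if times else float("nan")

if __name__ == "__main__":
    easy = run_block(congruent=True)    # word matches ink color
    hard = run_block(congruent=False)   # word contradicts ink color
    print(f"congruent: {easy:.2f}s/word   incongruent: {hard:.2f}s/word")
```

Run it in a color-capable terminal; most people come out measurably slower on the incongruent block, which is the whole effect.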
3: The Implicit Association Test - there have been some good studies showing the IAT doesn’t really predict racism. But as far as I know, nobody has ever challenged its basic finding - that white people are faster and more accurate at learning white-good black-bad reflex-level category associations than vice versa. You can easily test this for yourself here.
Hopefully these three exercises feel like I’m cheating - “This stuff is trivial and obvious! Surely it can’t be the dreaded priming, scourge of all honest scientists!” But priming only ever claimed to be the observation that our interpretation of stimuli can be slowed down / altered by other stimuli or the broader context, which is obviously true (doubly so if you understand predictive coding).
You can see why people might extend this to claims like “seeing a stimulus related to old people can make you walk slower”. The only problem with this claim is that it isn’t true. It’s an attempt to extend a true insight further than it will go.
Gravity is real, and you can see why “skyscrapers are impossible because they would immediately collapse under their own weight” is the sort of claim that a gravity-believer might stumble into. But in fact gravity isn’t strong enough to make the claim true.
Many Nudges Replicate And Are Real
For example, Haggag and Paci look at a dataset of 13 million New York City taxi rides. The credit card machine in the taxi offered default tip options of $2/$3/$4 for fares under $15, and percentage-based default tip options of 15/20/25% for fares above $15, making it ideal for a regression discontinuity design. Here were their findings:
The default option significantly changed the average tip customers gave. This isn’t a p = 0.04 effect in a lab; this is a real-world effect with real money and 13 million data points. The authors then go on to replicate this in a different dataset of millions of taxi rides. Also, I met a psychologist who worked at Uber or Lyft (I can’t remember which), who confirmed that their company had replicated this research, and put lots of effort into deciding which default tip options to give customers, because obviously it affected customer behavior.
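If you want to see the machinery, here is a toy regression discontinuity sketch in Python. The data are synthetic, and every number in it is made up for illustration (the $15 cutoff just mirrors the setup described above); it only shows how a jump in tips at the default-switching threshold gets estimated:

```python
# Toy regression-discontinuity sketch on synthetic "taxi tip" data.
# The jump size, noise, and bandwidth are invented; only the method is real.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
fare = rng.uniform(5, 25, n)                       # running variable (dollars)
above = (fare >= 15).astype(float)                 # were percentage defaults shown?
tip = 1.5 + 0.05 * fare + 0.80 * above + rng.normal(0, 0.7, n)  # smooth trend + jump

bw = 3.0                                           # bandwidth around the cutoff
left = (fare >= 15 - bw) & (fare < 15)
right = (fare >= 15) & (fare < 15 + bw)

def value_at_cutoff(x, y):
    """Fit a line to (x, y) and return its prediction at fare = 15."""
    slope, intercept = np.polyfit(x, y, 1)
    return slope * 15 + intercept

jump = value_at_cutoff(fare[right], tip[right]) - value_at_cutoff(fare[left], tip[left])
print(f"estimated jump in average tip at the default switch: ${jump:.2f}")  # ~0.80 by construction
```

With 13 million real rides instead of simulated ones, the same comparison is what pins down the effect of the default menu.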
But you shouldn’t need to hear that scientists have replicated this. If you’ve ever taken a taxi, you should have a visceral sense of how yeah, you mostly just click a tip option somewhere in the middle, or maybe the last option if the driver was really good and you’re feeling generous.
And if you’ve ever had your insurance send you a letter saying that you have been assigned to the Gold PMP ExtraCare Rx Deluxe North group by default, but that if you want to explore your options you can fax them a Change Of Assignment Form at any time, you know that one of the most common nudges - making the thing you want the default - works (here is the scientific version of that claim, if you insist).
Where Does This Leave Automaticity?
So where does this leave Banana’s concept of automaticity? If people are vulnerable to cognitive biases, priming, and nudges, must we conclude, as Banana hostilely summarizes, that:
We are automatons going around in our sleep, and our performance on a simple puzzle can take a major hit simply by being informed that our drink was bought at a discount. We are infinitely vulnerable to our environment, to suggestion, to parlor tricks, that we can experience a major loss of intellectual ability, walking speed, memory, just by exposure to some infinitely subtle stimulus.
I think a better analogy would be optical illusions.
Optical illusions are much like cognitive biases. They are cases where distractor stimuli confound our intuitive mental algorithms, giving us the wrong results:
These illusions aren’t fake. The replication crisis hasn’t harmed them. And like priming and cognitive biases, they’re cases where context and distractors can influence our answers. Does this make us automata, stumbling hopelessly through life?
Sort of. If by “we’re automata”, we mean that I don’t personally stare intensely at every object I see, measuring it against some hypothetical palette in my brain and rationally assessing my beliefs about its color, I guess I’m an automaton. I mostly just accept visual percepts handed to me by algorithms I know are sometimes wrong.
But here are things I don’t believe about optical illusions:
Since everyone else is such a dumb automaton, I can use my superior knowledge of optical illusions to excel at sports. I’ll just study every known optical illusion and how to defeat it, until my visual system is perfect. Then, while everyone else is deluded into thinking the ball is in a different place, I alone will be able to determine the ball’s true location, and win every game.
Since everyone else is such a dumb automaton, I can use my superior knowledge of optical illusions to excel at business. I’ll buy real estate, then contrive a series of clever illusions that make a dilapidated shack look like a beautiful mansion. By buying at shack prices and selling at mansion prices, I can get rich quick.
Since I’m such a dumb automaton, I can never really trust any of my decisions. I might think a bag of rice looks big when I go to the grocery store. But maybe the store hired visual neuroscientists to contrive optical illusions around it! Maybe the bag really just contains one grain of rice and can’t possibly feed me! I should only eat rice I grew myself from now on.
When cognitive biases were first discovered, people flirted with all these ideas (some rationalists definitely did, but so did behavioral scientists themselves). I think over the past fifteen years, we’ve learned that we do have some cognitive automaticity, in the same way we have some visual automaticity, but that clever plans like these mostly don’t work. Why not?
An easy but wrong answer: because optical illusions (and cognitive biases) are very weak. This isn’t quite right. Go back to those cubes above. That yellow square looks very yellow; that blue square looks very blue. Any phenomenon that can confuse your color vision so completely deserves more respect.
A better answer: because there’s some boundary condition for a combined function of strength, naturalness, robustness, and lack of prior knowledge. Very strong illusions like the one on the cube almost never occur in natural situations in real life, probably because those are the situations your visual system evolved to process. When they do, it’s usually pretty easy to get around them: view the scene from a different angle, squint, wait a couple of seconds. In the rare cases where optical illusions make a big difference in real life and are hard to get around, everyone already knows about them and has adjusted for them. For example, mirages are strong and persistent, but if you’re a desert nomad you already have a long tradition of dealing with them; a visual neuroscientist who understands the illusion on a scientific level won’t have much to add.
Cognitive biases are the same. They exist. They can be demonstrated in the lab. They tell us useful things about how our brains work. Some of them matter a lot, in the sense that if we weren’t prepared for them, bad things would happen.
But usually we know about these. Hyperbolic discounting is a cognitive bias. But when it affects our everyday life, we call it by names like “impulsivity” or “procrastination”. Our grandmothers’ grandmothers struggled against these and taught us to beware of them. Cognitive scientists have come up with formal models of them, but when we understand them properly, we aren’t surprised by their existence.
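For reference, the textbook one-parameter hyperbolic form (Mazur’s, quoted here for illustration rather than anything from Banana’s post) values a reward of size A delayed by time D at:

$$ V = \frac{A}{1 + kD} $$

A bigger k means steeper discounting, and, unlike exponential discounting, this shape predicts the familiar preference reversals: the nearer, smaller temptation only starts to win once it’s close at hand.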
There might be exceptions in certain unnatural pastimes like investing in the stock market. Probably past generations of stock traders discovered some of these biases by accident, and have tried to pass them down to new Wall Street interns. But there haven’t been a hundred generations of stock traders, so the knowledge is still fragmentary and inconsistent. Maybe cognitive science has reached a point where it can supplement or codify this kind of wisdom - or maybe it hasn’t reached that point yet.
Automaticity Is The Lindy-est Idea Of All
Understood correctly, automaticity isn’t some weird claim by 21st century psych nerds. It’s a basic truth about the human condition.
It’s most obvious in the teachings of George Gurdjieff, the early 20th century mystic who made it a centerpiece of his cult. Wikipedia says:
Gurdjieff taught that people are not conscious of themselves . . . He asserted that people in their ordinary waking state function as unconscious automatons, but that a person can "wake up" and become what a human being ought to be.
But dig deeper and this is part of every traditional description of the human condition. Plato described the conflict between our rational, emotional, and appetitive souls, and warned that in some people the rational soul fails at its duty to run the show. The Buddhist term “Buddha” means “awakened one”, in contrast to everyone else who was not fully awake; the slightest experience with meditation is enough to demonstrate that “mindfulness” is interesting precisely because of how mindless our actions usually are.
What posture are you in right now? Did you “decide”, based on rational considerations and your best self, to take that posture? Why are you reading this article now instead of doing something else? How long did you spend on that decision? Were you really awake and deeply absorbing the last paragraph? How long did you spend thinking about it? How long did you spend deciding that you were going to spend that amount of time thinking about it, and not some other amount of time?
These questions, taken seriously, will drive you insane. Plato and the Buddha are old enough to be safe, but this is prime cult recruitment material here. Tell people (as Gurdjieff did) that they are sheep-like automata drifting through life without conscious thought, and they’ll notice it’s basically true, freak out, and become easy prey for whatever grift you promise will right the situation.
Some people might have the time and energy to become enlightened and perform every action with complete consciousness. The rest of us will have to accept that it’s fine (and in fact more efficient) to walk by putting one foot in front of the other, without rationally calculating the ideal stride length each time. Instead of denying automaticity, we should accept it as the default human condition, abandoned only occasionally at times of great need.
Does that make us, as Banana warns, “infinitely vulnerable to our environment”? An enlightened Buddha would answer by denying the self/environment distinction. On this side of samsara, I would answer by denying the premise: just because we’re automata, doesn’t mean we have to be bad automata. No human roboticist would design a robot that lost half its horsepower whenever it heard a word relating to elderly people, and evolution didn’t design us that way either. Overall we’re pretty robust to a broad range of environmental perturbations. And one of the ways we’re robust is that when we notice red flags that people are trying to fool us, we switch out of our usual automaton mode and consider the situation carefully.
Still, we’re not infinitely robust. Banana mocks the behavioral psych idea of “social contagion”, because “our starting hypothesis should be that behaviors that spread in the population arise from social learning, rather than from a mysterious unconscious process of thoughtless copying”. Regardless of whether there are scare words like “thoughtless” in there, I remain concerned by phenomena like how in 1700 everyone thought slavery was fine, even though now in the 2000s everyone hates it. If I had lived in 1700, would I have thought slavery was fine? Maybe! Would this have been because of “a mysterious unconscious process” or because of “social learning”? Yes.
I can’t help wondering if there’s some understanding of “automaticity” or “being less automatic” that could have helped 1700-me question my belief - or wondering what equivalent automatic beliefs I should be questioning today.