654 Comments

[Comment deleted, Mar 11, 2021]

JKPaw:

I don't know much about the current state of psychology, but it still emphasizes habituation, right? So I sometimes wonder if someone like Scott is trying to reconcile often contradictory goals: acceptance of reality vs a well-adapted, well-mediated self. I would think LSD can provide such a strong glimpse of an alternate reality (not hallucinations so much as perception without most of the regular filters) that it leads to diminished adaptability in certain people who enter the trip with an ill-fitting context.

I probably don't understand what your conclusion is driving at, as I would think there'd still be a ton of temperature contrast in people with autism and/or schizophrenia.

[Comment deleted, Mar 10, 2021]

Melvin:

That's a great point. It suggests that modeling political beliefs as if they're "priors" that can be updated with "evidence" is just not a great model. Political beliefs (unlike, say, beliefs about polar bears) are nearly impossible to shift by evidence, but pretty easy to shift by other means.

I try to be rational about politics, but it's really freaking hard. Mostly, I just hold a strong belief that taxes on people in my tax bracket should be lower, but in order to believe this I somehow seem to wind up feeling obliged to hold a bunch of other beliefs about capital punishment and global warming and religion and exactly which politicians are and are not rapists.

[Comment deleted, Mar 10, 2021]

Elena Yudovina:

Huh? The pixels as restricted to the chess pieces are identical (well, to within whatever artefacts are introduced by the image rendering). The background pixels are, of course, different. I would guess that the image was generated by taking an image of smoke, selecting an area of it to be the chess pieces, and then manipulating the intensities separately on the selection (set to one level) and on its complement (set to two different levels).

Godoth:

Looks like the original comment was deleted. I can understand where he was confused. If you're under the impression that it was a deliberately taken photo with bad compression versus a digitally manipulated image designed to be deceptive, you're not going to believe your eyes.

Personally I think a lot of these illusions are rather overrated—at least the way most people talk about them; most of them are clever artistic tricks that only work because our minds are filling in gap information that doesn't exist in a 2D bitmap. It's less an illusion than a suggestive symbol.

Imagine if you cut out a cardboard silhouette of a woman and then put it in front of a backlit window. You walk by on the sidewalk, stop and say to yourself, 'there is a woman in that window,' and then a man jumps out of the bushes. "Got you!" he says, "I've proven that your brain is unreliable. You see, that is not really a woman, but a cleverly designed movie standee! Your brain has tricked you into believing in something unreal!"

Of course you would say, "My brain is doing just fine. Why would I assume, given the limited information of a black person-shaped shadow in front of a bright light, exactly where a person should be, that a lunatic cognitive scientist would be ambushing people trying to point out perceptual difficulties in my neighborhood? I perceived the most likely thing given the context."

2D optical illusions are just a special case of this. There *is* no 2D world; 2D objects are always symbols of 3D things (or 4D things, I suppose). When your brain attempts on the fly to decode a 2D symbol into a 3D representation, it's always working with incomplete nonrepresentational information. It's no surprise that it perceives things in accordance with their symbolic effect rather than on a pixel-by-pixel analytic level; if it did the latter, it would be a rather useless brain.

Nancy Lebovitz:

For what it's worth, I keep getting fooled briefly by human-sized naturalistic bronze statues. I react briefly as though they're people, and then I recalibrate. It's very annoying.

Aapje:

I kept getting fooled by seeing two rectangular posters from the edges of my eyes that apparently had the right proportions and offset to mimic a torso and head. Apparently, it doesn't take much.

Dan L:

It can both be true that most of the 2D illusions are contrived examples to exaggerate the effect, and that black and blue v. white and gold was one of the more viral scissor statements of 2015. https://en.wikipedia.org/wiki/The_dress

Vaclav:

I think the reason they are surprising/impressive is because visual perception *feels* like such a low-level, just-the-facts information feed. Or not even an information feed -- it just is (a big part of) our 'direct' awareness of what's out there.

Vaclav:

Hit 'post' by mistake, so continuing: If you've already fully internalised the fact that your brain is doing a whole lot of processing before anything reaches conscious awareness, optical illusions don't seem so strange. (Though I'd be surprised if you don't even feel an instinctive twinge of confusion.) But if you haven't, they can be part of the process by which you do.

apxhard:

> Why would I assume, given the limited information of a black person-shaped shadow in front of a bright light, exactly where a person should be, that a lunatic cognitive scientist would be ambushing people trying to point out perceptual difficulties in my neighborhood?

This makes sense if your belief is "illusions seem to exist primarily where people have constructed them intentionally." But then that raises the question, "where else is this happening?"

How much of what you read appears naturally, with whole grain letters sprouting organically from the question-bush? And how much of what you read is constructed by human beings with agendas?

A good question to ask here is what percentage of your input pixels are natural, free-range pixels, photons which traveled to your eyes without being the result of some human brain somewhere trying to produce an outcome?

If you spend all day outdoors, naked, then you'll have a diet of 100% natural pixels. If you spend all day indoors, behind a screen, or near furniture, painted walls, tastefully arranged art - then very, very few of the pixels you consume are organic.

Now I should go back to my job, which involves helping train algorithms to select images from massive sets of pixels that humans arranged just so, using the sole criterion of 'which images will produce an emotional response in this person's brain?'

peak.singularity:

Technically speaking, the "no blind spot" illusion is a 2D illusion, and doesn't involve any artistic tricks.

Gerry Quinn:

That chess illusion is so good I actually pasted the image into a paint program and moved a piece of the white knight up to the Black knight to prove to myself they were the same.

Bruce H McIntosh:

I've done the same on prior "they're actually the same shade" illusion graphics I've come across. It's jarring, ain't it?

Neike Taika-Tessaro:

Letting you know I'm a little confused by the words "photo", "physical" and "illumination" here - the smoke is the same because the entire picture is the same; they're just geometric shapes made to look like chess pieces (to better trigger the white/black priors). Unless I'm looking at a different picture?

Regarding the pixels, that can easily be the case; it's a JPEG image, which means it's almost impossible to compare the top and bottom part pixel by pixel, due to compression artefacts. But if you copy the bottom part of the bottom image and move it to its corresponding place on the top image, you can see that any differences are much, much more subtle than your eyes are leading you to believe.
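For anyone who wants to check this for themselves, here is a minimal sketch of that comparison, assuming the illusion has been saved locally (the filename and the two crop boxes are hypothetical placeholders) and using Pillow and NumPy:

```python
# Rough sketch: compare two same-sized regions of the illusion image.
# "illusion.jpg" and the crop coordinates are placeholders - adjust them
# to the actual file and piece positions. Because of JPEG compression we
# expect small, nonzero differences even if the regions were drawn identically.
from PIL import Image
import numpy as np

img = np.asarray(Image.open("illusion.jpg").convert("L"), dtype=float)

top = img[40:140, 60:160]      # hypothetical box around an "illuminated" piece
bottom = img[300:400, 60:160]  # hypothetical box around a "shadowed" piece

print("mean intensity, top region:   ", top.mean())
print("mean intensity, bottom region:", bottom.mean())
print("mean absolute pixel difference:", np.abs(top - bottom).mean())
```

If the two mean intensities come out nearly equal even though the pieces look like different shades, that's the effect being described here.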

Neike Taika-Tessaro:

(Er, whoops, I noticed I made a silly mistake in my wording there by assuming knowledge context - the pictures are the same except for the background, obviously. Words mean things! Sorry for the lack of precision.)

Adam:

The original comment is deleted, so I can't respond to it directly, but the first thing I did was run this image through a pixel analyzer available here: https://yangcha.github.io/iview/iview.html

They're not exactly the same intensity values between high pieces and low pieces, but they're very close.

Muskwalker:

Even if the image itself lost something in transmission, it's easy enough to replicate the effect.

Thrown together in GIMP (with bonus image showing just the text layers)

https://imgur.com/a/iLOlQnZ

[Comment deleted, Mar 10, 2021]

Neike Taika-Tessaro:

Anecdata to support your claim:

I have a trapped prior with one of my interpersonal relationships. The relationship isn't abusive. The prior is something like "my partner doesn't care about my problems at all"*. This was actually true for many years; it took a few breakdowns for him to notice I actually mean it when I say I need help. But he knows it now! He takes it seriously now! Unfortunately I'm still stuck in the emotion of assuming he doesn't care about my problems at all. At least rationally I know better - I get bits and pieces of evidence every other day that he does, in fact, care. But I'm really struggling to get back out of the mindset where I kneejerk assume he doesn't care. I like to think I'm getting better at it, but I sure feel a bit trapped in the prior.

Though presumably the fact I'm noticing it means I'm not formally trapped, just glacial at adjusting to the new evidence.

(* I am using "doesn't care about my problems" as a summary of a more complex emotion that's less easy to put into words.)

Aapje:

I don't think that noticing it makes you immune, because this is not a rational process where you carefully weigh all the evidence. Instead, you have a strong heuristic that has a strong impact on your emotions, without you deciding to trust that heuristic. Your subconscious just does.

Neike Taika-Tessaro:

It's more a category question (hence the word "formally", really). From the article I got the impression 'being trapped in one's prior' very much also required the person in question being unaware of this. But regardless what we call my problem, yeah, it has basically the same failure modes. :)

Aapje:

I don't think that people become immune by awareness.

For example, pretty much all of us know that ads seek to manipulate us, yet they still work.

Neike Taika-Tessaro:

Yes. Sorry for being confusing. Let me try to word this in a third way and hope I don't talk past you again:

I'm not trying to claim I'm immune. I'm just saying "trapped in one's prior" read to me in the article as a very specific (quasi-)formal category that meant being both entrenched in an idea *and* being unaware of being entrenched in the idea.

In other words, I agree with you, I was just making a semantic comment in my original comment. That's why I used the word 'formally'. "I have nearly all these failure modes discussed in the article, but I guess maybe Scott was drawing a more precise category, and I might not fit into that more precise category purely on a formality."

Hope that helps clarify what I'm trying to say!

Aapje:

That's not how I read the article. One example Scott gives is phobias. Many people who have a phobia become aware that they have it. That realization doesn't suddenly resolve their phobia.

I don't draw a hard distinction between conscious and unconscious, or rational and irrational. Nor do I think that the rational is always able to control the irrational, or that the only limitation is awareness of what the unconscious is doing.

So I don't see much value or correctness in treating phobias that people are aware of as an entirely separate thing.

Faceh:

Pretty sure I experienced this from the other side very directly. After years of what seemed to be a healthy(ish) relationship, my partner somewhat 'abruptly' determined that being with me was bad for her because she constantly experienced negative emotions around me, and by the time this was identified and open, ANYTHING I did was likely to trigger further negative emotions, even my direct attempts to fix the situation or reassure her that I understood and would work extra-hard to accommodate her feelings.

It basically became this situation:

A) Say something negative, they react very poorly to it.

B) Say something neutral, they interpret it negatively and react poorly to it.

C) Say something positive, they assume it is insincere, and may react negatively to it.

D) Say nothing at all, they'll assume you're *thinking* the worst thing and can react negatively to it.

An extreme no-win situation.

Neike Taika-Tessaro:

For completion's sake, there was nothing abrupt about my situation, although I can't rule out my partner viewed it that way (because he was discounting it each time I told him I was unhappy about something or asked him to stop doing it).

Sorry to hear you are/were in that situation, though! From the past tense I assume an unhappy end. Wishing you the best for future relationship endeavours!

For what it's worth, one way my partner and I have started communicating more, to combat any phenomena like what you mentioned in your point D, is to just make a *noise* imbued with some modicum of emotion (much like a mewl) to convey acknowledgement. We're both introverts and often don't feel like we have anything to add to a statement, but this way there's still a social signal, and that's working for us.

(Point C has thankfully never been an issue.)

Faceh:

Well the best way to put it is that things are still in motion. Could coalesce back into a relationship, I guess.

And once I had finally analyzed and REALIZED what was actually happening, I could tell that it *wasn't* abrupt, but rather had been building for months, possibly even a year. Possibly more. It just 'seemed' abrupt because she could simulate all the signs of commitment even as her doubts built. It just was forced to come to a head via an 'outside' force that she couldn't avoid.

Just as you said, she (accurately) called out some of my behaviors early in the relationship which sent the message that I didn't really care about her problems. And I kinda didn't, since I had my own to deal with. But by my nature I solve problems so I was still trying to fix hers.

And while I eventually came to really, truly care about her problems (and it took a bit longer for me to realize I needed to take her seriously when she expressed her distress), by that point I think the pattern was almost locked in or already locked in.

So too little, too late, in a sense. But now that I understand fully what is going on, I at least can understand that there is no reason for anger or blame or, in a sense, even regret. Only healing and moving forward.

It just SUCKS to realize that somebody has essentially come to believe that you are a huge source of all their negative emotions, even when you can logically point out all the effort you've put in over the relationship to reduce their negative emotions.

Neike Taika-Tessaro:

Thanks for getting into more detail - that really sounds like such an unfortunate and painful situation! I hope the associated frustration doesn't drain too much happiness from your life. Best of luck in mending the shards!

Faceh:

Thanks! I'm holding together well. As I said understanding the issue made it easier to take, and I have a pretty solid map of my own mental states so I know *how* to navigate out of a funk if I can summon the will to do so.

And so far I can. It's the uncertainty as to how *her* situation will proceed that bugs me, mostly.

I do know that now is NOT the time to hold on tighter.

Kyle:

I'm super happy to see Leadership and Self-Deception mentioned here because it was what I was thinking about while reading too. I think Arbinger (the publisher of Leadership and Self-Deception) actually frames this issue in a better way than Scott does, however, and in a way that offers hope for correction.

The ways we see the world, others, and ourselves are colored by the ways the people we interact with see the world, others, and ourselves. The way out is a sort of yielding to our ethical responsibility to the other people in our lives that rids us of our need to see them in ways that justify our mistreatment of them. When you do, then you are able to see the world not only from your own, entrenched way of seeing it, but from the new way that they offer as well. Other people open up the possibility of possibility in a sense. They show us that the way we experience something is not the only way it can be experienced.

That doesn't mean we just blindly migrate over to their view, just that it opens up to us as a possibility, while our own view releases its hold on us and becomes a possibility again as well, rather than simply the "way things are" that we previously thought it to be.

To apply this directly to the question of "trapped priors as a basic problem of rationality", Arbinger's philosophy is greatly influenced by the work of Emmanuel Levinas who recasts reason/rationality as an interpersonal, dialectical achievement rather than an individual ability or characteristic. It is in discourse with other people that we learn what it means to be "rational".

From an essay by Levinas scholar Kevin Hauser, "The orthodox explanatory order, which explains our responsibility to one another in terms of normative reasons, is backwards. For while such reasons may explain what we are responsible for in a given case, they do not explain why we are responsible to begin with. Worse: we seem to have a standing responsibility to have reasons with which to justify our acts and attitudes. But the general responsibility to have justificatory reasons isn't itself something reasons could justify. Levinas's suggestion: Stop trying to explain interpersonal responsibility in terms of reasons. Start explaining reasons-giving as an expression of a responsibility-relation. We will then see we are not, first, responsible to others because of reason, or because there are reasons we ought to be. On the contrary: we are first responsible to one another, and only this explains why and how we have reasons."

https://www.academia.edu/36787319/Levinas_and_Analytic_Philosophy_Towards_an_Ethical_Metaphysics_of_Reasons

It would seem to follow then, that the path to becoming more rational is inextricable from the task of becoming more ethically responsible to the other people in our lives. When we neglect our responsibilities to each other we close ourselves off from not only the source of our priors, but the possible means of their correction. When we re-open ourselves to the call of other people, in fulfilling our responsibility to them, we re-open ourselves to the new ways of seeing they offer. We find our "need to be right"(or for others to be wrong) gives way to a willingness to stand corrected.

Nova:

Just a note, that "wine study" was done on enology students, not sommeliers.

(A nice writeup on the myth)

https://sciencesnopes.blogspot.com/2013/05/about-that-wine-experiment.html

The study has been telephoned to death by the popular media, thanks in no small part to the snobby gatekeeping mentality of the wine industry, which people just want to say is full of bull. (I say this as someone with an amateur interest in beer, wine, and spirits, and someone with a decent palate who regularly does blind tastings.)

Scott Alexander:

I've removed "wine experts" from the post (I'm not sure if oenology students count as "experts", though they seem much more expert than most people), but otherwise I don't think the link debunks it too thoroughly to use.

Godoth:

Hmm. While this is a judgment call, I submit that the debunking is pretty thorough. Experiment participants were apparently not allowed to describe the wines using their own words, and were forced to use description words they had previously used in a genuine comparison of red and white wines.

I can't imagine how the study could have come out any other way unless participants had simply walked out on it.

AJD:

> and forced to use description words they had previously used in a genuine comparison of red and white wines.

Why wouldn't that entirely suffice to distinguish reds from whites? Unless the difference is indeed mild, and the bulk of the distinction in experience is priming to use different words in the red and white cases for very similar tastes.

> I can't imagine how the study could have come out any other way

You can't, not at all? Not even by using words established earlier as more characteristic of the actual wine type rather than the dyed one?

Melvin:

> Why wouldn't that entirely suffice to distinguish reds from whites?

Because there's a convention in the wine world that certain words are used to describe the taste of reds and certain words are used to describe the taste of whites (just like you'd call a woman "beautiful" and a man "handsome"), and maybe it's hard to step out of this convention even if you're pretty sure that someone has just handed you a glass of white wine with red dye in it.

If you really wanted to do an experiment to see whether wine semi-experts could tell that dyed white wine wasn't red wine, it would be pretty easy. In order to not clue them in that you were trying something sneaky with dye, you'd simply ask them to identify specific varietals or styles of wine.

I would predict that anyone with a reasonable degree of expertise in wine should be able to tell you with no hesitation "This is a Cabernet-Sauvignon, this is a Shiraz, and this one is really freaking weird, did you put red dye in a Chardonnay or something?"

Kenny:

Why wouldn't you just make the participants taste blinded, i.e. with a blindfold on? I thought they had done that study (tho maybe I'm thinking of what was 'debunked') and that the experts were pretty terrible.

King Cnut:

If we're trying to determine whether the perception is influenced by the colour of the wine, blindfolding people would also defeat the point, wouldn't it? It depends on what we're trying to work out. I'm sure there are any number of studies on how good experts are at assessing good wine, which I'd love to see.

Godoth:

I don’t think you understand what is meant; you should read about the study.

They literally had to assign ‘red wine’ descriptor words to one of the wines—either the dyed white or the undyed white. Not only would that not suffice, I consider it enough to spoil the results entirely.

Garald:

If I remember correctly, the study wasn't meant to show (and didn't show) that "oenology students can't distinguish red from white wines". Most red wines couldn't be mistaken for white wines by, well, pretty much anybody: too much tannin. Now, some particular red wines do taste like some white wines (in part because they are low in tannins). Often you have molecule A (e.g., an ester) appearing both in one of those red wines and one of those white wines. If you have a trained wine vocabulary and smell that molecule, you'll tend to describe it in one way if the wine is red (say, by comparing it to prunes) and in another if the wine is white. If you are given white wine dyed red, then, yup, you'll use the descriptor that is used for red wines. Interesting but not surprising.

Spill:

The 2001 paper that I presume you're referring to has two major issues (ignoring the experts vs. laypeople part) that I think make it untenable to claim that dying white wine red makes anybody think it tastes like red wine. Firstly, the original paper only reported data on *smell* and not taste, so claims about *taste* are completely outside the scope of the paper. Secondly, the actual experiment was to force people to choose which they thought better matched certain red/white smell descriptors: white wine or white wine with red food coloring. In other words, participants were given two identical-smelling wines and forced to incorrectly identify one of the two as a red wine. Of course they picked the red-colored one! But that absolutely does not mean they thought the dyed white wine smelled like red wine. It would be like if you gave someone a plastic flower and a plastic dinosaur and forced them to pick which one smelled like a flower, and then claimed that people couldn't tell the difference between plastic flowers and real flowers.

And on a more anecdotal level, a friend of mine organized a blindfolded wine tasting game with some friends and across the board, every single person (all nonexperts) was able to identify by taste which of six wines was the white wine (the others were red). White and red wines really do taste quite different, and it's quite easy to verify this for yourself! (Though it's probably not quite as easy to tell them apart by smell alone.)

The original paper: http://www.daysyn.com/Morrot.pdf

Demeter:

That article calls out a couple real misconceptions, but the author also makes some mistakes in his criticism of the paper.

Descriptive analysis is ideally done with a trained panel who agree on descriptors using reference standards prior to using those descriptors to evaluate wine. That's why the students were "limited" in the descriptors they used. It's the right way to conduct the study. Otherwise you wouldn't be able to tell any significant trends apart from the background noise of whatever random descriptors people come up with that week.

And the study's most informative result was that the students preferentially applied red-associated *aromatic* descriptors to red wine because they were biased by its color. Yes, the study is misrepresented in media, but it's still a revealing study and I believe it fits this post.

(I say this as a winemaker who has been trained in sensory analysis and who has done descriptive analysis)

Deiseach:

Extremely late to this, but time for the Thurber cartoon? https://i.pinimg.com/originals/29/79/73/297973ec23680ffa9453975fd142e685.jpg

Jeru:

There's another study, described in P Bloom's book HOW PLEASURE WORKS, where wine enthusiasts (people who enjoy wine but might not claim expertise) were in an fMRI while drinking wine from a straw. If the experimenter told them the wine was expensive, their brains would light up as they indulged in experiencing the flavor. If the experimenter told them it was cheap, their brain would stay quiet. Didn't matter what the actual stimulus was, only the context.

vorkosigan1:

This analysis is exactly correct, IMHO (or my trapped priors lead me to think so, anyway....) I think the fundamental question, unaddressed here, is "How do you catalyze people into recognizing, accepting, and acting on the fact that they have a trapped prior?" Anyone? Bueller?

Tom:

sounds like LSD

Ondřej Kupka:

You wish :-)

vorkosigan1:

You're not wrong.

Melvin:

I don't see how you would distinguish, either internally or externally, between a prior that is "trapped" and one that is just very strong for perfectly sensible reasons. If my strong prior against the existence of Bigfoot is failing to shift all that much, regardless of how many vague footprints and blurry photos I might see, do I have a stuck prior or just a strong well-founded prior?

I think phobias are a bit of an exception because the phobic, once safely away from dogs/etc, is rationally aware that their phobia is to some degree irrational.

vorkosigan1:

Ok, now try steel-manning it: What's the best argument you could make about it being easier than you posit to distinguish between the two?

For me, I'd look to consensus information, my prior mistakes involving trapped priors, and the extent to which I have a strong emotional attachment to the prior.

Adam:

That's also weak evidence, though. If you saw bigfoot yourself, in person at close distance while you were of clear mind, that would probably move you at least a bit. If it let you get close enough to tug on the skin and confirm that, if it's a costume, it's the best costume ever made, even better.

Melvin:

Agreed, but how do we distinguish between "You have a strong prior that is moving very little in response to weak evidence" and "You have a stuck prior that refuses to move in response to this really really compelling evidence"?

If I could distinguish those, I could be perfectly rational all the time.

Downzorz:

I think the concept is less of "a prior that doesn't move in response to compelling evidence" and more "a prior that is reinforced whenever evidence is encountered, even neutral or weakly compelling contrary evidence." It becomes progressively easier to ignore or defy evidence of a contrary position the more evidence you have in favor of it; the trap isn't sprung for the first time when you encounter a talking ape in Canada, it goes off every time you see a blurry photo or read a misspelled rant claiming that bigfoot is real and take that as evidence that he isn't.
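A toy numerical sketch of that dynamic, purely my own illustration (the 0.9 "weight" and the starting numbers are made up): in a standard Bayesian update, repeated mildly contrary evidence erodes even a strong prior, but if the "experienced" evidence is mostly an echo of the prior itself, the same contrary evidence ratchets the belief upward.

```python
# Toy model, not anything from the post: compare normal Bayesian updating
# with a "trapped" update in which the perceived evidence is a blend of the
# raw evidence and what the prior already expects to see.
import math

def normal_update(log_odds, log_lr):
    # Standard Bayes in log-odds form: posterior = prior + evidence.
    return log_odds + log_lr

def trapped_update(log_odds, log_lr, weight=0.9):
    # The perceived evidence is mostly the prior talking to itself, so even
    # contrary evidence (negative log_lr) ends up reinforcing the belief.
    perceived = weight * log_odds + (1 - weight) * log_lr
    return log_odds + perceived

def to_prob(log_odds):
    return 1 / (1 + math.exp(-log_odds))

normal = trapped = math.log(0.95 / 0.05)  # strong prior: "dogs are dangerous"
contrary = math.log(0.5)                  # mildly contrary evidence each round

for _ in range(5):
    normal = normal_update(normal, contrary)
    trapped = trapped_update(trapped, contrary)

print("normal updater: ", round(to_prob(normal), 3))   # drifts down toward ~0.37
print("trapped updater:", round(to_prob(trapped), 3))  # pinned at ~1.0
```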

Felix Melior:

I often find myself more annoyed with people making bad arguments in favor of a position I share (for other, to my mind better, reasons) than I am with people arguing against my position. I think this phenomenon -- "a prior that is reinforced whenever evidence is encountered, even neutral or weakly compelling contrary evidence" is part of why I have such an allergic reaction to those bad arguments in favor of the shared position. I know, or at least expect, that the bad arguments will make people less likely to accept the claim.

Ryan W.:

Make testable predictions, then test them? Especially from a data source that's less likely to be crafted towards a particular opinion? For example, popular news sources are more likely to exaggerate the harms of climate change than records of per hectare crop outputs are. Or, to take the other side of things; insurance company policies may be more reliable sources of testimony regarding the harms of climate change than news outlets.

Stephen Jay Gould made an interesting observation in The Mismeasure of Man where he noted that the margin of tolerance for an experiment correlated with bias in the output. The more carelessly you measure things, the more biased you're allowed to be.

(Yes, I know that that book has its own issues with academic dishonesty. But the observation itself remains interesting.)

Paul Williams:

In therapeutic situations, what you are proposing is essentially the cognitive part of CBT (cognitive behavioural therapy). And on its own, cognitive therapy doesn't work. I didn't like the section above where Scott tried to separate out emotion and cognition (i.e., a trapped prior could exist in the absence of an emotional component). I did not think that was helpful in understanding the problem of trapped priors, around politics or otherwise. The cognitive system never acts in the absence of the emotional system.

I would posit that to change ANY trapped prior requires emotional engagement at some level, which you don't get by just making testable predictions. At minimum, you need to add to that an emotional component. For example, you might get people to focus on how they feel when they get the evidence back that disagrees with the prior. If people start to recognise that evidence against the prior makes them uncomfortable/threatened, but evidence for the prior makes them feel safe... then you can perhaps begin to help them reframe those uncomfortable/threatened feelings as positive (the beginning of change/growth/a wider perspective).

The evidence definitely trends in this direction in therapy (e.g., EMDR, conversational therapy, and graded exposure all rely on emotional engagement and have positive evidence).

Summary: IMO solutions to trapped priors must involve a focus on developing insight into, and changing, the emotional responses that underpin fixated cognitions.

Ryan W.:

Thanks.

"The cognitive system never acts in the absence of emotional system."

It could in an AI though. I think I see what he was getting at, even if emotional involvement complicated things greatly.

I wonder if there's a kind of "survivorship bias" in the problems that end up requiring professional help vs the ones people can address themselves.

David Friedman:

Try making a strong prediction based on the prior that you can test, register the prediction with someone you trust, then test it.

vorkosigan1:

If the prior is sufficiently trapped, prediction failures will be rationalized away, unfortunately.

Greg the Class Traitor:

It appears you're the victim of a trapped prior, which is the belief that "other people" can't be talked out of their trapped priors. :-)

vorkosigan1:

Interesting. Are you ignoring the "sufficiently" in my statement? And I do believe that people can change trapped priors. Very or extremely trapped priors usually require more than talking, though.

Alephwyr:

The mechanism doesn't seem like it has to lead to irrational behavior 100% of the time. If it happens to correspond accurately to an expected value assessment (dog bites are unlikely, but really really bad; believing dogs are bad prevents dog bites; dog phobia is justified from an expected value standpoint) it might be functionally valid even if it's epistemologically invalid. Rational behavior is more important than rational belief, and there's no reason they have to correspond.

Daniel P:

Sure, but it certainly seems like it doesn’t bias you *towards* rational behavior.

Certainly it’s rational to avoid dogs if dogs give you panic attacks, but in the overwhelming majority of circumstances getting a panic attack really isn’t rational behavior: it’s rarely an optimal strategy.

Given my druthers I’ll take rational beliefs, or irrational beliefs that I’ve intentionally chosen because they’re useful. Hard pass on irrational beliefs that are too strong for me to interrogate.

Alephwyr:

It seems to me like the problem is the feedback loop: that in the expected value calculation of P(x) * x, the probability remains the same (which is fine) but the x keeps getting bigger without any change in external circumstances. IE, the problem isn't just "not updating", it's that updating is in fact occurring but in a ratchet-like fashion based on non-evidence.
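A tiny worked example of that ratchet, with numbers invented purely for illustration: the probability term never moves, but the imagined cost inflates every time the fear is rehearsed, so the expected cost, and with it the avoidance, grows without any new external evidence.

```python
# Hypothetical numbers, purely illustrative of the ratchet described above:
# P(bite) stays fixed, but each rehearsal inflates how bad a bite is imagined
# to be, so the expected cost climbs with no change in external circumstances.
p_bite = 0.01            # perceived probability of a bite - never updated
imagined_cost = 100.0    # initial "badness" of a bite, in arbitrary units
inflation = 1.5          # each avoided encounter makes the imagined outcome worse

for encounter in range(1, 6):
    expected_cost = p_bite * imagined_cost
    print(f"encounter {encounter}: expected cost of approaching the dog = {expected_cost:.1f}")
    imagined_cost *= inflation  # the ratchet: x grows while P(x) stays put
```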

Pedro:

Yes, it's important to distinguish degrees of belief from decisions. A perfectly rational agent might justifiably have low credence on some proposition but still act *as if* the proposition is true due to utility considerations

Andy Jackson:

Black swan dogs?

EmilyPigeon:

A half-swan half-dog, of any color, does sound scary.

Cydex:

I agree that there's something to say about the value of such priors. Knowing that a white wine is actually white and not red will heighten your perception of what makes a white wine stand out, and you'll be able to more precisely evaluate it. If your expectation clashes so directly with the experience (the wine is white but looks red), you'll produce a poorer assessment of the actual nature of the experience.

DavesNotHere:

I like your aphorism, “ Rational behavior is more important than rational belief, and there's no reason they have to correspond.” Is it original to you? How should I give attribution if I pass it on?

Alephwyr:

That particular phrasing may be unique to me. I am sure the idea is not new. I am not sure how one would go about citing me in this context, but I use Alephwyr as a handle in many places and have no problem being referred to as such.

DavesNotHere:

I would have “hearted” this and your original comment, but the heart icon is not showing up in my browser today.

WaitForMe:

The analysis here seems correct and incredibly important. If we could get people to recognize this concept of trapped priors within their own experience, maybe it could help us break out of this emotionally charged era and lead to greater cooperation and less conflict.

The problem as it appears to me is, how do we communicate this idea to the public? The way you've presented it here works well for communicating with people who tend to view concepts in rigidly logical ways, but I think this approach will fail with the general public. I'm thinking of my wife, who is a therapist who has done a lot of work with trauma. She has said essentially the same things presented here in our conversations, but she frames it in a way that is rooted almost entirely in personal experience. If I were to try to bring up the concept of trapped priors this way I don't know that it would click with her, or even if she did agree with the theory she might reject this way of thinking about it as unhelpful to her clients with trauma.

So how might we present this in a way that is easily digestible?

Maybe other people have already thought about this and found ways to present it to a general audience. I'd be interested to know if anyone has resources like that. I think the inability of science to communicate its results effectively is one of the greatest barriers to progress.

Yotam:

I suspect that thinking about trapped priors this way genuinely would be unhelpful to most of your wife's clients, unless they happened to be drawn to rationalism as a language and culture.

That said, the language of trauma is becoming more and more pervasive, at least in certain circles, and that may lead more people to recognize the role that trauma plays in motivated reasoning. Resmaa Menakem, for instance, is doing a great job of connecting the trapped priors of racism to trauma in ways that make it easier for people to un-trap them. So, if anything, building the general public's trauma literacy may be good for the cause of rationalism.

Aapje:

The prior of racism is one that (right now) seems to be heavily driven by propaganda that makes people believe they are victimized more. Zach Goldberg found a huge increase in the number of black people who say that they have ever been unfairly stopped by the police: https://twitter.com/ZachG932/status/1259259908664496131/photo/1

To what extent is the trauma narrative then actually helping people, rather than justifying an excessive trapped prior?

Adam:

The research being talked about here is how historical trauma has been passed down through generations. Whether or not any sufficiently young black American hasn't experienced traumatizing levels of direct racism, they live with the effects passed down from older generations. Grandparents who lived through the Tulsa massacre, or parents who experienced desegregation. Ruby Bridges is still alive and only in her 60s. This guy draws the metaphor central to his work from his grandmother's hands being deformed by the permanent effects of picking cotton.

His work isn't entirely racialized, either. It's about the effects of the powerful traumatizing the powerless throughout history, but that doesn't always map to white on black and his work includes quite a bit of analysis of how less powerful white people have been brutalized and traumatized by, well, all the crappy events of the last century and a half including wars that killed half the adult males of some regions, or the absolutely brutal existence of people living in company towns and tenements.

This hasn't historically resulted in cross-racial harmony of the lower classes, either, but rather the reaction is more like the man who beats his wife, who then beats her kids, who then beat up weaker kids at school, who then go home and beat their dogs.

Presumably, this is part of how you end up with the observation in Scott's class post comments about how the wealthiest families who had everything taken under Mao generations later still ended up wealthy again. You can redistribute money but not trauma, and if some family lines are beating the hell out of their own families, those families are going to go on to have shittier lives and beat their future families too, and it's not clear when, if ever, it stops.

This is quite separate from any narrative that black Americans are presently still being traumatized by police. That narrative is certainly out there, but it isn't the one the comment you're responding to is talking about.

Aapje:

The problem with that narrative is that it seems to be able to explain just about any negative cultural behavior as being due to trauma, yet can't explain why bad events that happen to different groups don't result in very similar cultural behaviors. As such, it seems to be a just so theory that only 'works' in hindsight.

Furthermore, the explanations seem to usually be driven strongly by politics/biases.

The most abused ethnic group in history is surely the Jews, yet insofar as they have generational trauma, it didn't stop them from doing better than gentiles in the US. And you can explain all kinds of behavior by 'oppressors' as historical trauma.

Your claim that "It's about the effects of the powerful traumatizing the powerless throughout history" is often used to stereotype groups, where extensive harms that happen to groups that are deemed to collectively be powerful are treated completely differently than harms that happen to groups that are collectively deemed to be powerless.

The historical trauma narrative seems to often be used to (selectively) blame one group for the issues of another group, rather than accept responsibility for one's own culture or personal choices.

Nancy Lebovitz:

There are different kinds of trauma, and it's quite possible that the loss of cultural continuity for people who were kidnapped into slavery was especially serious. Native Americans suffered considerable loss of continuity, though not quite as extreme.

Aapje:

Equating a loss of cultural continuity, which seems to be a pretty common experience (migrants, people who enter/leave cults, etc.), to trauma seems to be open to many of the same rebuttals.

How explanatory is this when migrants tend to do very well, even when they rapidly assimilate and thus lose cultural continuity?

And I never see this argument used to speak out against migration, due to the trauma that it would then often cause, if the loss of cultural continuity is actually a severe form of trauma. If people don't take it seriously most of the time, is it an actual reason, or just a rationalization?

In practice, all the debates I've had about these kinds of issues just end up with the other person denying that these explanations are universally applicable when pressed about similar experiences that some more successful groups had, and claiming that there is something unique about black Americans and that their explanation only works in that context, which doesn't make it much of an explanation.

David Friedman:

Chinese immigrants to the U.S. were treated very badly for a long time, Japanese Americans even more recently, yet both groups are doing fine now.

The obvious alternative explanations for the wealthiest families being wealthy again are either genetics — they got wealthy in the first place because they were smart, hard working, whatever — or family culture.

dionysus:

"The research being talked about here is how historical trauma has been passed down through generations. Whether or not any sufficiently young black American hasn't experienced traumatizing levels of direct racism, they live with the effects passed down from older generations. Grandparents who lived through the Tulsa massacre, or parents who experienced desegregation. "

Where's the evidence for this? I know many, many Chinese people with grandparents who experienced truly horrific things. The Japanese invasion. The Chinese Civil War. Mao's Great Leap Forward and Cultural Revolution. Each of these killed millions to tens of millions of people, far worse than the Tulsa massacre or desegregation. Those young Chinese people know about these things, but it doesn't affect their daily lives. They don't go around showing any trauma symptoms. As a group, Chinese Americans are more successful than whites in the US, and the country of China has seen spectacular economic growth in recent decades.

teddytruther:

This seems like a pretty strange claim based on empirical data, which suggests there is a strong racial discordance in police stops. That suggests at least some amount of police stops are in fact unfair*, and it's appropriate for Black Americans to adjust their Bayesian priors in light of that fact.

*Obviously many 'unfair' stops still involve lawbreaking; the unfairness arises from selective enforcement of the law.

Aapje:

The data I gave didn't involve a comparison between races, but a comparison of survey answers by people of the same race before and after BLM became a thing.

So the only way in which a difference in police stops could explain those results is if policing of black people became much more unfair between 2006 and 2019. I've never seen anyone claim that.

teddytruther:

It's a two-part process: the underlying difference in treatment by police, and the way that major national news stories - starting with Trayvon Martin in 2012 - made that difference more salient and available for incorporation into prior probability analysis.

I'm not saying that all of these black Americans reached their conclusions through dispassionate Bayesian analysis. But a dispassionate Bayesian would reach a similar conclusion about the likelihood of racism in police stops - and perhaps would have reached it more quickly.

Aapje:

Do you believe that media attention can only make people more accurate in their assessments?

Gramophone:

It didn't make the data more available - it CREATED data. Humans determine how common something is by how easy it is to think of examples.

Now think about what fun graphs like these do to people's frequency perceptions:

https://www.tabletmag.com/sections/news/articles/media-great-racial-awakening

It's the same reason we're concerned about airplanes falling but not everyday car accidents, and why school shootings feel very important while random murders and muggings don't even register. News articles are tragedies, stats are stats.

Garrett:

The New Jersey traffic study (an actual prospective study) showed that black drivers were stopped for speeding offenses more because they actually were speeding more. Of those of an identifiable race, they were more likely to speed than white drivers, and disproportionately more likely to speed at higher speeds.

teddytruther:

Could you provide a link? I did some Googling and am unsure which specific paper you are referring to.

TGGP:

"Trauma" seems specifically related to the concept creep that's been going on in psychology.

Melvin:

I don't see it as all that useful. One man's "trapped prior" is another man's "prior that's actually just really strong due to a bunch of good reasons". I don't want to be accused of having a "trapped prior" every time I fail to change my opinions in response to weak evidence.

Me: "I don't believe homeopathy works"

Some jerk: "I took a homeopathic remedy for my cold, and it worked, therefore it does"

Me: "That evidence is incredibly weak, I still don't believe homeopathy works"

Some jerk: "You have a stuck prior!"

WaitForMe:

I could see it being used that way, and you may be right. It could be useful if it results in self reflection, and if possible should be introduced to people in that context. But this is probably a case of my theoretical world not actually matching reality.

Aftagley:

Yeah... the "so what" of the original argument kind of escapes me.

WaitForMe:

I think maybe I'm at such a low point in my faith in human cognition that I'm looking for a defect that, simply by recognizing it within ourselves, could allow us to escape the trap and improve our reasoning which might lead to a more productive discourse. This is probably fantasy.

Wtf happened to SSC?:

> maybe it could help us break out of this emotionally charged era and lead to greater cooperation and less conflict.

Getting out of conflict is not the goal. Getting to a better world is. I refuse to listen to my political opponents because they have fooled me time and time again in the past into thinking they had principles, only to turn around and sell them out and make the world obviously worse. They took my good faith, they milked it, and then they sold me out.

If an approach continually leads you to the wrong answer and gets you surprised by the behavior of the people you're trying to model - and being open-minded towards the right does exactly that - then it's not rational to continue applying that approach. At some point you have to recognize that you have a modeling error that is currently undetectable to you and that you cannot rely on the apparent local correctness of an argument.

In ML terms: I keep having residuals that, in retrospect, point me too far to the right. So biasing left is how I get correct answers.

WaitForMe:

Getting out of conflict entirely is not the goal, that would require everyone to agree all the time which is never going to happen, but reducing conflict must be at least part of our aspirations in as much as war or extreme civil strife are not conducive to human happiness. I think I am hardly alone in thinking our current state of hatred for one another is not a state that will lead to progress.

I'll just posit one thought here, consider it or not. It is easier to see the faults in the policy preferences of the right because by and large they play the role of stopping things from getting done. You see them stop the raising of the minimum wage and say what the hell, obviously it should be more than $7.50 an hour. The faults and threat potentially posed by the left are harder to see because they don't appear until a program has passed. If you gave them free rein to pass what they wanted you might have more criticisms, but their agenda is not as easy to criticize because the future problems excess bureaucracy or regulation might create are not as visible as the problems that currently exist which the right refuses to address.

Wtf happened to SSC?:

> I think I am hardly alone in thinking our current state of hatred for one another is not a state that will lead to progress.

No, it won't, but that state was almost unilaterally created by one side that refuses point blank to come to the table.

> The faults and theat potentially posed by the by the left are harder to see because they don't appear until a program has passed.

That has never stopped me - or others with similar biases, many of them right here in these very comments - from constructing narratives about how they will destroy the world.

The reason I no longer *listen* to such arguments, even though they occur to me easily, is that they've failed to pan out. Gay marriage *didn't* destroy the family. Obamacare outright saved my life, allowing me to pay back what was spent on me many times over. Military bluster *didn't* solve the problems of extremism. The 'war on Christmas' hasn't even managed to stop me from going caroling with friends in the most liberal city in America. So the problems that conservatives claim will follow from liberal policy have largely failed to materialize.

Meanwhile, the same people panicked about all of those things were happy to sleep on every norm-breaking behavior of the Trump admin, right up to an armed attack on the seat of government. And then they excused *that*, too! So they fail to predict obvious problems, then excuse them when they happen.

I didn't start out as a partisan. I just have eyes. Yes, it would be better if we weren't in a conflict, but those without swords can still die on them, and only a fool unilaterally disarms against a foe that has broken every law of war in the past.

Mr. Doolittle:

Not to nit-pick a small part of what you overall wrote, but I've heard "armed attack" used a number of times to describe the events of January 6, but there were no guns involved. I understand some people got hit by sticks or similar items, but none seriously injured. Why is it being described as "armed"?

peak.singularity:

A confusion with some of these people actually carrying firearms, but then these are also the people that use the firearms as a symbol, so this would explain why there were so few deaths. (Also, there's that truck full of explosives; does this count as "armed"?)

WaitForMe:

I won't object to the claim that this state of hatred was created primarily by the right, but if it isn't owned up to in some fashion by both sides it can't improve. I vote Democrat even if I object to some of their proposals because the Republicans are a national embarrassment in their current form, and almost everyone I know is very progressive just because of the city I live in, and I can say confidently I have heard a lot of hateful rhetoric toward the right in my personal life that I feel is unfair. The hatred definitely crosses boundaries.

I think the politicians of the right are especially more toxic and hateful than those on the left, but even if the ratio is 80/20 or 90/10 or whatever you perceive it to be, I think it's helpful to acknowledge that some on the right feel it is exactly the opposite and without some admission that the hatred is bipartisan we're going nowhere.

And I'm not going to comment on those policies in detail, but know that I'm not saying I don't agree that progress comes almost entirely from the left side of the spectrum.

What I'm trying to say is that if the left got *everything* it wanted, without any reining in, we might end up with a lot of regulations that would make it difficult to do anything and hamper growth, or a lot of massive bureaucracy that would be inefficient and wasteful and lock us into a lot of spending that would be hard to get ourselves out of.

I think you can look at California to see some of the dangers of this. They have a very difficult time completing large projects because of all their regulations, a hard time building new housing, and despite the left being in complete control have some of the worst poverty in the nation. This isn't because their hearts are in the wrong place, it's because their solutions come with unintended consequences. It is the unintended consequences that worry people that I know, not the goals.

Expand full comment
Wtf happened to SSC?'s avatar

> I think the politicians of the right are considerably more toxic and hateful than those on the left, but even if the ratio is 80/20 or 90/10 or whatever you perceive it to be, I think it's helpful to acknowledge that some on the right feel it is exactly the opposite and without some admission that the hatred is bipartisan we're going nowhere.

Well, I don't. I think tribal identity is essentially zero-sum and that anything critical of the left benefits the right and vice-versa. And I think the actions of political elites - people who are experts on the subject and very incentivized to be strategic about it - suggest that they agree with me.

> but if it isn't owned up to in some fashion by both sides it can't improve

Sure it can. Wars end when one side wins.

> What I'm trying to say is that if the left got *everything* it wanted, without any reining in, we might end up with a lot of regulations that would make it difficult to do anything and hamper growth, or a lot of massive bureaucracy that would be inefficient and wasteful and lock us into a lot of spending that would be hard to get ourselves out of.

Maybe, but you're making the unphysical assumption that society wouldn't respond to such a movement. Right now, we're in a rough policy stalemate, where general leftward social shifts are blocked by Republican structural advantages in the Senate/SCOTUS.

But if that stalemate broke and allowed dems to start moving leftward, they wouldn't just be able to do so forever. Democracies have strong negative feedbacks. Lots of people currently aligned with the left would stop being aligned with it long, long before the point you're worried about, meaning the point you're worried about would never actually happen.

> I think you can look at California to see some of the dangers of this.

The state with by far the largest economy and the 8th highest GDP per capita among US states? The one famous for being the origin of nearly all the high-energy innovation and breakout corporate successes of the last 30 years?

Democrats can't both be wealthy out of touch elitists *and* evil anti-business regulatory demons who will destroy all economic progress. And if they're one of them, it's the former: GDP per capita correlates quite strongly (r = 0.68) with Biden margin of victory. The effect size isn't small, either: GDP per capita goes up $4,799 for every 10 points of Biden margin. Sure, there's lots of confounding factors here, but there are *always* lots of confounders, and some of them work in the GOP's favor (e.g. low-population, high-resource-extraction states like Alaska, Wyoming, and the Dakotas).
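For anyone who wants to see how figures like that correlation and slope are computed, here is a minimal sketch. The margin and GDP numbers below are made up purely for illustration (they will not reproduce the quoted r = 0.68 or $4,799 figures); the point is just the mechanics of a Pearson correlation and an OLS fit:

```python
import numpy as np

# Made-up (Biden margin, GDP per capita) pairs standing in for real state-level data.
biden_margin = np.array([-30.0, -15.0, -5.0, 0.0, 10.0, 20.0, 30.0])       # points
gdp_per_capita = np.array([48e3, 52e3, 55e3, 58e3, 63e3, 70e3, 78e3])      # dollars

r = np.corrcoef(biden_margin, gdp_per_capita)[0, 1]             # Pearson correlation
slope, intercept = np.polyfit(biden_margin, gdp_per_capita, 1)  # OLS fit, dollars per point

print(f"r = {r:.2f}")
print(f"slope = ${slope:,.0f} per point of margin (${slope * 10:,.0f} per 10 points)")
```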

I'd also point out that a *lot* of criticism of regulation - including on this very blog - is extremely focused on its first-order consequences and not second- or third-order downstream effects. So why does it only come under fire when the first-order conclusion is "that's why we should help black people" and not "that's why we should deregulate psychedelics"?

Expand full comment
Mr. Doolittle's avatar

You seem to agree with the idea that if the Democrats got all of the things they were currently interested in passing, there would be a massive backlash and the Democrats would lose upcoming elections for having gone too far.

I'm curious about your thoughts on a few related items:

1) Do you have any concerns about the current slate of Democrat bills and proposed bills, and possibly eliminating the filibuster?

2) Do you think the Democrats should not pass their full agenda because they are going too far in the eyes of the general population?

3) Do you think they should try to pass actual bipartisan legislation instead of 50+1 bills in the Senate?

Expand full comment
nickiter's avatar

Brings to mind a conversation I had with a politically radical relative, whose extremism on politics was noticeably ratcheting up at the time. I shared with him a study about how easy it is to get partisans to agree with false statements by presenting them in an appropriately partisan context. He responded by explaining at length how the explicitly false statements on his side of the issue were actually *true* and the study just proved even more how right his political beliefs were.

Very educational conversation for me... Changed my view of political belief a lot.

I definitely notice my strong prior against assertions made by certain parts of the political spectrum, but I'm not sure how much to fight that - if my prior is "this person or group has lied or misled in a large portion of the things I've heard them say" that seems like a useful, rational thing to allow into my judgement of new assertions. Or maybe I have a stuck prior which leads me to misinterpret the things they say, thus making them seem to be liars? Bit of a rabbit hole.

Expand full comment
Loren Christopher's avatar

For people it can make sense to have a strong dismissive prior, but be very careful with groups. Humans are bad about assuming outgroup homogeneity, don't keep group borders clear in their minds, and modern communications tech makes the problem worse. Applying negative priors to people who didn't have anything to do with the experiences that caused those priors is endemic to the internet age.

Expand full comment
Pycea's avatar

"This person has lied before and so this thing they're saying now is dubious" is probably a valid inference. One thing to watch out for though is making up or assuming evidence and then updating based on that. "Those Ankh-Morporkians probably believe the world is flat, that just shows how dumb they are." Or in other words, if someone who's lied before says something weird, it's fine to not believe them, but you're double counting if you update without verifying that they're actually lying in this case.

Expand full comment
DavesNotHere's avatar

Maybe I am just cynical, but I think the real danger lies in trusting the trusted sources too much, rather than automatically distrusting the distrusted.

Expand full comment
David Friedman's avatar

The best way to fight it is to investigate the truth of such assertions, preferably ones where truth or falsity is fairly easy to establish. If it turns out you were wrong, that should weaken the prior.

My first reaction to Scott's chess piece picture was that what he said couldn't be true. So I copied a region of one piece to a graphics program, copied the same region of the corresponding piece to the same program, and sure enough they were the same color. I still don't see it, but I believe it.

On the other hand, I've been trying to use that approach on other people via an old blog post where the fact I am trying to establish is one they can check for themselves quite easily, and have had close to zero success getting anyone to believe something they would much rather not believe. The last example was on DSL quite recently. Here is the post. It's only a relevant test for people on one side of current climate arguments — persuading people of things they want to believe is much easier.

http://daviddfriedman.blogspot.com/2014/02/a-climate-falsehood-you-can-check-for.html

Expand full comment
Sula Smith's avatar

You seem unsure whether you would want to reduce this prior. I would say probably yes, if you're interested in having more accurate analyses, but that isn't necessarily going to improve your quality of life.

Anyways, if you do want to change it, the obvious intervention is to expose yourself to very reasonable or unusual members of your political outgroup, even people who are close to but not members, etc. etc., and allow yourself to come to believe that there truly are kind and intelligent members of the group. It's important to remember that not all members have to believe all the same things for all the same reasons.

Expand full comment
DavesNotHere's avatar

Good advice. Maybe they know the answer to my question.

Expand full comment
Pedro's avatar

FWIW, some people think confirmation bias (i.e. interpreting seemingly unfavorable evidence in a way that favors your priors *and* almost exclusively searching for evidence that confirms your priors) is rational: https://www.kevindorst.com/stranger_apologies/confirmation-bias-as-avoiding-ambiguity.

Expand full comment
Kenny Easwaran's avatar

For people who aren't afraid of wading into the math and the algorithms, I think this paper does a good job of *proving* this: https://www.jstor.org/stable/43616913

(You can see the full article for free on the author's webpage here: https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxhbmRyZWFtYXZpd2lsc29ufGd4OjIzNmJkNTRhNmRjMjQzMGM )

Expand full comment
chipsie's avatar

This argument seems quite odd to me. He describes selective scrutiny as "epistemically rational", but doesn't really describe what that means. The way I generally use it, it means something like "the strategy that maximizes the extent to which your beliefs are correlated with the truth", but if any set of evidence always strengthens your own beliefs, you are effectively ignoring it entirely, which is worse for correctness in the long run.

Some other minor complaints:

- If two agents are following the same decision making strategy and are presented with the same evidence, and reach opposite conclusions, their strategy can't be any better than random chance (at least not in the binary case).

- This quote:

>Thus if you’re given a choice between two different cognitive searches—scrutinize Cruz’s argument, or scrutinize the NYT’s—often the best way to get accurate beliefs is to scrutinize the one where you expect to find a flaw.

>Which one is that? More likely than not, the argument that disconfirms your prior beliefs, of course! For, given your prior beliefs, you should think that such arguments are more likely to contain flaws, and that their flaws will be easier to recognize.

Everyone should expect to find a flaw in both the NYT's argument and Ted Cruz's. This is true regardless of where they are politically. This holds more generally as well. Most arguments are flawed whether they agree with my preconceived notions or not.

Expand full comment
Pedro's avatar

"He describes selective scrutiny as "epistemically rational", but doesn't really describe what that means."

It's in the title: expected accuracy (Brier score). That's as close as we can get to your definition of "beliefs that are correlated with the truth".
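For readers unfamiliar with the term: the Brier score is just the mean squared error between probability forecasts and the 0/1 outcomes, so lower is better. A minimal sketch with made-up forecasts:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Made-up example: three forecasts and what actually happened.
print(brier_score([0.9, 0.2, 0.6], [1, 0, 0]))  # ~0.137
```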

" but if any set of evidence always strengthens your own beliefs [...]"

That's not what he's arguing. It would indeed be irrational if literally any set of evidence always strengthened your credence in some proposition (that would violate probability theory: E and ~E cannot simultaneously raise your credence, since P(H|E) > P(H) -> P(H|~E) < P(H)). The main idea is that scrutinizing the NYT in this case would likely provide "ambiguous evidence" to him, which is bad for reasoning. Scrutinizing Cruz's argument, on the other hand, is more likely to provide unambiguous evidence, which would raise his accuracy. He elaborates on what he means by ambiguous evidence in this other piece: https://www.kevindorst.com/stranger_apologies/how-to-polarize-rational-people
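The implication holds because, by the law of total probability, P(H) = P(H|E)P(E) + P(H|~E)P(~E) is a weighted average of the two conditionals; if one sits above P(H), the other must sit below it (assuming 0 < P(E) < 1). A quick numerical check, with arbitrary numbers:

```python
# Arbitrary numbers chosen only to illustrate the identity.
p_e = 0.3                      # P(E)
p_h_given_e = 0.8              # P(H|E)
p_h_given_not_e = 0.4          # P(H|~E)

# Law of total probability: P(H) is a weighted average of the two conditionals,
# so it must lie strictly between them whenever they differ.
p_h = p_e * p_h_given_e + (1 - p_e) * p_h_given_not_e
print(p_h)                                   # 0.52
print(p_h_given_e > p_h > p_h_given_not_e)   # True
```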

Expand full comment
chipsie's avatar

>It's in the title: expected accuracy (Brier score). That's as close as we can get to your definition of "beliefs that are correlated with the truth".

Thank you for this clarification.

> The main idea is that scrutinizing the NYT in this case would likely provide "ambiguous evidence" to him, which is bad for reasoning. Scrutinizing Cruz's argument, on the other hand, is more likely to provide unambiguous evidence, which would raise his accuracy.

I have read the second piece you linked, and I think I understand the difference between the ambiguous and unambiguous evidence, but I'm still not sure what you mean by this in particular.

Scrutinizing Ted Cruz would cause him to conclude that Ted Cruz's argument is flawed. This is ambiguous evidence about the correctness of his position, because there are many people making flawed arguments in support of every position.

According to him, scrutinizing the NYT would likely lead him to conclude that the NYT's logic is sound. Isn't having sound logic unambiguous evidence that they are correct? There is a source of ambiguity, that being that it might be flawed in a way he cannot recognize (I assume this is what you are talking about, please correct me if I am wrong), but that depends on your skill at recognizing flaws in arguments. The existence of a good argument can't possibly be less epistemically useful than the existence of a bad argument (which is inherently worthless in any direction).

Expand full comment
ADifferentAnonymous's avatar

>It's in the title: expected accuracy (Brier score).

The elided question is, accuracy of what?

The natural interpretation is to assume this means accuracy on the original question (i.e. whether the Republican or the Democrats are right), but in fact the author means *the accuracy of your search for flaws*.

This is like saying that searching for your keys under the streetlight maximizes accuracy. It maximizes the accuracy of your conclusion about whether or not the keys are in the place you searched, but not the chance of finding your keys.

Expand full comment
Scott Alexander's avatar

Agreed - that was what I was trying to say with the polar bear example (and in the linked post https://slatestarcodex.com/2020/02/12/confirmation-bias-as-misfire-of-normal-bayesian-reasoning/ )

Expand full comment
MaksIM's avatar

I think one has to be a bit careful here, as it seems to me that the model of "rational" in the (technical appendix to the) linked post of Kevin Dorst is a bit different from the "classical" rational model, say a la Jaynes. In Jaynes's model, you are certain you are rational. Two agents reasoning on the same evidence from different priors can end up more polarized after seeing the evidence than before. This (as was pointed out by ajb in the comments to your older linked post) is argued already in Jaynes (Chapter 5.3). In Dorst's model, one very much _can_ doubt one's rationality. I think (though I am not very sure about this) it is precisely this "possibly I'm irrational" allowance that permits "confirmation bias from selective scrutiny" to be rational in his model. Note that it is also about a different thing -- not just divergence of opinion from the same evidence, but actively choosing which arguments to scrutinize.

Expand full comment
Pycea's avatar

This article is a little weird. It seems like the author is making the claim that scrutinizing arguments you expect to have flaws is a better use of time, because finding a flaw in something you disagree with gives you more evidence than not finding one in something you agree with. But this is like the drunk looking for his keys under the lamp post, not because that's where he lost them, but because that's where it's easy to look. Repeated updates on things you expect to happen aren't all that useful epistemically. It feels like a 2000-rated chess player beating a 1000-rated player over and over and saying how good they are at chess.

For one thing, would the author endorse applying the same principle to, say, horoscopes? They talk about ambiguous evidence, but that feels like weaseling out of defining what counts as ambiguous.

He also seems to treat evidence dubiously:

> In both cases, if I find what I’m looking for (a problem with Cruz’s argument; a word that completes the string) I get strong, unambiguous evidence, and so I know what to think (the argument is no good; the string is completable). But if I try and fail to find what I’m looking for, I get weak, ambiguous evidence—I should be unsure whether to think the argument is any good; I should be unsure how confident to be that the string is completable.

This seems exactly backwards to me. You can only strongly update so many times, and there are a lot of bad arguments out there. If you give an event a 90% probability of happening, you can only update weakly when it happens, and should update strongly if it doesn't. I initially thought he was saying the opposite, that if you find a solid argument against your prior then you have a big update, but it seems he doesn't even go that far.

> For example, as I was setting out to scrutinize Fox’s op-ed, I could expect that doing so would make me more confident in my prior belief that RBG’s seat should not yet be replaced.

Then why bother reading it at all?

But all this seems secondary to the fact that most people don't think like this. Republicans don't scrutinize the NYT more, more often they just don't read it. Democrats usually don't spend time finding flaws in Fox News, they just dismiss it as crazy. (I also notice that in the example they give in the beginning, both sides can be right. The Republicans defying precedent doesn't mean the Democrats can't also be violating democratic norms via court packing.)

Expand full comment
chipsie's avatar

>This seems exactly backwards to me. You can only strongly update so many times, and there are a lot of bad arguments out there. If you give an event a 90% probability of happening, you can only update weakly when it happens, and should update strongly if it doesn't. I initially thought he was saying the opposite, that if you find a solid argument against your prior then you have a big update, but it seems he doesn't even go that far.

I would take this even farther. There are tons of terrible arguments being made for literally every position that you could possibly take. Discovering any particular bad argument should not cause you to update at all because:

1. Conservation of expected evidence

2. The existence of such arguments is not correlated with the correctness of the position (because they always exist)

I'm not sure that this is really what he is saying (see my confusion in the other subthread), but if it is, that is a huge error.
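To make point 2 concrete, here is a minimal sketch with arbitrary numbers (not anything from the linked papers): if an observation like "somewhere there exists a bad argument for X" is near-certain whether or not X is true, the likelihood ratio is roughly 1 and Bayes' rule leaves the prior essentially untouched.

```python
def posterior(prior, p_obs_given_h, p_obs_given_not_h):
    """Bayes' rule for a binary hypothesis."""
    numerator = prior * p_obs_given_h
    return numerator / (numerator + (1 - prior) * p_obs_given_not_h)

prior = 0.5
# "Someone, somewhere, makes a bad argument for X" is almost certain either way,
# so observing it tells you essentially nothing about X.
print(posterior(prior, 0.999, 0.999))   # 0.5: no update
# Contrast with an observation that is genuinely more likely if X is true.
print(posterior(prior, 0.9, 0.3))       # 0.75: a real update
```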

Expand full comment
Pycea's avatar

If you're talking about a random bad argument, then sure. You might be able to save part of it if you're talking about a notable figure, i.e. if even (Ted Cruz|the NYT) can't make a good argument for it, maybe that's some (albeit weak) evidence that there isn't one, or at least that the argument most people on the other side rely on is wrong.

Expand full comment
Aapje's avatar

@chipsie

There is rarely just one position though. Often there are a huge variety of positions that we often equate with each other, because they seem or are similar at an abstract level, but which actually differ a huge amount if you zoom in. The devil is often in the details.

For example, 'we should reduce police violence' can mean anything from better training and policing of the police, to getting rid of the police. Or a more extreme example, 'we should fight communism' can actually mean anything from reducing inequality to decrease the level of discontent, to murdering all the Jews.

If society decides to try to achieve a goal, it is likely that the politically most powerful get to make the policy, rather than the people with the best arguments. So it's actually very justifiable to change your high level goals depending on the policies that are most likely to be used to achieve them, rather than based on the best policies that are possible.

Expand full comment
Pedro's avatar

You and chipsie make good points, I'd have to read his papers and simulations to understand the argument better.

Expand full comment
Mark's avatar

Doesn't this assume that the probability of an argument being good correlates well with the accuracy of the conclusion? This seems dubious. It's perfectly possible that 90% of arguments for true and false positions alike are bad arguments. Moreover, it seems likely that essentially all arguments in certain fora (like newspaper opeds) will be insufficient or incomplete.

E.g., most arguments for/against raising the minimum wage are insufficient and would be 'recognized' as flawed by opponents/proponents. "Raising min. wage will boost poor people's income: no it won't, because it'll cause disemployment, cancelling out wage gains" "Raising minimum wages will kill jobs: no they won't, because employers are monopsonists." etc. Of course, a 'good' argument would necessitate a thorough examination of the empirical literature on the extent to which low-wage employers are monopsonists (and maybe also monopolists), something almost no oped, tweet, or conversation will include.

Expand full comment
Kenny Easwaran's avatar

The common thing that is often mentioned in the context of talking people out of an epistemic "echo chamber" or out of a radicalist cult, or even just out of garden-variety homophobia, is getting to know a friendly person who is from the outside, and gradually coming to like them and eventually trust them and listen to them. This really sounds a lot like the desensitization therapy that works - it's never going to help if Ted Cruz or AOC themself come to get to know someone and show that they don't bite, but if someone who seems nice and isn't already processed as a political symbol gradually reveals their niceness while also gradually revealing their partisanship (or the fact that they're gay) then it can overcome the defense mechanism.

Expand full comment
Erwin's avatar

I fully agree.

This calls for a place where people get mixed enough to have many chances to get to know each other slowly, but flexible enough to keep the distance they need to feel safe. Traditionally there are several such places in society, like religious gatherings, shops or lively city centers, schools, military service, etc. But of course it's more convenient to just stick with people you know, or at least know to share your culture. Many otherwise great modern developments, like more automation, long-distance communication and much easier transport of people and goods, make it easier to avoid any contact with people you don't already know to be like-minded.

I think we as a society have to consciously create and value such places where all kinds of people meet and get to know each other slowly without pressure.

Expand full comment
Jorge I Velez's avatar

I know of a place just like this: Burning Man.

Expand full comment
Melvin's avatar

Where do I find the Young Republicans tent at Burning Man?

Expand full comment
Jorge I Velez's avatar

No idea. I have always found Burning Man to be pretty apolitical

Expand full comment
DABM's avatar

I doubt many Republicans go, but anti-tax campaigner Grover Norquist is a big fan: https://www.theguardian.com/commentisfree/2014/sep/02/my-first-burning-man-grover-norquist

'Before my wife and I arrived in Nevada last week, we were showered with kind comments from Burners disassociating themselves from the idea that Burning Man belongs to any political camp. Indeed, I found political allies who gave me wonderful advice – they had been participating for years.'

Expand full comment
Demeter's avatar

I agree with you- but how? In truly neutral public gathering spaces (say, a public square or a park), strangers don't tend to talk to each other. In gathering spaces that revolve around a common interest, strangers will have more reason to interact, but they will also have more in common to begin with. Church-goers find the church that fits them best, kids who join the military come from military families, etc. How do you create a gathering space that will connect people of radically different backgrounds (besides this blog)?

There's a beer shop in Medford with public tables where people gather to drink and play board games. I've seen a random mix coalesce there. But it seems so rare.

Expand full comment
DavesNotHere's avatar

Can we generalize from this? Does it always move people in the right direction? If the gay dude and the homophobe are also on opposite sides of the $15 minimum wage issue, would it help them resolve anything?

Expand full comment
Kenny Easwaran's avatar

I think that if a family member that you already know and like and have warm feelings about comes out as gay, or comes out as having different feelings about the minimum wage than you, that can help talk through the divide. With homophobia it's pretty easy, because basically just knowing that gay people are people you can like, and who can be nice, is exactly the entire issue. With the minimum wage, if you have positive feelings towards each other, then that can allow you to have meaningful discussions about things, but it's less clear which direction people's policy preferences would move. The important thing is that they wouldn't necessarily demonize each other.

One problem with greater issue polarization is that families are probably now more likely to be homogeneous on policy fronts, so they have less chance to get this kind of warm feelings across the divide. And when there is division, either the one family member is divergent on just a single policy, and then gradually gets swayed to the family view, or the one family member is divergent on all policies, and gets gradually alienated from the family.

Expand full comment
Kenny's avatar

This reads a lot like what Daryl Davis does too.

Expand full comment
Dave Slutzkin's avatar

A cute example for me was this sentence, from your post:

"Your prior that they're bad has become trapped."

I couldn't look at this sentence without seeing a typo. The two words "your" and "they're" in close proximity told my brain that one of them must be wrong. Weird.

Expand full comment
Zach's avatar

as far as the relationship with political polarization goes i think this is a good argument for policies which increase diversity. if you have a trapped prior about people of type x, being exposed to lots of people of type x will be required to overwhelm your prior. american history x is a great dramatized version of this. or daryl davis' stories. so ubi/remote work so that urbanization decreases somewhat, or maybe even something like mandatory federal service.

Expand full comment
Hari Seldon's avatar

Couldn't this article just as easily be used to argue *against* policies which increase diversity?

I could see the reasoning thus:

- Very gradual habituation (showing pictures of cute puppies, then moving up to a puppy in a cage nowhere near you, etc. until you're playing with the Rottweiler), when tuned to the individual, seems to work.

- Flooding (dumping you in the room with the Rottweiler right off the bat) does not seem to work. If anything, flooding seems to deepen the trap your priors are caught in.

- Policies made with the stated purpose of increasing diversity are not likely to have the nuance of gradual habituation tailored to you; it's more likely that they'd act as flooding, and so trap your prior about people of type X being bad.

Expand full comment
Adam's avatar

It probably needs to happen early, as in it won't do much good to have diversity in a workplace if the workers didn't grow up early in diverse environments. Need diverse families, I guess, but that is seemingly pretty hard.

Or if we're going to go full-on rationalist with no bias toward how humans currently behave, force diverse families by randomly assigning spouses and parents instead of keeping people together with whoever gives birth to them and letting you choose who to breed with.

Expand full comment
Erwin's avatar

Of cause the society or state can't and shouldn't get that deeply involved in families.

But this is actually a strong point for keeping the state involved where it traditionally influences the environments people grow up in: urban planning and schools can be organized in a way that there isn't too much building of bubbles without diversity.

And this is not only about the diversity usually discussed, like race, gender or political party; this is also about class, talents (like intelligence), subcultures, hobbies, or being emotional or vulnerable.

For me this was a strong point in favor of some kind of mandatory schooling in another discussion. 'Schools' just as a place where you get exposed to the diversity of society and learn to deal with it. Of cause this should be organized in a way to be a safe learning space. 'Safe' in the sense of unlikely to do any harm, but it doesn't have to be always comfortable.

This discussion about schools was a good example of the influence of bias, as many people seemed not to get my point, because 'school' was too much connected with their bad experience in the current dysfunctional schools.

Expand full comment
Zach's avatar

definitely agree about schools but schools reflect the geography where they are located, which ofc speaks to your point about urban planning, but to me that is really dependent on the geography of the economy, hence the importance of things like ubi and remote work.

Expand full comment
Ghillie Dhu's avatar

I wouldn't've bothered replying if I hadn't seen the same malapropism twice in close succession.

When you say "of cause", I think the idiom you're looking for is "of course".

Expand full comment
Erwin's avatar

Thanks for the hint, I'll try to remember. I'm just not a native speaker, and automatic spell checking doesn't help in this case.

Expand full comment
DinoNerd's avatar

The problem with schools is that school-age children and teens are often little monsters, personally dedicated to status competition at a level of viciousness that isn't currently tolerated among adults in most places. These kids will tend to replicate the usual status hierarchies, and teach each other their proper place. There will be extra factors as well - family money/power, and membership in higher- or lower-status variants of race, gender, occupation, cultural background, etc. - and some especially self-confident or manipulative kids may break the mold, successfully battling for higher status in spite of being mere scions of subaltern groups. Most won't, and your hypothetical subaltern scion is left knowing that some members of higher-status groups are directly nasty to them, perhaps because they enjoy hurting people; that many others of all groups - including their own - watch without doing anything to interfere; and that a very few either try to interfere with these status games, or treat non-members of their group as human beings just as worthwhile as members of the dominant group.

I don't have a solution, but I do feel the need to point out the viciousness committed by children against other children, and the tendency of group status factors to be a major determinant of who gets victimized and who doesn't, though by no means the only one.

For the record, I was a "nerd", aka supposed to be a punching bag for my social superiors in that noxious atmosphere, even without any specifically ethnic factors.

Expand full comment
Erwin's avatar

I was a nerd too, and I was used as a punching bag, but I was lucky that it was a very small school with good teachers, and my parents were able to give me very good mental support in withstanding it. This is what makes me confident that it is possible to organize it in a way that helps more than it harms. On the other hand, imagine not having learned this lesson as a child; it would be much harder for you to deal with this kind of people when the first time you meet them is as an adult. Even if they are more moderate then, it would catch you unprepared. But I also think of the other side: it could be important to learn by experience at a young age that fighting hard for group status is selfish, hurts others and doesn't play out well in the long run.

Expand full comment
Nancy Lebovitz's avatar

I once saw a claim that school integration in the US ignored then-current science, which held that integration before puberty lowered prejudice but that it didn't work after puberty.

Anyone know whether this was science back then, and/or whether there's any truth in it?

Expand full comment
DavesNotHere's avatar

Mandatory = backlash and abuse

Expand full comment
Zach's avatar

the things i suggested would all increase diversity rather gradually, perhaps with the exception of mandatory federal service. but of course you could modify that policy so that it would (that is exactly what the US did in ww2). in any case it isn't clear that the "trapped prior" of political polarization is as calcified as something like a dog-phobia. so the original "brutal" method might work just fine.

Expand full comment
Aapje's avatar

It's very hard to do that, since groups have the tendency to clump together.

And if you introduce 1 Blorp into a large group of Blergs, the Blergs may see a gradual increase in diversity, but the Blorp is overwhelmed.

Expand full comment
Hari Seldon's avatar

To argue against my own comment and in favor of Zach's, if policies are set up well, then the occasional Blorp who chooses to move in with the Blergs would be much more likely than the average Blorp to not have a trapped anti-Blerg prior; the Blergs would see a gradual increase in diversity, and the Blorps would be those Blorps least likely to be overwhelmed by living in Blerg country.

But, if policies are set up badly, what could wind up happening is that the Blorps who move in already have trapped priors about their new Blerg hosts and vice versa, and the numbers constitute flooding for both parties, assuming that Scott's trapped prior / habituation vs. flooding analogy holds at all for this situation.

Expand full comment
Aapje's avatar

But those priors are often due to enculturation, not because of policy. Having a solution that includes: 'we take these uncorrupted people and...' is just not a realistic policy. It's the Noble Savage/Magical Negro trope.

Expand full comment
Zach's avatar

i am aware of schelling's segregation model and so on. there are a lot of dimensions on which mixing can occur. if there are a sufficient number of dimensions, people won't sort on them all, and because not all are super correlated you'll still get an effect.
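For readers who haven't run into it, here is a minimal toy sketch in the spirit of Schelling's model (a one-dimensional simplification, not the original): even a mild preference for same-type neighbors on a single dimension tends to produce clumping.

```python
import random

def step(world, radius=2, threshold=0.5):
    """One round of a toy 1-D Schelling-style model on a ring.

    'B' and 'R' are the two types, '.' is an empty cell. An agent is unhappy
    if fewer than `threshold` of its occupied neighbors share its type, and
    unhappy agents move to a random empty cell.
    """
    n = len(world)
    empties = [i for i, c in enumerate(world) if c == '.']
    for i in random.sample(range(n), n):
        if world[i] == '.' or not empties:
            continue
        neighbors = [world[(i + d) % n] for d in range(-radius, radius + 1) if d != 0]
        occupied = [c for c in neighbors if c != '.']
        if occupied and sum(c == world[i] for c in occupied) / len(occupied) < threshold:
            j = random.choice(empties)        # relocate the unhappy agent
            world[j] = world[i]
            world[i] = '.'
            empties.remove(j)
            empties.append(i)

random.seed(0)
world = [random.choice('BR.') for _ in range(60)]
print(''.join(world))                         # mixed at the start
for _ in range(30):
    step(world)
print(''.join(world))                         # typically ends in long same-type runs
```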

Expand full comment
Pycea's avatar

Alternatively, it just turns into a bunch of "oh you're one of the good ones", who then go on to justify their racism by saying they have black friends.

Expand full comment
Zach's avatar

dose was too low then.

Expand full comment
Aftagley's avatar

Having a developed and respectful friendship with someone of another race is, imo, not-insignificant evidence that you probably don't have an explicit bias against a member of that other race.

Sure - if it's framed like you said, as your friend only being worthy of that status because they're "one of the good ones" - yeah, that's shitty; but I think the prevailing culture's absolute rejection of friendship as a metric against bias is somewhat insidious.

Expand full comment
Zach's avatar

agree. and would point out that if that isn't the case then that person isn't really a friend imo but a prop.

Expand full comment
GunZoR's avatar

"Having a developed and respectful friendship with someone of a another race is, imo, not-insignificant evidence that you probably don't have an explicit bias against a member of that other race."

This seems plausible. I agree that it's not-insignificant evidence. But ultimately it's not conclusive evidence, nor does it solve the problem of defining the vague term "racism"; e.g., to many people, the guy in your example without the explicit bias would still be considered racist if he has an implicit bias, and, to Stalinists, even if he merely has an implicit bias often or only sometimes.

Expand full comment
Wtf happened to SSC?'s avatar

Your average racist doesn't think 100% of all black people are bad. It's hard to be that racist; even old timey Civil War-era racism often didn't reach that level (and I assume we all agree that they were, in fact, suuuuuuuper racist?)

Instead, they have some central image of a race that acts as a prior. A racist might work with a black professional they like, but still have "gangbanger from a blighted ghetto who plays music too loud" as a central mental example of a black person. Note the way they're entangled: "gangbanger" *is* actually bad, but it gets entangled with "has different norms about music volume" (which is just cultural) and with poverty and class. And because of that entanglement, they're going to misinterpret data about the world: any black person who plays loud music (even one who *isn't* a gangbanger) pattern-matches to this central example and is used as evidence to reinforce it.

Their nerdy black co-worker who listens to classical music, speaks an upper-class Standard American dialect, and grew up in a gated community doesn't count, because they lack *any* of the traits of the central example. In particular, they lack both the actual bad traits *and* the vague cultural associations. This lets them escape the trap. But that doesn't mean our hypothetical racist isn't racist, it just means their racism is not wholly all-encompassing.

Expand full comment
Aftagley's avatar

Starting out with the old timey civil war-era racists: They fundamentally didn't believe in the basic equality of humans. They literally thought that one race deserved to be subordinated to another and were willing to fight a war to defend that belief. They didn't need to believe that 100% of black people were bad, because they 100% didn't believe that blacks were people.

Now let's get back to today. Let's say someone makes the claim, "I am biased against people who fit into the "Gangbanger" category but am not biased against black people."

The counter argument you're proposing to that would be, "No, you're actually just racist since you're going to implicitly associate all black people into the "Gangbanger" category."

At this point, the first guy being able to produce several concrete examples of friends that disprove this association seems like strong evidence that no, in fact, the first person doesn't think all black people are gangbangers.

I'm getting hung up on the whole "doesn't count" part of your post, because saying anything "doesn't count" in an argument just fails for me. If the evidence for your claim matters, but the evidence against your claim doesn't count, you've got a bad claim.

Expand full comment
Wtf happened to SSC?'s avatar

> The counter argument you're proposing to that would be, "No, you're actually just racist since you're going to implicitly associate all black people into the "Gangbanger" category."

No. It's that you're going to implicitly associate *many* black people into that category, and that you'll do so more eagerly than you do white people. This'll be true in both "rational" senses (i.e. you have a different prior) and irrational ones (you'll pattern-match in a binary way that isn't rigorous and thus lock that prior).

> At this point, the first guy being able to produce several concrete examples of friends that disprove this association seems like strong evidence that no, in fact, the first person doesn't think all black people are gangbangers.

It isn't, because it's not a random sample. It's selection-biased by the fact that those are the black people who got past their bias in the first place. This is the equivalent of producing a 90 year old smoker to prove that cigarettes don't cause cancer.

Expand full comment
Joshua's avatar

There is another factor that comes into play when this relates to politics. For someone with cynophobia, the material they are being presented with is actual evidence: whether it is pictures or direct experience with a dog. But in politics, the vast majority of the discussions do not revolve around evidence, but around the reporting of evidence. Nobody outside of masochistic bloggers and other scientists is reading the original scientific studies about epidemiology, sociology, climatology, etc. They are reading reports and summaries about the studies produced by think tanks, academic institutions, professional societies, politicians, or more often the media. So it is not evidence that a voter is being presented with, but reports from intermediaries. As long as everyone trusts those intermediaries to the same degree, the argument holds, but if someone has priors about the agenda or trustworthiness of the intermediaries, then that becomes the relevant prior.

And it's not just the prior about trust in the reporting, it is the prior about what you do next. Let's say you are a staunch Republican who believes that climate change is an overblown issue being used by the left to advance their political agenda. You hear about a report that climate change is a problem. Even if you think of yourself as a rational person who wants to keep an open mind, you look into that report, and find out that it was produced by a group of scientists at an Ivy League university in collaboration with scientists in Europe, and has already been trumpeted as a clear argument for banning fossil fuels by a liberal organization. So at this point, before you've even had a real chance to read a sober account of the report by, say, a neutral expert that you trust (assuming one exists), you've already had 5 or 6 different priors triggered, all reinforcing your skepticism.

It is not just the encounter with the evidence where you engage your prior, it is the other 10 encounters on the way to the evidence that already have associated priors. It's like a person with cynophobia being told they are going to be taken to a room with a cute puppy in a cage, but on the way to the room they can hear dogs barking, smell the dogs, and see leashes and bones scattered on the floor. It is a really hard thing to then still walk into the room. To provide a clear path to consider evidence that has political implications, you need a trusted source to explain it; one that you can turn to before the dozen partisan - or perceived partisan - entities can yell you back into your priors.

Expand full comment
tomdhunt's avatar

I think this point is important and is a major differentiator between the basic case, e.g. a phobia, and the political bias case.

Epistemology is hard on its own, but it's even harder in a social context with potentially deceptive actors. Our brains are, if anything, more evolved for this case than for the basic case of simple physical systems (cf. the common human instinct to attribute personality and human motivation to physical systems like storms). And in the presence of deceptive actors, it's often correct behavior to pick a piece of evidence and intentionally disregard it entirely. Letting a manipulator have any input into your world-model at all is a potentially costly mistake.

In theory, basic Bayesian math can handle this case if you explicitly model all the links in the chain -- P(this guy is lying to me | he claims X is true & he's known to believe Y) and so forth. But in practice it seems this is often intractable, particularly given how often we're now expected to take the word of people or institutions with which we have no prior experience. As a simplifying optimization, "disregard all evidence suggesting !X" (where X is any strongly-held prior) seems like it might be instrumentally rational in sufficiently low-trust social contexts.
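As a minimal sketch of that "model the links in the chain" idea (a toy model with arbitrary numbers, not anything from Dorst or the comment above): a source that lies with some probability simply attenuates the update its testimony can justify, and at full distrust the testimony carries no information at all.

```python
def posterior_after_claim(prior, honesty):
    """P(X | source asserts X), for a toy source that reports the truth with
    probability `honesty` and asserts X regardless of the truth otherwise."""
    p_claim_if_true = honesty + (1 - honesty)    # = 1 in this toy model
    p_claim_if_false = 1 - honesty
    return (prior * p_claim_if_true) / (
        prior * p_claim_if_true + (1 - prior) * p_claim_if_false
    )

for honesty in (1.0, 0.8, 0.5, 0.1, 0.0):
    print(honesty, round(posterior_after_claim(0.3, honesty), 3))
# As honesty falls toward 0, the posterior collapses back to the 0.3 prior:
# the testimony of a source you fully distrust carries no information.
```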

Expand full comment
Erwin's avatar

This feeds my bias that trust is an essential thing for a working society. Many current problems arise from, or are boosted by, decreasing trust (which is often justified).

Perhaps this would make for an interesting article on its own, looking at the causes and effects of trust in society.

Expand full comment
Demeter's avatar

Along the same lines, I think politically-coded writing style also prevents people from believing evidence that is presented by a member of the other tripe. The Democrats and Republicans tend to use different vocabulary and reference different authorities. If one reads an article written by the other camp, their trapped prior may be triggered by the use of terms they find inappropriate, positive references to public figures they dislike, and negative references to public figures they support. Back when I first made the decision to read publications from across the political spectrum, it was hard for me to get through articles that triggered my own political bias. Re-reading those articles, I realized I wasn't hung up on the central premise, but a dozen tribal speed bumps.

Expand full comment
Demeter's avatar

Lol I wrote tripe instead of tribe. Heck, it's funny, I'll leave it.

Expand full comment
A.'s avatar

It's not just about reporting. There's been enough scientific malpractice related to political issues that there's really no reason to trust a study claiming anything political anymore, unless you really trust the person who produced that study. (It's not that scientists writing papers on non-political topics are completely blameless, either, but I would really like to hope that genuinely bad behavior there is more of an anomaly.)

Reviewing a scientific paper is hard. Finding out what all reviewers missed about this paper is even harder. (Sometimes the reviewers will just blow it off and not even try to understand it, so you will be doing their job when trying to understand it.) Mistakes or manipulation might be at absolutely any level - for all you know, the original data they are starting with, the numbers which you might not be able to verify in any way, might be garbage.

If Scott did a piece of research, made all of its pieces public, and had commenters on this blog poke at it for a bit to see if there were any mistakes, then I bet a number of people who didn't originally agree with what was shown in the paper would change their minds, or at least become less convinced that they were right. But when a scientist we don't know much about does a study in collaboration with a bunch of students we don't know anything about, why would anyone change their minds based on it?

Expand full comment
Joshua's avatar

I don't think that is accurate. While scientific malpractice does happen, it is extremely rare. And it is even rarer on politically hot issues. When it does happen, it tends to be associated not with political issues but financial ones, where individual scientists stand to reap external financial benefits and remunerative professional rewards. It is more common in biomedical sciences than climate science (nobody goes into climate science for the money), and even then it rarely works. And the idea that the whole scientific literature in some field is contaminated by malpractice is just wrong. Like any other human activity, there are bad actors, but for fraud to be that pervasive you'd have to have a massive conspiracy of essentially every scientist in the field agreeing to participate. And that is especially hard because the professional rewards for producing a paper that shows that the work others are producing is clearly bad science are massive. In fact, that's the basis for the whole system. Science works because the biggest rewards go to the biggest snitches.

When accusations of scientific malpractice are made on politically controversial issues, they are nearly always made by nonscientists who do not understand science. See the "controversy" about the climate research at University of East Anglia where people literally did not understand the scientific meaning of the word "trick".

That is not to say that the scientific literature is perfect - or anything close. Just that errors are more the result of incompetence than malevolence. The review process is indeed hard, and many reviewers don't do a great job. But remember that the review process is only designed to catch obvious and open errors; it is not designed to address fraud. But ultimately if you are producing work saying something wrong, reality will catch up to you, since the reward for demonstrating that is huge.

It's also worth noting that this entire issue is separate from the irreproducibility crisis in the social sciences. That is more related to a field where the publishing incentives are indeed skewed against proving others wrong, along with an embarrassing amount of incompetence at understanding statistics. But again, that is not an issue of malpractice, or politically motivated fraud.

The final point above is right though. If a bunch of scientists do a study that another bunch of scientists verify or vouch for, why should anyone believe it? But that goes for anything. If you've never been to England, how do you know that England isn't just an elaborate conspiracy of cartographers? Our ability to know anything outside of our immediate experience is inversely proportional to our belief that there is an active conspiracy among all those who do experience it to lie to us. The journalism profession is nominally intended to provide exactly that - a set of people who we should be able to trust to a reasonable degree not to lie to us. And the ones that do lie will not be able to make as much money because people won't buy what they're selling. Once we think they are all liars, our knowledge of the world becomes confined to the circle of people we immediately know and trust, or to whatever organizations out there have made the right signals to show that we can trust them, whether it is the flags they wave, the skin color they do or don't like, or the people they hate.

Expand full comment
Allan's avatar

Hmmm... not sure I buy into the belief that scientific malpractice is rare. The replication data in the social sciences are really depressing. Social psychology is a particularly egregious discipline. See the work by Brian Nosek and the Center for Open Science for an eye-opening and rather sad experience.

Expand full comment
Joshua's avatar

Definitely agree that it is a major problem, and potentially strikes at some of the foundations of the field. But as I understand it, the problems are not malpractice per se, but just a wholesale lack of scientific rigor, and a refusal to properly restructure incentives within the field to support better science.

I guess at a certain point, "a wholesale lack of scientific rigor, and a refusal to properly restructure incentives" counts as malpractice, but at that point we're at a category debate about that term. I teach classes in physical science at a university, and there is a pretty strong argument to be made that the entire secondary education system worldwide is guilty of educational malpractice so long as people still teach classes with lectures as opposed to active learning. But systems are slow to change, and I'm not sure when you can draw the line between reluctance to adopt better practices and malpractice.

All that said, there is a ton of bad science out there, but it is mostly due to shoddy work, lack of rigor, rush to publish, and careerism, rather than an evil guy planning a hoax to dupe the world into believing false conclusions.

Expand full comment
Allan's avatar

Fair points all. At the same time, there is a very fine (blurry?) line between mere sloppiness and allowing one's careerist interests to cloud one's judgement about data mining, not reporting the thousands of regressions that showed nada, or reverse-engineering hypotheses to fit random 'solid' results. These problems have real and significant impact, perhaps most perniciously of late with the implicit bias work. The data are very mixed but are presented in organizations as truth. Seems like a slippery slope.

Expand full comment
tomdhunt's avatar

> And the idea that the whole scientific literature in some field is contaminated by malpractice is just wrong. Like any other human activity, there are bad actors, but for fraud to be that pervasive you'd have to have a massive conspiracy of essentially every scientist in the field agreeing to participate.

This is false. All you need is a majority of the other scientists in the field going along to get along, or remaining quiet about their private doubts, while the bad actors trumpet things loudly and play the media. Then the few who feel compelled to go public about the issues can be smeared as cranks.

> And that is especially hard because the professional rewards for producing a paper that shows that the work others are producing is clearly bad science are massive.

This is false. Attacking a pillar of the field is very rarely a reputation-positive move, no matter how correct you turn out to be. (Indeed, it's often _worse_ if you're correct, since in the absence of an easy factual refutation they'll often turn to personal smears instead.)

This is true even absent external political pressure, cf. "science advances one funeral at a time". The self-mythology that scientists promote is in fact mythical; science is a social endeavor like any other, and in any social endeavor it's a bad idea to directly challenge your highest-status peers.

> It's also worth noting that this entire issue is separate from the irreproducibility crisis in the social sciences. That is more related to a field where the publishing incentives are indeed skewed against proving others wrong, along with an embarrassing amount of incompetence at understanding statistics.

Is there any reason to believe that the same incentive structure that pushes against proving others wrong in the social sciences does _not_ apply to the physical sciences, and every other academic field?

Seriously, this is just repeating the Science Mythology again. In order to achieve understanding you have to look under the mythology and see how things actually function, and it's rarely in reality so cleanly in accordance with the flattering self-image of the field's denizens.

Expand full comment
Gramophone's avatar

I'd buy malpractice in social psych in a heartbeat.

Imagine for example a survey that measures 'hostile sexism' with all of two items, and one of the items is "feminists are making entirely reasonable demands of me". Conservatives of course will have a dim view of political enemies, and disagree. Bam! We've shown conservatives are high in 'hostile sexism'.

These kinds of creative scale namings (turning "how I feel about a cadre of activists" into "how I feel about a whole sex") are all too common. Some are malicious, like the one demonstrated above; others are simply overly ambitious relative to, or only tenuously related to, the actual scale contents.

Expand full comment
David Friedman's avatar

My view, most recently of the CDC study that claimed to show that mask mandates work, is that no scientific paper should be trusted until the data (if it's a statistical paper, as this was) have been made publicly available, people who don't want to believe the conclusion have put significant effort into proving it wrong, and it has stood up to the criticism; more generally, until it has stood up to replication challenges.

Expand full comment
arunto's avatar

As far as I know, the current view on exposure therapy isn't only about gradual exposure. It is in addition about getting the patient to make specific predictions about what he/she expects to happen during the exposure and to try to observe this as closely as possible. This could be worth a try with political biases, too.

Expand full comment
Aftagley's avatar

This was my argument with the previous president back in 2016 and all throughout his term. What do you expect him to do? What has he actually done? How have any of the things he's done seriously differed from what any generic Republican would have done in the same position?

I can tell you first-hand, it doesn't work.

Expand full comment
Nancy Lebovitz's avatar

Have another stuck prior, from a book called _How to See Color and Paint It_, by an art teacher who would have his students cut windows in 3" x 5" cards so it would be possible to just look at small areas. Then he'd have them go to the bay at dusk and ask them: what color are those (brick) buildings across the water?

They'd say "red", and then he'd have them look through the 3" x 5" windows, and the buildings were actually blue in that light. The prior for how the buildings look in typical lighting overwhelmed the actual sensory experience.

Expand full comment
Jared Smith's avatar

It's interesting that you mention art in particular in the context of this post: one of the hardest things in art is moving past the symbolic representation of what you're depicting (I've observed this in my children, tried it for myself). It's really hard to draw exactly what you see without letting the symbolic representation (e.g. stick figures in lieu of people, perfectly round oranges) interfere.

Expand full comment
Matthew Carlin's avatar

Someone helped me move past this specific stuck prior by drawing everything upside down for a while.

Expand full comment
CB's avatar

This matches with a book I just read, Tom Vanderbilt's *Beginners*. What I took from the chapter on drawing was this (not the author's words): the key to drawing well is to pay attention to the raw sensor input, not your post-processed experience of it (which is tainted by priors).

In computer terms, your experience is like a lossily-compressed file. If you try to reproduce that, you'll make it worse, because each generation of lossy copies is worse.
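
A toy version of the generational-loss point, for the literal-minded (plain Python, assuming Pillow and NumPy are installed; the test image is just random noise):

```python
# Re-save a lossily-compressed image a few times and track how far each copy
# drifts from the original. With a lossy format the drift never recovers,
# and usually accumulates a little with each generation.
import io

import numpy as np
from PIL import Image

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(128, 128, 3), dtype=np.uint8)
img = Image.fromarray(original)

for generation in range(1, 6):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=60)    # lossy save
    buf.seek(0)
    img = Image.open(buf).convert("RGB")        # reload the lossy copy
    drift = np.abs(np.asarray(img).astype(int) - original.astype(int)).mean()
    print(f"generation {generation}: mean pixel drift from original = {drift:.1f}")
```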

Expand full comment
Nancy Lebovitz's avatar

_Drawing on the Right Side of the Brain_ and _The Zen of Seeing_ are older books about the same thing.

Expand full comment
CB's avatar

Ah, yes - Vanderbilt cites both of those!

I suppose I could have referred to the illusion in this post: if you try to draw it based on your post-processed experience, you'll make the chess pieces different colors. To draw it accurately you might have to block out all but a small square, reproduce that, then move to a different square.

Expand full comment
Anaxagoras's avatar

My experience with cynophobia fell somewhere in the middle of the treatments. For most of my life, I've been really scared of all dogs. I can keep it under control by expending mental effort, but it's taxing. Then, a few years ago, the ceiling of my apartment collapsed and the landlord refused to fix it. My best recourse was to move in with a friend in the area. The only problem? Another person living with him had a dog, the most golden retriever golden retriever I can recall meeting. I'd encountered this dog before, and could tolerate it, but only with effort.

Still, with a choice between staying in a room with a collapsed roof or a house that had a dog, I chose the latter. Eventually, I stopped freaking out when the dog started barking as I came to the door, and came to actually kind of like it. Months into this arrangement, I visited a friend of another friend who had dogs that were not golden retrievers. Two pit bulls, which he cheerily informed me had recently killed a groundhog in the back yard. And I was able to handle that visit without getting particularly stressed!

I think my cynophobia wasn't quite severe enough to trap my prior beyond where innocuous experiences could shift it, and I had a strong exogenous factor — wanting to get out of an unlivable apartment and into a really cheap and nice room — that motivated me to adjust my views.

Expand full comment
Kenny's avatar

Nice!

I grew up with big dogs but I remember being terrified by this one event when I was very young where I had been (or so it seemed) dog-piled by puppies (tho of a big breed). In my (epistemic) defense, the puppies in aggregate probably weighed more than me and they weren't gentle to me like I was (trying) to be with them. I'm lucky I escaped cynophobia, but I still (vaguely) recall that one moment of terror!

Expand full comment
Eidein's avatar

In the past few years I have intentionally cultivated more trapped priors. I was directly inspired to this by an old Scott post: Epistemic Learned Helplessness, or something to that effect. On certain subjects I know that a) I am not equipped to tell truth from falsehood in this domain; b) lots of people around me are actively trying to deceive me; and c) they benefit from my being deceived. Consequently, my defensive reaction is to categorically write off everything in that domain as fake and fall back on my priors.

I have cultivated this intentionally as a response to a common failure mode of rationalists, where they are willing to extend charity far beyond where it's warranted, and bad-faith actors use this to manipulate. I am not getting manipulated anymore.

YMMV, just sharing my experience

Expand full comment
Medieval Cat's avatar

Isn't this a meta thing? Like, assume that I think Alice is untrustworthy and I should disregard everything she says. Alice tells me that she is in fact trustworthy, but I disregard this. Alice tells me that she has made several correct predictions in the last month, but I disregard this as well. You could say that my prior on Alice is stuck. Now Bob comes and tells me that Alice is trustworthy. I trust Bob, and adjust my prior on Alice so that I'll listen to her a little bit. Did it turn out that my prior wasn't stuck after all? Or did Bob unstick it? If my prior were truly stuck, wouldn't I have assumed that Bob was untrustworthy as well for supporting Alice?

I feel like it's normal and healthy to have stuck priors in the "I won't listen to Alice" sense, but not in the "I won't listen to anyone who speaks well of Alice" sense. Epistemic Learned Helplessness looks more like the first case to me.

Expand full comment
Mark's avatar

How can you know (b) while (c) holds? That someone benefits from you believing X isn't evidence against X. I may have a selfish reason for wanting you to believe it won't rain tomorrow (so you'll leave the house to golf or something), but that shouldn't affect your prior on it raining tomorrow. Or is it that you're more worried about being manipulated than being wrong? If you know someone's trying to manipulate you, why would you have to reject the proposition instrumental to manipulating you in order to avoid the manipulation? In my example, you don't have to convince yourself that it's going to rain tomorrow in order to stay home to find out what I'm planning to do while you're gone that I don't want you to know about.

Expand full comment
Mark's avatar

First sentence should be: How can you know (b) while (a) holds?

Expand full comment
David Friedman's avatar

"That someone benefits from you believing X isn't evidence against X. "

No. But it does make the fact that this someone tells you things calculated to make you believe X weaker evidence for X than it would otherwise be.

Expand full comment
mb706's avatar

I think a model of how rational or irrational it is to update political beliefs upon being presented with evidence should take the concept of "Epistemic Learned Helplessness" into account. When we are confronted with many arguments for conspiracy theories, they overwhelm our capacity to check and refute each of them, so we just shut our ears if we have previously heard weak arguments that turned out to be wrong, or if we have a strong enough prior against them. Something similar probably happens with political talking points. Maybe trapped priors are, to some degree, just immunity to propaganda.

Expand full comment
Marginalia's avatar

Haha, that's marvelous, I know someone who took that approach with his wife and kids. He traveled a lot, she drank, and at some point he went full a/b/c.

The tiny hole in that is that if, in that situation, you have some responsibility, at some point you will have to take an action. It's great that you (and he) were no longer being manipulated, but the write-off degraded the data he used to decide which action to take, in situations where he was responsible for doing something.

Just to let you know, it crumbles a little on that side :) Otherwise, thank you.

Expand full comment
Incurian's avatar

I read this as "Charlie Brown should not ever trust Lucy to hold the football, and should assume evidence in favor of Lucy choosing to really hold it this time is actually just deception from Lucy."

It sort of reminds me of the difference between decision theory and game theory. When you have an opponent, you sometimes need to make choices that would seem irrational in the single-player version of the game. Unfortunately, sometimes epistemology is competitive.

This is in the rationalist canon somewhere as the entity that can convince anyone of anything. The right move is supposed to be pre-committing to not updating based on anything it says.

I guess the right thing to do may be to put certain sources on the ignore list, but not to permanently blacklist anything they've said, since it may turn out to be true and you might find out about it through more trustworthy sources.

Expand full comment
CB's avatar

> I don’t know why this doesn’t happen in real life, beyond a general sense that whatever weighting function we use isn’t perfectly Bayesian and doesn’t fit in the class I would call “reasonable”. I realize this is a weakness of this model and something that needs further study.

Since I hang out with engineers all day, I'm primed to think everything is feedback loops: could we conceptualize the "trapped prior" as a problem in which an output signal (e.g. "dogs are scary") is bleeding back into the input channel? The weighting function might be fine; the problem is what counts as input. Would this help us identify a biological basis for the phenomenon?
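
Concretely, here is one toy way to write that down (plain Python; the blend rule is invented for illustration, not taken from the post or any paper): the "perceived" log-likelihood ratio is a mix of the raw evidence and the current prior, and past a certain mix strength the belief only ratchets upward no matter how reassuring the raw evidence is.

```python
# Toy model of a "trapped prior" as a feedback loop (illustrative only).
# p = current probability that "dogs are dangerous".
# On each encounter the raw evidence mildly favors "dogs are safe", but what
# actually gets fed into the update is a blend of raw evidence and the prior.
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def simulate(p0, feedback, raw_llr=-0.5, steps=20):
    """raw_llr < 0 means the raw evidence favors 'safe' on every encounter."""
    p = p0
    for _ in range(steps):
        perceived_llr = (1 - feedback) * raw_llr + feedback * logit(p)
        p = sigmoid(logit(p) + perceived_llr)
        p = min(max(p, 1e-9), 1 - 1e-9)  # keep p away from exactly 0/1 so logit stays finite
    return p

print(simulate(p0=0.9, feedback=0.0))  # no feedback: reassuring evidence wins, p falls
print(simulate(p0=0.9, feedback=0.8))  # strong feedback: the prior feeds itself, p climbs toward 1
```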

Expand full comment
Garrett's avatar

All you need is a minimum-threshold cutoff step: you discard information you presume to be wrong simply to save time and energy.
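
For instance, a cutoff rule like this (a toy sketch with invented numbers) is already enough to trap a belief: once the current estimate is confident enough, every piece of contrary evidence is presumed wrong and discarded before it can do any updating.

```python
# Toy "cutoff" updater (illustrative only): once the current belief is confident
# enough, evidence pointing the other way is presumed wrong and discarded.
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def update(logit_belief, llr, confidence_cutoff=0.95):
    p = sigmoid(logit_belief)
    contradicts = (llr < 0 and p > confidence_cutoff) or (llr > 0 and p < 1 - confidence_cutoff)
    if contradicts:
        return logit_belief        # "probably noise or lies": skip the work of updating
    return logit_belief + llr      # otherwise do the normal Bayesian update in log-odds

belief = 4.0                       # already ~98% sure
for _ in range(50):
    belief = update(belief, llr=-0.5)  # fifty pieces of mildly contrary evidence
print(sigmoid(belief))             # still ~0.98: the belief never budged
```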

Expand full comment
Sniffnoy's avatar

Yes, I think so -- see maline's comment below (above? IDK about ordering).

Expand full comment
Daniel H.'s avatar

Engineer here. I have a few thoughts to add.

1) The whole "trapped prior" thing looks to me like something you get in statistics when you either weight evidence (putting more weight on some sources of data than others, e.g. what Nate Silver / 538 does with their poll weightings, shifting weight onto pollsters with a good track record; now assume your weights are biased) or when you use the same measurements several times (an easy and completely wrong way to lower your p-values is to count each measurement several times; with discrete scientific measurements this is obvious, but with "facts" social media throws at you it's much less obvious whether they are separate pieces of evidence or just the same point repeated over and over again; a toy sketch of this double-counting problem is at the end of this comment).

2) From a predictive processing view (because I explain everything in the brain with predictive processing; note that this may be a very biased prior), our mental landscape is hierarchical in nature and it's not evident at which level of the brain's hierarchical structure to integrate new information / errors. This is a situation where weighting of incoming information is crucial. In a piece by Karl Friston I read yesterday ( https://ajp.psychiatryonline.org/doi/10.1176/appi.ajp.2020.20091421 - not unique as a reference, just what I have in my mind right now) he explains that there are basically two types of error your brain has to deal with: noise-related error (just ignore it) and errors due to bad predictions (update your mental model). To figure out which is which, the brain has something like an expected accuracy encoded in every prediction (in low light, you expect a lot of noise coming from your eyes and you're way more likely to discard unexpected input as irrelevant; sometimes this misfires, e.g. with children "seeing" monsters in the dark). This is exactly where weights come into play - there's just no way to update a hierarchical model without them.

3) there are two points from an evolutionary perspective that add to this, one predictive-processing-related, one not:

- from a hierarchical structure, your brain has a very high interest in maintaining the overall structure (related to what's often called a hyperprior): if your whole mental model is torn to shreds, you're not very functional (as far as I can tell, this is exactly what happens with trauma patients; one way to think about it is that their hyperpriors are severely damaged and until integrity is restored, they are "not very functional"). Ergo, your brain might put high priority on maintaining a general mental structure and will viciously attack information that might be a threat to it.

- from a general evolutionary perspective, you can think of "socially defined weights" also as a way to maintain social order within a group ("you don't listen to the enemy's propaganda because it would disrupt our social order if you did, even if the propaganda is full of true facts")

4 / sidenotes) all of the above is written assuming the brain is functioning as expected; we haven't even thought about pathologies, e.g. bad error escalation (hypothesized to be related to schizophrenia). It's probably a stretch to claim that any brain implements Bayesian reasoning in a purely optimal way.

Also, I'd like to understand all this better than I currently do; anyone interested in a discussion: I recently discovered r/PredictiveProcessing on reddit and plan to ask a few of these questions there in the future - if this is for you, please feel free to join!
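
As a footnote to point 1, this is roughly what the double-counting failure looks like (a toy sketch in Python; the likelihood ratio is made up): applying the same piece of evidence ten times produces certainty the evidence doesn't actually justify.

```python
# Toy illustration of double-counting: updating on the *same* observation N times
# produces unwarranted certainty (numbers are made up for illustration).
def posterior(prior, likelihood_ratio, times_counted):
    """Posterior probability after (wrongly) applying the same likelihood ratio several times."""
    odds = (prior / (1 - prior)) * likelihood_ratio ** times_counted
    return odds / (1 + odds)

prior = 0.5
lr = 3.0  # one genuinely informative observation, 3:1 in favor of the hypothesis

print(posterior(prior, lr, times_counted=1))   # ~0.75: what the evidence actually supports
print(posterior(prior, lr, times_counted=10))  # ~0.99998: the same fact retweeted ten times
```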

Expand full comment
Daniel H.'s avatar

I just skimmed this paper and it seems highly relevant here

https://www.sciencedirect.com/science/article/pii/S0920996420306277?via%3Dihub

I don't have the time for a full summary, but see this quote as a start

"In Bayesian terms, the maintenance of delusions (and beliefs in general) is usually attributed to strong prior beliefs. However, inductive inferences critically depend on the beliefs about the structural dependencies between the relevant variables. For example, what one person takes to be evidence for a hypothesis, another person interprets as contradictory evidence. This can happen without contradicting the rules of logic because the direction of belief updating depends on other beliefs (Jern et al., 2014). A ubiquitous example of this phenomenon is the “explaining-away” of evidence. This describes the case in common-effect networks in which the presence of one cause in a common effect network makes another less likely. This implies that the interpretation of an observation depends on the ability of the observer to generate additional assumptions, called auxiliary hypotheses, which can “explain away” the evidence or even turn it into its contrary."

Expand full comment
Peregrine Journal's avatar

The Lord, Ross, and Lepper study is great, thanks.

I'd been citing "They Saw a Game" for this, which I think makes elegant points about bias shaping perception. But at the end it suddenly seems to conclude that we live in some kind of shared multiverse with mutually inconsistent realities just stacked on top of each other, as the most likely coherent explanation for "sometimes people disagree about stuff." So it's nice to find one less committed to the "what even IS reality man?" trappings.

For "the rationalist project"... who is the primary target for this intervention on stuck priors?

Do you want the average ACX reader to be more aware of their own trapped priors and find tools to overcome them? Or do you want to find a better way to talk to passionate partisans, maybe create ways to disrupt common patterns and scripts, to hop the conversation onto new tracks that are more productive?

I ask because the viable solutions will differ dramatically (unless you're really into spiking coffees), so that affects the search space.

Expand full comment
Rohit Krishnan's avatar

In most arguments you're looking at both the content of the argument (data + logic) and the person/ context providing that argument. The updating process might get derailed when the subject doesn't believe or trust the argument source.

It probably doesn't apply to trauma, but for strongly held beliefs that seem unshakeable in econ, politics, etc., source distrust does seem to play a large role. I don't know what's trapped here: maybe the prior, or maybe the updater, which also weighs the validity of the source of any new argument. This applies even when the source is neutral, since updating would *still* require you to at least believe your ability to update is not somehow screwed up.

Expand full comment
JohanL's avatar

OK, so the McGurk effect didn't work at all on me - I hear the same thing regardless. Anyone else having this experience?

Expand full comment
Godoth's avatar

I hear the exact same sound as long as I don't stare at mouth movements and focus. If I do stare at his mouth movements, I can 'hear' FA rather than BA, but I can only maintain it with active focus and it reverts quickly.

Expand full comment
c.'s avatar

I can make the effect happen a little bit if I focus on the mouth movements, but weakly enough that I'm still not sure whether the sound you're supposed to illusorily hear is "fa" or "va". And it's not hard to drop that and go back to hearing "ba".

Expand full comment
JohanL's avatar

There might be a _small_ change in perception for me, like from a solid "ba" to "ba" with a weak tendency towards "va", but not more than that.

Expand full comment
Melvin's avatar

I found it switching back and forth between "va" and "fa". This makes sense, because the "fff" and "vvv" sounds are more or less identical in mouth position; the difference is that "vvv" is voiced and "fff" isn't. (Try it!)

Expand full comment
Destouches's avatar

Are you a native English speaker? Didn't work on me with Scott's video, watched a second video in English that also didn't work, then watched a third video in German instead of English and it worked on me (https://www.youtube.com/watch?v=AHI1sM0W3Hs).

Now, the third video also zoomed in very closely on the mouth of the speaker, so the effect might have been caused by that.

Expand full comment
ana's avatar

I'm not a native English speaker and it did work on me. But it would make sense that a requirement for the illusion to work would be having experience with seeing people speak the relevant language. Unless the relationship between mouth movement and sound produced doesn't depend much on the language? Hmm...

Expand full comment
Elias Håkansson's avatar

I mean the word is "ba"... I doubt it matters what language you speak.

Expand full comment
Luke G's avatar

Some languages don't have the "f" sound though (Korean for example) so the visual cue for "fa" might not be wired in their brain.

Expand full comment
JohanL's avatar

Not a native speaker, but we have the same phonemes, so it's not an R/L situation.

Expand full comment
Cynthesis's avatar

Native English speaker. I hear "fa" when the lower lip tucks under the upper teeth. I continue to hear "fa" for one more repeat of the word after closing my eyes on the lip tuck, and then switch back to "ba" with my eyes still closed for the next repeat. Neural network training probably plays a part. Seems to me the strength or lack of this effect might be related to forces that drive unconscious lip-reading training, paired with changes in sensitivity of the sound-resolution structures of the ear/brain.

Expand full comment
Byrel Mitchell's avatar

I'm slightly hard of hearing, and rely on lipreading a lot. The McGurk effect was VERY strong for me.

Expand full comment
Ciaran Carroll's avatar

It seems to me that the ad-funded social-validation slot machines we call "social media" are trapped-prior-generating machines.

Expand full comment
Daniel Speyer's avatar

Another factor is that a lot of the time the evidence channel really is quite narrow.

I know someone who is convinced that vaccine side-effects are very dangerous and downplayed. I thought about trying to put together an actual analysis on the subject and presenting it with "I'm not asking you to trust 'them'; I'm asking you to trust me, because I actually checked" but then I realized I can't do it. I could probably get the post-approval surveillance data, but I don't have the skill to construct a demographically-matched control. Nor can I look at someone else's analysis and reliably tell that it was done correctly. Nor can I look at another analyst and reliably judge them regarding statistical skill and virtue of lightness.

Similarly, someone with a fear of dogs can find one easily, but what if you have a fear of sharks and the local aquarium doesn't have any?

So far so harmless, but how much US policy is written by people with a pathological fear of muslims who have never actually met one? Or who figure those who chose to move to the United States are not representative of the ::handwaves at middle east::. That last point isn't actually wrong.

Is it possible to be so homophobic that even openly gay people refuse to out themselves to you? I bet it is.

Expand full comment
Kenny's avatar

> So far so harmless, but how much US policy is written by people with a pathological fear of muslims who have never actually met one?

I would guess approximately zero. They just play people with pathological fears of X on TV!

Expand full comment
Phil Getts's avatar

I used GIMP to join the top half of the dark image with the bottom half of the light image, and the chess-piece pixels do in fact appear to be the same in the two images.

Expand full comment
Charlie Sanders's avatar

Nice article. I wonder if you could differentiate partisan results based on argument style — analytical arguments vs emotional appeals vs anecdotes. Logos vs Pathos vs Ethos.

Also, were there really accusations of dogwhistling on Biden's BLM support? I haven't seen that before, and a cursory search didn't find anything either.

Expand full comment
UserFriendlyyy's avatar

It was on the real fringe like Alex Jones and OANN.

Expand full comment
gordianus's avatar

It's not quite the same accusation as the one Scott mentions, but https://web.archive.org/web/20200905212511/https://www.nationalreview.com/2020/09/its-a-straight-line-from-biden-to-blm/ interprets Biden's statements on BLM as saying that Republicans deserve to be victims of rioting because of their racism and claims that the Democrats are using this as a strategy to censor conservative political speech. (Although it does admit that Biden's involvement is more a result of opportunism than partisan zeal.)

Expand full comment
Mike's avatar

A couple interesting/fun papers by philosophers on issues like this beyond perception, from a non-Bayesian perspective:

https://www.jstor.org/stable/20620131?s

https://www.cambridge.org/core/journals/episteme/article/abs/echo-chambers-and-epistemic-bubbles/5D4AC3A808C538E17C50A7C09EC706F0

(sorry for the paywalls; googling the titles will get you penultimate drafts)

Also check out Susanna Siegel's book Rationality of Perception

Expand full comment
hymneminium's avatar

I'm not convinced that trapped priors explain the capital punishment experiment. An alternative model: people's certainty about the topic reflects how well they think they can justify their opinion.

If you have a strong opinion you want to be able to talk at length about why that opinion is correct. If you barely have anything to say you look foolish.

A study that confirms your opinion is something you can cite in support, that therefore lets you get away with a stronger opinion. A study that disconfirms it is not as helpful, but it doesn't make you less able to support your opinion. Possibly you can still wring something useful out of it, but at worst it doesn't change anything.

In this model the subjective probability isn't affected by evidence, just the social expression.

Expand full comment
Elias Håkansson's avatar

I get your point, but why would "not as helpful for social expression" translate to "the evidence for this view isn't very good"? Is that just our minds manufacturing reasons to discard the "wrong" opinion?

Expand full comment
hymneminium's avatar

Yeah. I suppose it fits into the cynical Hansonian view that political opinions exist mainly for signaling. Your mind is made up from the start, but your brain has to convince itself that its opinion was constructed out of evidence.

Expand full comment
Randomstringofcharacters's avatar

Your comment about gradually increasing low-level exposure getting around the strong prior made me think of descriptions of radicalization. E.g., you start with a strong prior against Holocaust denial, so you would reject a direct denial argument, but if you are slowly introduced to related arguments then in time you'll be willing to accept it.

Which raises the larger question of whether these techniques are content-neutral. Using the same technique as for reducing fear of dogs, could one be trained not to be afraid of being hit by cars, and so walk in front of traffic?

Expand full comment
ana's avatar

Presumably it would not be possible to train the fear of cars out of someone without providing actual evidence that the cars were not dangerous.

If you take someone with an unreasonable fear of cars and show them many times how crossing the road at a crosswalk with the green light on doesn't hurt them, they might eventually lose the fear of crossing at a crosswalk with the green light on.

On the other hand, if you take a person with a regular fear of cars and have them cross a busy road at a random place many times, they will likely end up getting hurt and this will not reduce their fear.

Expand full comment
TitaniumDragon's avatar

I mean, that's a model for complacency, which does get people injured and killed. They cut corners, nothing bad happens, and they keep on doing so until something bad does happen. At which point some people stop being complacent while others double down on it.

Expand full comment
ConnGator's avatar

Your discussion of left-wing and right-wing priors makes it all the more clear why it is so difficult to be a libertarian. To us, almost everyone we deal with is ludicrous. It is very frustrating having a political conversation with 95% of the public.

Expand full comment
Destouches's avatar

If it makes you feel better: for everyone else, dealing with a libertarian is also ludicrous.

Expand full comment
Elias Håkansson's avatar

But you don't almost always deal with libertarians

Expand full comment
Andy Jackson's avatar

My trapped prior to read 'libertarians' as 'librarians' makes these comments quite amusing

Expand full comment
Kenny's avatar

It actually _does_ make me feel better: we're all ludicrous!

Expand full comment
TitaniumDragon's avatar

Given experiments with libertarianism like the Articles of Confederation, and that generically anti-regulatory beliefs often have negative results, this would seem to suggest that libertarianism is itself the problematic trapped prior.

In reality, a moderate level of regulation appears to be optimal - places with too little regulation have enormous systemic problems and have lots of preventable issues, while places with too much regulation either end up authoritarian or with endlessly spiraling costs.

If you have the trapped prior that "regulation is bad" or "regulation is good", then you end up with the problem where you always blame everything on either too much or too little regulation, which leads to a never ending spiral of too much or too little regulation because you can't change your views.

Expand full comment
ConnGator's avatar

I am not aware of any place outside of Africa that has too little regulation. America and Europe, especially, are drowning in it.

Expand full comment
Matt H's avatar

Libertarianism, like any political ideology, only sometimes has the right answer to questions. Sometimes other ideologies do. Sometimes none do. My view is it's better to take issues on a case-by-case basis rather than make them fit an ideology that cannot always be right.

Expand full comment
TitaniumDragon's avatar

The problem is that there are some regulations that are unnecessary while other things that aren't presently regulated probably need to be.

Regulation is neither good nor bad, it depends on the particular regulation.

Emissions regulations are a good thing, for instance.

Expand full comment
ConnGator's avatar

I agree that certain things need to be regulated. However, in the US there are about 100 unnecessary regulations for every missed regulation.

Expand full comment
Allan's avatar

Compounded by the problem that 95% of the public believes that you have zero principled beliefs. You’re trapped in the belief that they’re all espousing ludicrous ideas and they’re trapped in the belief that you’re an idiot for not sharing their views. A recipe for loneliness?

Expand full comment
Nate's avatar

Great article. It makes me think two things:

1. People should aim to have as few priors as they possibly can, and to choose them very carefully.

2. Priors can be dislodged in many ways, but the overarching theme is γνῶθι σεαυτόν (know thyself).

As I was reading the article my mind immediately went to PTSD before it was even mentioned. As someone who has had to do serious personal work because of PTSD, I was definitely dislodging a lot of trapped priors (about myself and the world), but I think one reason they were there is that they helped me keep the world much simpler. Dislodging a trapped prior (as someone who has had to do a lot of it) is really painful because it probably is naturally going to dislodge other beliefs and create a more complex picture of the world that will require a lot of work to ingest.

Expand full comment
Medieval Cat's avatar

Your first point makes zero sense to me. Priors are not something you choose to have. You can't have "fewer" priors than someone else. If I tell you "My friend saw a polar bear outside your house yesterday" and then force you to bet on there being a polar bear in your neighborhood, I can force you to assign a probability to it. You can't "not have a prior" on there being a polar bear in your neighborhood.

Expand full comment
Godoth's avatar

Certainly you can not have a prior on something. You can say "I don't know," and mean it. And it's quite easy to simply give up a belief in something, if that belief is purely intellectual; I don't know whether there is life on other planets and really don't find any of the arguments or evidence either way to be very persuasive.

Forcing someone like me to bet on whether there are ETs doesn't do anything operatively, you're just making me pretend to you that I believe something I don't.

Expand full comment
Medieval Cat's avatar

I guess you can appeal to Knightian uncertainty, and it might even make sense for issues like ET etc. But it doesn't really work for polar bears in the neighborhood? Doesn't that just lead to a state of pure Cartesian doubt or something? Like, most people seem to go through their day-to-day life making decisions about probability (should I bring my umbrella? Will my partner enjoy chocolates? Will my boss notice my slacking?). Making such a decision requires having priors. You can't make such a decision without having priors: making that decision automatically means that you had a prior.

Expand full comment
Godoth's avatar

I mean, in science, pure Cartesian doubt isn't such a bad place to start from now and again. In fact it would be a far superior starting place for most endeavors.

But really, I think that your assumption that everyone has and is using priors for everyday decisions is not well founded. Let's take the umbrella, for example. Everybody probably has a prior on whether or not it *can* rain at all where they are (having no priors at all is not what OP suggested). But it's a terrible idea to have a strong prior, in a place where it does rain, on whether or not it will rain. If you ask someone if it will rain, a foolhardy person says, "It will not rain today"; a smarter person just looks out the window; the smartest says, "I don't know, but I keep an umbrella in the car."

Expand full comment
Medieval Cat's avatar

Sure, having a strong prior that it won't rain when rain might happen is bad. The better option is to have a prior that rain might happen if rain might happen. But not having a strong prior is completely different from not having a prior at all! You should definitely have a prior on the risk of rain, and you don't need to choose it that carefully for it to be useful.

Expand full comment
Andy Jackson's avatar

There is a great temptation to think (putting this into Kahneman terms) "I'm smart now, I reject type 1 thinking in favour of type 2 thinking" until you get eaten by a metaphorical lion as you didn't react quickly enough. We need priors/ heuristics to function, it's just that some of the time they lead us astray. They are mostly right though

Expand full comment
Adam's avatar

People can't, but scientific studies, government policy, and automated decision-making or classification systems can choose uninformative priors on purpose to maximize the value of evidence.

Expand full comment
Medieval Cat's avatar

What is the uninformative prior of there being a polar bear in my neighborhood?

Expand full comment
Cynthesis's avatar

There was an earthquake and the neighborhood is near a zoo? Lots of entropy with an earthquake.

Expand full comment
Adam's avatar

Uniform distribution over the two possibilities.

I'm not saying that's a correct prior, but an automated decision making system can nonetheless easily implement it.

Expand full comment
Medieval Cat's avatar

So governmental policy should be based on there being a 50% chance of polar bears in my neighborhood? That doesn't sound good to me, am I misunderstanding something?

Expand full comment
GunZoR's avatar

You can at least have fewer priors than someone else (don't people vary in their level of knowledge?). Moreover, even if one can't not have a prior--because every person with a functioning brain has at least a minimal amount of prior, organized, mental knowledge of reality--one CAN with enough self-awareness play with one's priors; so for example, if you told me about the polar bear, of course my priors tell me the probability is low. But immediately I go meta and conclude that these priors are just priors and it may very well be the case that your friend saw a polar bear; let's just ask more questions and gather more evidence. I think this "playing with priors" can become so extreme that you get to the point where you essentially could be said to have no priors, because you don't let your priors manipulate you.

Expand full comment
Medieval Cat's avatar

Knowledge has nothing to do with priors? I know a lot more than a five year old. Both of us have a prior on there being a polar bear in our neighborhood.

Why do you conclude that your priors are "just priors"? What does that mean? Does going meta like this change the probability you assign on there being polar bears? Let's say that I have a strong prior against polar bears: how does that stop me from asking questions and gathering evidence? Are you sure you aren't conflating "having priors" with "being lazy"? To me those are completely different things.

Expand full comment
GunZoR's avatar

"Knowledge has nothing to do with priors?"

I don't have a precise mental definition of what a prior is, but my idea of it at its most general is simply "some knowledge of reality."

"I know a lot more than a five year old. Both of us have a prior on there being a polar bear in our neighborhood."

Right, that's a possible scenario; but wouldn't you say you have many more (of course an unquantifiable number) of priors than the child in any case? This is what I meant when I wrote, "You can at least have fewer [or more] priors than someone else (don't people vary in their level of knowledge?)."

"Why do you conclude that you priors are 'just priors'? What does that mean?"

To my mind, it just means that I subject my initial--and even my Mode B--probabilistic projections to skepticism, because I am taking the overarching view that I'm a flawed animal with evolutionarily ingrained mental defects for truth assessment.

"Does going meta like this change the probability you assign on there being polar bears?"

I don't think it would. After going meta, what happens is I try to ignore the prior and determine, through evidence gathering, the truth or falsity of the statement "This guy's friend saw a polar bear," starting from the mental tautological framework of "If X is logically possible and the truth or falsity of its having obtained is not practically indeterminable, then the truth or falsity of its having obtained can be ascertained"--and then the investigation begins in Mode B.

"Let's say that I have a strong prior against polar bears: how does that stop me from asking questions and gathering evidence?"

I don't think it does for many, perhaps most, people. But I'm a very visual thinker, so I literally often see floating sentences when I think those sentences; also images. So having the prior in mind feels like extra mental clutter that gets in the way of my thinking about the truth or falsity of whether your friend saw the polar bear. I think my brain is good for creative writing, but it might be bad, for this very reason, at logical thinking, which might require a brain that is better at abstraction.

"Are you sure you aren't conflating 'having priors' with 'being lazy'?" No! I'm new to probablistic thinking/Bayesian reasoning and am trying to work out what all this stuff means and how it relates to the way I think.

Expand full comment
Nate's avatar

I'll grant that constructing/strengthening priors is a difficult task, but I do think you can work on eliminating priors by knowing yourself better (stack tracing belief).

Expand full comment
Medieval Cat's avatar

This still doesn't make sense to me. How can I eliminate my prior on there being a polar bear in my neighborhood, and why would I want to do that?

Expand full comment
GunZoR's avatar

You can't eliminate it, but there could be many reasons why you would want to try to disregard it. E.g., it's not an emergency situation in which you must act fast (Mode A) and have time for patient truth gathering (Mode B). From my experience, if I am in Mode B, having a prior about the low probability of your friend's polar bear is mentally taxing/irritating--it gets in the way of my evidence gathering and logical thinking about evidence--when I am in Mode B and trying to find out whether your friend actually did see the polar bear. In that case, it just feels like an extra burden of information, if that makes sense.

Expand full comment
Medieval Cat's avatar

See my previous answer to your post. To me, "having priors" has nothing to do with "being lazy". Priors exist in both Mode A and Mode B (but the different modes might use different priors). Every decision under uncertainty requires a prior.

Expand full comment
Nate's avatar

I think priors are more multi-dimensional than that. They're probably not all the same. Some of them probably go deeper than others, some of them probably exist on a spectrum. Also, as the article suggests, priors are (can be?) rooted in emotion, so I imagine unresolved emotional issues are a path dependency for some priors. I can probably work on having a more positive view of myself and others, for example. That might inadvertently make it easier to believe a friend when she says she saw a Polar Bear in the Bay Area, especially if I know myself and her well enough to know that she wouldn't lie about something like that. Also, she may have seen a Polar Bear, but it might not actually have been there. I think there are all sorts of ways we can make our thinking about ourselves, the world, and others more nuanced and graceful.

Expand full comment
Medieval Cat's avatar

I agree with everything you say but I don't see how it has anything to do with priors. Like, I might believe that I'm a terrible person, and assign priors accordingly. Then I might do some self-discovery and realize that I'm actually a good person and reassign my priors accordingly. Nowhere in this process did I lose any priors, I just changed them. Thinking more nuanced and gracefully, or trusting my friend more, does not change the fact that I have a prior on there being polar bears in my neighborhood.

Like, maybe we are using words in different ways? What you call a "prior" might be what I call "bias" or "first guess" or "lazy thinking"? To me, a prior is something statistical. A simple computer vision program that tries to identify birds in pictures has a prior. An amoeba probably has a prior on how a certain chemical relates to the chance of finding food. Would you agree with that?

Expand full comment
Nate's avatar

I don't know, I think you can eliminate them. In the article, the example that information from the opposing party is always to be distrusted is a type of prior. I do think this kind of prior can be eliminated. If you think priors are non-eliminable and static then, yeah, I would say we have different definitions. I do agree with you that there are probably non-eliminable priors, but I think of them as being nodes in a Bayesian decision tree; you can certainly flip *some* of the nodes or make them an interface for a much more complex/nuanced view that adjusts based on its inputs. In a sense, I suppose, you can't "eliminate" priors, but you can make their input into the decision tree more multi-variate. Also, once you eliminate a prior, I suppose you can make the argument "well, was it really ever a prior?"

Expand full comment
ana's avatar

What about your certainty in your priors? I mean, how much you change your priors given new evidence. Intuitively(*) I think this could be a function of more than just the new evidence.

For example, I can have a prior of 0.01% probability of seeing a polar bear in California, but be only 90% confident in my prior, which means I would be quicker to change it than if I was 99% confident. It doesn't matter for my Level 0 conclusion that I disbelieve the polar bear story, but it does matter for how I update my prior, given that you told me someone saw one.

Under this model, Nate could be interpreted as meaning that having a lot of confidence in your priors was bad, regardless of how strong they are.

(*) I also have a different intuition that I'm creating a meaningless distinction and that this "confidence" in your prior should already be part of your actual prior. But people are not perfect Bayesians and I can imagine myself having a high prior on something but then being quick to change my mind on relatively weak evidence (eg, if the prior was unconscious and was brought to my attention).
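
For what it's worth, one standard way to fold that "confidence in the prior" back into the prior itself is to treat it as a pseudo-count on a Beta distribution: two priors with the same mean but different pseudo-counts update at very different speeds. A quick sketch (plain Python; the numbers are arbitrary, and treating "one credible report" as one observation is a simplification):

```python
# Two Beta priors with the same mean (1% chance of polar bears) but different
# "confidence", expressed as pseudo-counts. After the same surprising report,
# the low-confidence prior moves much more. (Arbitrary illustrative numbers.)
def beta_mean(alpha, beta):
    return alpha / (alpha + beta)

def update(alpha, beta, sightings, non_sightings):
    # conjugate Beta-Binomial update: just add the observed counts
    return alpha + sightings, beta + non_sightings

weak_prior = (0.1, 9.9)       # mean 1%, total pseudo-count 10: not very confident
strong_prior = (10.0, 990.0)  # mean 1%, total pseudo-count 1000: very confident

for name, (a, b) in [("weak", weak_prior), ("strong", strong_prior)]:
    a2, b2 = update(a, b, sightings=1, non_sightings=0)  # one credible sighting report
    print(f"{name} prior: {beta_mean(a, b):.3f} -> {beta_mean(a2, b2):.3f}")
```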

Expand full comment
Medieval Cat's avatar

My prior is high that your "confidence" in your prior should already be part of your actual prior. :p

And even if you are right, it doesn't mean that people don't have priors: it only means that they have bad priors. I could trick a child to bet at even odds on them rolling a 6 on a fair six-sided die (maybe with the famous argument "it either happens or it doesn't happen so it must be 50:50!"). This child would have the prior of rolling 6 be 50%. This is obviously a bad prior, but it's still a prior.

Expand full comment
Phil Getts's avatar

Re. the wine effect, I noticed many years ago that if I forgot what drink I'd poured into my cup, as I often do, then on drinking it I wouldn't say "Oh, this is apple juice", but "Omigosh, what is this strange taste that I've never tasted before?" Then, when I realized it was apple juice, the taste would suddenly shift to conform to apple-juice expectations.

I like to imagine that forgetting what's in my cup provides me with a "more-true" sense impression, but that desire is probably based on a bankrupt epistemology.

Expand full comment
Scott Alexander's avatar

I also have this experience - if I think I poured myself water, but it's actually 7Up, then my experience of tasting the 7Up is jarring and strange and not 7Up-like at all until I realize what's going on, after which it tastes normal.

Expand full comment
Cakoluchiam's avatar

Some food companies capitalize on this by selling "mystery flavor" snacks & drinks. I had the pleasure of enjoying one a few years ago, but the mystery was kind of ruined when I looked at the ingredients list. After doing so all I could taste was peach.

Expand full comment
TitaniumDragon's avatar

How good is your sense of smell?

I don't ever have this experience and have a very keen sense of smell.

I have always found it weird that people's sense of taste can be so easily manipulated. It makes me wonder if the reason it doesn't work for me is that they are much more reliant on visual cues than I am, whereas my expectation is heavily influenced by smell, so it's harder to trick me this way because my belief about what something will taste like is not primarily based on what it looks like.

Expand full comment
Phil Getts's avatar

I think my sense of smell is more precise than average, because (for instance) I would never mistake lavender perfume for actual lavender (as in oil of lavender). But I seem to have no direct recall ability for smell or taste. To remember a smell or a taste, I need to summon some intermediating sense. For instance, I must picture a specific type of apple before recalling the taste of an apple. To recall the smell of smoke, I must summon up the image, sounds, and temperature of a campfire scene (and I have trouble separating smell from temperature, humidity, and touch; eg I can't re-experience the smell of a winter campfire without re-experiencing the texture of the snow; I can't recall the smell of dirt without re-feeling the dryness / hardness / moisture / color / texture of a particular dirt).

Expand full comment
Phil Getts's avatar

Incidentally, this causes me problems when a food product changes its packaging, because then I no longer know whether I like it or not. I still like Coca-Cola when drunk from an old, sexy, thin-waisted glass bottle; but not as much when drinking it from a plastic bottle.

Expand full comment
Kenny's avatar

These observations are super interesting! I've never run into this in the exact scenario you described, i.e. forgetting what was in a glass or cup or whatever, but I have experienced it when I didn't know what something was, guessed wrong, and then was _disgusted_ by the 'wrong' taste/smell.

Expand full comment
Simon's avatar

I think this is somewhat related. I went through a really, really rough patch of trying to recall a traumatic event I had blocked out. I had told a loved one about it. A month or two later we tried talking about it again. We started off with normal conversation, and then, when I knew she was about to bring it up, something happened. I felt like I lost my hearing. Like a mental discontinuity happened and no audio from her reached my ears. I only remember feeling like I was shriveling up on the inside. After her sentence I did what I could to quickly change the subject. This post reminded me of that. I feel like I have a better grasp on it now, but at that moment it literally seemed like I went deaf to stop hearing the words I wasn't ready to hear.

I wonder if there's any other cases of that happening out there.

Expand full comment
Kenny's avatar

There's a school of therapy called Self Therapy that covers what you described pretty well! It _seems_ really woo, but I've observed some positive effects of it anyway. Your description of losing your hearing is the 'action' of what they'd call a 'protector': something like a sub-personality that protects you from trauma or discomfort (or some other strong negative feeling). Part of the therapy is picturing those sub-personalities, maybe even naming them, but definitely _talking_ to them as if they were people, and then negotiating with them to stop their unwanted 'behavior', usually after demonstrating that you don't need them to protect you from whatever it is anymore.

Expand full comment
etheric42's avatar

It seams obvious to me (which means it is probably wrong) that the reason the prior doesn't update and the bandwidth on the sensation is throttled is because the person experiencing the situation has a flood of additional sensation that the context/prior weighting is generating.

The Rottweiler is scary. I am not only perceiving a dog wagging its tail, but also perceiving that my heart rate is accelerated, my muscles ache and want to pump, my hands are clammy, etc. If I can only pay attention to so many things at once, some of the immediacy of my physical reaction is going to take up my perception. Furthermore, I'm now going to add additional contexts on top of "dogs are scary": "when my heart races like this, it isn't a good thing" and "when I act scared my partner/parent is ashamed of me" (which in shorthand is heard internally as "I'm gathering shame right now").

Expand full comment
etheric42's avatar

"seams". Ugh.

Expand full comment
Kristin's avatar

Exacerbating "bitch eating cracker syndrome", I think, is the escalation of moral investment in the designation "bitch." If you slap an ostensible bitch for eating a cracker, then you better be right that she's a bitch... otherwise you're a moral monster. In fact, you may need to up-level the designation to "evil bitch" in order to resolve your cognitive dissonance.

This is my hypothesis on how people turn abusive, or at least how my (ex) husband went down that path. In many respects, he is/was a very good man. I think he was trapped in the belief that I was an evil bitch, because what sort of monster physically abuses a non-evil human? This may be especially challenging for him because of his splitting - his world is full of angels and devils, with little in between - and thus he had a long way to fall. As trite as it sounds, I think the only hope for him is to learn to accept and forgive himself. But I'm not sure this is in the cards... may god have mercy on his soul.

Expand full comment
Mark's avatar

This sounds like someone engaging in a 'moral sunk cost fallacy.' One picks the set of moral propositions (who or what is good or evil) that best allows salvaging one's self-image, conditional on what one has already done. If we judged moral worth less on what people have done in the past, it might reduce this tendency, but it might also reduce the disincentive to do bad things in the first place. Or maybe it's a natural tendency rather than a learned one. It may be an interesting case of two adaptive traits combining unexpectedly in a way that reduces fitness: it's good that we avoid negative feelings; it's also good that bad behavior induces negative feelings; but we've invented a trick to avoid bad-behavior-induced feelings by changing the definition of bad behavior (or redefining when, where, and to whom it is bad), which may in some circumstances be worse than not associating negative feelings with bad behavior at all.

Expand full comment
Nancy Lebovitz's avatar

I've been noticing in general how much people attribute bad motivations to people they disagree with.

Expand full comment
David Friedman's avatar

Never attribute to malice what can be explained by stupidity.

But that isn't that much of an improvement, because believing that the reason someone believes something you think is wrong is that he is stupid is another way of not having to think about whether he might be right.

Expand full comment
C MN's avatar

This sort of dynamic is discussed at length in Baumeister's book "Evil". He talks about how abusive partners are oversensitive to slights (so "my partner made a joke about my job" turns into "my partner embarrassed me and has no respect for my authority") and thus feel justified in engaging in retaliation that, to anyone else, looks excessive. It's the classic "look what you made me do": from the abuser's point of view, the abuse is justified, because they perceive themselves as acting rationally based on how "poorly" they've been treated (even when an objective outsider would say that's not what's going on).

Expand full comment
UserFriendlyyy's avatar

Anyone that doesn't thoroughly hate both political parties is either a billionaire or not paying attention.

Expand full comment
Andy Jackson's avatar

Strange game, the only winning move is not to play (War Games [movie], terrible source, good quote)

Expand full comment
TitaniumDragon's avatar

This is a very immature point of view.

Both parties have some good aspects and some bad aspects. Their goodness and badness vary over time.

George HW Bush and Bill Clinton were both very solid presidential candidates.

The Republicans have degraded badly over the last 30 years due to excessive neo-Confederate influence.

The far left have been trying to degrade the Democratic party similarly (as they did previously) but have thus far failed.

Were both parties to degrade, it's likely some new centrist party would pop up. This is what happened when the Republicans came to be: the decay of the Democrats and Whigs created an opportunity for new parties to emerge, and the Republicans were the result. The Democratic party almost disintegrated (and indeed, split into multiple factions), only to reform after the Civil War.

Expand full comment
UserFriendlyyy's avatar

Both parties function exclusively to help different sets of elites and to get different groups of poor people to vote for them on cultural / identity-based arguments, all while working-class people have been working longer and harder for less money, see no realistic path to improving their lives, and end up killing themselves in such large numbers that they decrease life expectancy for the whole country. The only thing that has broad bipartisan agreement is that we should always be in at least 13 wars killing poor people abroad and that we should be killing as many poor people at home as possible. Obama intentionally maximised the number of foreclosures on poor people, overseeing the largest loss of black wealth in history. He didn't so much as investigate the copious amounts of fraud that led to the GFC, much less prosecute any of his campaign donors who were responsible. He gave banks retroactive immunity for forging chain-of-title documents. Trump's TCJA was hilarious in how brazen it was, giving 83% of the benefits to the top 10% of Americans. I'm not even being hyperbolic when I say the only reason this country exists is to kill poor people to make profits for billionaires. If we eliminated fraud and corruption there would be nothing left.

Expand full comment
TitaniumDragon's avatar

The idea of wage stagnation is a complete myth with absolutely no basis whatsoever in reality.

In real life, the median new house is more than 60% larger than it was in 1970.

People have massively nicer cars.

Their houses are massively more likely to have central heating and air conditioning.

They have numerous new amenities - twice as many TVs for the median household, as well as internet, computers (desktops and laptops), cell phones, smart phones, cable/satellite TV, video playback units (we've gone through several ever-better generations), smart speakers, etc.

And this isn't just the top, and it isn't just the median. Standard of living has increased across the board, with poor people seeing massive improvements in quality of life. The number of people who live in consumptive poverty - that is to say, who, after government benefits are taken into account, consume less per year than the poverty line - has been in sharp decline since the War on Poverty began in the 1960s.

The idea that people are worse off is just a flat-out lie.

It has zero basis in reality, which is immediately obvious if you have ever looked at the US Census American Housing Survey data.

https://www.census.gov/programs-surveys/ahs.html

Standard of living has skyrocketed across the board.

The entire lie of wage stagnation is based on CPI, which is known to overestimate inflation by north of 1 percentage point per year, cumulative - if you compare it to the GDP deflator (which is used for productivity calculations), CPI shows 200 percentage points more inflation since 1970.

You have been systematically lied to - and the people who have been lying to you are doing it because their entire political ideology is based on a foundation that rich Jews are stealing from gentiles. That was what Marx believed, and that's where socialism comes from:

> "Thus we find every tyrant backed by a Jew, as is every Pope by a Jesuit. In truth, the cravings of oppressors would be hopeless, and the practicability of war out of the question, if there were not an army of Jesuits to smother thought and a handful of Jews to ransack pockets. [...] the real work is done by the Jews, and can only be done by them, as they monopolize the machinery of the loanmongering mysteries by concentrating their energies upon the barter trade in securities... Here and there and everywhere that a little capital courts investment, there is ever one of these little Jews ready to make a little suggestion or place a little bit of a loan. [...] Thus do these loans, which are a curse to the people, a ruin to the holders, and a danger to the governments, become a blessing to the houses of the children of Judah. This Jew organization of loan-mongers is as dangerous to the people as the aristocratic organization of landowners... The fortunes amassed by these loan-mongers are immense, but the wrongs and sufferings thus entailed on the people and the encouragement thus afforded to their oppressors still remain to be told."

> * Karl Marx, The Russian Loan, 1856

It's not how real world economies work. Not even remotely.

In real life, consumer goods are consumed by consumers. We have seen a massive increase in the output and quality of consumer goods. Where do you think those have gone?

They've gone to consumers. Things like personal computers and smart phones are ubiquitous these days despite not existing in 1970.

As for the idea of life expectancy going downwards - you've been deliberately lied to.

People live almost a decade longer due to massive improvements in health care.

Life expectancy as of 2019 was 78.93 years.

In 1970, it was 70.78 years.

That's an 8 year increase.

It would be larger, if people weren't so fat. But Americans are extremely obese.

Notably, Asian Americans enjoy an average life expectancy of 86 years, higher than any country on the planet.

What you believe about the economy is entirely untrue. What you believe about life expectancy is entirely untrue.

I would strongly suggest deleting your priors and starting over, looking at actual, real-world economic data, like the American Housing Survey and things like life expectancy numbers over the last 50 years.

Expand full comment
The Ancient Geek's avatar

Young people want to buy houses and can't afford them. If the high cost of housing is due to the increased size of houses, why aren't developers building small, affordable houses?

Expand full comment
UserFriendlyyy's avatar

And sadly HW Bush was the best president in my lifetime, once you get past the fact that he organized Iran-Contra (https://www.lrb.co.uk/the-paper/v41/n02/seymour-m.-hersh/the-vice-president-s-men), death squads in 34 Third World countries for the crime of wanting to try communism, and lying us into the Gulf War by getting the Kuwaiti ambassador's daughter to lie about babies being thrown out of incubators, which led to the disgusting, unnecessary violence of killing retreating troops along the "Highway of Death".

Clinton was a great Republican president. He deregulated Wall Street with the GLBA and CFMA, which were by far the biggest cause of the GFC - that and his stupid budget surplus, which left markets scrambling for other no-risk places to park money in lieu of Treasury bonds, landing on mortgage-backed securities. His other great achievements: ending welfare and dramatically increasing childhood poverty, and three-strikes laws that tore apart countless poor, mostly black families. There is literally nothing that man did that the modern Democratic Party is proud of - well, outside of killing off the left wing of the party.

But please do tell me about all the wonderful things any of our war criminal presidents have done that helped anyone who wasn't already rich.

Expand full comment
UserFriendlyyy's avatar

Or you could just admit that being ignorant of why, and for whom, our political parties operate is actually the immature position.

Expand full comment
Dweomite's avatar

Possible editing error: In the section "Trapped prior: the more complicated emotional version", you introduce van der Bergh's idea that negative emotions reduce the bandwidth on sensory input. Then in the section "Reiterating the cognitive vs. emotional distinction", you introduce this same idea again, phrased as if it were the first introduction.

Expand full comment
Adam's avatar

What is the usual time frame of studies like this on the entrenchedness of priors? Just typical-minding based on myself, I can definitely see and feel my skepticism tingling whenever I am presented with evidence that I am wrong, but in the long run the political stances I tend toward have changed quite a bit, say over the past 20 years. I don't think I ever had particularly strong stances, and I still don't, but in general temperament they have changed. I'm not claiming to be abnormally rational or anything. I think I just don't care much and find myself expressing agreement with whatever happens to be considered acceptable to my peers, and those peers have changed over the years.

On the capital punishment thing, what was the evidence these people were presented with? This seems like a question of values. Values are fundamental, not based on evidence, so I don't know what should be able to change them. This feels like a manifestation of the is/ought gap more than trapped priors. For questions of value, all you have are priors. If they ever update, it's by fiat or epiphany, not evidence.

Expand full comment
GradientDissent's avatar

I'm reminded of KGB defector Yuri Bezmenov's take on trying to reason with demoralized people; "you cannot change their mind, even if you expose them to authentic information". Scott enumerates some potential ways to escape trapped priors. But if I were to channel Bezmenov's criticism of the Soviet Union I then have to ask, "is it possible to take advantage of this belief model to better convince people to defend your position"? I assume so.

Reinforcement of priors is one approach. For example, I really want this person to be afraid of dogs, so I'm going to keep showing them pictures of scary-looking dogs. This is more or less "I'm going to read about how bad the other political side is from the people who agree with me already." (Is it?) Under this model, anything that can decrease the strength of the raw sensation is another approach to convincing people, but that seems like a more defensive strategy. I'm having trouble thinking of how someone would craft a situation where this is the method of attack. Thoughts?

Expand full comment
GradientDissent's avatar

To clarify, I'm asking from a self-defense perspective. The tools for defending yourself from getting kicked in the head are different from those you need to recover from a concussion.

Expand full comment
Cole R's avatar

Wouldn't it be great to just be the nice dog instead and only be conditioned on whether an action results in a treat? Sounds like a pretty nice prior if you ask me

Expand full comment
Doug Mounce's avatar

Remember J.J. Gibson's 1950s work on the mechanism of perception and organ participation, which shows that perception is a compilation of the person's environment and how the person interacts with it.

Expand full comment
Marginalia's avatar

Observation on the extremes of the dog whistle problem. Policies have lots of consequences, both benefits and costs/positives and negatives, potentially even to the same people.

When a pattern of consequences arises that at least seems like it was not stated in the "This is what the Law is About" section (haha) - but the pattern is either really positive or really negative for some group or individual - the question is posed: did they do it this way on purpose, by accident, or some combination? Did "they" not realize this would happen, or was it what they were after all along and they just hid it? Or was it not the primary goal, but a desired objective that the architects were aware of? Individual lives have that too: someone can legitimately not realize, and at an individual level the person can also have motives they are unaware of - and they can also have definite motives and feign innocence.

The problem is, looking at the complex of unintended consequences of a policy created by a group, and then choosing the "every supporter meant this and knew it" option.

From the position of someone affected by the unintended consequences, if something is negative enough and goes on long enough, there is a "they reasonably should have known by now" position, which makes it possible to stop extending any benefit of the doubt to the architects of the policy. It allows someone to attribute intent and therefore be justified in loathing.

From an education point of view, sometimes people really don't know and don't realize.

I have to stop writing this. Maybe more later. Very interesting post.

Expand full comment
Tom Zimbardo's avatar

So, I work as a Clinical Psychologist in the UK, and have been recently treating anxiety, including phobias and OCD using standard CBT approaches.

In the case of a phobia, the thing that limits people’s raw experience is typically avoidance of the phobic stimuli. This temporarily relieves anxiety, and thus strongly negatively reinforces the avoidant behaviour (by making the scary thing go away), and of course it reduces the experiential bandwidth for learning that the “dog” is not scary, meaning that they only have their prior to make predictions about future dogs with.

I’ve never used flooding, but my understanding is that it actually *can* work, so long as the person doesn’t freak out and avoid the phobia stimuli. As we have to do therapy ethically, if a person simply doesn’t want to be locked in the room with the Rottweiler, we’ve gotta let them out. But if we let them out at a moment of high fear, we actually increase the fear conditioning and avoidance. In order for a person to become habituated and de-conditioned to the phobic stimuli and past avoidance, they need to stay with the stimuli for long enough that the experiential data demonstrates it is not dangerous *and* for their amygdala to calm down. This typically takes 20-30 minutes, and I explain all of this to patients at the start. We don’t use flooding because few people could tolerate it enough to consent, whereas most people can tolerate the cute puppy stage of systematic desensitisation which enables them to gradually face more scary phobic stimuli.

The issue though with phobias is that the person understands that their phobia is in some way silly or harmful, and is prepared to do systematic desensitisation to overcome it. My patient who believed she had to check her children were breathing multiple times per night or they might die *knew* on some level that her belief was irrational, and wanted to challenge it, but the trapped prior and fear and alleviation when she checked kept her negatively reinforcing the checking behaviour. Exposure and response prevention changed that, but it was because she ultimately wanted to change and simply needed the method and a good therapist to guide her.

My sense is that some political beliefs could be similar. If you have limited data on a group of people and fear them, you will likely avoid them, both negatively reinforcing your avoidance and also not gathering data which could disconfirm priors. If you actually got to know some Republicans, or some Muslims, you might not find them to be that bad, if you actually spent long enough with them, but your avoidance could prevent you from ever doing this.

Our confirmation biases could positively reinforce seeking out information that agrees with our beliefs, because reading it makes us feel better - we all want to feel we are correct. Conversely, the cognitive dissonance, or discomfort, we feel when presented with information that demonstrates we are in error could cause us to avoid such information, negatively reinforcing the avoidance and also limiting our exposure to such evidence. It could also provoke a defensive behaviour, such as stating that the evidence is in some way “lies” or “fake news”, as truly accepting the data as correct involves some discomfort, or potentially embarrassment at realising one was wrong.

Potentially, the cognitive dissonance or embarrassment is even stronger the more outlandish and bizarre one’s beliefs are, which could explain why people would rather continue to believe e.g. QAnon conspiracy theories than face the emotional discomfort for *long enough* that the discomfort fades, enabling them to continue to absorb new experience without avoidance.

I think the avoidance, time required for fear or discomfort to reduce, and defensive reactions could all be reasons why humans can’t simply absorb additional data and thus more rationally update. I hope this is a useful addition to your own model. I appreciate that you will know the basics of this in psychiatry, but assume that you’ve not actually spent the time in the room with the patient and the dog to find out what makes it so hard for her 🙂

Expand full comment
Ben's avatar

Reminded me of this story from the New Yorker on the (de?)conversion of a Westboro Baptist Church member via Twitter.

Expand full comment
John Slow's avatar

I'm surprised Eliezer Yudkowsky's method of updating posterior probability is not mentioned as an excellent way of updating trapped priors.

Expand full comment
Sniffnoy's avatar

Could you elaborate on that?

Expand full comment
Dweomite's avatar

I'm not entirely sure if this is what Ayush is referring to, but Eliezer has a proposal for how to re-examine entrenched beliefs that he calls a "crisis of faith": https://www.lesswrong.com/posts/BcYBfG8KomcpcxkEg/crisis-of-faith

Expand full comment
John Slow's avatar

Yes I'm referring to that sequence

Expand full comment
CB's avatar

I was thinking of a Less Wrong post that goes something like: I have a belief. I encounter a weak challenge to it, which I easily dispatch. Then I encounter another weak challenge, and I dispatch that. Repeat a thousand times. Because each challenge was weak on its own, I still have the belief. But if I take a wider view, I'll realize I should abandon it because it's been undercut a thousand times.

Do you remember which one this is?

Expand full comment
John Slow's avatar

The general method works like this: If you have belief A, assign a confidence to it. Say you have 90% confidence in this belief. Now conduct a real world experiment on that belief, and be ready to update your belief in either direction by a fixed percentage.

Example: Suppose you have a belief that your partner doesn't care about you. Ask yourself how sure you are of that belief. Suppose you come up with "70%". Now conduct the following experiment: ask your partner for help with something you've been having trouble with. But before asking them, decide right away by how much you'll update your confidence in your belief. Say you come up with "15%". Now ask your partner for help. If they agree to help you, update your confidence in your belief to 55%. If they don't help you and give you a flimsy justification for doing so, update your confidence in your belief to 85%.
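
A minimal sketch of that pre-committed update rule, using the numbers from the partner example above (the function and variable names are mine, purely for illustration):

```python
# Pre-committed updating: decide the size of the update *before* running
# the experiment, so the result can move you in either direction.
# Numbers follow the partner example above; everything else is illustrative.

def precommitted_update(confidence, step, outcome_supports_belief):
    """Move a 0-100 confidence up or down by a fixed, pre-agreed step."""
    if outcome_supports_belief:
        return min(100, confidence + step)
    return max(0, confidence - step)

confidence = 70   # "my partner doesn't care about me"
step = 15         # chosen before asking for help

partner_helped = True   # outcome of the experiment
confidence = precommitted_update(confidence, step,
                                 outcome_supports_belief=not partner_helped)
print(confidence)  # 55 if they helped, 85 if they refused with a flimsy excuse
```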

Expand full comment
Sniffnoy's avatar

On "When Prophecy Fails", I'd suggest seeing this later paper by Lorne Dawsone: https://www.gwern.net/docs/sociology/1999-dawson.pdf

In short, yes, people *can* become more fervent believers after a failed prophecy, but in general what actually happens varies. The paper looks at a number of groups that had failed prophecies: some continued and became more fervent, others fell apart, etc. I'd suggest reading the whole thing, but I think the summary version is that it seems to depend on how the group's leaders react in the immediate aftermath of the prophecy.

Expand full comment
maline's avatar

It's worthwhile to spell out why confirmation bias isn't actually rational: it's a failure to draw distinctions between the evidence itself, and your interpretation of it.

If you know you have a phobia of dogs, then, when you see a dog as scary, that shouldn't give you any new evidence that dogs are a threat. You knew in advance that would happen! It would happen no matter how harmless the dog actually was. The problem is that you ignore the layers of interpretation, and try to update as if you had encountered an objectively dangerous dog. If that had happened, it would indeed be evidence that many dogs are dangerous.

If you have a prior that vaccines are safe, and you hear about someone who died soon after being vaccinated, you will say that it was most likely a coincidence. This is a correct conclusion. Then, since this death was a coincidence, you will not update your priors at all. This part is wrong! The event is still a valid bit of evidence. It is something that would be more likely to happen if the vaccine were dangerous than if it were safe. You need to update (a tiny bit, in this case) in favor of "dangerous" - even though you are correctly confident that there was no connection.

Updating based on evidence, and the interpretation of said evidence, are two different processes that need to be kept completely separate. This is something humans are very bad at doing, hence "confirmation bias".
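
A toy version of the vaccine point, just to show the direction and rough size of the update (all three probabilities below are invented placeholders, not real data):

```python
# One death shortly after vaccination, updated via Bayes' rule.
# All numbers are placeholders chosen only for illustration.

prior_dangerous = 0.01              # P(vaccine dangerous) before the report
p_death_given_safe = 0.00010        # chance of dying in that window anyway
p_death_given_dangerous = 0.00011   # slightly higher if the vaccine were dangerous

numerator = p_death_given_dangerous * prior_dangerous
evidence = numerator + p_death_given_safe * (1 - prior_dangerous)
posterior_dangerous = numerator / evidence

print(round(posterior_dangerous, 4))
# ~0.011: a tiny update toward "dangerous", even though any individual
# death was most likely a coincidence.
```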

Expand full comment
Sniffnoy's avatar

Aha, I wanted to say something like this, but you said it better. Great comment. I think this is an important point that isn't really stated explicitly in the post. I'd say it demonstrates that Scott's diagrams are missing a bit of detail, because they can't really show this distinction. In that way, they're perhaps a little misleading; proper Bayesian updating and confirmation bias are made to seem more similar than they actually are.

Expand full comment
Adam's avatar

Part of the issue is I don't see how a human can realistically distinguish between believing there is a 2% chance of some proposition being true versus a 2.5% chance. It's easy to work out the math and develop mechanical systems that can update in very small steps, but human beliefs are quantized at some higher level that makes it impossible to "feel" the difference a small bit of evidence should make, and if you're asking how strongly you believe something, it's very hard to put a real number on it and not just "weakly, strongly, 50/50."

Scott tries with his prediction registration and annual calibration exercises, but this relies on predicting events that for sure either happen or don't in a time bound manner. If someone predicts there is a 2% chance alien civilizations exist, how are we supposed to evaluate the correctness of that number?

Expand full comment
David Friedman's avatar

This is linked to the tendency of a lot of people to confuse evidence with proof. That is probably linked to our preference for bright line rules. For a natural rights libertarian, something either is an initiation of coercion or isn't, there is nothing between. For anyone on either side of the abortion controversy, a fetus at any stage either is a person or isn't. And similarly, this fact either does prove claim X is true or doesn't.

Expand full comment
maline's avatar

Yes, the tendency to treat things as binary is a major contributor to confirmation bias, especially the way we tend to treat our understanding of things as facts, rather than merely the most probable possibilities.

But I think the point I am trying to express is more basic than that. Confirmation bias occurs when we use our priors to evaluate new observations, and then update our beliefs based on those interpretations of the evidence. This allows the prior to be self-reinforcing: evidence is understood in a way that conforms to the prior, and then taken as confirmation of the same prior!

To do Bayesian updating properly, you have two valid options:

1) Refer only to the raw data as "evidence", without taking any interpretation as "correct". In particular, your prior should play no role here.

Your friend has told you that he saw a polar bear near SF. If there was in fact a polar bear there, how likely would he be to say that? How about if there was not? Plug those numbers into Bayes' formula, and only then insert your prior on the presence of polar bears. Conclude that it is still very unlikely that there was a polar bear, but a little less unlikely than you thought. Then, and only then, do you dismiss the story as "almost certainly nonsense" and ask your friend what he's been smoking.

2) Go ahead and interpret things in the way that seems most natural to you, but be aware of the interpretation process, and update based on the likelihood of getting observations that would be interpreted as you have.

You see a dog and are terrified. What does this tell you about the threat level of dogs? Here you do not have access to the pre-interpretation details about the dog's appearance and behavior; your memory of the experience is colored too strongly by your fear. You need to be aware of that. Instead of asking, "if dogs were safe, how likely is it that this one would act so threateningly", you need to ask, "if dogs were safe, how likely is it that I would interpret this dog's behavior as threatening?" If you have cynophobia, that conditional probability is high, so you end up with no update, or maybe even one in the "safe" direction (if the dog seemed a bit less terrifying than you had expected, which would only be likely to happen if it really was quite benign).

The issue is when we judge the evidence, then treat the output of our judgement as an observation. This is problematic even if we are aware that the judgement is only probabilistic; it shouldn't be playing a role in the evidence at all! But as you say, treating the judgement as definite and binary does make things much worse.
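
A sketch of option 2 for the cynophobia case, in odds form (the two "how likely would I interpret it as threatening" numbers are assumptions for illustration):

```python
# Option 2: update on "I interpreted the dog as threatening", not on
# "the dog was threatening". Numbers are illustrative assumptions.

prior_p_dangerous = 0.10
prior_odds = prior_p_dangerous / (1 - prior_p_dangerous)

# For someone with cynophobia, the threatening interpretation is nearly
# guaranteed either way, so it carries almost no information:
p_interpret_threatening_if_dangerous = 0.99
p_interpret_threatening_if_safe = 0.95

likelihood_ratio = (p_interpret_threatening_if_dangerous /
                    p_interpret_threatening_if_safe)

posterior_odds = prior_odds * likelihood_ratio
posterior_p = posterior_odds / (1 + posterior_odds)
print(round(posterior_p, 3))  # ~0.104: essentially no update, as argued above
```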

Expand full comment
Clive F's avatar

In your vaccine example, the update wouldn't change the prior.

Say your prior is that vaccines are safe, at some suitable odds ratio - 99:1 or whatever. Someone dies after their vaccination, as has happened in Europe recently with someone dying of a blood clot.

The update ratio, given someone dying after receiving the vaccine, is P(death|vaccine_safe)/P(death|vaccine_not_safe). But these two probabilities are the same, if indeed the vaccine didn't cause the death.

In this specific case Dr Phil Bryan, MHRA Vaccines Safety Lead said:

"The Danish authorities' action to temporarily suspend use of the vaccine is precautionary whilst they investigate. Reports of blood clots received so far are not greater than the number that would have occurred naturally in the vaccinated population."

So the update doesn't change the odds: P(clot|safe) equals P(clot|not_safe), so the update ratio is 1. It's important to look at the ratio of these two; otherwise you're right, any new evidence would change the odds.

Expand full comment
maline's avatar

This is not correct. P(death|vaccine_safe) is given roughly by the regular death rate for the population, while P(death|vaccine_not_safe) is much higher -- that's more or less what "not safe" means! So yes, every individual who dies after being vaccinated provides a bit of evidence in favor of "vaccine_not_safe".

Fortunately, all the millions who have taken the vaccine and are fine, myself proudly included, give us much stronger evidence that it is safe.

Expand full comment
Clive F's avatar

You're of course right that P(death|not_safe) would be higher if it were, indeed, not safe!

I was making the point (very badly!) that we have evidence that P(death_from_clotting|vaccine) = P(death_from_clotting|no vaccine), so even though you did see a death from clotting, the update doesn't change the odds.

In short, bad stuff shouldn't change the prior, if the bad stuff is happening at the normal rate.

Expand full comment
teddytruther's avatar

I'm not totally convinced by the move from subconscious trapped priors (snap judgments, sense-data pattern recognition, phobias, PTSD, etc.) to conscious trapped priors (bias, beliefs, ideologies, etc.). The former are driven by raw associations and experience, but the latter arise from more sophisticated psychological constructs: identity, temperament, tribal affiliation. Rewiring the former seems doable and in some cases therapeutically useful; rewiring the latter seems both difficult and also perhaps not terribly desirable.

Expand full comment
teddytruther's avatar

To expand a little further: I recently read the graphic novel 'Sabrina', which (no spoilers) depicts the way main characters' sense of identity and sense-making is completely rewired by a jarring experience that violates their priors, with pretty alarming results. Cults, extremist groups, and conspiracy theories are other entities that operate by re-engineering the cognitive sense-making apparatus. In fact, I really can't think of a good example of someone having a massive personality change, unless it is de-programming from these kinds of affiliations.

Expand full comment
RalRosche's avatar

Is it truly wise to start the post by talking about priors in the context of Democrats and Republicans? I understand that the distorted thinking people have around politics is a big problem, but that same distorted thinking also makes it even harder for you to reach anyone suffering from it (i.e. a large fraction of the population) when you start out of the gate with "but what about reasonable Republican arguments." They'd get angry and say that it's entirely justified for them to be aggressively opposed to Republican policies, and list any number of reasons why giving the benefit of the doubt to Republicans is like giving it to Nazis. I actively try to avoid news articles most of the time and I've read and enjoyed several hundred of your posts, and it would be basically effortless for ME to make that list. It's an urge that has to be held back, an urge that, as you say, seems reasonable to a person from the inside. And most people can't hold it back. So why start an important article like that, and lose a fraction of the audience from the start?

I mean, I understand the other failure mode. Give all uncontroversial examples and people will nod their heads along to your point, and then as soon as something controversial comes up they won't apply the lesson to that one at all, acting like it's a different case or just inapplicable. That's a tradeoff, I suppose. But how does the side you chose on this particular tradeoff benefit you? It seems like Democrats vs Republicans was an example you picked because it was easy to reach for, not one chosen to be particularly illustrative. You don't do anything with that paragraph that could not have also been done by substituting in any other ideologies, and almost any other ideologies would be less rage-invoking (less prior-bending?). I don't feel like slipping the Democrats-vs-Republicans thing in at the start does anything meaningful to break political tribalism, but it will negatively tint the perspective of everyone who is not already extremely open-minded.

(Oh, or maybe this is the kind of thing you're doing because your actual goal is to try to win over more republicans to your line of thinking, in which case perhaps it's a more useful paragraph than I'd thought)

Expand full comment
Daniel P's avatar

I shared this in my local buddhist facebook chat, and this guy who I personally find oppressively leftist (homeboy just drops Bakunin quotes into random conversations. it's just leftist peacocking) messaged me directly and said in essence, "Wow I really liked this. Sometimes I really just can't see the dog whistles that my more emotional trump hater friends are upset about, I think this is an interesting explanation."

Which is to say, he might lose some of the audience, but anyone who can't admit that the entire Republican ethos might possibly have at least one better belief or policy position than the Democrats *definitely* has trapped priors, and writing articles for people with trapped priors is (basically by definition) a waste of time.

Expand full comment
RalRosche's avatar

I appreciate the data.

Expand full comment
Dan L's avatar

I guess that element of the banner image is about updating priors instead of the Astral triad of Yesod, Hod, and Netzach after all. Alas, I suppose it was just a coincidence.

>I've sort of lazily written as if there's a "point of no return" - priors can update normally until they reach a certain strength, and after that they're trapped and can't update anymore. Probably this isn't true. Probably they just become trapped relative to the amount of evidence an ordinary person is likely to experience. Given immense, overwhelming evidence, the evidence could still drown out the prior and cause an update. But it would have to be really big.

>Sloman and Fernbach might be the political bias version of this phenomenon. They ask partisans their opinions on various issues, and as usual find strong partisan biases. Then they asked them to do various things. The only task that moderated partisan extremism was to give a precise mechanistic explanation of how their preferred policy should help - for example, describing in detail the mechanism by which sanctions on Iran would make its nuclear program go better or worse.

My take on this is that prompting someone for the mechanistic explanation confirms both that 1) the belief directly serves its holder in terms of expected evidence and 2) there is a *specific* expectation whose negation might be evidence against the belief. When the context is generating predictions rather than commenting on someone else's work, it skews the utility away from belief-as-attire and removes the feedback loops of interpersonal politics that would drive people to extremes.

(Maybe there's also an instinctual element of leaving a line of retreat that becomes easier with moderated beliefs, but I'm not sure if that's strictly necessary for the effect.)

Expand full comment
Aapje's avatar

Unofficial book review survey: https://forms.gle/1aU6BdcAt5ZKit7n7

I'll send the results to Scott.

Expand full comment
The Pachyderminator's avatar

I think you posted this in the wrong thread?

Expand full comment
Aapje's avatar

I'm spamming a bit, sorry :)

Expand full comment
TitaniumDragon's avatar

One issue is that your model may be too simple - in particular, there are more than two inputs.

Phobias are fairly simple - someone has an irrational fear of some stimulus. It makes no logical sense, they may even recognize that it makes no sense, but they still struggle with it.

But when we talk about, say, scientific knowledge, one issue is likely that people become falsely overconfident in their scientific beliefs, because if they are often right about *other* scientific things, that makes it more likely that they're right about *this* thing. That is to say, they think of themselves as "smart science people" who are usually right about most scientific things, so when some scientific question is controversial, they side with whatever their political beliefs tell them to, regardless of the actual evidence. And because of their science knowledge, they falsely end up believing that those beliefs are correct - the science knowledge that is correct gets incorrectly carried over into the biased domain of belief.

This happens with a lot of experts in general, really; people assume they are good at X and therefore know better about Y, even though, in reality, Y is primarily a matter of knowledge and so being good at X isn't actually all that relevant.

This makes it inordinately difficult to change their scientific belief because their brain, rather than correctly applying it back as "you don't understand this thing in particular", ends up trying to apply it back against "you don't understand this thing IN GENERAL", at which point the counterevidence of "I know all this stuff!" comes back in and thus greatly devalues or even reverses the signal.

Someone who isn't so sure about science is going to have an easier time changing their opinion about some of these things if they aren't capable of cognitively isolating the "good at science" thing, because they don't think that they ARE good at science, and so their priors are more easily changed - they don't have the same resistance to changing their beliefs.

I think that people who can overcome this are the people whose view of science is "I am smart about science, so questioning this thing that I/people/the public/my tribe assumed to be true means that I am good at science". This causes the negative reinforcement loop to instead become a positive reinforcement loop, as it means that questioning common assumptions means you are *good at science*.

Of course, there is a downside to this, which is that if you combine this viewpoint with being not actually very good at science, you can end up believing stupid things or else having overly low confidence about everything, because you aren't very good at filtering out bad information.

Honestly, I sometimes wonder if this is why so many people are dumb about some of these things - a lot of the antivaxxers I've met IRL are not vehemently anti-vax; they're just distrustful of vaccines because they know enough to be suspicious of medical studies but not enough to actually be able to apply skepticism correctly to data. They're skeptical of everything, and would, ironically, be better off with a worse basic heuristic, because they aren't actually good enough at "all things should be questioned" to apply it usefully.

The people who trust respected authority figures like Dr. Fauci are better off than they are.

Meanwhile, the people who are *actually* good at this stuff were saying to wear masks before the CDC said to wear masks. My mom bought respirators back in January 2020.

This may be why these sorts of things are actually so common - because having fixed priors is bad, but being incapable of actually learning anything is worse. Someone with relaxed priors who is actually good at this stuff will probably only end up with a few weird beliefs, if any, but most people are better off with the worse heuristic because they're not capable of using the better one.

This would also suggest that using psychedelics is probably a bad idea - permanently trapping people's brains open is going to probably lead to net negative consequences in most cases, as while there is a greater optimum to "open minded + competent" than "closed minded + competent", "open minded + incompetent" is probably worse than either.

Which is probably why most people are closed minded rather than open-minded, but there's enough people who benefit from being open minded that the trait hasn't been removed from the population.

Expand full comment
TitaniumDragon's avatar

Of course, open-mindedness vs closed-mindedness could also just be mediated by early environmental factors. Maybe people who are raised in friendlier environments, where experimentation is less dangerous, trigger more "open-minded" traits, while those in more hostile environments, where experimentation is more dangerous, trigger more "closed-minded" traits.

It could be both - given that openness is heritable, but not overly so, I suspect that it probably is both genetic and environmental.

I do wonder - are phobias associated with lower openness in general? Are people who have PTSD and other such things less "open" in general? If so, that might be indicative that there's some sort of switch that gets triggered in such people.

Expand full comment
Chris Savage's avatar

Thank you so much for laying this out.

I have pondered this phenomenon as a layperson for quite some time; my intuitive thought was that at some point the Bayesian prior is so strongly believed that contrary evidence is taken as an indication that the source of the contrary evidence is unreliable, not that the prior itself might be wrong. I'm not good enough at the math to model that, but any given observation in the Bayesian model doesn't have to be viewed as either true or false; it (like anything else) is assigned a likelihood. But, intuitively, if the NY Times reported (seriously) that the earth was flat, I would not decide the earth was flat, I'd start doubting the NY Times (more than I do; lolz).

FWIW my internal shorthand for this phenomenon is not "trapped priors," which is fine, but rather getting stuck in a "Bayesian Black Hole." If you like the "Bayesian Black Hole" framing, feel free to use it - no attribution needed!

Thanks again.

Expand full comment
Elias Håkansson's avatar

Is it the case that when we're young, our minds run sensation and context through a sensation-weighted algorithm; and once we're older the algorithm switches and weighs context more heavily? Is the context vs sensation-weighted algo thing the same as low vs high neuroplasticity?

Expand full comment
Andy Jackson's avatar

Reading "algorithms to live by' - Brian Christian. He frames it as explore/exploit. If you were going to live in a new city you'd start by checking out a bunch of different restaurants, by your last week you'd only go to your favourite. There is an optimum strategy for the switch from explore to exploit. As you get older, it makes perfect sense to bias towards exploit, it's better bang for buck.

I'd say the change in neuroplasticity is a result rather than a cause; it comes with the major plus side of massively increased neural efficiency (by the laying down of grey matter, IIRC).
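
A toy simulation of the restaurant version of explore/exploit (the restaurant qualities and the length of the explore phase are made-up assumptions, chosen only to illustrate the switch):

```python
# Explore early, exploit late: sample every restaurant a few times,
# then keep going back to the best one found. All numbers are made up.
import random

random.seed(0)
true_quality = [0.3, 0.5, 0.8, 0.6]   # unknown to the diner
n_nights = 60
explore_nights = 12                   # early phase: try everything in rotation

totals = [0.0] * len(true_quality)
counts = [0] * len(true_quality)
total_enjoyment = 0.0

for night in range(n_nights):
    if night < explore_nights:
        choice = night % len(true_quality)                  # explore
    else:
        averages = [totals[i] / counts[i] for i in range(len(true_quality))]
        choice = averages.index(max(averages))              # exploit best so far
    meal = true_quality[choice] + random.gauss(0, 0.1)      # noisy experience
    totals[choice] += meal
    counts[choice] += 1
    total_enjoyment += meal

print(round(total_enjoyment / n_nights, 2))  # close to the best restaurant's quality
```

The shorter the remaining horizon, the less an explore night can pay for itself, which is the intuition behind biasing toward exploit as time runs out.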

Expand full comment
Elias Håkansson's avatar

Yea, it's clear that exploit ultimately is what pays the bills, so to speak. But trapped priors due to narrow sensory bandwidth looks a lot like being stuck in exploit mode. Stuck on autopilot in a sense.

Expand full comment
Dweomite's avatar

You observe that, even with a very strong prior, a "reasonable" update function should be able to notice that the experience was at least marginally less bad than predicted, and slowly update away from the prior. But empirically, this often doesn't seem to happen.

I feel I can very easily imagine a minor and subtle defect in the update function that would cause this failure. For example, suppose evolution "wants" you to update 20% more rapidly on negative stimuli (for similar reasons to why we have loss aversion). The way that *ought* to be implemented is: calculate how far your beliefs would move "normally", then multiply the movement by 1.2 (this will always cause you to move in the same direction as before, just farther). But suppose the *actual* implementation is: multiply the "badness" of the perception by 1.2, and then update on that. If your prior was approximately neutral, this works out to nearly the same thing. If your prior was bad, but the "perception" (weighted combination of sensation and context) was good, then the multiplier isn't used and you update normally. BUT if your prior was bad, AND the perception was only *slightly* less bad, then multiplying it by 1.2 will make it *worse* than your prior, and cause you to update in the wrong *direction*.

That second implementation isn't *defensible*--you couldn't rationally argue it is somehow preferable to the first implementation. But I can easily imagine a lazy programmer doing it without realizing the issue, checking that it gives expected results in some basic tests, and then calling it done.
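
A minimal sketch of the two implementations described above, on a -1 (awful) to +1 (great) scale (the learning rate, boost factor, and example numbers are all assumptions chosen to exhibit the failure):

```python
# Correct vs "lazy" negativity-boosted updating. All numbers are illustrative.

LEARNING_RATE = 0.25
NEGATIVITY_BOOST = 1.2   # update 20% faster on negative stimuli

def update_correct(prior, perception):
    """Intended design: amplify the *movement* when the stimulus is negative."""
    movement = LEARNING_RATE * (perception - prior)
    if perception < 0:
        movement *= NEGATIVITY_BOOST
    return prior + movement

def update_buggy(prior, perception):
    """Lazy implementation: amplify the *badness* of the perception, then update."""
    if perception < 0:
        perception = perception * NEGATIVITY_BOOST
    return prior + LEARNING_RATE * (perception - prior)

prior = -0.8        # "dogs are terrifying"
perception = -0.7   # this encounter was slightly less bad than expected

print(round(update_correct(prior, perception), 3))  # -0.77: slowly updates toward "less bad"
print(round(update_buggy(prior, perception), 3))    # -0.81: moves the *wrong* way

# With a roughly neutral prior the two give nearly the same answer,
# which is why casual testing wouldn't catch the bug:
print(round(update_correct(0.0, -0.7), 3))  # -0.21
print(round(update_buggy(0.0, -0.7), 3))    # -0.21
```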

Evolution is certainly not MORE foresighted than a lazy programmer.

But even that explanation might be overcomplicated. When I read your description of the predictive coding theory of the brain (probably this was in your review of Surfing Uncertainty, though I'm not certain), one detail that REALLY REALLY jumped out at me was when you said the brain compares the sensory input (with error bars) to its prediction (with error bars), and if they match, then it sends THE PREDICTION (not the sensory input!) up to the next-higher layer of abstraction for further processing.

That was just your summary; I don't know how much the evidence supports that detail, or even if you were consciously making the distinction when you wrote it. But if that gray box in your diagrams isn't so much taking a weighted average of the inputs as using some threshold function to decide which input to IGNORE, then that would compactly explain the failure to update in the direction of the sense input.

(Disclosure: I immediately thought that this detail would make sense of a lot of arguments over board game rules where both sides are convinced the rulebook supports their own position, so maybe I was predisposed to believe it.)

Expand full comment
Dweomite's avatar

Upon further thought, the "multiply the badness before updating" example could literally be the same *implementation* as loss aversion. Imagine there's a shared subfunction for "how good or bad is this thing?" that is invoked both in decision-making and in belief-updating, and that subfunction always multiplies negative answers by 2. That might simultaneously explain both loss aversion and trapped priors.

Trapped priors could actually be a *side-effect* of evolution selecting for loss aversion, even if the over-weighting of negative results is a straight loss within the context of the belief-updating system.

(But note I have no idea how plausible this is in terms of actual neural hardware; I'm only thinking in abstract functional terms.)

Expand full comment
Thomas B.'s avatar

The shortcoming of such an approach is that so many - and arguably the worst - biases are not based on direct experiential "priors" at all, but on things like values and sensibilities. So even if one found a way to help people prioritize new "raw" experience, would it even touch the problem? This of course particularly applies to politics.

Having said that, the research on psychedelics does suggest that the more "raw-like" experiences they facilitate lead to a coalescence of a sort around more relational values. Though the quality of these studies hasn't been that great, and likely many participants went into the experience with a particular openness to being changed.

An openness that many if not most of those with "trapped priors" or rigid values are unlikely to share.

Expand full comment
Dweomite's avatar

Arguably a lot of values/sensibilities are actually strong priors in disguise. If you think government transparency is good, that's probably because you believe that transparency makes it easier to pressure government into acting in the interests of everyone instead of just in the interests of politicians. If you think education is good, that's probably because you believe that education somehow improves productivity, safety, happiness, or other more-basic good things.

Maybe you can justifiably be pretty certain of the causality in some of these cases, but if you were truly convinced, say, that government transparency *actually* makes government act in the interests of plutocrats who can spend a lot of resources monitoring them and *less* in the public interest, then that would probably make you feel differently about transparency.

Obviously this has to ground out somewhere, at some values that really are fundamental. But I'm not convinced that any major political disagreements are truly that basic.

Expand full comment
Thomas B.'s avatar

I've never come across any research that backs up what you're arguing, but plenty that suggests the opposite. It's virtually a truism in psychology that most reasons people give for their beliefs and behavior are just rationalizations.

Expand full comment
Dweomite's avatar

...I suppose I'm not intending to argue that people actually make rational or well-reasoned choices in common situations. I just mean that I think most of the values that people invoke to support their policy decisions are instrumental values, rather than terminal values, and therefore IF you could get the person to slow down and think carefully, those "values" are susceptible to argument *in principle*.

And I think that humans do *occasionally* think rationally in specific instances. It's not the normal or default mode of thought, but it's also not impossible.

If you think even that much is still going against the research, then I hope you'll point me to some relevant studies.

Expand full comment
Thomas B.'s avatar

"Most of the values that people invoke to support their policy decisions are instrumental values."

Correction: most of the values that liberal people invoke...

Just look at the last election. While Biden was offering a bazillion policies aimed at solving instrumental problems, what was Trump doing? What do the Republicans do most every election? They harp on terminal values - faith, family, country, etc. This, more than any "trapped priors", is what divides the parties. Democrats (and rationalists) just don't seem to understand non-instrumental, non-rational motivation. They get irrationality (biases, etc.), but not something which doesn't even attempt to partake of it.

And this is not just a US thing. Look at the fascist vs communist battle in pre-WWII Germany for an extreme example of this. Klaus Theweleit has done an exhaustive examination of the writings and historical records of the Freikorps, and in the conclusion to his first volume he sums it up like this:

"Fascist masses may portray their desire for deliverance from the social double bind, for lives that are not inevitably trapping, but not their desire for full stomachs. The success of fascism demonstrates that masses who become fascist suffer more from their internal states of being than from hunger or unemployment. Fascism teaches us that under certain circumstances, human beings imprisoned within themselves, within body armor and social constraints, would rather break out than fill their stomachs; and that their politics may consist in organizing that escape, rather than an economic order that promises future generations full stomachs for life."

From a non-fascist center-right position, Max Scheler has made eloquent arguments also coming from a non-instrumental position. From the intro to a recent collection of his:

"The contradiction internal to humanitarianism, the reason it is a falsification of love, is that even whilst it celebrates humanity it diminishes humans, stunting aspiration for the highest values as it obsesses over welfare. Wanting to manage human needs, humanitarianism restricts love to the goal of coping with the human struggle of embodiment and environment, neglecting the spirit of devotion to the good in the world, never mind the capacity of love to arc beyond worldly concerns to savor the beautiful and holy. Humanitarian welfare, closed to religious perception, and so closed to spirit, and thus persons, administers human life with the only resources it knows: mechanistic, bureaucratic, economic, and industrial. Human persons are betrayed: placed in manageable general categories the dynamism and eccentricity of human personality is lost. Human life is degraded, depersonalized, amidst the humanitarian impulse towards the routinization of need, and thus collectivism."

Obviously there are many others making the same or similar arguments, most notably Adorno and others from the Frankfurt School. But I'm really just skimming the surface of the argument - one of the things that differentiates rationalism from these kinds of arguments is that rationalism, particularly of the instrumental variety, is much easier to talk about and explain. The body of specific examples in the scientific literature demonstrating the difference between "real" motivations and the reasons people give is large and growing (though obviously there are methodological barriers to truly explosive findings).

One of the more popular examples is first dates that take place in high places. Studies show people have a tendency to interpret the anxiety they feel from the vertigo as a sign of attraction. So the "real" reason for their attraction is not the other person but rather the location. This is a pattern - people have a strong tendency to misinterpret internal states or rationalize around them. As Theweleit points out, often these are the "real" reasons - people then invent arguments to make it appear rational. A lot of work around cognitive dissonance, if I remember correctly, touches on this. Haidt has done a lot of work around values and politics. "The Body Keeps the Score" may have touched on that as well. Honestly I'm too lazy and disinterested to do a heavy search, but that's a start. The data is definitely there.

Expand full comment
Dweomite's avatar

Interesting. I'll need to ponder this.

Expand full comment
Nancy Lebovitz's avatar

Yes, a lot of propaganda is about giving people a feeling that they've had experiences pointing in a particular direction. Imagine someone from the outgroup doing something horrible to you or to someone close to you! Imagine how good it feels for your group to have higher status! Or at least high and unquestioned status!

Expand full comment
Chris Perkins's avatar

If one is aware of trapped priors and how they function (such as you've described), can that help make us less susceptible to them? Not necessarily for traumatic things, like the "scary dog", but the more mundane, like political bias and polar bears?

Expand full comment
859552's avatar

A lot of my strongly partisan/conspiracy theory-ish beliefs are things I just kind of choose to believe because it feels good to believe them. But I'm capable of setting them aside in favor of more reasonable beliefs when I need to. It's not really trapped priors or any kind of issue with rational thinking. I'm just choosing in these moments not to think rationally.

Expand full comment
Scott Alexander's avatar

Do you think it's fair to describe yourself as "believing" those conspiracy theories, or are they just fun to think about / fun to act as if you believed them?

Expand full comment
859552's avatar

I suspect -- but can't prove -- that I believe them as much as real (not mentally ill) conspiracy theorists believe their conspiracy theories. The difference is maybe analogous to the self-aware vs not self-aware person with a dog phobia. I can sense the beliefs aren't really rational, but some people can't. It really feels like a belief at the time, though.

Expand full comment
Dweomite's avatar

Eliezer Yudkowsky would say that the test of whether you "really" believe something is not how it feels, but what experiences you anticipate having in the future. For example, if you claim you're immune to fire, but you still wear oven mitts to remove your dinner from the oven, then your belief in fire-immunity isn't actually controlling your prediction of what would happen if you tried it without the oven mitts.

See "Belief in Belief" https://www.lesswrong.com/posts/CqyJzDZWvGhhFJ7dY/belief-in-belief

and "Making Beliefs Pay Rent" https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences

Expand full comment
859552's avatar

I agree. But people also use the word 'belief' for things that don't pass this test. So if you're trying to figure out why someone has an irrational belief, you need to clarify whether it's an actual belief in the Yudkowsky sense, or a belief-like mental state that isn't meant to respond to rational evidence. I suspect that the public is much more rational than we think they are, because a lot of their crazy "beliefs" are not actually beliefs. Anti-vaxxers would be a counter-example, since they actually don't wear the oven mitts.

Expand full comment
Dweomite's avatar

To clarify: Are you saying that your personal beliefs in conspiracy theories do NOT pass Yudkowsky's test?

Expand full comment
859552's avatar

Yes, exactly. And conspiracy theories was perhaps not the right phrasing. For me it's usually more conventional things like "All Trump supporters are racist." Which I definitely don't believe for real, because I have friends who are Trump supporters that I don't think are racist. But in the moment it feels like a real belief.

Expand full comment
Andy Jackson's avatar

Anti-vax actions usually don't burn your hands at the time (& frequently never) so the oven-mittlessness is get-away-able with.

Expand full comment
TitaniumDragon's avatar

Symbolic belief is strange.

Really, a lot of religious beliefs in secular society are symbolic beliefs. People who believe in things like the afterlife and who actually act like the afterlife is real are the strange people, even though ostensibly most of the population believes in such.

But people get just as upset over symbolic beliefs being directly questioned.

How you ask people questions will significantly alter their answers as well.

For instance, if you ask people "Did humans always exist in their present form, or did they evolve over time?", 31% of people will say that humans always existed in their present form.

If, however, you ask them "Did humans evolve over time by natural processes, evolve over time directed by god, or always exist in their present form?" only 18% of people will claim that they always existed in their present form.

https://scienceandbeliefinsociety.org/2020/04/21/are-there-100000000-creationists-in-the-usa/

Expand full comment
Thiago Ribeiro's avatar

"(My usual metaphor is "if God came down from the heavens and told you ... - but God coming down from the heavens and telling you anything probably makes apocalypse cultism more probable, not less.)"

What if God came down from the heavens and told you He doesn't exist?

Expand full comment
Sergei's avatar

Like in all situations like that, it's the time to get checked into a psych ward.

Expand full comment
Andy Jackson's avatar

The babel fish proves that anyway

Expand full comment
Adam's avatar

I often used to think about what it would take to convince me of the existence of "God" in the Euro-classical sense, and I had to conclude that nothing could convince me. God himself coming down from the heavens and taking me back in time to show me how He created the universe and was responsible for everything would be indistinguishable from delusion. Performing miracles would be indistinguishable from technology that is simply unknown to our primitive physics. Perfect predictive accuracy is indistinguishable from perfect simulation. Even if you could convince me somehow that you had supernatural magical abilities that were clearly contracausal and defied all possible physics, there are infinitely many conceivable magical beings who are still not God.

In short, I don't see that there is even possibility of evidence in this matter. If I die and end up in heaven, great, but how is whoever greets me supposed to prove they're really God and not just some sub-God emissary or advanced aliens or even future humans who figured out how to simulate me. It's interesting in a way how The Good Place envisioned this. People went to an afterlife, but it wasn't populated by gods, just immortal celestials who pre-existed humans and they didn't seem to have any better idea than anyone else how the universe came to be in the first place.

Expand full comment
Cakoluchiam's avatar

It sounds like most of your defenses against believing in a god-who-appeared rely on the god simply showing you things. How would those hold up against a classic genie, who demonstrates godhood by imbuing you with the ability to perform miracles of your own choosing?

If such power does not convince you that a being is a god, then what is the meaningful difference between a god and a powerful magician? Is there a way one could impact your life where another could not?

And of course the easy answer to "What would it take to convince me of the existence of God?" is "Altering my mind-state such that I now believe in the existence of God.", which is a miracle that is not outside the scope of the Euro-classical God's powers (and, frankly, could theoretically be performed by sufficiently advanced technology).

So I think the only problem here that can't be proven is "What would it take to have already convinced me of the existence of God?" Which is a question that ceases to be meaningful less than a picosecond after you ask it.

Expand full comment
Deiseach's avatar

"Altering my mind-state such that I now believe in the existence of God.", which is a miracle that is not outside the scope of the Euro-classical God's powers

Yeah, but that raises some theological problems involving free will. Are you being exposed to irresistible grace, which is Calvinist? https://en.wikipedia.org/wiki/Irresistible_grace Some denominations are opposed to this, and some of us haven't made a formal decision on the matter yet 😁

https://en.wikipedia.org/wiki/Congregatio_de_Auxiliis

"The controversy is usually considered to have begun in the year 1581, when the Jesuit Prudencio de Montemayor defended certain theses on grace that had been vigorously attacked by the Dominican Domingo Bañez. That this debate took place is certain, but the text of the Jesuit's theses have never been published. As to those reported to the Inquisition, neither Montemayer nor any other Jesuit ever acknowledged them as his. The controversy went on for six years, passing through three phases—in Louvain, in Spain and in Rome. ...So, after twenty years of public and private discussion, and eighty-five conferences in the presence of the popes, the question was not solved but an end was put to the disputes. The pope's decree communicated on 5 September 1607 to both Dominicans and Jesuits allowed each party to defend its own doctrine, enjoined each from censoring or condemning the opposite opinion, and commanded them to await, as loyal sons of the Church, the final decision of the Apostolic See. That decision, however, was not reached, and both orders, consequently, could maintain their respective theories, just as any other theological opinion is held. The long controversy aroused considerable feeling, and the pope, aiming at the restoration of peace and charity between the religious orders, forbade by a decree of the Inquisition (1 December 1611) the publication of any book concerning efficacious grace until further action by the Holy See."

And of course the Jesuits and Jansenists got stuck into each other over this: https://en.wikipedia.org/wiki/Formulary_controversy

General opinion is that God does not forcibly over-ride human will, though of course He does give and withhold the gift of faith. "Altering your mind-state" is too much like "forcibly over-riding the will". Could He do it? Yes. Would He do it? Answer unclear, come back later.

Expand full comment
Jiro's avatar

This is a fully general argument. Evidence for X is always evidence for very-good-fake-X. At least the fake-God would imply that an alien exists with godlike power, even if it isn't actually a god. Evidence for "homeopathy is ineffectual" doesn't take a powerful alien to fake, just a conspiracy. Evidence for "this company plans to release a self-driving car on Tuesday" could be faked simply by the company lying.

Expand full comment
David Friedman's avatar

This is part of what _Inferno_ by Niven and Pournelle is about. The protagonist is an sf author who dies, wakes up in Dante's inferno, and spends much of the book trying to explain it away in sf terms.

But I don't find it hard to imagine experiences that would raise my subjective probability for the existence of God from very low to above 50%.

Expand full comment
Deiseach's avatar

This is part of the plot of the BBC Radio series "Old Harry's Game" https://en.wikipedia.org/wiki/Old_Harry%27s_Game. Set in Hell, it involves Satan dealing with (amongst others) the Professor who refuses to believe he is dead and in Hell (since he doesn't believe in any of that kind of thing) and instead postulates that he is in a coma after his car accident and simply hallucinating the whole experience.

Expand full comment
ryukafalz's avatar

Interestingly, you see a similar sort of sensitization and desensitization with allergic reactions. After an initial sensitization to certain allergens (from e.g. insect stings), subsequent allergic reactions can be significantly worse. Despite that, allergy shots can also work for those same allergens, with the main difference being the amount of the allergen you're exposed to each time - gradually increasing as you become desensitized to it.

It makes me wonder if the underlying mechanism is similar in each case.

Expand full comment
Joe's avatar

The supplementary materials for the Sloman and Fernbach paper include the prompt they used to ask for mechanistic explanations:

“Now, we'd like to probe your knowledge in a little more detail on two of the political issues. This is the first one. Please describe all the details you know about [the impact of instituting a 'cap and trade' system for carbon emissions], going from the first step to the last, and providing the causal connection between the steps. That is, your explanation should state precisely how each step causes the next step in one continuous chain from start to finish. In other words, try to tell as complete a story as you can, with no gaps. Please take your time, as we expect your best explanation.”

https://journals.sagepub.com/doi/suppl/10.1177/0956797612464058/suppl_file/DS_10.1177_0956797612464058.pdf

Expand full comment
Chris Savage's avatar

So I may have already posted a comment here, but it's not appearing.

This is brilliant.

My intuitive sense is that we assign probabilities not just to priors, but to observations, including the reliability of the source. I generally believe the NY Times, but if they reported that the earth was flat, I'd lower my estimation of them rather than decide that the earth isn't round. In this way a sufficiently strong prior can overwhelm contrary evidence by degrading belief in the observation.

I call this a "Bayesian Black Hole." If you like the term, feel free to use it.
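Here is a minimal sketch of how that can play out numerically (my own illustration with made-up probabilities, assuming the update is done jointly over "the claim is true" and "the source is reliable"): with a strong enough prior against the claim, the report mostly destroys trust in the source while barely moving belief in the claim.

```python
# Illustrative sketch of a "Bayesian Black Hole": joint update over
# "claim is true" and "source is reliable". All numbers are assumptions.

def joint_update(p_claim, p_reliable,
                 p_report_if_reliable_true=0.95,    # reliable source reports true claims
                 p_report_if_reliable_false=0.001,  # reliable source rarely reports falsehoods
                 p_report_if_unreliable=0.20):      # unreliable source reports regardless of truth
    """Posterior P(claim true) and P(source reliable) after the source reports the claim."""
    joint = {}
    for claim in (True, False):
        for reliable in (True, False):
            prior = ((p_claim if claim else 1 - p_claim)
                     * (p_reliable if reliable else 1 - p_reliable))
            if reliable:
                like = p_report_if_reliable_true if claim else p_report_if_reliable_false
            else:
                like = p_report_if_unreliable
            joint[(claim, reliable)] = prior * like
    z = sum(joint.values())
    post_claim = (joint[(True, True)] + joint[(True, False)]) / z
    post_reliable = (joint[(True, True)] + joint[(False, True)]) / z
    return post_claim, post_reliable

# "The NY Times says the earth is flat": tiny prior on the claim, high prior trust.
post_claim, post_trust = joint_update(p_claim=1e-6, p_reliable=0.9)
print(f"P(claim) = {post_claim:.2g}, P(source reliable) = {post_trust:.2g}")
# Belief in the claim stays near zero; trust in the source collapses instead.
```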

Expand full comment
David Bannister's avatar

Formal debate training, where one must research a proposition and then be prepared to argue persuasively either side, was once a common requirement in education. It helped foster critical thinking. Similarly, the Catholic Church adopted the concept of a “Devil’s Advocate” to argue against the elevation of a person to Sainthood. I have found, and there is much written about the technique, that formally designating a “Devil’s Advocate” helps break down biases in decision making. It helps eliminate the “you can’t talk about that” factor.

Expand full comment
David Friedman's avatar

And as evidence of how much we need to eliminate it, a Harvard professor submitted an article arguing that the WWII "comfort women" were not in fact seized and enslaved by the Japanese army, they were simply prostitutes hired by the Japanese army. Presumably some people who objected actually looked at the evidence for and against, but the overwhelming reaction was that claiming that was outrageous and the article shouldn't be published.

Expand full comment
Yotam's avatar

Van der Bergh et al. see what they are studying as "Better Safe Than Sorry" information-processing strategies. Carrying that over to trapped priors, the question is why it would be unsafe to release the prior and allow it to be updated. What safety is preserved by keeping that prior trapped?

As a coach, I have consistent success asking people what's at risk if they were to reconsider certain deeply held beliefs, and asking if there are better ways to mitigate or tolerate that risk than the strategies they've been pursuing. (Scott's Mental Mountains post nods at similar interventions.)

This only works if the client is suffering in some way because of their held prior, and has reason to question it. But it could work for rationalists as well: asking "What's at risk for me if I were to change this belief?" may draw our attention to where our own reasoning might be motivated.

Expand full comment
Dweomite's avatar

That sounds similar to "leaving a line of retreat" https://www.lesswrong.com/posts/3XgYbghWruBMrPTAL/leave-a-line-of-retreat

Expand full comment
anzabannanna's avatar

This is a great write-up, but I think it overlooks how widespread this phenomenon is among the public. It's easy to see how it can occur in political partisans and people with phobias, but what is rarely discussed is how powerful the phenomenon is on culture-war topics, even in otherwise extremely rational people. For example, Hacker News and /r/SlateStarCodex are two communities with very high concentrations of rational people. However, if certain topics come up, individuals in these communities are, almost without exception, unable to keep it together and engage in a logically and epistemically sound conversation, free of rhetoric. It seems that people are simply unable to practice the Socratic method on certain topics.

I believe here lies an excellent topic for some research. Typically, studies tend to focus on the average person, who usually knows they are involved in a study. I think it would be interesting if a way could be found to seed deliberately controversial articles directly into such communities, have researchers observe how these rational people behave irrationally (including things like mind-reading and predicting the future), and perhaps even intervene to see if any way can be found to break them out of the illusion. And even if that process were a failure, see what the reaction is when they are let in on the fact that they were the subjects of a study and shown a copy of their comments, with logical criticisms noted alongside each one. If this could be done by someone the community members respect, I think it could be very informative, and possibly lead to some form of a saleable approach.

Expand full comment
tailcalled's avatar

"In the old days, psychologists would treat phobia by flooding patients with the phobic object. Got cynophobia? We'll stick you in a room with a giant Rottweiler, lock the door, and by the time you come out maybe you won't be afraid of dogs anymore. Sound barbaric? Maybe so, but more important it didn't really work. You could spend all day in the room with the Rottweiler, the Rottweiler could fall asleep or lick your face or do something else that should have been sufficient to convince you it wasn't scary, and by the time you got out you'd be even more afraid of dogs than when you went in."

This sounds odd; why would they apply a treatment that didn't work? Furthermore, when I looked it up, a lot of pages claimed that the treatment did work, with some even claiming that it was highly effective.

Expand full comment
JohanL's avatar

Plenty of older psychiatric treatments were _insane_, like sewing a depressed person into a bag with red ants in it to "cure" them. At this point, I'm pretty sure we can discount the idea that empiricism and interest in results had anything to do with it.

Expand full comment
Zubon's avatar

If you're interested in how doomsday cults respond to the lack of doomsday, Stephen O'Leary (at USC until he died last year) had research and a few books about that. He had put a group together to watch the world not end in the year 2000. I remember reading some of his "Arguing the Apocalypse: A Theory of Millennial Rhetoric" at the time, but I never followed up after the world failed to end.

Expand full comment
Johann Bererund's avatar

There's some evidence that, when someone has a relatively strong position in favor of X, ridiculous over-the-top propaganda from pro-X extremists can be more effective in making the pro-X belief less strong, compared to a reasonable argument or well-designed study refuting X. I remember reading about a study of attitudes about Israel/Palestine that came to that conclusion: people who supported Israel were made more extreme in their position after viewing a reasoned argument in favor of an independent Palestinian state, but they became more moderate after viewing over-the-top crush-the-Palestinians propaganda. Unfortunately I can't find the study now.

I know that for me personally, there are many cases where extremism from my own political camp was more effective in moderating my political beliefs, compared to arguments from the "other side."

Maybe this works because, since the content at first glance appears to confirm your beliefs, you don't reflexively decrease the weighting on the sensation compared to priors? I would suspect that this only works if you believe the extreme ridiculous pro-X propaganda to be genuine; if you think it's actually a straw man from the anti-X side, your pro-X prior will only be reinforced.

Expand full comment
hymneminium's avatar

It might work because it makes your side look bad. You reflexively move away from an opinion you don't want to associate with.

Expand full comment
Witness's avatar

I think it works in this example because most people who think of themselves as, to continue the example, pro-Israel don't want to crush Palestinians, but also don't want to feel like they are rewarding bad behavior on the part of Palestinians (or pro-Palestinian people) with concessions.

Seeing a moderate argument in favor of an independent Palestinian state leads to "once the other side takes the appropriate de-escalation steps I'm on board with just that, and since they [aren't unilaterally doing that / didn't do that at this moment in time I can point to / fill in the blank] they don't really believe this moderate-sounding argument I just heard."

Seeing someone supposedly on your own side behaving badly should at least sometimes engender sympathy for honest members of the other side being unable to police their team at 100%.

Expand full comment
Witness's avatar

Or, to reframe what I said above, when observing the Other, you may not be correctly diagnosing which of their priors is "stuck".

Expand full comment
Witness's avatar

like, "why aren't you just doing [X], it's so simple!" is an amazingly common and stubborn prior, especially among people who haven't actually tried to do X recently.

Expand full comment
DoraTheExplorer's avatar

Scott, I have some articles on priors that are relevant to this post, but cannot find the email to reach you at. Please let me know which is best so I could send them to you!

Expand full comment
Kenny's avatar

Is there some reason why you can't just link to them in a comment? Or are these articles whose copyright you don't own?

Expand full comment
DoraTheExplorer's avatar

They're from a journal behind a paywall, if I recall correctly, so I have the PDFs through my university but can't link to the full papers.

Expand full comment
Kenny's avatar

I thought that might be the case. Can't you just link the articles on SciHub?

I remember reading a recentish post (tho they're all kind of 'recent' here, on Substack, for me) where Scott mentioned how to contact him via email. If there's no Substack search, maybe try searching for `site:astralcodexten.substack.com email`.

Expand full comment
DoraTheExplorer's avatar

Thanks! Sometimes Sci-Hub links end up broken, so I didn't wanna risk it. But Scott's book review post has his email, it turns out, so I will use that (and if you'd like the articles too, just leave your email below and I can share them with you as well :) )

Expand full comment
Belobog's avatar

I think part of the story that might be missing is that your priors can feed back and influence what sensations you even collect. Consider the fisherman who never casts a line into a certain part of the river because he "knows" that the fish never bite there and it would just be a waste of time. I wonder if human attention is gappy enough that positive interactions with a dog are literally invisible to the cynophobe.

Expand full comment
Stephen Pimentel's avatar

What's going on with psychedelics in regard to priors seems pretty straightforward: they potentiate Bayesian updating.

It just turns out sometimes that's truth-directed, and sometimes it's not. This inconsistency of effect is not because of anything about psychedelics. Rather, it's simply because Bayesian updating isn't an oracle for truth.

Rather than seeking a "pro-rationality" intervention, one should seek practices of good mental hygiene, including some of those Scott mentions. The effective model is more like pursuing good diet and exercise, and less like debugging code.

Expand full comment
Erlank Pienaar's avatar

This is a similar situation for NDEs. People die and ‘come back’ different. Why? Their dying causes a release of trapped biases about who they are supposed to be. A break in the narrative. Death exposes the absurdity of our notions about life and makes obsolete the fears that previously had such a hold over us; having experienced the ultimate ‘happening’ of dying, what we previously feared seems absurd.

Expand full comment
the.jazzhole's avatar

This is one of many reasons why we should be using the word ‘Steelman’ as much as possible. When it comes to trapped priors in the context of personal beliefs, political or otherwise, having to steelman the argument of the person you disagree with will probably help in reducing the strength of your priors (assuming you are actually capable of granting charity to the opposition). I’m so sick of people assuming the worst of members of the outgroup.

Expand full comment
jnlb's avatar

This thing—The Trapped Prior—is exactly the concept I've been looking for. This brings together a chunk of old rationalist material, and arguably The Cowpox of Doubt (https://slatestarcodex.com/2014/04/15/the-cowpox-of-doubt/) discussed the negative version of trapped priors, i.e. psychological inoculation. After hearing a large number of poor arguments for P, you will evaluate even objectively strong arguments for P as poor. This is it: The Trapped Prior has afflicted you. Your own thinking cannot recognize it; you will feel like "I evaluated this argument fairly, as I evaluate all arguments, and it's clearly a bad argument."

Another interesting line of thought is whether or not the Ethnic Tension-style arguments (https://slatestarcodex.com/2014/11/04/ethnic-tension-and-meaningless-arguments/) somehow act on people's ability to evaluate policy proposals by creating a context such that policy proposals for a certain position will get evaluated more positively.

Expand full comment
Arrivedierchi's avatar

SJWs begin the trapped prior process for people until they can no longer perceive wolves.

Were you surprised on 1/6? Perhaps it's because of your trapped priors around leftism.

Expand full comment
Daniel Böttger's avatar

Another possible method to explore is to pretend to believe in an opposing prior. Like a Democrat could pretend to be a Republican, or a cynophobic person could pretend to love dogs. I imagine a playful, humorous setting where you start out with a ludicrous parody of the position opposed to the one you're stuck in and end by laughing about it. Then maybe extend it for longer periods of time or try to make it more "realistic". And only after you have some familiarity with the role, bring in the experience and try to look at it the way this role you're playing would.

Has that kind of thing been done in psychiatry?

Expand full comment
​​​​'s avatar

You're assuming exactly two inputs per evaluation to the update function, and also that the function is itself immutable. Do you trust those assumptions? Should you?

Expand full comment
Russell Hogg's avatar

On the subject of overcoming phobias, there was a UK TV hypnotist/NLP practitioner called Paul McKenna who claimed to be able to cure people's phobias within half an hour or so. You can look him up on YouTube. He is not the most charismatic person, but I do remember watching a person who went literally white in the face on seeing a dog a hundred yards away, and within the hour they had been transformed into someone happy to have a dog leap up and lick their face. It was quite remarkable and I don't think it was faked.

Expand full comment
Andy Jackson's avatar

Scott said: "...self-serving bias. People are more likely to believe ideas that would benefit them if true... I suspect that these are honest beliefs."

The Elephant in the Brain (Simler/Hanson) explanation: it is a Darwinian advantage to sincerely believe one's own lies, so we've evolved to be really good at it.

Expand full comment
Nicholas Weininger's avatar

Not only that, but one might speculate that trapped priors are a side effect of the mechanism that makes it easy for us to durably believe our self serving lies: i.e. the resistance to updating on evidence that undermines those useful lies might also confer resistance to updating less useful beliefs.

Expand full comment
JenniferRM's avatar

On the therapeutic side, it seems like being in a room with a Rottweiler while having cynophobia might be unpleasant enough that "dogs cause unpleasantness" gets updated by the experience... but naive mechanistic reasoning suggests that ameliorating contextual factors (like body armor, or the dog being chained up, or intervening protective glass, or eating ice cream, or all of these factors at the same time) could modulate the process.

This feels consistent with a strongly felt intuition of mine that a FUNDAMENTAL pre-condition for learning, and thinking creatively, in general, is a (basically justified) feeling of "safety".

The most effective protection I can personally imagine is simply "an adequately powerful agent close enough to ensure my safety". Also, I think it might be common for people to have phobias for things a parent encouraged them to be afraid of as children (roughly) because the parent thought the fear would be more helpful than harmful?

When I google the plural with ["trapped priorS" attachment style] I find nothing. Modifying to search for the singular form gives a tiny number of ecology papers about wild animals being "trapped" "prior" to some measurement or treatment that the research paper is focusing on. I'm wondering if "trapped priors" is your own name for this? Are there other keywords for this that might turn up other veins of related research?

Related to parental judgement that a phobia might be good to install... I've talked with people who were phobic about various critters, and they basically have never been interested in curing their phobia. It is like they "want to find something bad tasting" because then they'll eat less of it, which would be good if it is objectively bad to eat? Except they are reasoning this way about fear and avoidance instead of taste and food.

This helps me make sense of why "talking through the mechanistic steps" around a big vague fear might cause people to become more mentally flexible: maybe visualizing more intervening details allows them to imagine a different action than "shunning the first step" which could ALSO protect them from the very last domino falling over in a way that they think will cause objective harm?

If they thought the fear was their only good strategy for avoiding harm, they might cling to the fear.

Expand full comment
Jimmy's avatar

"This feels consistent with a strongly felt intuition of mine that a FUNDAMENTAL pre-condition for learning, and thinking creatively, in general, is a (basically justified) feeling of "safety"."

This is *exactly* in line with my thinking on the topics. I think it basically boils down to "safety, respect, and attention".

There are some impressive demonstrations where you can see people's phobias being cured in minutes (as in, actually see the difference before and after, as well as the moment of change, and the guy calibrating to her responses), like this one and the 25-year follow-up.

https://www.youtube.com/watch?v=mss8dndyakQ

https://www.youtube.com/watch?v=TjjCzhrYJDQ

However, if you can create a context where the person knows they're safe enough to play and takes you seriously enough to follow your direction of attention, then it really is as simple as asking "How dangerous is it?" and you can get similar results just as quickly without it even seeming noteworthy.

It's not that "people are fundamentally irrational, so what tricks and techniques can we use to combat this inherent irrationality?", it's "people are fundamentally rational but not made out of magic, so when people don't learn the things we think they should, which fundamental preconditions are we failing to fulfill that leads them to not believe us?".

Expand full comment
S_James's avatar

I think self-serving bias, along with the rich & powerful having leverage to change the political/economic system, is enough to explain cost disease.

The longer everything exists in some kind of stable state the more people with power figure out ways to corrupt the markets & bureaucracy to their benefit. Citizens United, Surveillance Ad Tech, Corrupt Unions, CEOs making 300% more than workers. There won't be a single explanation because it's simply processes being corrupted by people with power who likely are deceiving themselves that this is a good thing.

Expand full comment
dogiv's avatar

"It can happen (in theory) even in someone who doesn't feel emotions at all. If you gather sufficient evidence that there are no polar bears near you, and your algorithm for combining prior with new experience is just a little off, then you can end up rejecting all apparent evidence of polar bears as fake, and trapping your anti-polar-bear prior."

I don't see how this can be true. If you're very sure there aren't polar bears, and then you keep seeing bits of white fur and tracks that look like they could be from a big bear, you might not notice them at all, or you might be confident they're fake, or that the fur is from a dog. But none of that can make you MORE confident that there are no polar bears.

The only way evidence of polar bears could make you more certain there are no polar bears is if it convinces you that, say, someone is out to get you by faking signs of polar bears to make you look crazy. In other words, an emotional situation.

Expand full comment
Dweomite's avatar

I think you're overlooking the part of your quotation that says "and your algorithm for combining prior with new experience is just a little off".

If your update function is fully rational, then evidence for X will never make you update in the direction of not-X, whether emotions are involved or not.

But if your update function is slightly flawed, where weak evidence for X can be misinterpreted as evidence for not-X, then you can get a trapped prior...again, whether emotions are involved or not.
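For what it's worth, here is a toy sketch of that kind of "just a little off" rule (my own construction, not a claim about how brains actually do it): the evidence the agent registers is a blend of the raw evidence and what the prior expected, so once the prior against X is strong enough, weak evidence for X gets registered as evidence against X and the belief ratchets the wrong way.

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def biased_update(p_x, raw_evidence, blend=0.7):
    """One 'slightly off' update.

    raw_evidence is the log-likelihood-ratio in favour of X that the senses
    delivered. The flaw: what gets registered is a blend of the raw evidence
    and the prior's own expectation, so a strong prior against X can flip
    the sign of weak evidence for X.
    """
    expected = logit(p_x)                      # what the prior "expects to see"
    registered = (1 - blend) * raw_evidence + blend * expected
    return sigmoid(logit(p_x) + registered)

p = 0.01                                       # strong prior against X
for _ in range(6):
    p = biased_update(p, raw_evidence=1.0)     # weak evidence *for* X every time
    print(f"{p:.3g}")                          # P(X) keeps shrinking: the prior is trapped
```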

Expand full comment
dogiv's avatar

Any examples of this kind of thing in real life?

Expand full comment
Dweomite's avatar

Could you rephrase that question, please? I'm not sure I understand what you're asking.

Expand full comment
dogiv's avatar

I can't think of any examples of situations where a flawed "algorithm" for updating on evidence causes an observation to result in an update away from the evidence, unless it's an emotional response.

Let's say, for example, that I'm pretty sure (80% confident) that inflation will be more than 2% this year, and I expect it will be about 0.17% each month. Then the data for February comes out and it's just 0.1%. Somehow, due to a flawed algorithm, that results in me becoming 90% confident that inflation will be over 2%.

Nobody makes this kind of mistake. The common mistake would be confirmation bias, where you say this contradictory data doesn't matter, and you point to some other evidence that goes the other way ("so-and-so wrote an op-ed about how inflation is about to increase!"). Not literally "I'm even more certain because of this evidence that seems to contradict me."

Expand full comment
Dweomite's avatar

I'm not sure how one would establish that emotions were NOT involved in any specific example of human behavior. Humans have emotions about pretty much everything.

But I'm reminded of governor Earl Warren's congressional testimony about the risk of Japanese sabotage during WW2 (which I only know about because of a lesswrong post here https://www.lesswrong.com/posts/mnS2WYLCGJP2kQkRn/absence-of-evidence-is-evidence-of-absence which quotes Robyn Dawes’s "Rational Choice in an Uncertain World"). He argued that the total lack of any sabotage or espionage SO FAR was evidence that there WAS a real threat. So that's an example of someone citing evidence to support an obviously-contrary conclusion. And it's not the "I know this doesn't make sense, but I can't help it" thing you sometimes get with phobias, but a seemingly sincere belief that he's reasoning correctly.

(I also hope that the mere *mathematical possibility* of defining a system that updates wrongly is obvious, and we are only discussing whether humans actually have such a system, not whether it is theoretically possible.)

Expand full comment
dogiv's avatar

Thank you, that is a great example. I think there are two other possible ways we could interpret it (though maybe your view, that he updated backwards, really is the correct one).

First, my impression in many cases like this one is that the causality is figurative. The person had already made up their mind--they were unjustifiably nearly 100% certain. Failure to think probabilistically is unfortunately a very common error. So then the claim "this evidence makes me even more certain" should be viewed as an attempt to convince others of the "obviously correct" answer. The observation is consistent with the "correct" answer (as all plausible observations must be, if you are effectively certain of your view) and that's kind of similar to being evidence in favor of it.

The second interpretation is specific to this case--Warren may have been arguing not that lack of sabotage was a reason to expect sabotage, but that delayed sabotage was a reason to be more worried about the severity of the effects when it did happen. He said things like "The only reason that there has been no sabotage or espionage on the part of Japanese-Americans is that they are waiting for the right moment to strike" and (paraphrased) "the lack of sabotage was an eerie sign, indicating that tightly disciplined Japanese Americans must be quietly planning some sort of massive, coordinated strike" [1]. If you already "knew" that sabotage was very likely, this reasoning would kind of make sense.

[1] http://content.time.com/time/magazine/article/0,9171,149131,00.html

Expand full comment
SelfDiagnosedWiseMan's avatar

Comparing phobias and political/cognitive biases is an interesting exercise. People with phobias may recognize they have an abnormal condition and can seek or accept help. People with unassailable political convictions often don't recognize their state as a problem, even when their beliefs are relatively fringe and impose real costs like the loss of family and friend relationships (QAnon, cults).

The difference is obviously in the nature of sensory evidence: after all, one can see dogs being walked everywhere by unafraid people, but it's impossible for a bystander to directly experience an unjust death penalty. Generally, the only way to absolutely know a person is innocent is to be that person.

I'd theorize that in cases of abstract/intellectual issues like politics, the "raw experience" channel is fed by the imagination, not the senses, and the imagination is essentially just a remix of context priors, plus whatever one is trying to imagine. So context in => context out, and beliefs won't move away from context.

Then why would argument ever work on anyone? Presumably when one can imagine their conversation partner's specific argument more easily, allowing a path for non-prior concepts to enter the process. This naturally works better when the incoming concepts more closely agree with one's priors, or possibly for individuals that naturally give their priors low weights.

I'm particularly tempted to think that a link between low prior weights and a flexible imagination might also correlate with some people's ability to think much more abstractly than others.

Expand full comment
Cat Jackson's avatar

You're conflating two things with very different mechanisms of formation because they both present similarly as "trapped priors". Cynophobia (or many traumas really) can result from a single bad experience and is generally more context specific. It's the result of one-shot learning. A rat learns the association between the bell and the shock in as little as one trial: the limbic system at work.

Political biases are the result of much more evidence accumulated over a much longer period of time, are far more context-independent, and are the trigger *to* intense emotion rather than being triggered *by* intense emotion. A much better model would be that of habit formation. The mechanisms of learning/unlearning are far less plastic in this case, responses are far more reflexive and the emotional component is secondary.

Expand full comment
mordy's avatar

Just for fun I made a quick spreadsheet to simulate what would happen if your priors updated slowly.

Say you're a perfect Bayesian agent embodying Bayes' theorem. You start out very certain that all dogs are scary, and it's very unlikely that happy puppies even exist, but then you see an obviously very nice puppy. You'll update immediately from 99.9...% confidence that dogs are scary, to something like 1% odds that all dogs are scary.

But if you modify the update such that the experience of seeing a happy puppy only updates your prior half as much as it should, it takes you ~4x as many exposures to the happy puppy before you reach the "ideal Bayesian agent" confidence that happy puppies exist.

And if your prior only moves 10% as much as it should, it takes you *~25x* as many exposures.
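For anyone who wants to poke at the numbers, here is a rough reconstruction of that kind of spreadsheet (my own sketch, not the original file; the likelihood ratio of 20 and the "move only a fraction of the way toward the full Bayesian posterior" damping rule are assumptions, and different damping rules will give different multipliers):

```python
# Sketch: how many nice-puppy exposures until P(dogs are scary) drops below a
# target, for a full Bayesian updater versus damped ones.

def bayes_step(p_scary, likelihood_ratio=20.0):
    """Full Bayesian update after seeing one obviously nice puppy.

    likelihood_ratio = P(nice puppy | dogs not scary) / P(nice puppy | dogs scary).
    """
    odds = p_scary / (1 - p_scary)
    new_odds = odds / likelihood_ratio
    return new_odds / (1 + new_odds)

def damped_step(p_scary, damping):
    """Move only `damping` of the way toward the full Bayesian posterior."""
    return p_scary + damping * (bayes_step(p_scary) - p_scary)

def exposures_needed(damping, start=0.999, target=0.01):
    p, n = start, 0
    while p > target:
        p = damped_step(p, damping)
        n += 1
    return n

for d in (1.0, 0.5, 0.1):
    print(f"damping={d}: {exposures_needed(d)} exposures")
```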

Expand full comment
Sergei's avatar

Given my old days in a neuroscience lab, I would conjecture that a trapped prior can be facilitated by long-term potentiation and long-term depression (https://en.wikipedia.org/wiki/Long-term_potentiation) in whatever synaptic connections that are involved, creating a lasting link that is resistant to changes in stimuli. If so, then a more promising way to change the behavior would be the pharmaceutical or therapeutic interventions that reverse LTP/LTD in specific brain regions, like maybe an NMDA receptor blocker (memantine?), and whatever EMDR does, possibly. And then we would finally get a handle on the addiction to unhealthy Vietnamese noodles, and other pho-bias.

Expand full comment
Onzyp Q's avatar

The “if God came down from the heavens and told you” scenario plays out amusingly in “The Oven of Akhnai”, a story in the Talmud.

To paraphrase, two rabbis are arguing over whether a new type of oven is kosher. Eventually Rabbi Eliezer gets frustrated that his arguments are failing to persuade Rabbi Joshua, and he declares, “Look, if my interpretation of the Law is correct, this carob tree will move.” At once the tree leaps a couple of yards to the left and re-roots itself. But Rabbi Joshua just replies to the effect that strange trees hopping about is no basis for a system of determining ritual purity.

So Eliezer says, “Well then, if I’m right, the river will reverse its course!” And the river promptly turns and flows toward its source. But Joshua remains unimpressed.

“OK,” says Eliezer, “If I’m right, the walls of the yeshiva will tumble to the ground!” And they do start to fall - until Joshua scolds them for getting involved in a halakhic dispute without proper training. (The walls then hedge their bets by standing up again, but not quite perpendicular.)

Finally, Eliezer has had enough. “If I am right,” he cries, “the very Heavens will declare it!” And from the sky an immense voice booms out, “ELIEZER’S RIGHT, ALRIGHT? JUST GIVE IT A REST ALREADY.”

But Joshua replies simply, “Torah is not in Heaven.”

You’d think that this extraordinary assertion of the primacy of evidence over the highest conceivable authority might have licensed an early flowering of scientific reasoning in Jewish culture... but sadly the idea doesn’t seem to have been extended much beyond, “We should bicker incessantly about liturgical texts.”

https://en.wikipedia.org/wiki/The_Oven_of_Akhnai?wprov=sfti1

Expand full comment
TGGP's avatar

I heard about that from David Friedman, who used it to contrast the single school of Jewish law with the multiple schools of Islamic law: http://daviddfriedman.blogspot.com/2010/07/furnace-of-akhnai-story-and-puzzle.html

Expand full comment
Jack C's avatar

Simulated annealing is a simulation of real annealing, which is a metallurgical technique. After beating a piece of metal for a while, some of the atoms in the original piece get dislocated. This means atoms actually get pushed around and crammed into open spaces in the crystal lattice. This makes the piece slightly denser and also more brittle.

To overcome the brittleness, the piece is briefly reheated. This causes some of the connections around the dislocations to break and new ones to form. This makes the piece less brittle while also preserving the atoms-forced-into-nooks-and-crannies density that came from working the piece. The final piece is denser than the original yet not as brittle as it was before annealing.

Simulated annealing is a technique in computing which is designed to escape the local extrema an optimization algorithm might get trapped in. It takes lots of forms, but a common one is to speculatively "teleport" your current position in the space you are optimizing to another semi-random location. This briefly makes the optimization worse, in the hopes that further optimization from this new location will yield a better global result than the previous optimum. This is analogous to reheating the metal and allowing it to recool, with the energy of the metal's configuration playing the role of the optimizer's cost function and the temperature controlling how readily worse states are accepted.

Annealing in both contexts is a process of making a crystallized structure more plastic in the hope that it recrystallizes into a better form than the initial one.
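For the curious, a bare-bones version of the computing technique looks something like this (a generic textbook-style sketch, not tied to any particular library; the cost function and cooling schedule are arbitrary choices):

```python
import math
import random

def cost(x):
    # A bumpy 1-D function with several local minima (global minimum near x ≈ -0.5).
    return x * x + 10 * math.sin(3 * x)

def simulated_annealing(x0, temp=10.0, cooling=0.995, steps=5000):
    x, best = x0, x0
    for _ in range(steps):
        candidate = x + random.gauss(0, 1)           # propose a random nearby move
        delta = cost(candidate) - cost(x)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature falls.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            x = candidate
        if cost(x) < cost(best):
            best = x
        temp *= cooling                              # gradually "cool" the system
    return best

random.seed(0)
print(simulated_annealing(x0=8.0))   # starts stuck on the far side of several bumps
```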

Expand full comment
E Dincer's avatar

Here in Amsterdam University there's this professor Merel Kindt (https://www.uva.nl/en/profile/k/i/m.kindt/m.kindt.html?cb) who's doing the not-gradual-barbaric-exposure followed closely by propranolol and says that works wonders for both phobias and PTSD. Here is her clinic's website: https://kindtclinics.com/en/

I'm an engineer and not a doctor/psychologist/etc., so I wouldn't know a thing, but the theory kind of makes sense, and your post reminded me of it; the underlying mechanics seem similar.

I tried to convince my girlfriend to go there for her extreme dog phobia, but even the thought of the therapy triggers her trapped priors, so we couldn't try that yet. That's kind of a recursive trapception we need to get ourselves out of. One last detail: when she's really drunk she loses the phobia (and pets dogs), but it comes back once she's sober.

Here is an American newspaper link to the clinic:

https://www.washingtonpost.com/news/morning-mix/wp/2015/12/15/scientists-say-theyve-found-way-to-cure-fear-of-spiders-in-2-minutes-could-they-also-cure-ptsd/

Expand full comment
Peter Gerdes's avatar

Seems to me you are kinda confusing two different models. On the emotional model we are imagining an agent who engages in some kind of reward-based learning. If you allow feedback from the model to the learning function, you can easily see how it would end up in a state where objectively OK experiences were actually interpreted negatively.

On the Bayesian rational updating model there shouldn't be any means for a high prior to trap your beliefs. And that's not just an assumption of perfection; it seems like you'd have to intrinsically build in this kind of bad behavior.

The problem is simply that the more certain you are that *all* dogs are horrible and dangerous, the higher your prior should be that the dog will attack you, and thus the greater the update you should make when the dog doesn't bite. You can't explain why the individual would keep predicting a high chance of dog attacks even though they don't happen.

I suspect that if you actually ask people, they won't report crazy high priors for any concrete prediction. Thus, I think the takeaway should be that this case is explained pretty well by the emotional reward idea and not so well by the pure Bayesian prediction model (unless you are predicting your own emotional reaction, in which case you've just grafted it onto the reward model).

Expand full comment
Mary Pat Campbell's avatar

This semi-relates... I have said to people in the past that if their priors are either 0 or 1, they never get updated.
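A quick numerical check of that point, as a minimal sketch: plug a prior of exactly 0 or 1 into Bayes' theorem and no finite amount of evidence moves it.

```python
def posterior(prior, like_if_true, like_if_false):
    """Standard Bayes update for a binary hypothesis."""
    return prior * like_if_true / (prior * like_if_true + (1 - prior) * like_if_false)

for p in (0.0, 0.5, 1.0):
    # Even 99-to-1 evidence leaves a prior of exactly 0 or 1 untouched.
    print(p, "->", posterior(p, like_if_true=0.99, like_if_false=0.01))
```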

Expand full comment
duck_master's avatar

(Isn't this just Cromwell's rule?)

Expand full comment
Mary Pat Campbell's avatar

...evidently, yes. (I didn't know it had a name -- thanks!)

Expand full comment
State of Kate's avatar

I don't think psychedelics are the only thing that can assist in changing priors more rapidly; really, any mind-altering substance that induces a pleasant emotional affect should work. MDMA would likely work much faster and more effectively than psychedelics. Putting someone who is afraid of a dog in a room with a dog is just as likely to terrify them as cure them, but put them in a room with a dog on MDMA and they would probably not be able to resist cuddling it, or at least would have profoundly reduced fear.

But even MDMA isn't necessary; alcohol works too. Alcohol works quite well for overcoming all kinds of mild phobias and social anxieties, and for opening one's mind to things one is generally against.

Another thing that works pretty well is being under the influence of whatever neurochemicals are coursing through one's brain when they're in love. I've persuaded several exes out of life-long, strongly held religious and political positions while they were in love with me. Falling in love, or maybe even lust or strong like, with someone who holds convictions you disagree with is a strong motivator to become open to changing them.

In general though, your post (which I liked a lot) was somewhat depressing. As someone who likes to take my dogs hiking, I am always trying to find a resolution for people who are irrationally afraid of dogs, because I don't like being screamed at by someone who is terrified, as has happened. One of my dogs looks like a big black wolf, though he has the most bite inhibition I've ever seen in a dog...even as a puppy he refused to put his teeth on skin...and poses essentially zero risk of ever biting a human, but he looks scary to people afraid of dogs.

I have always wondered how it is that so many people who fear dogs claim to have been "attacked" multiple times, which is statistically extremely unlikely for anyone who doesn't make a career of working with them, and I've suspected that what they consider an "attack" is exaggerated -- i.e. a dog running towards them or barking or making noise. Your post indicates that is likely the case.

It is even worse than just a matter of trapped priors, though, because people with this phobia often behave in ways that make their priors much more likely to be confirmed. People afraid of dogs behave in a manner that dogs interpret as "sketchy person who is acting weird and is a threat," and dogs then become more likely to actually growl at or menace the person. The same certainly happens in human social contexts. If someone has a prior that "X category of people discriminate and look down on people like me", they are likely to behave defensively and in an unfriendly manner -- i.e. not making eye contact, scowling, using a hostile tone -- which then increases the likelihood that they get a like response, which then confirms their prior. Hence the chip on one's shoulder ends up being realized.

What a terrible cycle to get out of between competing political groups. One's priors leads one to assume the other person is going to lie or be an idiot, and so do theirs, you both interpret whatever the other says through that filter, AND you both actually behave in a ruder and more hostile way than you normally would, thus lending both assumed and actual validation to the other's priors. What a shame that nowadays, it is fashionable to preface one's argument with "speaking as an XYZ identifying person, I think that blah blah blah", which just sets up everyone's brain to trigger off the identification rather than the argument.

Expand full comment
Tom O's avatar

Great article, I’d suggest an alternative mental model that solves this problem:

> In most reasonable weighting functions, even a strong prior on scary dogs plus any evidence of a friendly dog should be able to make the perception slightly less scary than the prior, and iterated over a long enough chain this should update the prior towards dog friendliness. I don’t know why this doesn’t happen in real life, beyond a general sense that whatever weighting function we use isn’t perfectly Bayesian and doesn’t fit in the class I would call “reasonable”. I realize this is a weakness of this model and something that needs further study.

In my model, the brain's sensory processing unit is a prediction machine. It receives raw experience, guesses how the higher units of your brain will feel about this input, and then processes the input to highlight relevant details based on its guess. If it's right about what it guessed, the processing is reinforced. If it's wrong, then it adjusts in the appropriate direction.

When you are trapped in the room with the Rottweiler, it guesses you will be terrified, and will want to highlight the parts of the experience that are potentially dangerous. Since you *are* indeed terrified, it guessed correctly, and the processing is reinforced.

However, when you are gradually introduced to puppies, your processing guesses wrong. The higher units of your brain are not terrified, and so the signal is sent down to the sensory processing unit to change its predictions for next time.

Expand full comment
Drago's avatar

Direct persuasion is hard, almost downright impossible for strongly-held beliefs. That's because humans are social creatures, and our lizard brains are very attuned to the social hierarchies in our world around us.

If my friend Bob tries to convince me of something in direct contrast to my political beliefs, yes, maybe my political priors should be updated based on the strength of the argument and based on my perceptions of Bob's trustworthiness and intelligence, but at the same time my priors about Bob are being updated, too. And because we evolved as social animals, we are much readier to shift our perceptions of people.

And there's the fact that if I do change my belief, that also sends a social signal back to Bob, and he's going to update his priors about me.

The medium is the message, and when that medium is another person, interpersonal dynamics take over.

Expand full comment
Ben's avatar

If somebody came along and advertised a "prior-unsticking therapy for improved rationality," the first thing I'd want to know isn't how they unstick my priors, but how they determine what I'm being irrational about.

And also maybe the opinions people tend to hold after embarking on this therapy, so that I can judge whether I want to hold those opinions myself.

Right now, we have deemed some opinions to be ethically medicalizable: personal delusions ("I am Napoleon!"), paranoia ("everyone's out to get me"), and others. Insofar as prior-unsticking is a therapy for such conditions, it seems unproblematic.

But if we're considering roping in opinions that are wacky, but not currently medicalizable, then I'm concerned. Even if everybody has a tendency to be irrational about politics, that doesn't mean that everybody (or anybody) should be subjected to prior-unsticking therapy over their politics.

Political irrationality eats everything, and it will eat this too. Just wait for the day when a decrease in support for an unpopular opinion is deemed evidence of the efficacy of prior-unsticking therapy...

Expand full comment
JKPaw's avatar

Society puts its stamp of approval on all sorts of "wacky" notions, elevating them to the status of generally agreed-upon "perception." (If I gave examples I might wind up in Scott's bucket of LSD users with weird ideas.) The relatively few who pedantically insist that such prevailing notions are wacky tend to become alienated and isolated. Some, I assume, can't bear this, so they figure out how to get back on board and "healthy."

Maybe some day when we're just brains in jars we'll learn to better fulfill our potential as rationalists.

Expand full comment
Eudai's avatar

> I think van der Bergh argues that when something is so scary or hated that it's aversive to have to perceive it directly, your mind decreases bandwidth on the raw experience channel relative to the prior channel so that you avoid the negative stimulus.

How can this be true when e.g. meditation techniques tell you to focus on the actual sensations of the pain, which I find are less aversive once I force my experience channel wider? Pain goes from "ahh the pain" to "a sort of weaving blooming sensation in my abdomen, with unpleasant valence". It seems like explanations of the form 'helps you avoid negative stimulus' can't be right, given how often context makes things worse. I would faintly bet on it making things worse on net: it feels like there's no experience so good that it can't be ruined by multiple bad experiences (ordering a dish I strongly expect to be good each time, that is slightly less good than it was last time, until I get sick of it), but some things feel so fixedly bad that it would take enormous effort to make them less aversive or scary.

Probably the 'ultimate reason' trapped priors happen has to be something about saving on computational costs, but IF it were adaptive/functional my guess would be that it's good not to update on certain phenomena – if you saw a tiger chilling out 100 times in a row and decided that tigers were mostly nonscary, you'd sort of be more correct than the tigerphobic villager but slightly less likely to survive.

But this seems wrong, because many things I get trapped priors about are totally harmless. Regarding BEC syndrome – weirdly, I've rarely had this for someone treating me badly. When someone is very mean to me it's "so extraordinary it needs conscious explaining", and I come up with best-fit explanations for their behavior. I do overall expect them to treat me badly again, but they feel like a complex system that requires complex thought in response. But when someone is MINORLY ANNOYING the first few times I interact with them, I can get very trapped.

Expand full comment
Nancy Lebovitz's avatar

There might be two different sorts of being aware of pain/fear/whatever. One is the OMG-things-are-bad level and the other is detail-detail-detail.

It's partly that the sensations are different in the two modes and partly that the mental attitude with which one is perceiving them is different.

Expand full comment
Armok's avatar

Could this also be an additional mechanism of depression, when it happens in fuller generality on *all* possible experiences?

Expand full comment
duck_master's avatar

Scott Alexander wrote about this earlier I think (e.g. https://astralcodexten.substack.com/p/the-precision-of-sensory-evidence, https://slatestarcodex.com/2017/09/12/toward-a-predictive-theory-of-depression/ ?). I'm personally unsure about this idea; my current guess is that depression might just be an extended emotional low (like a long run of heads when flipping a coin, or whatever), but AFAICT it could be some kind of weird dynamical systems effect or whatever.

Expand full comment
Armok's avatar

I read those they're what inspired the question, but I mean something more specific than that.

Expand full comment
David Bloomin's avatar

Thanks for breaking down a really important cognition "bug" so clearly. I want to suggest a small modification to your framing, which I think would add more clarity.

Your current theory treats agents as performing Bayesian inference on a stream of experience. If you instead assume the agents are performing active inference, the theory becomes more concise.

In your model, the agent receives a raw stream of experience that it must integrate with its priors. Active inference suggests that the agent queries the world, using its priors to generate hypotheses. The resulting experience is then integrated with the priors via a Bayesian update.

So a regular person locked in a room with a dog might have a low prior on the dog being dangerous. They are actively scanning the room for all sorts of cues including: is this dog dangerous, does this dog want pets, is this the cutest doggie they've ever seen.

A person with cynophobia is also actively scanning the room, but they are already certain that the dog is dangerous, so instead they are looking for exits and trying to figure out exactly when and how the dog will attack them. It's not that this person fails to do a proper update on the experience, it's that their experience is actually very different.

The queries that the agent performs are also not purely epistemic. They are not just trying to have more accurate priors, but to also act in accordance with their values. This captures both self-serving bias as well as emotions reducing the bandwidth of experience. In both cases, the agent isn't trying to learn as much as it can, but is instead trying to be safe, avoid pain, and maintain its social status.

I hope this made sense, and I want to credit Belobog with posting this first with his fisherman example.

I am also not sure that a perfect Bayesian learner would slide into confirmation bias, and my intuition says this is not the case. Unless every dog experience was *as terrible as you imagined*, your prior would gradually adjust down. Of course, humans are not perfect Bayesians. It's possible that humans would score "there was a dog and I was really scared" and "there was a dog and I was really scared and it attacked me" the same, or that they can't distinguish between a prior of 0.99 and 1, and so the update is always too small to move the needle. But I don't think we need to blame Bayesian updating when active inference is a much cleaner explanation.
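A toy illustration of the active-inference point (my own sketch, not Friston's formalism; all the rates and the likelihood model are made up): because the agent only samples where its prior says sampling is worthwhile, the belief about the written-off option never gets the data that would correct it.

```python
import random

random.seed(1)
true_catch_rate = {"spot_a": 0.6, "spot_b": 0.8}   # spot_b is actually even better
belief_good = {"spot_a": 0.99, "spot_b": 0.05}     # but he "knows" spot_b is barren

def update(p_good, caught, rate_if_good=0.6, rate_if_bad=0.05):
    """Bayesian update of 'this spot is good' after one cast."""
    like_good = rate_if_good if caught else 1 - rate_if_good
    like_bad = rate_if_bad if caught else 1 - rate_if_bad
    return p_good * like_good / (p_good * like_good + (1 - p_good) * like_bad)

for day in range(200):
    spot = max(belief_good, key=belief_good.get)   # only cast where the prior says it pays off
    caught = random.random() < true_catch_rate[spot]
    belief_good[spot] = update(belief_good[spot], caught)

print(belief_good)   # spot_a's belief climbs toward 1; spot_b's never moves, it was never sampled
```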

Expand full comment
Scott's avatar

Very interesting point. You are saying we construct the data collection in the first place, not just actively work it after it's gathered.

Expand full comment
David Bloomin's avatar

Yes. This is one of the main things I got out of reading Friston, and before that, Zen and the Art of Motorcycle Maintenance

Expand full comment
Scott's avatar

One more thought on this. I wonder if the added element of your point about active inference provides a mechanism explaining how extreme, and how solidified, political and social views have become. Perhaps social media is putting the active element on steroids.

Expand full comment
David Bloomin's avatar

I’m not sure I follow. Do you mean that social media creates echo chambers, and so we can always get a confirmation of our existing biases when we query our social world?

Expand full comment
Scott's avatar

Yes. Exactly. We are not passively receiving what data comes into us from social media. We are actively shaping it and it becomes highly reinforcing. That fit your point?

Expand full comment
jeff wade's avatar

As a spacecraft engineer who specializes in Kalman Filtering, which is basically recursive Bayesian estimation, I have always enjoyed the parallels with cognitive processing. This article reflects a problem we often see in KFs when the filter parameters are not properly designed, and the filter insufficiently weights new sensor inputs and simply propagates the state estimate using a math model (prior). This causes the estimate to diverge from the true state, or in other words, to deviate from reality. We say the Kalman Filter “has gone to sleep.”

There are many possible causes for this divergence, and we don’t need to examine all the possibilities here, but the problem may be summarized as two types. The model used to propagate the prior state estimate forward in time may be too simplistic and needs to be improved, or there is some random element in the model that is not properly accounted for. These ideas, if applicable to cognitive processing, suggest that to avoid a trapped prior, we might consider expanding the dimensions or fidelity of our understanding, or else increase the uncertainty we apply to our prior. The latter solution is obvious and unlikely to be adopted since overconfidence is implicit in the problem of trapped priors.

I wonder, however, if we might focus on improving our cognitive models, i.e., the mental model that produces our priors. Perhaps learning more about the dynamics of what we’re fearful of, for example “how do Rottweilers express themselves, and how are they stimulated to aggression or play?” Or, for reducing political demonization, “how do people of other political persuasions prioritize their value judgements?” This is something psychologist Jonathan Haidt has emphasized and perhaps it is already part of standard psychological desensitization treatment. It seems possible to me at least, that improving our “dynamics model” of the thing we oppose or despise or fear may not meet with the same cognitive resistance as simple exposure to the results (measurement).
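To make the analogy concrete, here is a toy 1-D filter (generic textbook equations, not anything from actual flight software; the noise values are arbitrary assumptions) where the true state drifts but the assumed process noise Q is far too small, so the gain collapses and the filter "goes to sleep":

```python
import random

def run_filter(q, r=1.0, steps=200, drift=0.05):
    """Toy 1-D Kalman filter tracking a slowly drifting scalar state."""
    x_true, x_est, p = 0.0, 0.0, 1.0
    for _ in range(steps):
        x_true += drift                          # true state drifts (unmodeled dynamics)
        z = x_true + random.gauss(0, r ** 0.5)   # noisy measurement
        p = p + q                                # predict: covariance grows by assumed Q
        k = p / (p + r)                          # Kalman gain
        x_est = x_est + k * (z - x_est)          # update with the innovation
        p = (1 - k) * p
    return round(x_true, 2), round(x_est, 2)

random.seed(0)
print("reasonable Q:", run_filter(q=0.01))   # estimate stays close to the drifting truth
print("tiny Q:      ", run_filter(q=1e-8))   # gain collapses; the filter has "gone to sleep"
```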

Expand full comment
duck_master's avatar

Thanks for the comment! The issue of "Kalman filters going to sleep" seems a lot like the issue that Scott Alexander's discussing here (new evidence is being insufficiently weighted, etc.)

I think (here's the more uncertain part) that psychedelics, or whatever, basically "flatten" one's mental prior, which effectively makes it more uncertain; and improving mental models also seems like a decent way of dealing with a trapped prior.

Expand full comment
jeff wade's avatar

I agree! Psychedelics "open our minds," very much like increasing the prior state covariance in a KF, with the effect of better incorporating new information. This is definitely my subjective experience, but also seems like a handy, if simplistic, interpretation of reported effects. Thanks

Expand full comment
Kitya Karlson's avatar

It looks like specific phobias (like cynophobia) are rare; for example, this article https://www.med.upenn.edu/ctsa/phobias_symptoms.html says all specific phobias combined affect 7-9% of the population. I'm not sure what percentage is just cynophobia, but since most people have experience with dogs in their life and most people clearly do not get cynophobia, trapped priors about dogs must be rare: most people have both good and bad experiences with dogs, yet don't get trapped. If political trapped priors work by exactly the same mechanism, I don't get why it seems like almost everyone has them, instead of just a small percentage of conspiracy theorists, etc.

Expand full comment
Chris Savage's avatar

My personal theory is that overall our lives are so good/prosperous (food, shelter, etc.) that most of our cognition is spent on things that are not directly tied to our survival. Nature, as they say, is a mother, and erroneous priors (or conclusions) are harshly dealt with when the question is something like, "Can I outrun that predator" or "Can I eat this berry?"

So hunter-gatherers, for example, likely had very similar beliefs about the practical aspects of the natural world, since the ones who lacked such beliefs died.

Where they could readily differ, though, is their metaphysical explanations for the natural world. Sun God? Forest sprites? Hairy Thunderer in the Sky? Many different beliefs could arise about how and why the world was as it was, even while there was agreement on how the world actually was.

In our lives we don't have disagreements about those sorts of things. Need food? Go to the grocery store (for folks with money, obviously). The looniest Antifa partisan and the most steadfast MAGA-hat-wearing Trumpista agree on that, and a host of other things (look both ways before crossing the street; wear warm clothes in the winter, etc.)

The strongest disagreements arise for beliefs one or two (or more) levels of abstraction above the practical. Cf. the old joke: Why are academic arguments so bitter? Because there's so little at stake.

Expand full comment
Luke G's avatar

I think with politics, there's a combination of (1) a lot of policy issues being really difficult to fully reason through and (2) lots of peer pressure to conform due to the "us vs them" dynamics of political parties. Most people won't invest (or don't have!) the mental resources to actually reason through political positions, so instead it becomes a signaling thing to their peers.

Expand full comment
Scott's avatar

This is a lovely and illuminating piece.

There is a small flaw, though, in the chess piece illusion. I'm sure the actual illusion is meant to be that the pieces are the same color and only the surrounding brightness changes the perception. But the pieces in the photo really are darker and lighter, just as we perceive them. I pasted out a little section of both Kings' lower parts, and the darker-appearing one is actually quite a bit darker in reality, on the screen, compared to the lighter-appearing one. Conceptually, though, the point is fine.

Expand full comment
Luke G's avatar

Looks like there are some JPEG compression artifacts that are making the top and bottom not quite identical.

Expand full comment
Scott's avatar

I thought of such things, but I think it's more than that. There really is quite a difference, not a small one. And, sure, my priors were objecting to being wrong (they almost always are), but this one didn't grip me hard enough to keep me from challenging them. BTW, it was actually the queens, not the kings, but the same would hold true either way. A screenshot of both, then cropped to just the color without any context: quite different.

Expand full comment
David Friedman's avatar

I did the same experiment with the king's upper part and the two pieces were the same color.

Expand full comment
Scott's avatar

You are right. I just tried it again. Now, I'm on a different computer, but my priors are NOT that strong on this. I expect them to be the same. Anyway, I think you are right. They are the same. I tried a few others just now and I can only imagine that the other day, I must have selected the lower part of the king vs the lower part of the queen. Laughing at myself. Thank you.

Expand full comment
Nancy Lebovitz's avatar

How about a stuck prior about a positive emotion? I'm not sure whether addictions qualify, but pursuing a hobby that isn't much fun any more, because every time you think about it you remember how much fun it used to be, would count.

Expand full comment
Carl Pham's avatar

I'm reminded of Samuel Johnson's observation that a second marriage is the triumph of hope over experience.

Expand full comment
Mark Atwood's avatar

(off the cuff) I can't help but think that political partisans on the right will interpret this post as "force feed conservatives LSD and party drugs until they become brain-addled progressives", with a side dish of "The Soviets were also big fans of psychiatric treatments for having the wrong politics".

Expand full comment
John Schilling's avatar

The discussion here has rather less of "...and that's why [outgroup] can't admit they're wrong about everything!" than I was fearing, from either side. But that's here.

Expand full comment
Kenny's avatar

Meh – I'm kinda "on the right" but don't think yours is a likely interpretation. Historically, rough equivalents of 'force feeding people LSD' seem to make them just as likely to become 'crazy religious zealots' (e.g. the Salem witch trials). Psychedelics _do_ seem to loosen priors, some of them anyways. I've heard or read of people 'breaking with reality' but never of 're-thinking literally everything from the ground up'. Almost all priors seem to survive psychedelics just fine.

Expand full comment
Lasagna's avatar

Great article, as always. I think, though, that you may be downplaying the role of self-interest (or at least self-aggrandizement) in this mode of thinking. You focused on politics, religion and phobias, areas where these thought loops seem to kind of “happen to” a person rather than being self-directed (OK, maybe not politics so much). The first thing I thought of, though, were tax protesters, and they don’t really fit the mold. I don’t know where I’m going with this, but that’s never stopped me before. Off we go.

For the uninitiated, tax protesters are a (very) loosely affiliated group of [redacted unhelpful insult] who believe that the federal income tax is unconstitutional and that they are therefore not required to pay it.

There are a thousand different flavors of tax protester, most of which philosophize about the difference between direct and indirect taxation, but the argument that brought them some mainstream credibility was first described in The Law That Never Was, a 1985 book by William Benson (I was hoping to do a book review on it for SSC, actually, but never got it together). The Law That Never Was claimed that the 16th Amendment wasn’t properly ratified in 1913, and that, as the 16th Amendment is what gives the federal government the power to tax US citizens’ income, no US citizen is legally obligated to pay federal income tax.

At its heart, the claim is that the various states that ratified the 16th Amendment ratified slightly different versions of it (so one state might have ratified “The Congress shall have power to lay and collect taxes on incomes, from whatever source derived, without apportionment among the several States, and without regard to any census or enumeration,” while another state didn’t capitalize “States” or DID capitalize some other random word, or used “remuneration” instead of “enumeration”, or a dozen other minor variations). Since three-quarters of the States are needed to ratify an amendment, and nowhere near three-quarters ratified any particular version of the above (both of which are true), the 16th Amendment lacks the force of law.

To attempt to tie this into Scott’s article: I think that if this sounds intriguing or plausible to you, it’s probably because you have a strong prior to having more money, and you’re ignoring the nagging doubts that otherwise would have made you laugh this off the way you would a Seventh Day Adventist. This is already going on too long, but: this argument fails because (1) most amendments were ratified with slightly different language since they didn’t have computers and shit to keep everyone on exactly the same page - if the Sixteenth Amendment isn’t in effect, neither is the First; (2) they considered this back in 1913 and the Secretary of State ratified the amendment anyway; (3) the federal government more or less already HAD the power to tax incomes before the 16th Amendment, so the Amendment doesn't mean what you think it means; and, above all else, (4) you aren’t going to walk into a federal court and convince a judge that the entire federal income tax structure is null and void and the government is required to go bankrupt, effective immediately, so cut me a check.

When I was in law school I got very interested in these sorts of weird legal arguments. Number (4) above is what intrigued me the most: how could anyone believe that this kind of legalistic thinking could have any practical effect? What was the end goal?

So I dug into it: I visited the forums where tax protesters hung out and swapped caselaw minutiae and where jerk lawyers showed up to mock them. The community maps to Scott's outline of "trapped priors" precisely. These were guys (they were almost ALL guys) who walked the walk. They weren't hypocrites. They were in court making these arguments, despite being directly told by the judge that they were going to go to jail if they kept pushing it. And every time they were warned by the judge, every time another member of their community had their lives wrecked by brutal fines or prison terms, they believed in their cause more. The fact that nobody ever succeeded in avoiding the income tax by making these arguments seemed to reinforce the rightness of their cause rather than convince them that it was a losing proposition. I never really saw anyone change their minds, either through failure or through outsiders to the community explaining reality to them.

This was an awful lot to type when I don’t really know where I’m going with it. Sorry to anyone who read this far. I guess what I’m saying is: while my somewhat-extensive contact with tax protesters has convinced me completely that failing over and over again reinforced their beliefs rather than making them question their priors, the loop was also inextricably tied to self-interest, and would not have existed without it. They wanted money, and they wanted to be fighting a noble fight, in that order. They weren’t bitten by the IRS as a child and developed an irrational desire to fight back. I guess I’m asking: is it worth exploring the degree to which this thought process is actively selfish, and maybe approach treatment from that direction?

Expand full comment
Melvin's avatar

When I hear about attempts to get taxes declared as illegal, I think about Hammond and Ha vs the State of New South Wales.

In a series of cases in the 1990s, the High Court of Australia found that certain taxes that Australian states were levying (on fuel and tobacco) were unconstitutional, and that the states had no right to levy such taxes, only the Commonwealth had that power.

What happened next? Were the states forced to repay all those taxes back to the taxpayers? Nope, the Federal Government simply immediately (a) instituted its own fuel and tobacco taxes at the same rates as the state taxes, (b) gave the proceeds to the states, and (c) made all this retrospective, so all the money that the states had already unconstitutionally collected was _actually_ collected by the Federal government making it all nice and legal.

My point is that if any lawyer anywhere did manage to prove that income tax was unconstitutional, politicians would immediately finagle a way to make it retrospectively legal. This seems pretty unfair, but on the other hand I suppose it's better than having the government suddenly collapse into insolvency.

Expand full comment
Jeremy's avatar

I think there's a better way of thinking about this. The political, scientific, and conspiracy examples of trapped priors have a better explanation, which I think is separate from the explanation for phobias.

Everyone has a model of where evidence comes from and how it gets into their heads. If this model is updated to say that evidence from source X is unreliable/biased/adversarially selected, then all subsequent evidence will be interpreted using that model.

If you come to believe that opinion X is supported by made-up statistics and bad science, then you get stuck on the belief not-X. Your strong belief in not-X isn't what's causing the problem; the problem is caused by your beliefs about *how evidence is getting to you*. E.g. you believe GMOs are dangerous, and you also believe that any studies that disagree are funded by GMO companies and are therefore false. The way to change these beliefs is not to provide more studies that say GMOs are good. Instead you have to provide evidence that the studies aren't influenced by the interests of the companies.

In this explanation of stuck priors, your beliefs about "how the evidence reaches you" (meta-beliefs?) control how you update your object-level beliefs. If humans were less computationally restricted, we would always be able to maintain many different meta-belief models, keep track of the probability of each, and, for each meta-belief model, keep track of the object-level beliefs that our observations point to, *assuming that model is correct*.

I think the reason humans get stuck on beliefs is that it's *really hard* to keep track of multiple models without letting the object level beliefs formed under each model interact with each other and mess the whole thing up.

The solution to stuck priors, the way to get people unstuck, is to change their meta-beliefs. This is difficult and slow. For example, if a moon landing conspiracy theorist became friends with a few NASA employees, they would probably start to update their meta-beliefs, and get unstuck. But if the friendships were in any way deliberately arranged, then it's easy for the meta-beliefs to explain away the evidence as part of the conspiracy. The way to change these meta-beliefs is for evidence to sneak up from the side, from a source that the meta-belief doesn't automatically discount.

Under this model, there is no amount of evidence that would drown out the prior and cause an update, *if the evidence comes from a source which the meta-belief says is broken*.
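
A toy sketch of this in Python (the specific probabilities are made-up assumptions, chosen only to illustrate the mechanism): each favorable study moves the object-level belief a lot if the source is trusted, but if the reader's meta-belief says the source is almost certainly captured, the per-study likelihood ratio collapses toward 1 and the prior barely budges no matter how many studies arrive. The meta-belief itself is held fixed here, which is exactly the pathological case.

def posterior_safe(n_reports, p_honest, prior_safe=0.05):
    # Likelihood of a "GMOs are safe" report under each hypothesis, mixed over the
    # meta-belief: honest studies mostly track the truth, captured ones say "safe" regardless.
    p_report_if_safe = p_honest * 0.9 + (1 - p_honest) * 0.95
    p_report_if_unsafe = p_honest * 0.2 + (1 - p_honest) * 0.95
    odds = prior_safe / (1 - prior_safe)
    odds *= (p_report_if_safe / p_report_if_unsafe) ** n_reports
    return odds / (1 + odds)

print(posterior_safe(20, p_honest=1.0))    # source trusted: posterior ~1.0 after 20 studies
print(posterior_safe(20, p_honest=0.05))   # "they're all bought": still only ~0.1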

This explanation doesn't help explain phobias and bitch eating cracker syndrome. I think they are a separate thing, explained by broken interactions between beliefs and emotions.

Expand full comment
duck_master's avatar

This is a pretty interesting take on the "trapped priors" issue. Fortunately, we have a super-easy way of increasing one's effective computational power, which is to use computers, so I think it sounds plausible to use those to keep track of meta-beliefs. In the meantime, my current defense against trapped priors is basically to remember the conservation of expected evidence (as I discussed in a top-level comment).

Expand full comment
Jeremy's avatar

How would you use computers?

It's definitely not impossible for a single person to just keep track of multiple models of how to process evidence, but it is slower. The easiest way I can think of to increase your computational power is to collaborate with multiple people, which I think works for this problem, and I think it would be easier than using a computer.

Expand full comment
Melvin's avatar

In general, people don't want to change their beliefs, no matter how irrational they may be. And why would they? It's almost entirely downside. You get the pain of admitting that you were wrong all along. You get the additional pain of admitting that the people who laughed at you for years were right all along. You may lose a bunch of friends, without immediately acquiring any new ones.

And what do you get in return? A world view that's in slightly better concordance with reality, but in a way that usually doesn't give you any practical benefits.

Expand full comment
Jeremy's avatar

I want to change my beliefs, if they are not true. I think most people here also value true beliefs very highly. High enough that it more than outweighs most potential social downsides, and high enough that I seek out social environments where there aren't negative consequences for changing my mind.

Expand full comment
Melvin's avatar

Perhaps, but you're in the tiny minority of people who read and comment on rationality-themed blogs.

Expand full comment
Jeremy's avatar

I know, but so what? I don't understand the relevance of your comment to my original one. How does it relate?

Expand full comment
Demeter's avatar

"I think the reason humans get stuck on beliefs is that it's *really hard* to keep track of multiple models without letting the object level beliefs formed under each model interact with each other and mess the whole thing up."

Are you saying that it's hard for someone to track multiple meta-belief models and still live and act with a sense of consistency?

About meta-beliefs: I have a couple friends who believe conspiracy theories, and I think one reason people get so stuck on conspiracies is because conspiracy theories are often embedded within a highly defensive framework of very paranoid meta-beliefs, which tend to attract paranoid people. They believe that not only is everyone else wrong about the moon landing, but all evidence in favor of the moon landing is provided by suspicious agents who wish to deceive us. If you're really nice when you argue for the moon landing, they'll relax a little and decide you're just brainwashed.

I don't think paranoia itself is a broken meta-belief, but more of a separate cognitive phenomenon that crops up as a symptom of mental illness or as a result of heavy drug use. In my experience, it's damn near impossible to get paranoid people to accept counter-evidence. What seems to work better is doing something that makes them feel safe and comfortable, like taking them out for coffee and complimenting their hat. Even then, as soon as the conspiracy theory comes up, it becomes a one-sided conversation.

Expand full comment
Jeremy's avatar

> Are you saying that it's hard for someone to track multiple meta-belief models and still live and act with a sense of consistency?

I meant that it just takes more effort and time. I think anyone that can put themselves in the shoes of people with different beliefs to them is tracking multiple meta-beliefs. And that won't stop them from acting with consistency.

The point is that a computationally bounded but otherwise ideal Bayesian agent could still have "stuck priors", unless they are really careful and spend time trying to avoid it.

Is it easy to distinguish between paranoia as a symptom of mental illness and "normal" conspiracy beliefs, held by someone without any mental illness? Is the difference one of degree, or clearly different categories? I don't think I've talked to anyone who has paranoia as a symptom of mental illness.

Expand full comment
Demeter's avatar

>I meant that it just takes more effort and time. I think anyone that can put themselves in the shoes of people with different beliefs to them is tracking multiple meta-beliefs...

I see your point. I imagine it also depends on the individual's aptitude for perspective-taking, which always takes at least a little effort, especially when stress has the effect of locking us into our own perspectives.

On paranoia: I think it's a difference of degrees, but there's a threshold past which the paranoia is too strong for contradictory evidence to be entertained at all because the person is way too caught up in their suspicion of anyone providing contradictory evidence. I think one way you can distinguish paranoia is by tracking how often the person mentions "they" without specifying who.

As in:

"But I read that jet trails are just condensation"

"That's what they want you to believe"

I'm curious about the demographics of populations who believe in conspiracy theories, specifically what percentage of those individuals experience significant paranoia or self-reinforcing delusions. One friend of mine became obsessed with the Illuminati when he developed schizophrenia, but didn't seem that interested in them before.

I know some ordinary people who entertain moderate conspiracy theories (like sitting and wondering who was really behind 9/11 or the JFK shooting). The serious ones all seem to have a preexisting illness that compels them to keep going down the rabbit hole despite massive contradictory evidence and near-universal signaling from friends and family that they have the wrong idea.

Then again, my sample size is small, a total of five (one chemtrails, one illuminati, one flat earth, one thinks the CIA is behind everything, and one has his own home-cooked theory of a Dark Empire). We also have a very active black market for psychedelics locally, and those can contribute.

Expand full comment
Demeter's avatar

Meant to say: accept counter-evidence *or* change their meta-beliefs.

Expand full comment
duck_master's avatar

This is an interesting idea! However, it seems a lot (to me, as a math nerd) like trapped priors are an implementation error rather than a fundamental feature of Bayesianism. The fundamental reason is that as more (say) "dogs are actually mostly friendly!" evidence accumulates, the odds ratio for "dogs are friendly" vs "dogs are dangerous" should keep going up, because Bayesian updating is basically multiplying the prior odds ratio by an odds ratio representing the weight of the evidence.
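
A minimal sketch of that exact-Bayes picture, with made-up numbers: even a wildly lopsided prior gets ground down if every calm encounter multiplies the odds by a likelihood ratio on the friendly side.

prior_odds_dangerous = 10_000.0      # "dogs are dangerous" starts out heavily favored
lr_per_good_encounter = 0.7          # each uneventful encounter is ~1.4x evidence for "friendly"

odds = prior_odds_dangerous
for encounter in range(1, 51):
    odds *= lr_per_good_encounter    # exact Bayesian update, in odds form
    if encounter % 10 == 0:
        print(f"after {encounter} good encounters: P(dangerous) = {odds / (1 + odds):.3f}")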

Related idea: the conservation of expected evidence (mentioned - introduced? - by Eliezer Yudkowsky at https://www.readthesequences.com/Conservation-Of-Expected-Evidence); what you expect to believe in the future is what you believe right now, if you're doing Bayescraft correctly. So if you were some person with a "dogs are dangerous" trapped prior, and you forecasted that future!you would believe that dogs are dangerous even more than present!you *no matter what happens*, then you should already have that level of belief in the dangerousness of dogs.

(The rest of the problems - like having beliefs about how dogs lead to unpleasant experiences, or how Democrats are morally evil - can be avoided by only holding beliefs about unambiguous object-level things that are outside your control, which avoids both ideological ambiguity and fixed-point problems/markets for entropy/etc.)

Expand full comment
Rb's avatar

I'm confused/skeptical on three counts.

First, you've said in the past (e.g. in "Going Loopy" and "Can You Condition Yourself?") that the brain is in general pretty good at filtering out its own contributions when predicting things; it's not clear why this case is an exception (in the phobia case, I assume the answer is 'the thing that distinguishes *-phobia from normal fear is the failure to filter correctly', but if you're using stuck priors to also explain confirmation bias, then that's solidly in the domain of normal, non-pathological reasoning).

Second, why does the gradual version of exposure therapy work? In normal reasoning, a photo of a dog provides no evidence about whether I should be scared of dogs, a dog in a cage provides very little evidence about whether the dog would be dangerous outside of the cage, and so forth. Given that the cynophobe is supposed to be less responsive to evidence than normal, why doesn't that all get rounded down to zero difference from the prior, the same way they round away the much stronger evidence of being in a room with a Rottweiler and not getting bitten?

Third, the Drummond and Fischer paper finds that polarization increases on issues to do with political and religious identities, but not on issues that aren't politicized. That seems to me not like a trapped prior (if I can have a trapped prior about non-political issues like phobias, presumably I should also be able to have trapped priors about arbitrary scientific beliefs); on the other hand, it's perfectly consistent with a signalling/group identity type of explanation.

Signalling also seems like a better fit for dog whistles; if I have a prior that Ted Cruz is malicious and he says 'I'm against New York values', then I might naturally update to 'New York values are good', or 'New York values are bad and Ted Cruz is lying about being against them', but 'New York values are a coded message for Jews, and Ted Cruz is conveying his anti-Semitism in a way intelligible only to his base (and also his opponents currently interpreting the dog whistle)' seems like a stretch. On the other hand, if I just want to make it really clear to everyone that I'm part of the Ted-Cruz-hating-tribe, then making an argument about secret codes does a much better job than providing a nuanced critique of one of his policies, because I might disagree with his policies for reasons other than partisanship, but I probably won't believe dubious theories about signals for non-partisan reasons.

(Likewise, if I want to signal my partisan affiliation on the pandemic, I'm not going to look at a hard question like 'how much economic damage is worth it to mitigate spread'; I'm going to say something totally unhinged like 'Outdoor gatherings are a public health risk if you're having fun at the beach but totally safe if it's a BLM protest' or 'wearing a mask is about authoritarians conditioning us to submit to their orders' precisely because there's no non-partisan way to misupdate on evidence badly enough to reach those beliefs).

Expand full comment
Boaz Barak's avatar

Somewhat related, in machine learning there is a notion of a “learning rate” which you can think of as the amount by which you update your prior based on new information. In the Bayesian outlook there is a precise right value for this parameter, but in practice people play around with it.

Interestingly, it’s often much better to underestimate it than to overestimate. If you underestimate the learning rate (which results in a tendency to stick to your priors) then you might not be as efficient with your computation or samples, but eventually you will reach the right point. In contrast, if you overestimate it, you may well never converge.

The above might be why it's a decent heuristic to err on the side of under-correcting priors rather than over-correcting them.

In the ML analog, trapped priors would be like a bad local optimum that we cannot escape from without making a large change. This can also happen sometimes.
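
A minimal sketch of the over/under-shooting point (a toy quadratic objective of my own, not anything from an actual training run): below the stable step size, gradient descent is merely slow; above it, the iterates blow up and never converge.

def descend(lr, steps=100, x=10.0):
    # Gradient descent on f(x) = x^2, whose gradient is 2x; stable for lr < 1.
    for _ in range(steps):
        x = x - lr * 2 * x
    return x

print(descend(lr=0.01))   # too small: only partway to 0 after 100 steps, but steadily shrinking
print(descend(lr=0.5))    # about right: lands on 0 immediately
print(descend(lr=1.1))    # too large: |x| explodes and never recovers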

Expand full comment
None of the Above's avatar

For many of these, it seems like there's something else going on--some kind of tossing out of outliers. If you have normally-distributed data and see some value that's +10 sigma out, it's almost guaranteed to be measurement error or something. But if your model is "normally distributed data" while the actual source of the data isn't really normal but instead some weird distribution that allows +10 sigma events to happen pretty regularly, you'll filter out the observations that would reveal the problems with your model, writing them off as errors or lies or something.

When you tell me you saw a meteor shower last night, or a bad accident on the road today, my model says that's plausible, so I assume you're probably telling me the truth and may update my understanding of the world based on that. When you tell me you were abducted by aliens last night, or saw a giant moth flapping its wings over the city today, I'm probably just going to conclude that you're delusional.

When you make a plausible-sounding argument for medicare-for-all or shall-issue laws, I probably update my internal understanding of the world toward those being sensible policies. When you make a plausible-sounding argument for sacrificing babies to Moloch or re-imposing slavery, I probably update my internal understanding of whether you're a nutcase whom I should ignore instead of my understanding of whether maybe there's a good argument for baby-sacrifice or reopening the Atlantic slave trade.

In all these cases, what's happening is that your evidence/claim/argument is updating two different parts of my model, the "is the object-level thing they're claiming true?" and "are they a good source of evidence/claims/arguments?"
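
A toy version of that two-part update in Python (all the conditional probabilities are assumptions for illustration): the same report simultaneously updates "is the claim true?" and "is this person reliable?". For a plausible claim the first belief moves; for a wildly implausible one, almost all the movement lands on the second.

def joint_update(prior_true, prior_reliable=0.9):
    # P(the person asserts the claim | claim's truth, person's reliability): assumed numbers.
    assert_prob = {(True, True): 0.90, (False, True): 0.01,
                   (True, False): 0.90, (False, False): 0.30}
    post = {}
    for truth in (True, False):
        for reliable in (True, False):
            p_truth = prior_true if truth else 1 - prior_true
            p_rel = prior_reliable if reliable else 1 - prior_reliable
            post[(truth, reliable)] = p_truth * p_rel * assert_prob[(truth, reliable)]
    z = sum(post.values())
    p_claim_true = (post[(True, True)] + post[(True, False)]) / z
    p_source_reliable = (post[(True, True)] + post[(False, True)]) / z
    return round(p_claim_true, 3), round(p_source_reliable, 3)

print(joint_update(prior_true=0.30))    # "bad accident on the road": belief in the claim jumps, trust barely moves
print(joint_update(prior_true=0.0001))  # "abducted by aliens": belief barely moves, trust in the source craters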

Expand full comment
Vampyricon's avatar

Yeah, I'm thinking along these lines too, though that doesn't explain bitch eating crackers syndrome.

Using the cynophobia example, perhaps each friendly puppy updates the prior by what the brain considers a negligible amount, and so the brain treats it as 0; but the number of times you update is so great that those "negligible amounts" would actually rival the original prior. It's like adding 0.00001 to 1: the result is approximately 1, but if you round back to 1 after each addition and you add 0.00001 a hundred thousand times, you end up with 1 even though the answer should be 2.
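
A quick sketch of that rounding intuition (my own toy numbers, a cousin of TLW's float16 example elsewhere in this thread): if every "negligible" update gets snapped back to the stored value, a hundred thousand of them add up to nothing.

exact = 1.0
rounded = 1.0
for _ in range(100_000):
    exact += 0.00001
    rounded = round(rounded + 0.00001, 3)   # the "brain" stores only 3 decimal places

print(exact)    # ~2.0, the true total
print(rounded)  # 1.0, every tiny increment was rounded away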

Expand full comment
Matt A's avatar

I don't think you really discussed it in the post, but an important characteristic of the types of scenarios that lead to trapped priors as you describe them is abstraction of goals. For example, the goal of a fear response to a dog is to avoid getting mauled by a dog. When you gather data from experiences with dogs, the thing your brain should* care about is whether the dog mauled you. But instead of updating P(I Get Mauled | I See A Dog), your brain updates P(Get Terrified | I See A Dog). So your brain is updating on an outcome that is, in a sense, one level abstracted from the thing it "should" care about.

This misalignment appears critical to the phobia example, but I'm not sure if the same model makes as much sense in the politics examples. Using the same construct for some Policy X that my side supports, we'd have the brain updating not for P(Policy X Is Good | Evidence About Policy X), but instead for P(My Party's Policies Are Good | Evidence About Policy X).

I'm not sure how intuitive that sounds, which makes me wonder if the way the abstraction works in the two examples is different.

*at least in some sense

Expand full comment
nostalgebraist's avatar

I'm not fully convinced that trapped priors can happen as a natural side effect of Bayesian reasoning. That is, I'm not convinced that purely "cognitive" trapped priors are possible.

As you note, when Bayesian sees evidence for X, they should never update in the direction of not-X. If you think that bad things are 90% likely to happen whenever you interact with a dog, and you have an interaction with a dog where nothing bad happens, you should update the 90% number to *something* lower.

Maybe only a tiny bit lower. But if it goes *up* afterwards, you simply aren't doing Bayesian reasoning anymore.

Perhaps you're doing some approximation to Bayesian reasoning that has this failure mode. But if so, it matters which approximation, and how it causes the problem -- if the approximation is doing all the work of explaining the pathology, then the treatment ought to focus on whatever is wrong with the approximation.

------

Related: in the dog phobia example, I think you may be conflating two things

- "how dangerous a typical dog interaction is likely to be"

- "how dangerous that particular dog interaction actually was"

The former is what the prior is about, and what gets updated to form the new prior. If you're Bayesian it should always *move* in the right direction.

The latter doesn't move, you just conclude whatever you conclude. It can *point* in the wrong direction, i.e. belief = dangerous while reality = not dangerous. But this one example doesn't become your entire prior for future experiences. Your prior is just whatever it was before, plus one update on a non-dangerous dog.

------

I want to connect this with jeff wade's comment about Kalman filtering. Kalman filtering really *is* an instance of Bayesian reasoning that can get off track and never really converge with the evidence.

What's the difference? Kalman filtering is about estimating a changing state, not a static fact like "how dangerous dogs are as a species." You predict the next state from a dynamics model, you make noisy observations, and you combine your prediction with the observation to estimate your actual state.

Here, the *state* gets updated, but the *dynamics* does not. If you predict "things get worse," and things look better instead, you will update to "things are currently [less bad than I expected, but worse than they look]." However, you don't update towards being less pessimistic about the future; you'll keep making pessimistic predictions and then partially correcting them, forever.

In the account of belief updating in this post, it sounds as if the prior gets updated twice -- first you combine prior and raw evidence to determine "what happened," then you combine "what happened" and your previous prior to get your next prior.

This is not how Bayesian inference works when you are estimating a static fact. However, it's kind of close to what Kalman filtering does to estimate a changing state. The first step is normal Bayes to figure out where you are right now; the second step takes where you are now, and projects where you will be next.

I think that maps best onto human cases of continued engagement with one individual, like the example of abusive relationships. In such a case, someone could keep predicting "things will get worse" and getting corrected back to "things are the same," but never corrected towards more optimism. It might work for political groups if the belief is of the form "the Republicans are always getting worse" rather than "the Republicans are bad." It probably doesn't work for dogs.

(Hmm, maybe it's that every *individual* interaction involves some dynamics. Like, you always think the dog is going to get really mean in the next 60 seconds, and every 60 seconds of benign caninity makes you update to "it only got *slightly* meaner," and that still adds up over time.)
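
A small sketch of the state-vs-dynamics point (toy numbers of my own, not nostalgebraist's): the filter below uses a pessimistic dynamics model that predicts "one unit worse" every step while reality stays flat. The state estimate keeps getting partially corrected by each observation, but the dynamics model is never itself revised, so every new prediction starts out just as gloomy and the estimate settles permanently below reality.

truth = 0.0            # reality: things stay the same
x, p = 0.0, 1.0        # state estimate and its variance
q, r = 0.5, 1.0        # assumed process and measurement noise variances

for step in range(6):
    x_pred = x - 1.0                  # biased dynamics model: always predict "one unit worse"
    p_pred = p + q
    z = truth                         # measurement (noise-free here, for clarity)
    k = p_pred / (p_pred + r)
    x = x_pred + k * (z - x_pred)     # partial correction back toward what was observed
    p = (1 - k) * p_pred
    print(f"step {step}: predicted {x_pred:+.2f}, corrected to {x:+.2f}")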

Expand full comment
Melvin's avatar

> As you note, when Bayesian sees evidence for X, they should never update in the direction of not-X...Maybe only a tiny bit lower. But if it goes *up* afterwards, you simply aren't doing Bayesian reasoning anymore.

Maybe there are situations where seeing evidence for X _should_ cause you to update in the direction of not-X.

Let's suppose you hear about a controversy in some far-flung country about the existence of some animal, the Green Spotted Tree Lizard. Some people say it exists, others say it doesn't, you have no strong prior either way. You contact the President of the Green Spotted Tree Lizard Believers' society, who invites you to come to his country so he can prove to you once and for all that the Green Spotted Tree Lizard exists.

When you arrive, he grandly shows you his evidence: some lizard tracks on a piece of bark, a collection of lizard droppings, and an extremely blurry photo of a lizard which may or may not have green spots.

Which way do you adjust your priors? Strictly speaking, these _are_ evidence in the direction of the existence of the Green Spotted Tree Lizard, but the fact that you've just been presented with this crappy evidence as a slam-dunk case should probably make you suspect that the Green Tree Lizard Believers are full of crap, and the lizard is a myth.

To a Green Spotted Tree Lizard Believer, this looks like confirmation bias. You saw the droppings and the photo, and you updated your priors _away_ from the lizard hypothesis? But to me it seems rational, because you found out that the strongest available evidence for the proposition was pissweak and that the people who believe in it are kooks.
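
One way to make that rational-seeming update explicit (toy numbers of my own): the observation isn't "droppings exist", it's "droppings and a blurry photo are the best case the Believers' president could assemble". That observation is much more likely in a world where the lizard is a myth than in one where real lizards are out there waiting to be photographed properly, so the posterior moves toward "myth".

p_exists = 0.5                      # no strong prior either way
p_best_is_weak_if_real = 0.15       # a real lizard would usually have yielded better evidence by now
p_best_is_weak_if_myth = 0.95       # a myth can only ever produce weak evidence

odds = (p_exists / (1 - p_exists)) * (p_best_is_weak_if_real / p_best_is_weak_if_myth)
print("P(lizard exists | shown the 'slam-dunk' case):", odds / (1 + odds))   # ~0.14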

Expand full comment
Yoav Ravid's avatar

This seems like an especially good post to cross-post to LessWrong. I would love if you crossposted more in general, but this one is especially fitting.

Expand full comment
hnau's avatar

Probably a dumb question: Why does the brain work that way?

Like, suppose you have sensations S1, S2, ..., S10 each of which is mild evidence against a trapped prior P1. According to this model your brain reasons from S1 + P1 to an unchanged (or stronger) prior P2, then from S2 + P2 to P3, and so on. And none of the sensations is enough evidence to overcome the prior, so things just get worse.

But in any reasonable framework, S1... S10 together are stronger evidence than S1 alone. And with enough sensations that should be enough to overcome P1. So the correct Bayesian math would be for the brain to update P1 + S1, S2, ..., S10 to a lower prior P'. In other words it combines its original prior with all the evidence since that time.

Why can't the brain do that? In algorithmic terms, is it a complexity / memory / "storage" limitation?

I guess one explanation might be that we don't actually remember sensation, just perception. (A partial explanation, because it begs the question "why not?") But that's hard to square with the political applications-- do we really "not remember" evidence regarding our political beliefs?

Expand full comment
Glenn's avatar

Presumably the "why not" is "storage limitations", per the previous paragraph? There's no way we can even come close to storing all the "evidence" we receive, in the form of sensory input, forever. We have to summarize.

Expand full comment
TLW's avatar

CS analogy:

>>> import numpy
>>> cur = numpy.float16(1.0)
>>> for x in range(10**6):
...     cur = numpy.float16(cur + cur * 1e-4)
...
>>> cur
1.0

This calculates x = (x + x*1e-4) 10^6 times, a.k.a. x = x*(1 + 1e-4) 10^6 times, a.k.a. (1 + 1e-4)^(10^6), so the real answer should be something like 10^43 or so.

So what gives? Finite precision. We can't store an infinite amount of information.

(There are ways around this - notably stochastic rounding - but they have other issues.)

In other words, I'm challenging this statement:

> But in any reasonable framework, S1... S10 together are stronger evidence than S1 alone.

Expand full comment
The Time's avatar

"...should try to identify a reliable test for prior strength [...] then observe whether some interventions can raise or lower it consistently. Its goal would be a relatively tractable way to induce a low-prior state with minimal risk of psychosis or permanent fixation of weird beliefs, and then to encourage people to enter that state before reasoning in domains where they are likely to be heavily biased."

Isn't it currently a Dark Epistemic Art? That is, to see the best available method, shouldn't we just check how people that succeeded in convincing others of really weird shit do it, and then try to replicate the results but without pre-determining the conclusion?

Expand full comment
Phil Getts's avatar

I think that the ideas you've presented relating depression to placing low weight on sensory evidence might explain the apparent high rate of trapped priors in politics.

We have a word for people who place low weight on sensory evidence: rationalists. The word is Latin, but the division of the Western world into people who trust sensory evidence (empiricists) and people who don't (rationalists) goes back to Thales and Parmenides.

Rationalists always conceal the existence of this division when they dominate public discourse, and we're in one of those rationalist-dominated periods today, despite the continuing successes of empirical science. So today, most people think "reason" and "rationality" are synonyms, when in fact "rationalism" is an oversimplified caricature of reason which states that:

- the world is split up neatly into categories by the words of our language

- every statement expressed in those words is either True, all the time, everywhere; or else it is False, all the time, everywhere

- Reason means beginning with a set of axioms or divine revelations, then applying logical deduction in order to deduce new Truths

- you can believe anything you've deduced with 100% certainty

Empiricists approach politics as a practical matter, in which we have a limited amount of resources to allocate in a way that will optimize some measure of utility. Rationalists approach politics as a moral matter, in the mindset of Plato when writing Republic.

It so happens that, today, every political party is rationalist. (This is usually the case. Empiricists are usually out discovering stuff, making stuff, or trading stuff, rather than playing politics. Democratic Athens, the Republic of Venice, the Netherlands and some Italian city-states during the Renaissance, and the American Revolution are the only empiricist states / political movements I can think of at the moment.)

- All Christians are rationalist, because Orthodox theology, and to a lesser extent the New Testament, were based on Plato.

- All Hegelians are rationalist, because Hegel was very Platonist. (Even if you disagree, you still gotta admit he was a prototypical rationalist.) This includes Nazis, Marxists, and Progressives. "Progressive" doesn't mean "a person who wants to make things better"; it means someone who believes in the divine march of Progress, under the guidance of the World Spirit, as explained by Hegel, to achieve the ultimate "perfection" (a Platonic concept) of the world.

- Libertarians are rationalist, because they have one supreme value (liberty), which takes priority over all other values, at all times and in all places.

- For the same reason, environmentalists, feminists, post-colonialists, and almost any "one great cause" movement is rationalist.

Rationalists are by definition people who place low value on empirical evidence. This is why Christians still won't believe in evolution, Marxists don't care how many times Marxism fails disastrously, and the "defund the police" movement doesn't care that excluding the police from the CHOP in Seattle for 3 weeks made it the most-violence-prone place in all of America for those 3 weeks. Theory has epistemological priority over observational evidence to rationalists.

Expand full comment
Phil Getts's avatar

Relating rationalism to the precision of sensory evidence suggests a new theory about when rationalism predominates. Until now, I've thought that rationalism has been the better policy in dangerous times when cultural survival is threatened, while empiricism has been a luxury that cultures could afford only in safer times. But "high variance of observed conditional probabilities" is a more-precise and more-obviously-sensible substitute for "dangerous times". People may then revert to rationalism in times of rapid change, even if nothing bad is happening, because they should then rely more on reason and less on experience.

I wonder if high variance of sensory stimuli activates the FOXO network in animals.

Expand full comment
JKPaw's avatar

Sorry, but you sound as though you invented the word (rationalist) yourself. Why are you so invested in promoting such limiting definitions as if they came from on high?

Expand full comment
Cosmic Derivative's avatar

> For example, in the 2016 election, Ted Cruz said he was against Hillary Clinton's "New York values".

Actually he said he was against Donald Trump's "New York values". Hillary Clinton had nothing to do with this statement; it was made in the sixth debate for the 2016 Republican primary. https://time.com/4182887/ted-cruz-new-york-values-donald-trump-republican-debate/

Expand full comment
JKPaw's avatar

Good point. I so dearly await Cruz' next plaintive instructions on which values I should follow. Sounds like he's got it all figured out.

Expand full comment
Oleg S.'s avatar

Mechanistically, trapped priors may be related to the phenomenon where memories become experience. So, one spends time with a dog, and the conclusion that it's terrible is confirmed. After that, one recalls the experience, and the brain uses the posterior plus the real experience to generate a new "memory" which replaces the original experience. This biased memory is then combined with previous priors to give new and worse posteriors (which become the prior the next time one recalls the events). And so every time one recalls something terrible, it becomes even more terrible. And if the event is "triggering", the recall/reconstruct cycles are accelerated: one thinks about the terrifying events much more frequently, and the process of replacing real memories with biased ones is very fast.
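
A tiny sketch of that recall/reconstruct loop (the mixing weight and the numbers are made-up assumptions): each recall writes back a blend of the stored memory and the fearful prior, so the remembered event drifts toward the prior rather than toward what actually happened.

actual_badness = 0.3       # how bad the dog encounter really was (0 = fine, 1 = awful)
prior_badness = 0.9        # how bad the prior says dog encounters are
memory = actual_badness

for recall in range(10):
    memory = 0.7 * memory + 0.3 * prior_badness   # reconsolidation overwrites the memory
    print(f"after recall {recall + 1}: remembered badness = {memory:.2f}")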

Expand full comment
Pat's avatar

The description of trapped priors sounds a lot like Lewis's reply to Hume on miracles: having decided a priori that miracles are impossible, Hume dismisses the (rather large) volume of testimony about miracles. He cites the credulity of the witnesses as evidence against their reliability (why credulous? c'mon, they're talking about angels and demons and stuff!), and then concludes from the absence of evidence that miracles are impossible.

Make of Hume's argument what you will, but it's a striking structural similarity to the trapped prior phenomenon.

Expand full comment
Carl Pham's avatar

Or like St. Anselm's proof of the existence of God: God exists because I can't imagine Him not existing.

Expand full comment
Tim Martin's avatar

I'm quite skeptical of this post.

I studied neuroscience in undergrad. I recall the brain having fairly idiosyncratic machinery for... well, everything. Including fear and anxiety.

Like if you were going to *design* a brain with an anxiety-processing center called an amygdala, you'd probably figure out what inputs the amygdala needs and just... give it those inputs. But no, the brain takes perceptual information and sends it to the amygdala via two different pathways: a fast path that goes straight to the amygdala, and a slower path that goes first to the cortex and then to the amygdala. (https://en.wikipedia.org/wiki/Joseph_E._LeDoux#Work_on_threat_response,_anxiety,_and_emotions)

Does this have implications for fear and trauma and memory? LeDoux thought so! (Though my information is dated now.)

My point is, I don't think the brain is "just doing Bayesian updates." I don't think that all neural processing is to predict things and then update those predictions in a Bayes' theorem-esque manner. (I know Scott has written about this in the past. I'm not sure to what extent he thinks this is literally true.) I think that the brain is messy, and that no matter how the low-level computation works there are some really weird and idiosyncratic "programs" that are built on the low-level stuff, and at least some of them are involved with fear and look very little like Bayes' theorem.

I think this post goes too far in the "everything is Bayes" direction. It's an attempt to unite disparate observations, which is great, when it works. I don't think it works here. It seems too pat. And not very predictive.

----------------------

Those are my overall thoughts. I also have a grab-bag of random things that came up while I was reading:

1. This post seems to imply that a lot of biases can be described as "bayesian reasoning + up-weighted priors." You've got: bias in favor of one's tribe, self-serving bias, and I imagine that "bias in favor of thinking good things about oneself" would also be included.

That said, what is this post meant to explain? If the theory doesn't logically entail that humans will have *particular* biases, then it's not very helpful. Why do people have "trapped priors" when it comes to their tribe? Why not some other thing?

Also, if "trapped priors are purely epistemic and don't require emotion," why are all of our good examples of trapped priors related to emotion? Maybe there's more going on here that this simple theory can't explain.

2. If the theory is meant to suggest ways to "unstick" sticky beliefs (say, by using psychedelics), do we expect psychedelics to affect all of the biases that (we claim) result from trapped priors? Will people show less self-serving bias? This strikes me as predicting too much.

3. A number of people in the comments are now saying "I have a trapped prior for thing X," meaning, "I have trouble changing my thoughts/feelings about thing X."

But this is post hoc. The trapped prior isn't predicting anything. "Trapped prior" is becoming a name for a thing, but not a *cause* for a thing.

This is, obviously, not Scott's fault. But the point should be made that we need independent evidence for calling something a trapped prior, and the fact that we don't have that is currently a weakness of the theory.

4. "...you only habituate when an experience with a dog ends up being safe and okay. But being in the room with the Rottweiler is terrifying. It's not a safe okay experience."

By this logic, it would be very difficult for the rats in your example to habituate to the bell, wouldn't it? The bell would cause them to predict pain, just as the dog causes some people to predict pain.

(Yep, I studied neuroscience in undergrad and an explanation like Scott's made sense to me at the time. Now, I don't really think it does.)

5. "...[psychedelics] can loosen a trapped prior, causing it to become untrapped..."

What does "loosen" mean in statistical terms? Push a prior toward uncertainty (.5)?

This question is less because I want an answer, and more because if everything comes down to Bayes theorem, then it would be helpful to *use simple terminology that applies to Bayes theorem* when talking about this theory. "Loosen" is very vague.

Expand full comment
Phil Getts's avatar

There's a good functional reason there are 2 paths to the amygdala: The fast path is faster. It's used, IIRC, for things like dodging something thrown at your head. (Not sure about that example, as most visual processing is in the neocortex; but it could be triggered by the "blindsight" visual system.)

There's also a good historical reason there are 2 paths to the amygdala: The fast path is a lot older. The slow path evolved later, to take advantage of more-intelligently processed information from the neocortex.

Expand full comment
Tim Martin's avatar

I agree with you.

Expand full comment
a real dog's avatar

I think people on, and immediately after, psychedelics show less self-serving bias. Maybe it just temporarily increases empathy so you're more concerned about strangers, but isn't that the same thing?

Expand full comment
Tim Martin's avatar

I think you're talking about something like "being nice to other people." According to wikipedia, self-serving bias is a set of biases that help maintain positive self-esteem by viewing oneself more favorably. It's not about being nice to other people.

Anyway, the principle of my qualm here is that I feel like Scott's theory could apply to *lots* of cognitive biases, but I'm less convinced that down-weighting priors would attenuate all of those biases as the theory predicts. I'd be happy to be wrong about this, but for now this seems off to me.

Expand full comment
a real dog's avatar

Not quite being nice - including their well-being in the "self" of "self-serving" bias, by Scott's definition.

So, example hypothesis is: a rich person under psychedelic influence would be more likely to believe something that benefits poor people (e.g. that expanding welfare is a good idea on the net).

Expand full comment
Deiseach's avatar

"But no, the brain takes perceptual information and sends it to the amygdala via two different pathways: a fast path that goes straight to the amygdala, and a slower path that goes first to the cortex and then to the amygdala."

It makes sense, though; if you're standing there contemplating "is that movement in the grasses a tiger or just the wind?", you could end up a tiger's dinner. If you're wired to respond "eek! a tiger!" when you hear the rustling, then once you are safely up a tree, *then* you have the luxury to go "oh no, silly me, it was just the wind".

Expand full comment
Tyler Sayles's avatar

this is related to the concept of negativity bias, where humans—weaksauce—prefer a threat detection system that errs toward false positives rather than waiting for certainty

https://en.wikipedia.org/wiki/Negativity_bias#Social_judgments_and_impression_formation

Expand full comment
Andrew Trollope's avatar

Not really the point of the post, but I think it's very plausible that Joe Biden personally doesn't support rioters but most people in his administration — starting with Kamala Harris — do. How else would you interpret tweeting out a bail fund for rioters?

Expand full comment
CB's avatar

My interpretation: supporting that bail fund was a popular thing on Twitter that week, so a campaign staffer wrote a message supporting it. I doubt the candidate herself ever actually saw it.

(My prior: both Harris and Biden are lizard people with no principles, and will shapeshift between tough-on-crime legislators and prosecutors depending on what they think will help them win a news cycle)

Expand full comment
CB's avatar

*still learning to live without an edit button

Expand full comment
David Friedman's avatar

copy, delete, paste, edit, post.

Expand full comment
Andrew Trollope's avatar

Sure, that's reasonable. I would just say that his and Kamala's staffers are likely to be far more liberal in their actions than Biden in particular is in his public comments. So it is reasonable for conservatives to think that Biden's public stance is not the reality of what will happen on policing and similar issues.

Expand full comment
Melvin's avatar

I think that vanishingly few political disagreements are actually about facts, so it's hardly surprising that evidence about facts is unlikely to change them.

Facts are tiny side-issues that people spend a lot of time arguing about, but which are usually pretty tangential to the point. If Alice and Bob sit down to argue about the death penalty, then Bob can bring a whole bunch of studies that show that the death penalty reduces crime rates, and Alice can bring a whole bunch of studies that show that the death penalty actually has no measurable effect on crime rates, and they can both spend many hours arguing about the relative merits and flaws of these piles of studies.

In the end, though, this debate is all for nothing -- Alice disagrees with the death penalty on principle, and even if you convinced her that the death penalty _does_ slightly reduce crime rates then she's not going to change her mind, she'll say that a small increase in crime rates is a small price to pay for the abolition of that horrible barbaric punishment. (Nonetheless, she will fight tooth and nail against having to admit that it actually _does_ increase crime rates.) And same deal for Bob.

Or, if you listen to libertarians, you will hear a great host of arguments about how getting the government out of such-and-such would lead to improved outcomes. I have never once heard a libertarian say "Look, to be honest, government interference in XYZ actually probably has positive outcomes, but I still think that the government should get out of XYZ on principle".

Are increased minimum wages good or bad, overall? It doesn't matter, because everyone seems to either support them on principle or oppose them on principle.

Why do people insist on debating facts when their disagreement is about values? Possibly because they are arguing for the benefit of some hypothetical listener who has no particularly strong value opinion on the matter but is pragmatically inclined towards good outcomes. And maybe these people do exist out there in the middle somewhere, but they tend to be relatively quiet. Among the sorts of people who debate politics on the internet, though, meaningful fact-based disagreements are practically absent.

Expand full comment
Aapje's avatar

> I have never once heard a libertarian say "Look, to be honest, government interference in XYZ actually probably has positive outcomes, but I still think that the government should get out of XYZ on principle".

Why would they say this unless they are not just libertarians, but anarcho-libertarians? And if they are that, they are unlikely to believe that government interference is ever better.

Expand full comment
David Friedman's avatar

I'm an anarcho-libertarian and I have long conceded that there are situations where government intervention can improve outcomes. The problem is that we have no way of structuring a government that has the power to intervene in those situations, will intervene correctly, and won't use that power to intervene in other situations where their intervention makes things worse.

Now does that statement, which is demonstrably true (about my position, not about whether the position is correct), cause either Aapje or Melvin to alter his priors?

Expand full comment
Aapje's avatar

I said 'unlikely,' not 'never.' So my statement is not incorrect, if it can reasonably be argued that you are an unlikely person :)

Expand full comment
Hilarius Bookbinder's avatar

"I think that vanishingly few political disagreement are actually about facts, so it's hardly surprising that evidence about facts is unlikely to change them." I think the truth is the exact opposite of that. People mistakenly think that their disputes are about values when they are really about facts. For example, consider abortion. Classic dispute about values, right? Fetus rights vs. mother's autonomy, women's rights over their bodies vs. something something patriarchy. Nope. What all parties agree to is the value that "killing persons is deeply wrong without some really serious mitigating reasons." Conservatives and liberals about abortion agree to that. Key points of difference are factual. One is metaphysical: are fetuses persons in the moral sense? Liberals say no, conservatives say yes. But that is not a dispute about values. Another difference is what counts as really serious mitigating reasons, and liberals and conservatives will have differing takes on that. But again, that is not a dispute about values. Abortion is a perennial topic because (1) people get emotional about it, and (2) the relevant factual/metaphysical issues are thorny. It is not because there is a root-level values disagreement. There's not.

Expand full comment
David Friedman's avatar

Similarly for the minimum wage. Do you think most supporters would continue to support raising the minimum wage if they believed that the main effect would be that people now getting the minimum wage would become unemployed? There are probably some who would, on the grounds that being unemployed entitles those people to unemployment compensation, which is better than working for $10/hour, but not most.

For the death penalty again, if one were convinced that each execution deters ten murders — I think one early statistical study got a result on that order of magnitude — surely at least some opponents of the death penalty would drop their opposition.

Expand full comment
Deiseach's avatar

That's just another version of the trolley problem - take one life to save ten.

My response to that is that I don't know for sure that it would save ten lives, but I do know for sure this execution is taking a life, and since I don't think that revenge-killing, even for murder, is good, I am against the death penalty for pretty much the same reasons I am against abortion: killing is not a solution to a problem.

I freely admit I am heavily influenced by the Catholic 'seamless garment' ethic https://en.wikipedia.org/wiki/Consistent_life_ethic

Expand full comment
ThePrussian's avatar

BECS is probably the thing that generates the most ill will. As I'm on the right, I constantly hear that so-and-so - or I - is a racistsexisthomophobe because of innocuous comment X. I don't think there is anything so guaranteed to cause ill will as "Oh, you _really_ mean X when you say Y."

(That said, I've tussled enough with White Nationalist types to be aware of Scott's point on Weak Men.)

The best thing I can think of is, even if you think this, not to say it. Try to engage with the literal meaning of the words as best you can (the late Hitch was good at this). And if you must say "This looks like X to me", explain that it is something you suppose, and why you think it, rather than just saying "You really mean X".

For example, I've dealt with a lot of people who have nothing good to say about Israel, and people who have nothing good to say about the new South Africa. And who in both cases offer euphemisms about the terrorist movements each faces (yes, S.A. has its white nationalist terrorists; they are called the AWB). Now I cannot say that each of these people is anti-semitic/racist, but what I do, if I hear enough of this, is say "Look, this is giving me a really bad feeling, and here's why."

That's the best I've come up with at the moment.

Expand full comment
Medieval Cat's avatar

So there are multiple ways to have trapped priors? Let's take a group of people that have a strong trapped prior that dogs are dangerous:

* Alice suffers from a weird sensory disorder that makes dogs look really dangerous.

* Bob suffers from a weird brain disorder that makes him unable to update his prior with new information.

* Carol has cynophobia. A dog "is so scary or hated that it's aversive to have to perceive it directly, [Carol's] mind decreases bandwidth on the raw experience channel relative to the prior channel so that [she] avoid the negative stimulus."

* David hates dogs for political reasons. Once again, a dog "is so scary or hated that it's aversive to have to perceive it directly, [David's] mind decreases bandwidth on the raw experience channel relative to the prior channel so that [he] avoid the negative stimulus."

Can we think of more cases? How much do these four cases really have in common? What is the actual difference between someone who has a strong prior that dogs are dangerous and someone who has an equally strong but *trapped* prior that dogs are dangerous?

Expand full comment
A.'s avatar

Here's one more case of a trapped prior for you if it helps.

I hate dogs. (With the exception of obviously well-behaved dogs, such as service dogs.) My prior is that humans allow dogs all kinds of bad behavior that isn't tolerated in humans and will take the dog's side against a human just about all of the time. It seems obvious that this makes dogs very dangerous. A human is not allowed to assault another human. A dog is allowed to bark at you, growl at you, lunge at you, show its teeth and act like it's going to bite you, and humans generally won't even reprimand the dog but will say something like: "He's just playing", "He didn't like how you looked", "You were wearing a piece of clothing that makes them do that", and so on. I can't help thinking that most dog owners would probably also be fine with the dog biting a random person without provocation if it wasn't a life-threatening bite. Once I saw a dog bark at and lunge at a toddler in a stroller, and the owner of the dog didn't even say "sorry".

For a while I had to walk past a fraternity that had 2 dogs that came quite a distance away from the building to harass passers-by. One day one of these dogs bit a piece off of my clothing (the piece of clothing was red and cost $10, so I figured there was no point in complaining, since the loss wasn't big enough, and since everyone would take the dogs' side as usual anyway). The dog's mouth was really close to my fingers, and I felt lucky it bit at the clothing and didn't bite off a finger instead. A couple years later, one of these dogs bit through someone's leg; that person was coming from a bbq and probably smelled tasty. For this, the dog was finally euthanized. The fact that it was finally euthanized for biting somebody didn't do anything to convince me that I was somehow safer from dogs still alive. I'm not sure what it would take to convince me.

Expand full comment
Medieval Cat's avatar

That doesn't look like a trapped prior. Presumably you would change your mind and not think that all dogs are misbehaved if you got to spend time with a well-behaved dog.

Expand full comment
A.'s avatar

I know that there are well-behaved dogs. I have no problems at all with service dogs, because I've never seen a service dog menace anyone. As far as I'm concerned, service dogs might just as well be a different species than dogs. Throughout my life, I've also been friends with a few dogs whom I knew very well.

But there's no way for my prior of "dogs are dangerous because people let them misbehave" to update, because I keep seeing examples of dogs misbehaving and their owners not caring. I know people don't get bitten very often, but I know people who were bitten, and there's no way to know that the next person it happens to won't be me.

As far as I can tell, any dog that's not a service dog is just seconds away from becoming a mass of teeth and bark lunging at me. How would you ever fix this?

Expand full comment
Sean Traven's avatar

Perhaps we need an easier approach. For example, I have found it helpful to ask students, "Are you willing to accept that your idea is wrong if you are shown evidence that it is wrong?" Prime the person to overtly and rationally accept the possibility of change. Have you or other workers in this area tried that?

Also, you are looking at behavior change, but there is a pretty solid body of evidence that behavior change is more enduring when the new behavior occurs at higher frequency with less latency. You and others ignore frequency, as far as I can tell. If you want people to change responses, it might be more effective to train them to change a small response quickly and then do it repeatedly. Stop looking only at quality and look at frequency.

I will give you an example. If you want your client to stop feeling intense fear of dogs, do something to help her quickly change the feeling. For example, give the person a picture that causes a fear reaction, then teach her to change that feeling quickly. I don't know your clients, so I don't know what would do that, but you could possibly give a mild fear stimulus and then very quickly show a picture of a dog or some non-doggy but delightful, relaxing, or humorous stimulus that would quickly alleviate the feeling of fear. Alternatively, you could teach a mental reaction that would relieve the fear and teach her to quickly engage this reaction (e.g., a statement of some kind). Then have her do this more and more quickly for a minute or so, using multiple stimuli and trained reactions or competing stimuli.

This is just an outline of an approach that might work. The main point here is that frequency and latency are both important, and as far as I can tell, you are not much aware of that.

Expand full comment
Jiro's avatar

'For example, I have found it helpful to ask students, "Are you willing to accept that your idea is wrong if you are shown evidence that it is wrong?"'

The answer to this should be "no" almost all the time because of epistemic learned helplessness. If a missionary for the Mormons came to your door and gave you arguments for Mormonism that looked irrefutable to you, should you believe in Mormonism? No. If a homeopath gave you a 500-page book about homeopathy and you for some reason actually read it, but you couldn't refute everything in the book, should you become a homeopath? I wouldn't.

It's also related to "if you believe X, would you bet on X", to which the answer is "no, since I'm not confident in my ability to find loopholes in the bet".

Expand full comment
qbolec's avatar

Here's my response to this post https://www.lesswrong.com/posts/JtEBjbEZidruMBKc3/are-dogs-which explains my view on how one should perform the updates on dog encounters, and what can go bad if you let the final verdict influence your (meta)priors.

Expand full comment
a real dog's avatar

Unless I misunderstood your post, you're talking about rationally analyzing canine virtue while Scott is talking about evaluating the subjective experience of meeting the dog.

If you're scared enough _it doesn't matter_ if it's a good dog, you'll have a terrible time anyway and update in the direction of "dogs cause terrible experiences, they should be avoided".

Expand full comment
a real dog's avatar

I agree with most of your analysis, but I've met a whole bunch of people for whom "punish women for being sluts" was pretty much the explicit reason for choosing a pro-life position. Typical mind fallacy.

Expand full comment
DABM's avatar

How much work is "pretty much" doing here? (Though I also had the reaction to that bit of "but some conservatives really do think like that, even if they won't admit it!' I think this one is more in the category of 'being woke is class snobbery in disguise' (to chose a current conservative complaint) rather than 'liberals like COVID restrictions for their own sake'.)

Expand full comment
Jacobethan's avatar

Is the axis on which you're distinguishing the wokeness/snobbery and liberal/COVID examples just "relative plausibility," or something else?

Funnily enough, I sort of think both are true. And true in closely related ways -- that is, both ride upon the same basic intuitions about contemporary liberalism and the tendencies in American culture it's currently choosing to amplify.

Assuming "wokeness is a veiled form of professional-class courtly etiquette" is the thesis less in need of explication/defense, I'll briefly state the case for the other:

I certainly don't imagine liberals are going around thinking, "a world with COVID is preferable to a world without it, insofar as it brings us closer to our terminal goal of imposing COVID restrictions."

At the same time, however, it seems clear to me that many liberals have developed a strong identification with and sense of value around the act of observing COVID rules, in a way that floats at least somewhat freely of their practical impact. Put otherwise, following such rules -- and being seen to do so -- has become more or less central to how some liberals appear to define virtuous citizenship, the sort of character society should ideally be organized to promote.

Where this dovetails with the wokeness/snobbery thesis is that I think both are tapping into a view of the contemporary center-left that sees it as more closely connected to the Puritan moral culture out of which American liberalism developed, and with which it's always retained close sociological relations, than to the propositions and impulses liberals defended for much of the 20th century.

Expand full comment
DABM's avatar

Plausibility. And I was thinking of the claim that liberals like COVID rules because they like the government controlling people as an end in itself. Your reading is somewhat less implausible, and I suppose might be what Scott had in mind.

Expand full comment
Jacobethan's avatar

Well, these aren't totally separate things, right?

I mean, part of the reason liberals seem to "like" following the rules (in the complicated sense I suggested above) is because it's a way of showing respect for authorities they see as legitimate. That is, the rules are good because they're informed by the judgment of scientific experts, and trusting scientific expertise is what sets us apart from the barbarians.

And that sort of Cult of Dr. Fauci impulse is obviously not unrelated to a whole debate over the credibility of certain kinds of mainstream institution and the wisdom of letting people operate outside officially approved channels that's only intensified since 2016. And in which liberals have become increasingly identified with the project of trying to tamp down "fake news" and "misinformation," a project that currently seems to be sidling up to the idea of enlisting government power on its side.

So I do think there's a non-spurious connection between some of the things we might be able to agree liberals associate positively with mask-wearing, on the one hand, and some of the ways liberals might want to expand the state's capacity to police what they see as bad actors, on the other.

I'm not going to belabor what I'd guess is the familiar point about selective application. Gathering to protest the state's own lockdown policies evidently ought to be illegal, as should gathering to offer worship unrelated to government aims, but mass marches on behalf of concepts and slogans endorsed and promulgated by the state are the one permissible form of collective activity. Put like that, I don't think most liberals would say they see this as a desirable state of affairs. Yet that so many liberals sequentially argued for each step in the chain, showing such minimal discomfort in doing so, is... interesting.

Finally, while I don't think having governors order businesses to close has ever been any sort of core goal of liberalism, there've been moments when I've gotten the distinct impression from certain liberals that they see the things getting shut down as mostly stuff we're better off without anyway. I'm thinking of the way lockdown critics often got cast as animated by a sort of petty-bourgeois consumerism: "God, can you believe these people who only care about if they get to go to Cheesecake Factory?"

To which my first emotive response is something like: "Look, I don't eat at Cheesecake Factory either. But I'm sure there are families whose weekly visit there is a cherished ritual, an opportunity to bond and bask in the pride of the paychecks that provide just enough to afford this single indulgence. So who the f*** are you to say what difference it makes if it closes?"

But when I step back and consider this in the dreaded "context" (meaning: with more latitude for me to make stuff up), I hear something slightly different: "Will we never be rid of these Trumpy proles, poisoning themselves with their disgusting food and vomiting contagion in their megachurches, leaving the rest of us to pick up the tab?"

And in those moments I do sense something -- ephemeral, perhaps, maybe not altogether conscious -- like a positive liberal satisfaction with the shutdowns, akin to what their Puritan forbears might've felt if a public health law had just incidentally happened to shutter the gin joints and music halls.

I guess the question is whether in hearing that note I'm being more, less, or about equally as fair as real dog is in hearing a certain kind of conservative attitude around abortion as "punish women for being sluts."

Expand full comment
DABM's avatar

'...but mass marches on behalf of concepts and slogans endorsed and promulgated by the state are the one permissible form of collective activity.' So it was definitely massive hypocrisy to have a go at Republicans for demonstrating and then be fine with BLM demos, and as far as I can tell a vast swath of American liberals were guilty of this. But the state was in Trump's hands at the time. Indeed, Republicans also controlled both houses of Congress! How was it a state-endorsed cause? I feel like this is so wildly incorrect that it's a sign of deep bias.

As for churches: they are (usually) *indoors*. Of course they are worse for COVID than an outside demo!

Expand full comment
Jacobethan's avatar

The federal executive was in Trump's hands. (Republicans also controlled one house of Congress, not two, though that's neither here nor there.) The use of executive power by President Trump in 2020 is quintessentially NOT a mode of governance whose reach liberals might be tempted to extend. So I'm not sure how this is relevant to understanding which ways of wielding state power liberals *did* approve of, and why.

The federal government under Trump also wasn't in the business of deciding who got exempted from lockdown orders and who didn't. State governors, of course, were. And a pervasive feature of liberal politics in early-mid 2020 was the juxtaposition of various blue-state governors against Trump as avatars of a more responsible approach to the prerogatives of office. That paradigm hasn't aged well. But it doesn't seem crazy to see the things liberals idealized then (in Whitmer, Cuomo, etc.) as indicative of some underlying view of the proper scope of state authority.

Seen in that light, I think the central facts about the George Floyd protests are these:

1. Numerous blue-state officials -- in many cases possessing at the time near-plenary powers to order or countermand virtually any aspect of daily life within their jurisdictions -- praised, marched with, and deployed state facilities in affirmation of the protestors and their cause.

2. These officials, in justifying the protests' extraordinary dispensation from the lockdown rules, generally cited extenuating public-health factors (it's outdoors, they're wearing masks, etc.). But they also made it abundantly clear that the protests merited this special consideration to begin with because *their substantive themes were ones the officials themselves approved of and agreed with.*

What I'm saying here has nothing to do with whether "the state" in some wider, national sense supported the protests or not. My point is that for many liberals the protests represented a sacred cause, that in those local instances where liberals controlled state institutions they used that power to give the cause a commensurately exceptional status, and that liberals at large seem to have regarded this as a wholly salutary model for the exercise of state authority.

Expand full comment
unresolved_kharma's avatar

From the mathematical point of view it seems to me that there are two intertwined but different issues.

1) You start with a prior that is extremely narrow, a Dirac delta for any practical purpose, or maybe just something with a finite support (i.e. zero outside a certain interval). This means that even if the Bayesian updating works perfectly you can't escape the (for all practical purposes infinite) prior information that you start with. By the way "prior with finite support" is what came to my mind when I read the phrase "trapped priors".

2) The updating algorithm deviates from perfectly Bayesian and can actually update the prior in the wrong direction (it could also be Bayesian at the core but with some inbuilt "data preprocessing" that amplifies evidence in one direction disproportionately).

Maybe a similar point has been raised in other comments (I haven't read them all)... Anyway it's a probably superfluous "technical" comment that doesn't change the take-home message of this brilliant analysis.
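
A minimal numerical sketch of point 1, with an invented dog-encounter setup: under perfectly ordinary Bayesian updating, a prior that puts zero mass outside a narrow interval can never move any mass back out of it.

```python
import numpy as np

# Hypotheses: theta = probability that any given dog encounter goes badly.
theta = np.linspace(0.01, 0.99, 99)

# "Trapped" prior with finite support: zero mass on every theta below 0.9.
prior = np.where(theta >= 0.9, 1.0, 0.0)
prior /= prior.sum()

# Observe 50 perfectly pleasant encounters and apply Bayes' rule each time.
posterior = prior.copy()
for _ in range(50):
    likelihood = 1 - theta              # P(pleasant encounter | theta)
    posterior *= likelihood
    posterior /= posterior.sum()

print(round(theta[np.argmax(posterior)], 2))  # 0.9: pinned to the edge of the support
print(posterior[theta < 0.9].sum())           # 0.0: the excluded region never recovers
```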

Expand full comment
Nazzim's avatar

Just wanted to register my thanks for this comment -- I had the same basic intuition (that what we're really talking about is something like overly narrow priors as opposed to "underweighted" priors) but lacked the math knowledge to express it well.

Expand full comment
WB's avatar

Hah, we just covered this in a course I'm taking. Our model was like this: a queue of people share a prior that restaurant A is better than B with 51% probability. Each person, on reaching the front, receives a private signal about which restaurant is better, and the signal is correct with 52% probability.

If the first person receives the signal A>B, then everyone in the queue will end up going to A: when they're at the front, they have their prior, their own signal, and they can infer that the first person's signal was A>B (since they ended up going to A). Combining this information will always yield A>B regardless of their own signal.
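
A minimal Python sketch of that model, using the 51% prior and 52% signal accuracy described above (the implementation is just one way to realize it, restricted to the case where the first signal is A>B):

```python
import math

def log_odds(p):
    return math.log(p / (1 - p))

PRIOR = log_odds(0.51)    # shared prior: 51% sure restaurant A is better
SIGNAL = log_odds(0.52)   # each private signal is right with probability 52%

def choices(signals):
    """signals: one private signal ('A' or 'B') per person in the queue.
    Models the case described above: the first signal is 'A', so the first
    choice reveals it, and every later choice adds no new information."""
    out = []
    for i, s in enumerate(signals):
        own = SIGNAL if s == 'A' else -SIGNAL
        inferred = SIGNAL if i > 0 else 0.0   # the first person's revealed 'A' signal
        out.append('A' if PRIOR + inferred + own > 0 else 'B')
    return out

# Even if everyone after the first person privately hears "B is better",
# prior + inferred first signal outweighs their own signal, so all go to A.
print(choices(['A', 'B', 'B', 'B', 'B']))   # ['A', 'A', 'A', 'A', 'A']
```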

Expand full comment
Luke G's avatar

That's an interesting example and I think it ties in well with a lot of real-life group behavior. On the other hand, this is often mitigated when it's not a "one-shot" situation, because there's incentive to explore other options in case your priors are wrong. There's a whole field of results about exploration vs exploitation.

Expand full comment
test's avatar

"Trapped prior" is a good way to describe Scott Alexander's attitude towards the New York Times. The sensory evidence (the NYT being full of people wanting to destroy Scott Alexander's life and not afraid to lie in service of that goal) has been swamped by the prior (the NYT is the font of all that is true and holy)

Expand full comment
a real dog's avatar

Untrue (not anymore), unnecessary (everyone understands your point already), and unkind (obviously).

Expand full comment
DABM's avatar

He explicitly said he thought that the NYT article as actually published was an attempt to punish him for complaining about them revealing his name, so not really. Personally, I find this unlikely: the NYT just criticized him, in a somewhat shady way, for flirting with right-associated positions on race that the NYT really hates. Of course a newspaper is going to mention if a public intellectual they're writing about has flirted with views they hate. (The evidence they cited for this was misleading, and perhaps designed to give the impression that Scott has explicitly endorsed the view not just flirted with it, but Scott really has flirted with the view!)

Expand full comment
Jiro's avatar

He thought that the NYT originally had only harmless things in mind and actually was going to write an article about rationalists successfully predicting COVID. He had to admit that the article they published was malicious, but he seemed to minimize the amount of malice involved to a ridiculous extent.

Expand full comment
DABM's avatar

Yes, but the claim is that Scott still can't criticize the Times now and has a reverential attitude towards it now. Which is blatantly false given what he's recently written about the Times.

Incidentally "malice" is a somewhat shady word in this context, because it's mixing descriptive and evaluative. I.e. if I think the Times wanted to right a 'popular blogger has bad views' article, but that they have every right to do that, because it's just normal criticism of a public intellectual (even if the particular way they executed the criticism is shady) for spreading controversial views, from people who think those views are bad, then do I think they acted with "malice"? If I say "no", it sounds like I'm saying that they didn't knowingly attempt to make Scott pay a reputational cost for the views he promotes, but if I say 'yes', it makes it sound like I concede that their doing so was automatically wrong (since by definiton "malice" is bad). As it happens I do think the article was shoddy, but not because telling people a blogger has views with intent to make them think badly of them is itself wrong, but because they implied Scott has expressed agreement with Murray on race in a post where he'd done nothing of the sort. On the other, hand Scott does think there's a fair chance Murray is right about race and IQ, so it's not exactly the worst misrepresentation ever in the scheme of things. The complaint about the feminism stuff is much flimsier, basically just that Scott said 'don't tell people I said X' after saying 'X', which doesn't seem much of a defense.

Expand full comment
Alex G's avatar

I feel very skeptical about entirely-internal belief feedback loops but more open to mixed internal+external belief feedback loops (or local minima in belief-space which, say, psychedelics might help with).

Say, if you are a partisan Democrat and have a discussion with a partisan Republican, probably the interaction will be very bitter and shouty and you will develop more of an ugh-reaction to Republicanism.

Expand full comment
Donald's avatar

Consider the hypothesis "you are in a Russian roulette environment," i.e. an environment that has some small chance of killing you.

So imagine your priors say: either dogs are safe and quiet, or dogs bark a lot and occasionally kill. Then when you see a barky slobbery dog, your belief in their danger goes up. Maybe the hypothesis "dogs are barky but safe" is in there too. If the chance of dogs killing you under the dangerous-dogs hypothesis is 1%, that's going to take many hundreds of very scary dog visits before it shrinks away (enough that you would expect to be dead if dogs were that dangerous).

So with these fairly sane priors and perfect Bayesian updating, you get the behaviour where someone walks away from a dog uninjured but ends up more scared of dogs.

Of course, many things look Bayesian with a strange enough choice of priors.

As for the doomsday cultists: OK, they are just not Bayesian, but there could be a lot of selection there, in that the least rational are the most likely to become doomsday cultists.
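
A toy numerical version of that reasoning, with invented bark/kill probabilities (only the 1% lethality figure comes from the comment):

```python
import numpy as np

# Three hypotheses about dogs, each with P(bark) and P(kill) per visit.
hypotheses = [("safe and quiet",             0.1, 0.0),
              ("barky but safe",             0.9, 0.0),
              ("barky, occasionally lethal", 0.9, 0.01)]

belief = np.array([1/3, 1/3, 1/3])

def update(belief, barked=True, survived=True):
    like = np.array([(pb if barked else 1 - pb) * ((1 - pk) if survived else pk)
                     for _, pb, pk in hypotheses])
    post = belief * like
    return post / post.sum()

belief = update(belief)   # one very barky visit, walked away unhurt
print(dict(zip([h[0] for h in hypotheses], belief.round(3))))
# "safe and quiet" drops; "barky, occasionally lethal" *rises* despite no injury.

for _ in range(300):      # it takes hundreds of survived visits...
    belief = update(belief)
print(dict(zip([h[0] for h in hypotheses], belief.round(3))))
# ...before the 0.99-per-visit survival factor finally separates
# "barky but safe" from "barky, occasionally lethal".
```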

Expand full comment
bagel's avatar

Don’t be so down on cognitive trapping; it can have big benefits too! It’s fast (all precomputed, no serious thinking needed in the hot path in response to stimuli) and it’s immune to lyin’ liars (because no matter how eloquent they are you’ve presupposed their falsehood).

And there’s all sorts of reasons you might want fast and durable mental tools. You want them if a tiger is hoping to eat your face. You want them if an abuser is trying to gaslight you. You want them if you’re being tempted away from the high road. All things where you don’t have time to think and it costs a lot to be wrong.

It’s sort of the opposite of the scientific method, which is all about focusing on exactly what stimuli should flip your prior. But science sometimes takes decades, or lifetimes, or more. Time enough to cut through the noise.

The other way through cognitive trapping, I believe, is humor. My father taught me that the way to defuse any situation is with a gentle joke. Perhaps if you're laughing with everyone then you can't register the experience as negative? And therefore you can begin to update your prior in a more Bayesian fashion? I do get the sense that my father's use of humor has more to do with him being a middle child and less to do with his neuro PhD or his pediatric neuro MD, but I've come to believe in his method.

Aside: one of the really fascinating things about working with geniuses - not just very smart people but geniuses - is that they develop a strong cognitive trap that most people are wrong and they are right. It’s not totally irrational! Except when it is.

Expand full comment
Phil Getts's avatar

Perhaps one evolutionary "purpose" of humor is to help escape trapped priors?

Expand full comment
bagel's avatar

Does humor have an evolutionary purpose? That would be a very interesting claim to demonstrate.

Expand full comment
JKPaw's avatar

I love your last aside -- and maybe it points to why there are so many brilliant sociopaths. (I'm just assuming that's true cuz it sounds right.)

But I think you also point to a basic societal compact that we all pretty much have agreed to (to a greater or lesser extent): we've all become amateur statisticians, and live accordingly, treating our neighbors as the people they most LIKELY are, given the cues they display -- rather than investing our precious time to see if we've pigeon-holed them accurately. We do this systemically as well, in the courts, in the schools, etc. It's purely utilitarian, in that the resultant efficiencies supposedly outweigh the suffering of the minority who are prone to mischaracterization.

Expand full comment
bagel's avatar

I mean, who can know another's mind? Few enough people even know their own. All we can do is listen to what people say and watch what they do.

But we gravitate to ambiguous signals because we can all agree on them ... because they're ambiguous and mean something slightly different to everyone. Weak agreement, I once heard it called. If there is such a quantifiable thing as EQ, I figure that it's more or less the ability to reverse engineer what someone wants, thinks, or will do based on the signals you can see.

Expand full comment
JKPaw's avatar

I think the goal ought to be neutrality -- unless and until we've had the opportunity to confirm the incoming signals. I mean this in the ideal sense, when circumstances have accounted for risk factors involved with generosity of spirit. This is a heroic trait in our TV movies: when someone is finally willing to risk giving the old witch in the dark house the benefit of the doubt.

The alternative is bad enough because certain people with the wrong "scars" are too frequently dismissed and fated to live with often reduced agency. But compounding the deal is that it typically gets reinforced by social structures that feed on it -- which compounds the impact to the individual, and privileges the facile power of narrow-mindedness.

Expand full comment
Carl Pham's avatar

How do you *know* when a genius is wrong? Taking a vote among the less gifted doesn't really help.

Expand full comment
Majromax's avatar

An interesting analogy to this issue might be "regularization" in machine learning (neural network) frameworks. To prevent over-fitting (i.e. a network that learns irrelevant patterns or "noise" in its training set), data scientists optimize a fitness function that includes some features of the network itself.

One common regularization option is L1-norm regularization, which penalizes the network's fitness score by the sum of the absolute values of its weights. This penalty encourages networks to push as many weights as possible towards zero, forming a sparser (fewer-factored) explanation for any inference.

Shifting back to "trapped priors" and using this as a metaphor for human reasoning, habituation requires us to replace a simple explanation ("all dogs are scary") with what is at first a more complicated one ("almost all dogs are scary, but this dog isn't scary because it's licking its own rear end right now.") If we suppose we're instead applying a strong preference for a simple rule, then a simpler explanation that's less accurate ("all dogs are scary, and also this dog is scary") may still be preferred. Habituation would then work because the strong belief is not directly challenged (even if all dogs are scary a *picture* of a dog can't be scary), so the preference for a simple belief is not triggered in the same way.

Expand full comment
RCR's avatar

Isn't this as simple as "Priors should be falsifiable"?

Expand full comment
Wtf happened to SSC?'s avatar

> I can't unreservedly recommend this as a pro-rationality intervention, because it also seems to create permanent weird false beliefs for some reason, but I think it's a foundation that someone might be able to build upon.

It creates weird false beliefs because lowering the prior doesn't remove the error.

There's a type-I/type-II error tradeoff here. Stick more strongly to (typical) priors, and you'll be less of a conspiracy theorist but more vulnerable to failing to notice that the building pandemic really is that bad. Stick weakly to priors, and you'll be quick to panic about COVID in January 2020 but also become convinced that lavender cures autism or whatever. The larger the errors in the inputs, the more important this tradeoff becomes.

Since my whole mission here is to figure out grey-tribe failure modes, here's a speculative one: the grey tribe likes to think about weird ideas because it thinks bad ones will fail and good ones will succeed. Smart grey tribers have relatively low error in their analysis, but the error is heterogeneous, and if you consider enough weird ideas you will eventually land on one where your error convinces you of a false premise strongly enough for it to lock in. And the tradeoff can't save you: people who hold their priors weakly are more vulnerable to the error, and people who hold them strongly are more easily locked into it.
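
A toy simulation of that tradeoff, with every number invented; "sticking to priors" is modeled crudely as how low a starting probability you assign to any weird claim.

```python
import math, random

def posterior(prior_p, log_lr_stream):
    """Start from prior_p and apply a stream of log-likelihood ratios."""
    lo = math.log(prior_p / (1 - prior_p)) + sum(log_lr_stream)
    return 1 / (1 + math.exp(-lo))

rng = random.Random(1)

def noisy_evidence(truth, n=20, strength=0.3, noise=1.0):
    """Evidence points toward the truth on average, but with large analysis error."""
    sign = 1 if truth else -1
    return [rng.gauss(sign * strength, noise) for _ in range(n)]

for prior_p, label in [(0.20, "weak prior against weird claims  "),
                       (0.01, "strong prior against weird claims")]:
    catches = sum(posterior(prior_p, noisy_evidence(True)) > 0.5 for _ in range(2000))
    fooled = sum(posterior(prior_p, noisy_evidence(False)) > 0.5 for _ in range(2000))
    print(label, "| believes the real thing:", catches / 2000,
          "| falls for the false thing:", fooled / 2000)
```

With these particular numbers, the weak-prior agent believes the real emerging thing more often but also falls for the false one several times as often; tightening the prior reverses both.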

Expand full comment
madqualist's avatar

I believe the framing a person applies to an issue is capable of overriding everything else, and that this is the most common way trying to be an educated, informed person goes wrong. This is not only a trap; this is THE trap, as long as you are human. I'd say the most important sources of stupidity that originate in otherwise brilliant people are just manifestations of them framing or contextualizing information in a self-serving or, as you formalize it, prior-confirming way. I think almost everyone has a natural tendency to do this, especially with emotionally charged issues. (There are simpler mistakes, but those are fixable.)

I think a primary exercise of having a healthy perception of the world should include ensuring that one remains humble and attempts to question their most entrenched worldviews as an active practice. This probably includes a mixed information diet from sources that have many different perspectives.

Maybe constantly trying to undermine one's own worldview from all sides is actually the most essential practice if you want to have the best chance of being correct about things. This could be the real central praxis of rationality, even more so than dealing in certain mathematical formalisms (which, while great, can easily be used in service to one's ideology)

Expand full comment
Timo's avatar

Supporting the idea of trapped priors independent of any emotional component: you can very easily end up with a trapped prior in a naive computer implementation of a Bayesian filter that uses plain (non-log) likelihoods instead of log-likelihoods. If the prior becomes small enough, the floating-point value will underflow and turn into zero.

In theory, it can happen with log-likelihoods as well, since IEEE floating-point values can overflow to infinity if sufficiently large values are added together. However, the amount of evidence you would need to gather for this to occur is quite obscene.
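
A minimal demonstration of that underflow, with arbitrary toy likelihood values:

```python
import math

# Naive Bayesian filter in plain probabilities: keep multiplying the prior
# by small likelihoods and it underflows to exactly 0.0 -- a prior that can
# never recover, however strong the later evidence is.
p = 1e-3
for _ in range(100):
    p *= 1e-5
print(p)          # 0.0 (a double underflows around 1e-308, roughly 65 steps in)

# The same filter in log space just keeps accumulating ordinary finite numbers.
log_p = math.log(1e-3)
for _ in range(100):
    log_p += math.log(1e-5)
print(log_p)      # about -1158, still perfectly usable
```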

Expand full comment
apxhard's avatar

Have you considered a trapped prior might actually be an extremely useful tool to have?

Suppose the trapped prior is 'everything will be ok'. Whenever a situation seems to be not ok, if that prior really is trapped, you'll rely on it, which will make you feel that everything is, indeed, OK, and as a result, you get more evidence for the trapped prior.

This seems like a pretty decent description of what religious faith gives people. If you have 100% confidence that things will be OK, that they happen for a reason, and that your job is to always accept your circumstances and do the best you can, then you can likely interpret any situation - even ones most people would say are very awful - as being OK, so long as that prior is strong enough.

This is the approach I've been taking the last few months, and it seems to be working for me: https://apxhard.com/2021/01/18/the-hypnotoad-pill/

The only thing I don't use this prior for is to anticipate that there will be no bad consequences from a possible course of action. It would obviously not be good to say, "I won't buckle my kids' car seats because nothing bad will happen." I don't think most religious people are doing that, but I can see how this sort of thinking would not be great at convincing people they should wear a mask or want to reduce carbon emissions, for example.

But if you're crippled with anxiety and fear about the world, a trapped prior that says "I will be OK" seems like an extremely useful tool.

Expand full comment
JKPaw's avatar

From a psychological perspective I can sympathize with Scott's assumption that less trapping means less neuroses means a happier life (he didn't say all that, but his ending implies a focus on treatment) -- but your comment is a reminder of the enormity of the task. It's an interesting question though, along the lines of how much reality can people bear? Some more than others, no doubt. A meditator, Scott suggests, readily invites more direct experience into her/his perception, whereas someone who spends an equal amount of time in rote prayer (I suggest) is probably inviting just the opposite, by actively reinforcing bias, maybe because the actual reality of the moment would be devastating.

Expand full comment
awenonian's avatar

"I don’t know why this doesn’t happen in real life, beyond a general sense that whatever weighting function we use isn’t perfectly Bayesian and doesn’t fit in the class I would call “reasonable”."

Assuming the model, it seems to me that the reason for the problem is that there's a cycle. In your diagram, the gray box isn't where the Bayes happens. It's just some sort of weighted average function between experience and priors. The Bayes happens between the perceptions and the priors. But since perceptions are modulated by priors, this is factoring your priors into your evidence, which is a little silly. After all, it means you can update on nothing: take a 0 experience, average it (in any way) with your prior, and you'll have a non-0 perception to update on.

Most of the time, this probably is fine, because it just makes you update slightly less or more than you should, and in some cases could be helpful (for example, if your brain isn't capable of making large enough jumps in prior in a single bound, then having your prior reinforce itself lets that happen over time, maybe). In some cases, it'll cause problems, like if the average between a large negative prior and a small positive experience becomes a small negative experience, the opposite of the actual experience. So a proper Bayesian would have to do the updates on prior and experience *before* funneling that into perceptions. Or maybe do updating and perceiving completely separately. At least, the perceptions part would not be an input into the Bayes function.
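
A crude linear caricature of that loop, with invented numbers (not Scott's actual model), showing both effects: a genuinely positive experience can be perceived as negative, and updating on the prior-contaminated perception corrects the prior far more slowly than updating on the raw experience would.

```python
# A single "how nice was that dog" scale from -1 (terrifying) to +1 (lovely).
def perceive(prior, experience, prior_weight=0.8):
    # the gray box: a weighted average of context/prior and raw experience
    return prior_weight * prior + (1 - prior_weight) * experience

print(perceive(-0.9, 0.5))   # about -0.62: a nice encounter is *felt* as bad

def run(update_on, prior=-0.9, steps=50, lr=0.1):
    for _ in range(steps):
        experience = 0.5                         # consistently nice dogs
        perception = perceive(prior, experience)
        target = perception if update_on == "perception" else experience
        prior += lr * (target - prior)           # move the prior toward the evidence
    return round(prior, 2)

print(run("perception"))   # about -0.01: fifty nice encounters, still roughly neutral
print(run("experience"))   # about 0.49: converges to the true value, as it should
```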

Expand full comment
DABM's avatar

This is a very very smart comment and I hope it is highlighted.

Expand full comment
Bob Frank's avatar

WRT the "cynophobia" thing, what about cases where dogs truly do behave abnormally aggressive towards a person?

I've long since lost count of the number of times some person, innocuously walking their innocuous dog near me, has ended up totally shocked as the beast goes into a rage, straining at the leash trying to reach me and attack me. They always apologize profusely, saying the dog *never* behaves that way. Well... for me they do, pretty darn consistently.

In a similar vein, I have never in my life met a mean cat. Not even ones whose owners warn me about how mean the cat is. I can always get them to cuddle up in my lap and purr as I pet them with no trouble. Again, this frequently surprises the owner; watching the responses of others is a good way of getting outside my own biases and knowing something real is going on.

Maybe I just smell like a cat on some pheromonal level or something?

Expand full comment
Ravi D'Elia's avatar

I guess my question would still have to be why this actually happens. Is there just too much friction in the system? Is the brain using some sort of computationally efficient approximation of Bayes Theorem that can get caught up on things like this?

Expand full comment
jb9905's avatar

Conspiracy theories are another case of this, where they can fully explain the lack of evidence ("because it's a conspiracy!") and become trapped.

More broadly, priors are not just a number -- they're a number attached to a whole mechanistic theory about the world.

If I tell you an urn contains a red ball and a white ball, and I then pull a white ball out of it, your confidence that the next ball will be red goes up.

If I tell you I filled the urn with balls of a single colour, then when I pull a white ball out, your confidence that the next ball will be white goes up.

The same observation, but you update in different directions depending on your prior!

The issue with trapped priors might not be that the updating process is irrational, but that the prior itself was way overconfident to begin with. A properly ignorant prior will contain some seed of doubt in any conspiracy theory, which grows faster on any evidence of no conspiracy than the conspiracy theory's cover-up likelihood. But if you don't have that seed of doubt to begin with, a prior of zero still updates to zero no matter how much evidence is in its favour.
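
The urn example and the zero-prior point, worked through numerically (the conspiracy-evidence numbers are invented):

```python
def bayes(prior, p_obs_if_true, p_obs_if_false):
    """P(hypothesis | observation) for a single binary hypothesis."""
    num = prior * p_obs_if_true
    return num / (num + (1 - prior) * p_obs_if_false)

# Model 1: the urn holds exactly one red and one white ball, drawn without
# replacement. Seeing white drawn means the remaining ball is red, so
# P(next is red) jumps from 0.5 to 1.0.

# Model 2: the urn was filled with a single colour, red or white with equal
# prior probability. Hypothesis: "the urn is all red"; observation: a white ball.
print(bayes(0.5, p_obs_if_true=0.0, p_obs_if_false=1.0))
# 0.0 -- so P(next is red) drops from 0.5 to 0. Same observation, opposite update.

# The trapped case: a hypothesis given prior probability exactly zero stays
# at zero under Bayes' rule, however favourable the evidence.
print(bayes(0.0, 0.99, 0.01))     # 0.0  -- overwhelming evidence changes nothing
print(bayes(0.001, 0.99, 0.01))   # ~0.09 -- a small seed of doubt can grow
```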

While we're at it, consider how you update on the evidence in these cases.

People say economists are always predicting stock market crashes that don't happen. (Until they do). So some people start to treat the history of past wrong predictions as evidence that decreases their confidence in any further predictions. And I can't say they're wrong to do so.

But what about earthquakes? After a seismologist starts to worry about a build-up of stress in a fault line and sounds a warning about a potential earthquake, each passing year in which that prediction fails to come true might *increase* the seismologist's confidence that the earthquake will happen the following year. And I can't say they're wrong to do so either.

This is because their model is of a stress value climbing and climbing, which triggers at a threshold that is known only with very low confidence. When you start to think that stress is within your trigger range, you start to worry about an earthquake, and when it doesn't trigger you worry even more.

But that's also, more or less, the model that economists use to predict crashes.

Expand full comment
Ch Hi's avatar

You don't need psychedelics to get permanently trapped weird beliefs. Claiming that they are false, however, seems excessive. I have a friend who believes in big foot. He seems to be oblivious to the lack of evidence. But then *I* don't believe in big foot, so my evaluation of the evidence is different from his.

FWIW, I tend to think of psychedelics as something that relaxes the error checking protocols of the brain. If these "trapped priors" are a manifestation of the error checking gone awry, then the whole thing makes sense. But note that the error checking exists for a valid reason. That the mechanism isn't perfect doesn't mean it isn't important.

ISTM that your line of reasoning would indicate that the best way to convince someone with a trapped prior is to present them with lots of REALLY weak (and unemotional) arguments that they are correct. I've heard this approach suggested before, but it's really difficult to intentionally create weak unemotional arguments that you believe are incorrect.

Expand full comment
no-nonsense-crypto's avatar

One thing I'm noticing here is that all the priors you mention seem to be beliefs about the believer's own identity. A Democrat can't believe Republicans are right because they believe that they (a Democrat) are right. The cynophobe, at some point, isn't afraid of dogs any more, they're afraid of themselves--specifically, their own reaction to dogs. The cultists ARE reacting to evidence when prophecy fails and they double down: they're reacting to the evidence that faith has been rewarded with acceptance within the cult, and a strong counterexample to the cult's belief system is an opportunity to demonstrate greater faith, thereby gaining greater acceptance. There's a reason faith being rewarded with acceptance is part of the memetic code of many religions: it's an effective "immune system" which defends the religion against outside evidence.

The coyote/polar bear example is a harder stretch, but as someone who was rewarded repeatedly growing up for "being smart", I definitely have some beliefs about myself which cause me to react negatively if ANY of my priors are challenged, simply because I've been conditioned that acceptance comes from having correct priors. I think this is a fairly universal cause of irrationality, but those of us who have been rewarded for being right a lot perhaps are the worst afflicted.

The interventions you mention fit this pattern to some extent: putting the cynophobe in the room with the Rottweiler immediately addresses the fear of the dog but does nothing to address the fear of the cynophobe's reaction to the dog--the slower exposure addresses the cynophobe's fear of their own reaction. Psychedelics are correlated with decreased sense of self, even to the point of complete ego death. In some Eastern practices, decreasing sense of self is an explicit goal of meditation (though I see this de-emphasized in adaptations in the individualistic West). Christian meditative practices do tend to have some giving up of the self, though usually the goal is to supplant yourself with god--but I can see this opening people up to beliefs that don't oppose their religion (i.e. the coyotes/polar bears example). Diet, exercise, etc. might help because they shift the balance of fight/flight hormones: it seems apparent that the same hormones that regulate fear of harm to the physical self would regulate fear of harm to the psychological self.

Showing my hand here: there are a number of points here where I'm well outside my own areas of confidence: e.g. on the hormone idea I'm maybe at 20% confidence. I think I can develop a fairly confident position based on a bunch of weak pieces of evidence (e.g. ten studies that are each only 30% likely to hold up still leave only about a 3% chance that all of them are wrong), but it would be easy to change my mind with a single stronger piece of evidence.

Expand full comment
no-nonsense-crypto's avatar

As a random aside, this is why the "privilege" narrative being promoted by the neoliberal left is SO counterproductive. When you label certain groups (white people, men, cisgender people, etc.) as privileged, that's interpreted by people in those groups as an attack on their identities--perhaps the intent isn't to say "privileged people are bad" but that's how it's interpreted by the people being called privileged. Almost everyone has an "I'm at least a moderately okay person" prior, so you're basically taking your own beliefs and tying their fate to a prior which is then almost certain to become trapped. The result is polarizing the privileged people AGAINST the beliefs that are tied to the privilege narrative.

The privilege narrative isn't necessarily factually incorrect. It's just not persuasive to the privileged, and doesn't achieve the goals of the underprivileged.

In a broader sense, narratives based around blame tend to only work on people other than the people being blamed. That can be effective if the people being blamed don't have much power (in the context of a courtroom, blaming murderers for murder is an effective strategy, because murderism doesn't have much pull). But in personal relationships or representative democracies, blame is a pretty ineffective tool.

Expand full comment
DABM's avatar

Suppose I think the privilege narrative is correct *and* counterproductive in this way. What should I do?

Expand full comment
Luke G's avatar

Avoid the word "privilege" and talk about others' disadvantage instead.

Expand full comment
DABM's avatar

That's probably a good start, yes.

Expand full comment
John Schilling's avatar

What Luke says. If Alice can walk down the street without being harassed by the cops whereas Bob is stopped and frisked every day, the problem is that Bob is being harassed by the cops and not that Alice has not-being-harassed-by-cops privilege.

The state we want to achieve is the state in which everyone has what Alice has, where not being harassed by cops is the default and not the "privilege". You don't get there by describing that desired end state in a way that comes off as derogatory. Derogatory terminology is for the thing you're trying to get rid of, which is presumably Bob being hassled by the cops every day.

And you don't get Alice on your team by saying "this thing that you have, that you reasonably believe that you should have, is an unearned 'privilege' and it's morally dubious that you have it". That's just going to make Alice think you're going to try and take away her not-being-hassled-by-cops thing and next time you look she's flying a thin blue line flag. Saying instead "Bob over here is being oppressed and that's wrong; could you help us put an end to that?" is more likely to win Alice's actual support, as opposed to her fear.

Expand full comment
Jiro's avatar

"that's interpreted by people in those groups as an attack on their identities"

This is another case of conflict theory versus mistake theory.

Your implication here is that they're just trying to do some rational argument, but they innocently say something that gets interpreted as an attack on someone's identity, and that this interpretation is one of those human flaws that sometimes keeps us from being rational.

But the conflict theory scenario is that it really is an attack on someone's identity, and detecting it as such therefore isn't a human flaw at all.

Expand full comment
no-nonsense-crypto's avatar

Well, I tried using "perhaps" and "maybe" to weasel out of actually blaming either side here--if I am taking an anti-blame stance I can't readily start by blaming people, can I?--but clearly I didn't succeed. I did bad.

WRT conflict vs. mistake, I choose *complexity*. When talking about large groups of people, some of those people are trying to do a rational argument and just mistakenly attacking someone's identity, while some are genuinely attacking other people's identities. We could venture guesses of what percentage belongs to each group, but it's not particularly useful information even if we could nail it down perfectly. I'm saying "the blame thing doesn't work, stop doing it"--does it matter why they were doing it?

If the conflict theory is correct, then "detecting it as an attack on your identity" isn't a human flaw, but the underlying trapped prior which causes you to detect it as an attack on your identity still is a human flaw. It's quite possible to say true statements with the intent to harm, and if accurately detecting intent to harm causes you to misjudge the truth of the statement, that's absolutely a human flaw. In this case, I'm not saying that the privilege narrative is true or false; I'm saying that the fact that those being called privileged interpret it (correctly or incorrectly) as an attack prevents them from evaluating the truth or falseness of the privilege narrative rationally. The intent behind saying a statement is a pretty poor indicator of the truth of the statement, so over-weighting statements said with malicious intent as false is irrational.

Expand full comment
Jacobethan's avatar

I find this conversation interesting, because I feel like I don't have any clear sense of what the moral valence of privilege in the "privilege narrative" actually *is.*

Is being privileged like being healthy? We can easily make sense of statements like, "You've always been healthy, so naturally it's harder for you to grasp what it's like to live with a chronic disease." We might warn healthy people not to pride themselves on their good fortune or overlook the skills and insights of the sick; we might insist on the obligation to redirect resources to those whose health is poor. But nobody thinks of being called healthy as an insult or an attack; nobody strives to "defend" a public figure accused of good health by pointing to a really tough childhood bout with chicken pox.

Or is being privileged like being rich? We mostly don't see the state of being rich as in itself a moral flaw, although Western culture has given considerable weight to the teachings of a thinker who at times seems to be saying exactly that. Rather, we tend to treat the desire for wealth as both psychologically normal and morally dangerous. Whether being called "rich" registers as praise, scorn, or neither will depend heavily on a whole range of contextual factors. Richness seems to lack an intrinsic normative weight -- and can be used in a purely neutrally descriptive way -- but acquires a negative valence in certain argumentative contexts.

Or is being privileged something worse than that? Since we might see health and wealth both as *forms* of privilege, what additional moral information, if any, is conveyed by saying that someone healthy and/or wealthy is also privileged?

I honestly can't really figure out what the answer is supposed to be. And don't know whether this reflects my own confusion, the exceptionally disingenuous motte-and-baileying that appears to dominate this area of discourse, or some genuine incoherence in what the authors of the "privilege narrative" actually believe.

Expand full comment
Bahatur's avatar

How does this relate to values people commit to regardless of the consequences? Consider:

1. A man never backs down from a fight.

2. [Receives vicious beating]

3. A man never backs down from a fight.

I suppose this turns largely on the question of whether "Do X" can be a trapped prior. Since OCD is a thing, probably?

Expand full comment
Doctor Hammer's avatar

"But in fact many political zealots never accept reality. It's not just that they're inherently skeptical of what the other party says. It's that even when something is proven beyond a shadow of a doubt, they still won't believe it."

This part worries me a little, specifically that first sentence and the notion that there is some fundamental underlying reality that we can access and know. Inherently the bulk of political disagreement tends to revolve around things where the "scientifically correct"* answer is "we don't know", as opposed to things like "There is a polar bear visible from an LA freeway." The bear thing is a lot easier to develop evidence towards proving than, say, whether or not gay marriage is a better policy than civil union laws or something. The latter probably requires some robust experimentation over a long period of time to judge properly, whereas evidence about the former is reasonably easy to get.

Yet people often treat these questions as binary True/False statements instead of Maybe? My suspicion is that this is because for many things an answer of Maybe implies a course of action similar to False: do nothing right now. This stands in direct opposition to those who believe the answer is True, or at least want people to do the things True implies should be done. So you are a badwrong climate denier if you are skeptical.

On the other hand, Maybe looks dangerously close to True from the standpoint of some who believe False. You might be willing to do a little bit of something that shouldn't be done, after all. Give an inch, and those bastards on the other side will take a mile. So you are a badwrong libertarian anarchist if you think there might be some virtue in taking a look at police accountability.

To me, this suggests a problem with politics in general: no one pays the price for their beliefs, so extreme beliefs are optimal. We may dress them up as scientific or evidence based, but most evidence actually points to "We don't know" or "Don't do anything, ever", so it becomes a big game of rationalization, not rational decision making. An external, testable reality has just about nothing to do with it.

*needs bigger scare quotes perhaps

Expand full comment
Sandro's avatar

I wonder if the processes you describe are related to the phenomenon whereby a trauma is made worse by how the people around the victim view the traumatic event. For instance, I recall reading that rape victims who are surrounded by people who consider rape the worst possible thing that can happen to someone generally fare much worse than those who aren't in such an environment, i.e. where people treat their trauma like other terrible circumstances that befall people every day.

Theories I've read suggest that in the former case, the trauma becomes a more central part of their identity, and so they cling to it in some way, but I never quite understood the mechanism behind how this works.

While you were describing how the brain attenuates the sensory input of the traumatic event, and how this just reinforces your trapped prior, this seemed kinda-sorta similar to the trauma scenario above: maybe you don't remember all of the details of your trauma with perfect clarity, but you have a context (the people around you) constantly reinforcing the view that your trauma was the worst possible thing that could happen to a person, and so that heightens your fear of it and traps you in this prior that you are a victim of horrendous abuse that will forever haunt you.

Couple that with the facts that memories are at least partly reconstructed (presumably priors factor into this as well), and I can see how the environment in which a victim is immersed could quite readily affect their recovery and how they cognitively engage with the world.

Expand full comment
JKPaw's avatar

I nominate a fourth type of bias as equally significant to the three Scott highlights (cognitive, emotional, self-interested). Social bias warrants its own category, I think, because it is so much more than mere external context -- society plays such a reifying role in our perception of ourselves, and vice versa, that it becomes a literal extension of ourselves -- and its gravitational power is simply enormous. No doubt one could use the trapped priors model to analyze societies (and their subsets) as well as individuals.

I plead guilty of course to all of the above -- but I'm not 100% certain it's pathological. I admit I have a bad bitchy crackers problem, say, watching films of Hitler patting a dog (in fact, every fucking thing he does STILL annoys me!). And I would think we'd have to gear up into some new quantum-leap kind of stage of human development before we'll be collectively ready for a fully rationalist approach where we'll effectively view someone like Hitler without emotional bias during the dog-patting, and simultaneously recognize that he indeed was one of those rare polar bears that shouldn't be denied or ignored. As emotionally painful as it was, it seems relatively healthy that society (at least western, writ large) developed a severe enough trapped prior, lasting more than 50 years, that our "irrational" fear of brutal populist authoritarians actually helped keep them at bay for that amount of time. (Which brings up the question of time passage, at least as a factor when it comes to releasing societal trapped priors.)

Expand full comment
David Friedman's avatar

"but God coming down from the heavens and telling you anything probably makes apocalypse cultism more probable, not less.)"

There is actually a Talmudic story where God speaks from the heavens telling the majority of the Rabbis that they are wrong, and their response is "Butt out." (It is not in Heaven. You have told us to follow the majority, so that's what we are doing.)

I believe the source for the evidence that more scientifically sophisticated people are more likely to agree with their side is Dan Kahan, although someone else may have produced similar results. His view is that it is rational behavior. Whether you believe in evolution or catastrophic anthropogenic global warming has almost no effect on the world, since you are only one person. But it can have a large effect on how well you get along with the people around you who matter to you. So it is rational to talk yourself into the belief those people hold even if it is wrong. The more sophisticated you are, the better you are at doing so. This is a little like your rich man/taxes example, but not quite.

Expand full comment
David Friedman's avatar

For some reason, I can no longer reply to anything except for the top level where I am starting a new thread. I assume this is some sort of bug and may be affecting others.

Expand full comment
David Friedman's avatar

I tried switching from Firefox to Chrome, and now I can reply to posts. Still no idea why the reply buttons are not there on my Firefox.

Expand full comment
David Friedman's avatar

Also, there used to be a ~new for new comments. Has that now been removed? How am I supposed to find comments that are new since I last read the site?

Expand full comment
Lambert's avatar

The devs at substack are currently fiddling about with the code for ACX (in prod, apparently); they've got rid of the hearts and changed some CSS. Hopefully normal service will be resumed shortly.

Expand full comment
Eharding's avatar

Why did they get rid of the hearts? It seems unfair.

Expand full comment
Riley Haas's avatar

To me, this sounds very much like a more rigorous exploration of Kolakowski's Law of Infinite Cornucopia: "For any given doctrine one wants to believe, there is never a shortage of arguments by which one can support it.

A historian’s application of this law might be that a plausible cause can be found for any given historical development. A biblical theologian’s application of this law might be that for any doctrine one wants to believe, there is never a shortage of biblical evidence to support it."

Expand full comment
Jacobethan's avatar

>>"Along with the cognitive and emotional sources of bias, there's a third source: self-serving bias. People are more likely to believe ideas that would benefit them if true; for example, rich people are more likely to believe low taxes on the rich would help the economy.... I don't consider the idea of bias as trapped priors to account for this third type of bias at all; it might relate in some way that I don't understand, or it may happen through a totally different process."

Maybe I'm misunderstanding, but I'd see "self-serving bias" as following from the model in an almost formal sense, once we add the stipulation that the experience you're integrating with your priors is always *your* experience.

Insofar as, e.g., low taxes are in your (narrowly construed) self-interest, then every time there's a tax cut your direct experience will be of a positive outcome. Maybe you also see some studies or hear some anecdotes about the tax cut being bad for other people, but these seem dubious. Your prior is that tax cuts help people, so if these other people aren't being helped, isn't that more likely to be for some exogenous unrelated reason?

Of course, if you encounter enough evidence of the latter kind, you might eventually come around to the view that your case is exceptional and the optimal policy *for you* isn't the one that's best for society on net.

The point is just that your own immediate experience of [Policy X --> Positive Outcome] is always going to introduce a certain initial weight drawing you away from reflective equilibrium. Which is precisely the metaphor underlying "bias" in its original meaning.

Expand full comment
walruss's avatar

Interesting article for me to stumble on today, as I've been thinking a lot about this dichotomy recently in my personal life and in observing my friends' problems.

Imagine the perfect narcissist. This person has no awareness of his surroundings - he has substituted any real observations for his own internal world. He has no awareness of the emotional states of others, obviously. He does not even have any awareness of his own emotional state.

Such a person could not function in the world. People with narcissistic personality disorder have to be getting in some data - in fact, a lot of the symptoms like rage wouldn't be possible if they weren't. These symptoms occur when the real data contradicts the narcissist's narrative.

Without an external data feed, you absolutely cannot grow as a person. And the process of taking external data, which doesn't give a crap about you, and bending it to your own context and your own needs, is a little selfish.

If you didn't do this, there'd be no difference between you and an insect. Stimulus->Response, with absolutely no middleman. But I'd argue there's a scary other end to that spectrum where your internal labyrinth entirely overwhelms experiential data, and that you can find most mental illness over there.

As a culture, I think we've overvalued our internal world. And maybe the solution is to mindfully increase our incoming data bandwidth - make a conscious effort, whenever you think of it, to fight the urge to retreat into your own mind and instead focus on sensation directly. Intentionally give more weight to the sensory over the analytical, the gestalt intuition over the carefully thought-out, and the immediate over the future or past.

In other words, stop being rationalist, for a little while each day, and just do some things having no idea whether or not they're good ideas.

Is this a good idea? I don't know.

Expand full comment
Bepis's avatar

"seems to create permanent weird false beliefs for some reason"

Maybe there are multiple layers of "trapped priors". As we grow up we develop some base trapped priors that help us stay relatively normal (using the assumptions of the people around us). While psychedelics can help break higher-level trapped priors, they might also break some of the lower-level ones, allowing individuals to explore ideas further outside the norm. This can lead them to eventually redevelop trapped priors that are far outside the norm, making them "weird".

Expand full comment
Alsadius's avatar

> Although I don't have any formal evidence for this, I suspect that these are honest beliefs; the rich people aren't just pretending to believe that in order to trick you into voting for it. I don't consider the idea of bias as trapped priors to account for this third type of bias at all; it might relate in some way that I don't understand, or it may happen through a totally different process.

Seems like it's probably the same mechanism. You flinch from things that can hurt you. That can be emotional pain (like traumatic experiences), but it can also be expected future pain. "This argument will cost me down the line" is a type of pain to your subconscious, and it seems plausible that it'd have similar defenses.

Expand full comment
Carl Pham's avatar

I feel somewhat that the trapped prior here is the assumption that *what people are doing* when they have intercourse with others is gathering information for the purpose of critiquing their own points of view. But surely that is not obvious. My hypothesis in general is that almost all exchange of information between people is in fact for the purposes of persuasion and/or reinforcement of existing points of view. Id est, people quite rarely converse for the purpose of finding out something new, but much more often in order to reinforce something old (or just for social signaling that sorts other people according to their (old) beliefs).

To take an extreme example, the purpose of a wartime Department of Propaganda reading enemy newspapers is *not* to evaluate whether they should actually be an enemy, but to find new ways to defeat them. Even learning about the enemy's strong points -- at which he is good, noble, and true -- only serves as grist for the propaganda mill: knowing how and where he is strong, we can devise better attacks and defenses. Indeed, knowing the enemy is strong might very well *increase* our devotion to our own cause, inasmuch as we realize victory hinges on a fiercer dedication.

I wouldn't say as a rule we are trying to destroy each other, but there is probably a strong ongoing competitive ecology of social (and even personal) memes among us tribal primates, and it seems not unlikely quite a lot of our intercourse with others represents mere "foreign intelligence" and/or "propaganda" work in service of competing memes.

If that's the case, then we would naturally be confounded by the effect of evidence on beliefs, because we would have the mistaken belief that the mental "engine" at work has as its main purpose the accurate weighing of evidence.

I thought it was Napoleon who said "Reasonable men have always a tendency to believe that others are like them, in which point of view they are not reasonable."

Expand full comment
Tom Grey's avatar

It's really too bad the "Rationalists" are unwilling to apply the irrational dog-phobia to their own irrational Trump-phobia:

<i>"Less-self-aware patients will find their prior coloring every aspect of their interaction with the dog. Joyfully pouncing over to get a headpat gets interpreted as a vicious lunge; a whine at not being played with gets interpreted as a murderous growl, and so on. This sort of patient will leave the room saying 'the dog came this close to attacking me, I knew all dogs were dangerous!'"</i>

Some are so non-self-aware they will even admit Trump did not say certain words ... yet still their Trump-phobia makes them CERTAIN that Trump means what he did not say. When Trump says "peaceful protest" they hear "violence and riot".

Ever since reading The Passion of Ayn Rand, by Barbara Branden, the cheated-on wife of Nathaniel, who was with Ayn, it has been clear to me that even smart folk, maybe especially smart folk, can rationalize whatever untruth they want to believe. Of course, if they hear "violence" when Trump is saying "peaceful protest", it's something different than mere rationalization. This phobia note explains it.

Expand full comment
NotNervous's avatar

I suspect this is a semi-widely shared anecdote: one effective antidote to political trapped priors seems to be genuine interpersonal connection. Having discussions with friends you otherwise respect can make you interpret their ideas more charitably than if they came from a stranger, and raise your threshold for the mistake theory --> conflict theory trigger. Not 100% sure whether either part of this fits into the perception or the context part of the model.

Deeyah Khan's white supremacy documentary shows this quite well imo: a group of white supremacists grew fond of her, despite her Middle Eastern parentage, through personal interactions, to the point where she (accidentally?) deradicalised a few of them. Many such cases in history ofc. I believe this was also Christian Picciolini's major point.

Expand full comment
betulaster's avatar

I think that, at least to some extent, the problem with generalizing the scary-dog/friendly-dog example to a policy-evaluation case is that frequently the "evidence" is not pure in the first place. For anyone but an extremely deeply cynophobic person, the friendlier the dog's behavior (as a non-cynophobe would judge it), the harder it is to interpret as aggressive or scary, however narrow the evidence band.

What I mean by that is that when the absolute majority of people want to find out what "the science" says about X, they're not actually going to go and read papers - they'll read what their trusted journalists and/or bloggers say the science says, which will be partisan/skewed anyway. Even if they find a source that is very committed to not skewing research, reporting will rely on dumbing down the study to some extent, which inevitably introduces the risk of simplifying the conclusions into something they are not saying.

Expand full comment
Jeremy Ray's avatar

I wonder how effective it'd be to take someone with trapped political priors completely out of their environment -- both geographical and virtual, to get them away from their friend circles and echo chambers. Being transplanted into a different community would not only increase the evidence they're exposed to for alternative views, but there'll also be a social element to consider. Throwaway comments that are completely based on everyone's collective priors (and evidence) will no longer be welcome, and will be challenged.

As someone who grew up in a highly conservative and religious area and then moved away, I can vouch that some entrenched priors were removed, though it took years in some cases. It's obviously a large undertaking and wouldn't happen unless the person involved wants a life change, or is up for extreme experimentation. And of course, many of these folks with trapped priors won't *want* to be transplanted into a different community. Nevertheless, if those hurdles are overcome, it could accomplish the goal of bringing the trapped priors back from the "threshold"?

Expand full comment
kaminiwa's avatar

I think the "Point of No Return" is relative to not just the "amount of evidence you'll be exposed to" but more importantly the amount of SAFE evidence you're exposed to - an individual "zone of safety" that constrains what defines evidence.

I think if you have someone who knows what a trapped prior is, and recognizes yours, and they're someone you trust, you can actually get out of a trapped prior pretty easily. As you mentioned: slow and gradual exposure in safe doses.

For something like a phobia, this seems relatively easy to provide. I had a friend significantly reduce my phobia of dogs just by teaching me a few things about dog body language using pictures and videos. I'm still not great around them, but it's a lot easier to deal with them when I need to. I think it was maybe an hour of her time, total?

In particular, she mentioned that a lot of "hostile" dog body language in movies is actually a friendly dog visual, and then angry growling is dubbed in - handling an angry dog during filming is much harder than just using a generic growl! She used a few examples and fixed that prior of mine, and now I can at least tell whether a threat is real or just me having a panic attack.

For other domains, like politics, it's probably a lot harder to develop that sort of trust. But just understanding that some people out there have trapped priors should improve your ability to cut short unproductive debates.

Expand full comment
Vasily's avatar

Now I understand that I am probably doomed on covid issues. It's interesting that a year ago I thought of myself as a moderate: I warily supported restrictions, though I was suspicious of lockdowns.

After five months of being locked up in a tiny apartment and then an escape from Chile (where I lived for five years) to Russia (where I was born and lived before emigrating), I became doomed in exactly the terms described by Scott. Now when I hear somebody say something like "Let's wear masks, this prevents harsher measures" I can't help but think that person is either extremely naive or evil. In Chile people were mostly mask-compliant and they quickly got a long lockdown that is still partially in force (and it is summer there). In Russia many people tried their best not to wear masks (some people were beaten, and at least one was killed, after insisting that others should wear masks) - and there was a short lockdown in Moscow that ended in a month, and no lockdowns after that. Last November an official from my city (Saint Petersburg) said "If people do not wear masks they will not stay at home, so no lockdown". And you can see the same trend throughout Europe and America. So I think every person who enthusiastically wears a mask objectively increases the chance of a new strict lockdown.

Is there any way out of this rabbit hole?

Expand full comment
Eharding's avatar

Russia's COVID deaths per capita are 3x Chile's though.

Expand full comment
Vasily's avatar

You probably exaggerate the difference. I suppose you got this number by dividing the raw excess mortality from https://github.com/dkobak/excess-mortality by country population, getting 0.001 for Chile and 0.003 for Russia. I am not sure that is a good estimate, as it implies that all excess deaths are covid deaths - this approach would also show that in Australia covid resurrects people, since excess mortality is negative there.

The data from the same site shows that the excess mortality rate - excess mortality divided by expected mortality - is not so different for these countries: 23% in Russia vs 16% in Chile. I think it is a better indicator of covid's overall influence on deaths, as it takes the underlying mortality rate into account.

And even if you are right I don't think these numbers justify the total destruction of normal life.

Anyway my main point was that wearing masks usually leads to stricter measures instead of preventing them. It happens because masks are not effective enough to really impede the spread, but they become a sort of signal that we are all concerned and that this is very important, which leads to a justification of a more severe response. (There are obviously exceptions in Europe like Denmark, but Scandinavian countries are always full of mystery.)

Expand full comment
Doblio's avatar

I confess I have suffered from trapped priors. What helped me was engaging in an activity that forced me to often and consistently update my beliefs based on empirical data: trading. This somehow loosened up my trapped prior, as if my whole "system" had become more "open". I believe other activities could have a similar effect. Building something, craftsmanship, well I guess anything non-theoretical could work. But 100% agree this is a hugely important field to study.

Expand full comment
everam's avatar

I think Yudkowsky said that many rationalists originate in coding because it's the only profession where you find out you were wrong several times an hour.

Expand full comment
Paul Harrison's avatar

Practical application of Bayesian statistics often involves probabilities so small that the only way to work with them is on a log scale. Enough very weak evidence over time can overcome a badly specified prior that gives this sort of tiny prior probability to the truth. On a log scale, evidence keeps adding up.
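
A minimal sketch of that point in Python (the prior, the likelihood ratios, and the counts are all invented for illustration): on the log-odds scale Bayes' rule becomes addition, so even a one-in-a-billion prior is eventually overcome by a long stream of individually weak evidence.

```python
import math

def update_log_odds(log_odds_prior, log_likelihood_ratios):
    """Accumulate evidence on the log-odds scale, where each update is a sum."""
    log_odds = log_odds_prior
    for llr in log_likelihood_ratios:
        log_odds += llr  # Bayes' rule is just addition on this scale
    return log_odds

def to_probability(log_odds):
    return 1.0 / (1.0 + math.exp(-log_odds))

# A badly specified prior: roughly one-in-a-billion odds that H is true...
prior = math.log(1e-9)
# ...and 1000 pieces of weak evidence, each with likelihood ratio only 1.05.
evidence = [math.log(1.05)] * 1000

print(to_probability(update_log_odds(prior, evidence)))  # ~0.99999..., the weak evidence wins
```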

Expand full comment
TLW's avatar

...if the prior is stored in sufficient precision, and/or you do fancy things like nondeterministic rounding modes.

Expand full comment
Incurian's avatar

"Ironically, my prior on this theory is trapped - everything I read makes me more and more convinced it is true and important. I look forward to getting outside perspectives."

Your deep dives into the ultimate question of perception, cognition, and everything, when considered together as a series, are beginning to look kabbalistic.

Expand full comment
Emdash's avatar

I'm not sure how much rationality as a cognitive exercise has to offer for this particular issue. This idea of the Bayesian Brain is really helpful and informative for conceptualizing a lot of things (I think this paper really influenced that idea: https://www.psy.gla.ac.uk/~martinl/Assets/MCMPS/KnillPouget04.pdf). But biology doesn't actually care about truth, and your brain did not evolve to process information and find truth. That is mostly a happy by-product of the fact that a world model is generally more useful for survival if it corresponds with truth, but that might not always be the case. I think expecting everything to be Bayesian just because some things are is in itself a trap.

Particularly for things directly related to survival (food, safety, sex, group membership) it seems likely that your brain just isn't going to be prioritizing objectivity, maybe to the point where objectivity is physically impossible based on whatever biological implementation these have. If this is true then no amount of information can ever change this sort of belief. You would need to tap into whatever incentive structure created it in the first place.

Expand full comment
Andrew Swift's avatar

You didn't mention the role of heros, idols, experts, cult leaders, ideologues etc.

In the case of a phobia, it's not relevant, but for ideological fixation, a trusted expert can go a long way.

Most people don't think for themselves. They set a sort of network of trusted experts and then go with the flow.

These respected allies provide ongoing confirmation that the cult is on the right track.

"If these people, whom I trust, all believe X, then I believe X. If you show me evidence to the contrary, it is necessarily false, however convincing it might seem superficially."

If the cult leader sits everyone down and says hey, it was all a lie (with lots of details)... perhaps some people might stick with the lie but there's a much better chance that many cult members will be released.

The problem then becomes networked falsehoods spread by major influencers. Still a big problem, but not the one you're describing.

Expand full comment
Eharding's avatar

Is anybody else upset at the likes being gone? Now I can't distinguish the popular comments from the unpopular ones, which hurts my ability to read the thread.

Expand full comment
Jordan R's avatar

Perhaps a summary is that priors can “update” input signals before those transformed inputs are used to update the prior. Trapped priors occur when the input signals are always transformed such that they subsequently confirm the prior. This can actually be rational behavior if the reward experienced for having the prior confirmed is sufficiently large.
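
A toy numerical version of that loop, in Python (the transform, the scale, and the threshold are my own invented stand-ins, not anything from the post): if the perception step only lets through input that confirms a sufficiently strong prior, the update step never gets the chance to move it.

```python
def perceive(raw_evidence, prior):
    """Hypothetical transform: a strong enough prior reinterprets the input
    so that it confirms the prior (the friendly pounce reads as a lunge)."""
    if prior > 0.9:
        return max(raw_evidence, prior)  # nothing is allowed to look reassuring
    return raw_evidence

def update(prior, perceived, learning_rate=0.2):
    """Move the belief toward the (already prior-tinted) perception."""
    return prior + learning_rate * (perceived - prior)

belief = 0.95        # "dogs are terrifying", on a 0 (safe) .. 1 (terrifying) scale
raw_evidence = 0.1   # every actual encounter is with a friendly dog

for _ in range(100):
    belief = update(belief, perceive(raw_evidence, belief))

print(belief)  # still 0.95: the transformed input only ever confirms the prior
```

Below the 0.9 threshold the same loop converges to 0.1, which is one way to read the gradual-exposure idea: get the prior under the trapping threshold and ordinary updating resumes.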

Expand full comment
Jon Sinnreich's avatar

Having trapped priors in your brain sucks but it's still better than having trapped prions in your brain.

Expand full comment
RZB's avatar

Well, this is #549 for “Trapped Priors As A Basic Problem Of Rationality”: what a loaded title to unleash a torrent of comments.

Humans exist in one physical universe but live in the multi-verses of the mind, created by the brain from sensory input, connected by the neurologic (somatic and autonomic) nervous system, and actuated through the muscular system. Every aspect of these systems exhibits a distribution which influences capabilities, behaviors, abilities, etc., and which varies in time. Terms such as “standard observer”, “neuro-typical”, normal, abnormal, sane, crazy are used to characterize populations. These latter terms are particularly important to Scott’s primary profession.

Scott characterizes the learning process of the brain’s neural network as “the brain combines raw experience (eg sensations, memories) with context (eg priors, expectations, other related sensations and memories) to produce perceptions. You don’t notice this process; you are only able to consciously register the final perception, which feels exactly like raw experience.” For some reason he prefers the term priors to history and perceptions to thoughts.

Now rationality implies logic. Logic and rationality { https://en.wikipedia.org/wiki/Logic_and_rationality }

As the study of arguments that are correct in virtue of their form, logic is of fundamental importance in the study of rationality. The study of rationality in logic is more concerned with epistemic rationality, that is, attaining beliefs in a rational manner, than instrumental rationality.

Economics: Rationality plays a key role in economics and there are several strands to this. Firstly, there is the concept of instrumentality—basically the idea that people and organizations are instrumentally rational—that is, adopt the best actions to achieve their goals. Secondly, there is an axiomatic concept that rationality is a matter of being logically consistent within your preferences and beliefs. Thirdly, people have focused on the accuracy of beliefs and full use of information—in this view, a person who is not rational has beliefs that don't fully use the information they have.

Debates within economic sociology also arise as to whether or not people or organizations are "really" rational, as well as whether it makes sense to model them as such in formal models. Some have argued that a kind of bounded rationality makes more sense for such models.

Others think that any kind of rationality along the lines of rational choice theory is a useless concept for understanding human behavior; the term homo economicus (economic man: the imaginary man being assumed in economic models who is logically consistent but amoral) was coined largely in honor of this view. Behavioral economics aims to account for economic actors as they actually are, allowing for psychological biases, rather than assuming idealized instrumental rationality.

Artificial intelligence: "Within artificial intelligence, a rational agent is typically one that maximizes its expected utility, given its current knowledge. Utility is the usefulness of the consequences of its actions. The utility function is arbitrarily defined by the designer, but should be a function of "performance", which is the directly measurable consequences, such as winning or losing money. In order to make a safe agent that plays defensively, a nonlinear function of performance is often desired, so that the reward for winning is lower than the punishment for losing. An agent might be rational within its own problem area, but finding the rational decision for arbitrarily complex problems is not practically possible. The rationality of human thought is a key problem in the psychology of reasoning."

As an example, consider the work on autonomous vehicles which use SLAM: Simultaneous localization and mapping.

https://en.wikipedia.org/wiki/Simultaneous_localization_and_mapping?wprov=sfti1.

By analogy, humans navigate in their multi-verse for numerous purposes: staying alive, reproduction, among others. Reasoning and rationality likely evolved in humans because in a Bayesian analysis they are traits that lead to positive outcomes such as staying alive, reproduction, among others.

As an example on distribution, human vision has color, focus sharpness, response time, spectral sensitivity vs light intensity etc.

In the United States, about 7 percent of the male population—or about 10.5 million men—and 0.4 percent of the female population either cannot distinguish red from green, or see red and green differently from how others do. More than 95 percent of all variations in human color vision involve the red and green receptors in male eyes. It is very rare for males or females to be “blind” to the blue end of the spectrum.

The 8% of colour blind men can be divided approximately into 1% deuteranopes, 1% protanopes, 1% protanomalous and 5% deuteranomalous. Approximately half of colour blind people will have a mild anomalous deficiency, the other 50% have moderate or severe anomalous conditions. { https://iristech.co/statistics/ } Women tend to have enhanced blue sensitivity compared to men.

Differences in color categorization manifested by males and females { https://www.nature.com/articles/s41599-019-0341-7 }

Everyone’s multi-verse is potentially shaded differently, as is their history.

Expand full comment
David Lindemann's avatar

This is certainly consistent with my work as a former military physician on the problem of PTSD from a neurologic (versus cognitive/psychiatric) perspective. Generally the best results when people heal are from eliminating and resolving "trapped priors", which I have seen most commonly (although not always) related to unprocessed shock-level events. I don't find therapeutic value in treating the behavior patterns or trapped priors as independent entities from the experience ... it simply reinforces the avoidance which is a feature of the trapped prior itself.

I have seen this resolution happen routinely by the individual coming into the memory from the outside (i.e. what would you say to encourage that person). This creates a positive association, instead of the reinforced negative of the dog perception example. This could be likened to putting lotion on a burn and has more quality control over the experience ... it's still a burn, but the experience was fundamentally soothing and not irritating, so the relationship with the burn changes (to one of attraction to soothing/love that heals instead of avoidance of pain).

Since most hyperpriors are established by especially important/intense or repeated events, things that stabilize the brainstem function and signal survival are usually helpful - basic breathing and nursing movements provide a bottom-up sensory stimulus of safety that can be accompanied by a top-down interpretation to build a stronger experience ... but it still has to be connected back to the point of unprocessed experience or you will be fighting a long time!

The problem with attempting to overcome priors without resolving the causative events (i.e. reason or stimulate your way out of them without interacting with the primary event) is that you still have unresolved/unprocessed sensory data that will continue to inductively reinforce the prior - strong, unprocessed sensory experiences still hurt (i.e. unprocessed) and serve like an infection which will continue to result in future sensory suppression (which perpetuates the problem) until resolved.

The other challenge is that these are short-term protective (against pain), and so you are fighting against a survival/protective mechanism - that some evidence suggest may relate to physiologic vasoconstriction of focal areas of the body and brain, implying potential therapeutic value in understanding circulatory mechanics.

But on the survival side, as Vincent Felitti (of the ACEs work) says about maladaptive behaviors (substance use, etc.), "you are dealing with people's solutions, not their problems". We view them as problems, but they view them as physiologic solutions for pain, and that has to be understood in terms of why these are so hard to simply eliminate with a snap of the fingers!

Thanks for the great summary and post! Very stimulating and consistent with my "perception and experience thus far" of the world :) Biases successfully reinforced. Great day!

Expand full comment
Paul T's avatar

> The basic idea of a trapped prior is purely epistemic. It can happen (in theory) even in someone who doesn't feel emotions at all. If you gather sufficient evidence that there are no polar bears near you, and your algorithm for combining prior with new experience is just a little off, then you can end up rejecting all apparent evidence of polar bears as fake, and trapping your anti-polar-bear prior. This happens without any emotional component.

I can see that in theory this could happen without emotions, but I’m wondering how frequently we see practical examples (in humans) where the trapped prior isn’t tied to an emotional response.

It seems the cases shown here include:

1. Clearly emotionally-charged subjects in politics, where it’s pretty easy to see the “disgust” / “fear” response flooding the perception and preventing a correct update of priors.

2. Potentially less-clearly emotional cases like tribal / religious identity (cool-headed analysis of political questions that nonetheless fall prey to bias? Religious cults?). Do we want to classify these as not emotional? I could see a case either way. Tribal identity seems like a pretty deeply-rooted part of how the human brain processes social information/situations. And the outgroup responses seem to tie in to emotions at a pretty low level. But I could see an argument that this is distinct machinery from lizard-brain fear/disgust responses.

3. Clear pure-epistemic cases, where there is no emotional flooding blocking the updating of priors. I’m unclear if this set is empty. Is flat-earth-ism one of these? Someone gets infected with a meme that injects the belief “The conspiracy is going to try to convince you that I’m wrong with scientific-sounding facts. If they sound science-y and come from people in lab coats that will be proof that this belief is correct”. Now without any emotions, the update mechanism is subverted. Perhaps we put the cognitive bias of attending more to facts that agree with our previous beliefs here?

> But the context is a very strong prior that dogs are terrifying. If the prior is strong enough, it overwhelms the real experience.

While this is described as the “pure cognitive / no emotion” case, this sounds more like 1, strong emotional flooding. Even without lowering the bandwidth of the present-experience through the trauma response, if the fear association is strong enough it floods/drowns out the “objective” / external experience.

Maybe 3. is actually a different mechanism; 1/2 could be purely explained by emotional flooding causing the subjective experience to be negative, therefore priors aren’t updated. 3. could have something to do with the non-Bayesian nature of the human updating machinery that Scott mentioned in the previous article:

> it can't possibly be this simple, maybe "bad" and "not bad" are binary states and once your brain decides something is bad it fails to process that it was slightly less bad than expected

If “binary” seems too simplistic, it could be some sort of quantization or even just a highly non-linear scale where input of -0.9 is scaled to -0.99999 which would require more than a lifetime of updates to shift meaningfully.

In summary I think this is a useful concept, but there could be two fairly distinct mechanisms at play here that are both causing the overall family of “trapped prior” update failures. In particular it’s not obvious to me that the rationalist experiencing an emotional flood should respond in the same way as a purely epistemic case where cognitive biases are preventing updates from happening properly.

Expand full comment
Paul Kube's avatar

Or as we used to say, One person's modus ponens is another's modus tollens. (Didn't need any of that fancy real-number probability stuff, and we liked it.)

Expand full comment
Mormegil's avatar

“I'm not sure what level of evidence could possibly convince them.” This reminded me of the “arguments” that try to disprove something with “if this were true, some proof would have been found somewhere”, missing the point that for the proponents, the proofs _had been_ found. It’s just that the opponents dismiss the “proofs” as obviously nonsense/crackpot/irrelevant/…

Expand full comment
rdn's avatar

This is exactly what is happening with people who are excessively fearful of COVID (ie, believe that COVID is worse than a once-per-decade flu):

- self-serving bias: I can work remotely and I am saving more money than ever. My life is calmer. Therefore, the economic damage to poorer people must be similarly mild, and I don't have to expend too much mental energy drilling down into the question.

- "more scientifically literate people are more likely to have partisan positions on science". In this case (as in the case of eugenics in the early 1900s), they "follow the science" without thinking too critically.

Expand full comment
tcheasdfjkl's avatar

"The other promising source of hope is psychedelics. These probably decrease the relative weight given to priors by agonizing 5-HT2A receptors. I used to be confused about why this effect of psychedelics could produce lasting change (permanently treat trauma, help people come to realizations that they agreed with even when psychedelics wore off). I now think this is because they can loosen a trapped prior, causing it to become untrapped, and causing the evidence that you've been building up over however many years to suddenly register and to update it all at once (this might be that “simulated annealing” thing everyone keeps talking about. I can't unreservedly recommend this as a pro-rationality intervention, because it also seems to create permanent weird false beliefs for some reason, but I think it's a foundation that someone might be able to build upon."

If I'm understanding this correctly, I don't think the "for some reason" part is actually mysterious - if a drug makes you more heavily weight actual experiences over priors, and also gives you weird actual experiences uncorrelated with reality, then it makes sense that you'll both be more likely to update on the true evidence you've been collecting and ignoring in your everyday life AND that you'll also be more likely to update on the false evidence the drugs are feeding you.

More generally it seems like this post takes the implicit view that generally people will tend to have overly strong priors and can improve their reasoning ability by weakening them; this may well be true at the population level, but I'm pretty sure it's also possible for one's priors to be too weak. (Which may look like e.g. taking supernatural explanations too seriously when you did not already believe them. (As opposed to a trapped prior in favor of a supernatural explanation, which is common too.))

Maybe this is why there seems to be a rationalists-->[people who are into woo and don't seem to make a ton of sense or be especially good at reasoning] pipeline? For a while, weakening your priors has good effects on one's reasoning ability compared to baseline, and then at some point it has bad effects instead?

Expand full comment
Maynard Handley's avatar

This is plausible, but let me give a somewhat different theory (as always, the truth may well involve both).

Let's consider the standard model Level-Public explanation of our times:

person did <X> because of <reason couched in terms of ideology>

The salient feature here is the *ideology*. 200 years ago it was Christianity, 50 years ago it was Communism, today it's Woke. The point, in each case, is not that there's a basic moral intuition

- "do unto others as would be done unto you"

- "pay workers a fair wage"

- "treat women like men"

it's that there is a massive collection of deductions as to how the world should be, excuses for why the world does behave the way the model says it does, and a closed off worldview such that the ideological worldview is always able to triumph over "truth", regardless of how truth is demonstrated.

OK, now let's consider a level-N+1 explanation of the prevalence of ideology.

Consider these two statements:

- "the Nazis executed a very clever tactic during the invasion of France whereby they deliberately bombed towns and villages of no military value so as to create streams of refugees that clogged the roads and blocked Allied transport to the front"

- "Ivanka Trump is substantially above average in physical attractiveness"

My experience is that most people (at least, let's say, Democrats for the second) have a real problem with these statements. Their minds rebel at the concept of something being "good" along one axis and simultaneously "bad" along an orthogonal axis, and most people prefer to resolve this by simply asserting that there is no such tension -- the Nazis were not clever, they were simply evil; Ivanka is not pretty, she's also just evil.

Think I'm being ridiculous? Look at the response to Karlheinz Stockhausen's 9/11 comments, eg

https://nymag.com/news/9-11/10th-anniversary/karlheinz-stockhausen/

This n+1 level is a superset of the public level, because it obviously explains the public level ("truth" or <some argument> is good along one axis, but bad [disagrees with the ideology] along an orthogonal axis; let's resolve the tension by picking ideology over truth/<good argument>)

But this explanation is more powerful because it covers cases of tension (like the Nazi case, as opposed to the Ivanka case) that are not objects of active ideological battle right now (few are generating their opinions about Nazis by considering "what have my tribal leaders said about this, what have the tribal opponents said, let's synthesize a response that maximizes the first and minimizes the second").

But we can, I think, do even better. What we've concluded so far is that most people can't sustain a multi-dimensional "stance" towards something; they feel they have to collapse all the dimensions down to a single-dimensional "good vs bad" continuum (and mostly not even a continuum: two clusters, one good, one bad). So why should that be?

I suspect the underlying driving factor is some sort of tribalism inherited from the very earliest social species days. Things were judged in terms of "good for me/the tribe" (and usually, like GM and the US, those were synonymous) vs "bad for me/the tribe". The subtlest this got was maybe something like "Yes, they are our enemies and we want to kill their men. But while we're there we might as well rape their women, always good to have a backup copy of the genes floating around."

This gets us to essentially the same place as Scott, but the mechanism (most minds collapse all value judgements to a single dimension) and the reason for that (ancient tribalism) are substantially different from Scott's explanation (their Bayesian machinery has broken down).

I think my explanation is more powerful than Scott's (though his mechanism may be a part of how my larger mechanism works) because Scott's machinery doesn't really have a story for the sort of example I gave about clever Nazis - cases where people don't want to admit a good-vs-bad tension, but which are not really about living in a bubble or locking one's priors to absolute truth.

(And such cases are common! There are similar very clever financial manipulations [that hurt a lot of innocent people], or beautiful art works by horrible people, or very clever design elements in <chip/operating system from Apple/Google/MS/Intel/Facebook/whichever tech giant you choose to hate>, or elegance in the design of a fusion weapon.)

Expand full comment
snav's avatar

Step 1: first we reinvent Freud's theory of repression:

> Van der Bergh et al suggest that when experience is too intolerable, your brain will decrease bandwidth on the "raw experience" channel to protect you from the traumatic emotions.

Step 2: then we pull from one of Freud's favorite influences, Spinoza, to reinvent his "associative" theory of emotions:

> you're in an abusive or otherwise terrible relationship. Your partner has given you ample reason to hate them. But now you don't just hate them when they abuse you. Now even something as seemingly innocent as seeing them eating crackers makes you actively angry.

Step 3: then we reinvent Zizek's theory of ideology, but without the parts that specifically focus on how it's specifically *symbols* that people tie their deep priors to (i.e., you can tell a Republican about a program that Democrats would support, and they will be in support of it so long as you don't tell them that it's a *Democrat* program).

> zealots' priors determine what information they pay attention to, then distorts their judgment of that information.

There's also a subtle normative/ethical stance in Scott's post, by implying that some political stances are more "detached" from "reality" than others, i.e. that in an entirely symbolic game, some people are making "wrong" or "bad" decisions which we need to "fix".

Toss in some REBUS, and we're back in MK-ULTRA territory, trying to use psychopharmacology to "fix" people's beliefs, a program that worked "very well":

> Its goal would be a relatively tractable way to induce a low-prior state with minimal risk of psychosis or permanent fixation of weird beliefs, and then to encourage people to enter that state before reasoning in domains where they are likely to be heavily biased.

Finally this:

> Tentatively they’re probably not too closely related, since very neurotic people can sometimes reason very clearly and vice versa, but I don't think we yet have a good understanding of why this should be.

My stance is that they're absolutely related. Think of a "trapped prior" (i.e. a traumatic kernel, in Freud's language) as a "black hole" that "warps mental space-time" for the individual. When you're traveling around (thinking) far from the black hole, then the thoughts "make sense", but the closer you get to the black hole, the more things get distorted, irrational-seeming.

The problem is, as Lacan says:

> there is no position of 'mental health' which could be called 'normal'. The normal structure, in the sense of that which is found in the statistical majority of the population, is neurosis, and 'mental health' is an illusory ideal of wholeness which can never be attained...

So space-time will form black holes, the only question is where are they located. Perhaps you want people's mental space-time to warp in more familiar places ("my bias is actually the normal"), and want to establish an ethical program to ensure that more people are "like me".

This wouldn't be the first time that institutions or groups have embarked on such a program, but I urge you to think through the ethical implications, if "eliminating bias" no longer has a "privileged" position with respect to "reality". One thread to pull on: what happens to the ecosystem of thought and communication if we successfully eradicate "diversity of bias"? And what will the material consequences be?

Expand full comment
sclmlw's avatar

I'd like to take a different lesson from the psychedelics research. They focus a lot on 'set' and 'setting'. This is the idea that it matters how you enter into and in what context you experience a trip. They focus a lot on this, and I've heard some insist that it's entirely the difference between a good and a bad trip.

I've never used them myself, but I hear a lot of descriptions of psychedelics as effectively eliminating the 'prior' side of the models above, leaving only sensory experience as an input.

There's a place we can build this into Scott's model. He brought up the idea of weighting priors/sensory evidence, but never asked the question, "What's the weighting function?" Yet from his desensitization example, as well as the discussion about emotion-motivated setting of priors, we can make the following guesses:

- Priors are weighted by the emotional context under which they were established

- Sensory evidence is weighted by the set/setting under which it is experienced

This is why the gradual approach of pictures of puppies works well, but the locked room doesn't. It's not that priors can't be reset, it's that the sensory evidence that should reset them only gets sufficient weight when amplified by the correct modulator (set and setting).

Expand full comment
Steve Glanville's avatar

Andrew Huberman often refers to the cognitive regulation of emotion through eye movement - there is evidence that a subject recounting a traumatic event while moving the eyes laterally extinguishes the fear or distress. This kind of study: https://www.jneurosci.org/content/38/40/8694

Expand full comment
Logan's avatar

> this might be that “simulated annealing” thing everyone keeps talking about

Umm, who's talking about this? I feel deeply out of the loop, but google is just returning results from computer science. Is there a fad among rationalists of trying to overcome biases by believing obviously-false claims to see if they turn out to be true, and they're calling it "simulated annealing," and I haven't heard about it?

Expand full comment
TLW's avatar

Some thoughts based on the observation that computational power (both operations and data storage capacity) isn't free.

*****

Speculation: the writeback of the updated prior is a nontrivial portion of the total energy cost of a Bayesian update in humans.

If so, then one obvious optimization from an overall-energy-cost point of view is to only do a writeback if the prior has changed by a non-negligible amount.

...and then you end up in the sorts of trapped priors that you describe.

*****

One other related observation: you end up with exactly the same sort of thing with deterministic rounding modes and restricted-and-finite bitwidths for the prior, such as you would end up with if you were trying to minimize storage requirements.

(If the rounding was _non_deterministic and the bitwidth small then you could end up in a funny scenario where small amounts of progress generally wouldn't happen - you'd see a nontrivial amount of progress or none. I don't know enough to know if this sort of thing happens.)
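
A rough sketch of both points in Python (the bitwidth, rounding schemes, and numbers are all invented for illustration): if the stored prior is quantized and only changes when the update survives deterministic rounding, an endless stream of sub-threshold evidence never moves it, whereas nondeterministic (stochastic) rounding lets small updates land occasionally, so the belief drifts on average.

```python
import random

STEP = 1.0 / 16  # pretend the prior is stored with 1/16 log-odds resolution

def quantize_deterministic(x):
    return round(x / STEP) * STEP

def quantize_stochastic(x):
    lower = (x // STEP) * STEP
    p_up = (x - lower) / STEP  # round up with probability proportional to the remainder
    return lower + STEP if random.random() < p_up else lower

def run(quantize, n_updates=10_000, evidence=0.001):
    belief = quantize(-3.0)  # strongly negative prior, in log-odds
    for _ in range(n_updates):
        belief = quantize(belief + evidence)  # every single update is far below one step
    return belief

print(run(quantize_deterministic))  # stays at -3.0: each tiny update rounds away to nothing
print(run(quantize_stochastic))     # drifts upward: on average the small updates are preserved
```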

Expand full comment
Jim Ehrlich's avatar

I love this piece. An assumption of Bayes Rule is that priors do not influence new evidence, but obviously they do. If magnitude but not direction of evidence is altered, then we still converge on truth. If direction is altered, we have "trapped priors".
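
A quick illustration of the magnitude-versus-direction distinction, in Python (all numbers invented): if the prior merely dampens contrary evidence, repeated observations still crawl toward the truth; if it flips the sign, every observation pushes the belief further away.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def run(distortion, n=500, true_llr=1.0, prior_log_odds=-6.0):
    """Each observation genuinely favours H with log-likelihood ratio true_llr,
    but is experienced as distortion(true_llr) before being added in."""
    log_odds = prior_log_odds
    for _ in range(n):
        log_odds += distortion(true_llr)
    return sigmoid(log_odds)

print(run(lambda llr: 0.1 * llr))   # magnitude dampened: ends near 1.0, truth still wins (slowly)
print(run(lambda llr: -0.1 * llr))  # direction flipped: ends near 0.0, the prior digs itself in
```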

Expand full comment
Figo2000's avatar

Maybe we can explain the trap if there is a prior on the expected physiological reaction that constantly underestimates the physiological reaction because it is tainted by some rational expectation. My rational brain tells me that being with a dog will be ok. But my physiological reaction turns out to be quite bad for no rational reason. That might reinforce the physiological reaction (the reaction was actually bad and much worse than I expected overall) and it may keep the rational expectation unchanged because come on, puppies. No clue whether rational vs physiological priors are a real thing but it seems hard to escape.

Expand full comment
David A. Oliver's avatar

Yet, as Stephen Senn persuasively argues, "the placebo effect is largely a myth." Perhaps a more profitable approach would be to figure out how so many people avoid falling for illusions.

Expand full comment
Aditya's avatar

I used Paint's color picker to check whether the raw pixel data at the chess pieces in both images is grey. I tried to avoid the smoke, which was hard. https://imgur.com/avQC6Kx I think the white pieces are actually grey and look whiter than they actually are. But the black pieces are actually black, not grey. I think it would be better if you could post a third image without the smoke.

Expand full comment
Geoffrey Archer's avatar

I still can't get the image of some monks stuck in a precarious position every time I read 'trapped priors'.

Expand full comment
Robert Jones's avatar

I think perhaps "bitch eating crackers" is a poor example, because other people eating crackers is mildy annoying. If a stranger were eating crackers opposite me on the train, I might be irritated by their lack of consideration. If a friend is eating crackers, I'm less likely to be bothered. In part, I think that's because I *could* ask my friend to stop if it was bothering me too much (whereas I would feel uncomfortable making that request of a stranger), and the mere fact that I could make the request puts the cracker noise within my domain of influence, rather than being something externally imposed.

With the toxic relationship, the problem is (I speculate) that the guy fears that if he asks his partner to stop eating crackers, that would trigger a massive row. In turn, the mere fear of that row, without his partner actually doing anything, makes him furious, because what sort of relationship is it where you can't even make a such a simple request? That might all be paranoia, in that the partner would actually happily refrain from eating crackers if she knew it was annoying, but it *might* be completely accurate. The partner really might understand a request to desist from cracker eating as hostile and might react negatively to that. If so, there really is a problem in the relationship. In that case the guy's problem isn't epistemic but pragmatic: he correctly identifies a problem in the relationship, but is unable to identify a solution and (perhaps) reacts in a way which only exacerbates the situation.

Expand full comment
Reed R's avatar

I don't have time to read all the comments, so I assume someone may have addressed this. But this isn't an issue with Bayesian processing in my view. If we assign a 0% probability to an outcome in our prior, then Bayesian updating will produce "trapped priors". This just highlights the fact that regardless of the data observed, the observer must be open minded about change.
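
Spelling that out (a standard consequence of Bayes' theorem, not anything specific to this post): for any evidence E with P(E | not-H) > 0, a prior of exactly zero pins the posterior at zero.

```latex
P(H \mid E)
  = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
  = \frac{P(E \mid H)\cdot 0}{P(E \mid H)\cdot 0 + P(E \mid \neg H)\cdot 1}
  = 0
```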

In my experience, Socratic questioning opens up priors. But it is exhausting to pursue and takes forever.

Expand full comment
madacol's avatar

> (this might be that “simulated annealing” ...

closing parenthesis is missing

Expand full comment
Michael Vassar's avatar

No you don't. You have made that very explicit in our one on one conversation. You want to want to get outside your perspective, but you accuse me of witchcraft on the grounds that I have a historical track record of getting people to change their perspectives.

Expand full comment
Amy Russell's avatar

While this discussion is about phobias, traumas and negative priors, I wonder if it also works for some positive priors / emotional orientations toward things? ie whether having a massive crush on someone means that any time you interact with them, you get a hit of good feelings (or "they make you feel really good" as you may interpret it), which strengthens your prior that they're genuinely wonderful, which means next time you see them they make you feel great again... people do seem to get temporarily trapped here, but it doesn't entrench the way phobias do. Maybe because our brains are a lot more reactive to single bad things than to single good things?

Expand full comment
Stefan Schubert's avatar

A recent paper tried to replicate the Fernbach et al findings referred to above, and found that they didn't replicate.

"Asking People to Explain Complex Policies Does Not Increase Political Moderation: Three Preregistered Failures to Closely Replicate Fernbach, Rogers, Fox, and Sloman’s (2013) Findings

Fernbach et al. (2013) found that political extremism and partisan in-group favoritism can be reduced by asking people to provide mechanistic explanations for complex policies, thus making their lack of procedural-policy knowledge salient. Given the practical importance of these findings, we conducted two preregistered close replications of Fernbach et al.’s Experiment 2 (Replication 1a: N = 306; Replication 1b: N = 405) and preregistered close and conceptual replications of Fernbach et al.’s Experiment 3 (Replication 2: N = 343). None of the key effects were statistically significant, and only one survived a small-telescopes analysis. Although participants reported less policy understanding after providing mechanistic policy explanations, policy-position extremity and in-group favoritism were unaffected. That said, well-established findings that providing justifications for prior beliefs strengthens those beliefs, and well-established findings of in-group favoritism, were replicated. These findings suggest that providing mechanistic explanations increases people’s recognition of their ignorance but is unlikely to increase their political moderation, at least under these conditions."

https://journals.sagepub.com/doi/abs/10.1177/0956797620972367

I haven't looked into this literature myself, however.

Expand full comment
Lucas Morton's avatar

There's an interesting discussion of this issue in E. T. Jaynes' "Probability Theory: The Logic of Science", in the section titled "Converging and Diverging Views" on p. 113. If you like Bayesian probability theory and rationality, you will really enjoy this book.

Expand full comment
Walde's avatar

I suspect our mind pulls a simple but effective trick on us. When faced with new sensations that run counter to our strong prior, we actively remember past sensations (the scary dog memory, all of my old arguments against capital punishment). But we _falsely_ interpret this rehashing of old facts as new input. Repetition of already-known facts is an effective tool of persuasion, even though it should have zero value from a Bayesian point of view. And we do it to ourselves. So our strong prior is in effect updated with weak evidence against and strong (echo) evidence for (the toy example below sketches this).

You might see this play out when reading an op-ed you disagree with. Rather than only interpreting the new input and judging it as unconvincing, you're probably pulling out some of your favorite counter-arguments and (if you're anything like me) repeating them loudly to anyone unfortunate enough to be within earshot. Being already known, they should not update your prior. But I suspect they actually do.
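A toy sketch of that echo mechanism (my own made-up numbers, not Walde's): one genuinely new piece of contrary evidence should weaken the prior, but if three remembered counter-arguments get re-counted as though they were fresh observations, the prior ends up stronger than before.

```python
# Toy numbers, purely illustrative: double-counting remembered arguments as
# new evidence makes a contested prior stronger instead of weaker.

def posterior(p, lr):
    """Bayes update in odds form; lr = P(evidence | belief) / P(evidence | not belief)."""
    odds = (p / (1 - p)) * lr
    return odds / (1 + odds)

prior = 0.90            # strongly held political belief
new_evidence_lr = 0.5   # the op-ed is 2x more likely if the belief is wrong

# Honest update: only the genuinely new input counts.
honest = posterior(prior, new_evidence_lr)

# Echo update: the same old favorite counter-arguments are re-counted as if
# each were a fresh observation supporting the belief (lr = 3, three times).
echo = posterior(prior, new_evidence_lr)
for _ in range(3):
    echo = posterior(echo, 3.0)

print(round(honest, 3))  # ~0.818: the prior should weaken a little
print(round(echo, 3))    # ~0.992: with echo evidence it strengthens instead
```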

Expand full comment
Lotophagoi's avatar

Something that doesn't make sense to me about this model: if VdB et al. suggest that experience is decreased to protect from negative emotions, that suggests that emotion belongs to 'sensation', the empirical input to experience. But surely 'scary' refers to the emotion of fear ('dangerous' would be the emotionless empirical equivalent), yet here the model depends on a prior of a scary dog, not a dangerous one, suggesting emotion belongs instead to the predictive 'prior' input to experience.

Also, it just seems obvious that people can and do deliberately eat crackers in a bitchy way.

Expand full comment
Sylv's avatar

Looking forward to advertising and PR agencies getting their hands on the research that shows a reliable method for releasing trapped priors. Every aspiring cult leader will want to make use of that data.

"The biggest problem I used to have is people used to have intractable biases against buying the snake oil I was trying to sell them."

Expand full comment
Krzysztof Wolyniec's avatar

The fundamental error of this approach is the assumption that biases need explaining in the first place. Unbiasedness is suboptimal in finite-sample inference. Before you start looking for explanations, you first need to make sure you have the proper model of optimal cognition.
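A quick simulation of that claim (my own example, not the commenter's): for small normal samples, the unbiased variance estimator (divide by n-1) has a higher mean squared error than a deliberately biased one (divide by n+1), so insisting on unbiasedness is not obviously "optimal cognition" either.

```python
# Illustrative only: in finite samples a biased estimator can beat the
# unbiased one on mean squared error (classic variance-estimation example).

import numpy as np

rng = np.random.default_rng(0)
n, trials, true_var = 5, 200_000, 1.0

samples = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))
sum_sq = ((samples - samples.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)

mse_unbiased = np.mean((sum_sq / (n - 1) - true_var) ** 2)  # ~0.50
mse_biased = np.mean((sum_sq / (n + 1) - true_var) ** 2)    # ~0.33: lower despite the bias

print(mse_unbiased, mse_biased)
```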

Expand full comment
Gabe Appleton's avatar

I would recommend adding a dial in the center of the gray box to represent the weighting. That way you can still use your restricted-pipe metaphor while also capturing what the color output would normally be.

Expand full comment