335 Comments
Feb 1, 2022·edited Feb 2, 2022

In a nutshell, for me at the very least, the Catholic faith solves this problem. From a skeptical outside perspective, ignore the deistic epistemology entirely and focus on the phenomenology.

One benefit of faith is that it makes your utility function non-monotonic and flexible. It does this by putting a presumably conscious entity at the peak of the hierarchy of values (think virtue ethics here). So you get an ultimate determiner of value that is pretty rigid but can decide between competing values. If Christ isn't this for you, well, I enjoy sharing this experience with 1 billion+ people.

¯\(°_o)/¯

24-hours-later edit:

A few responders to this post seem to be operating on the assumption that no one is capable of being very intelligent, well-informed, intellectually honest, and religious simultaneously. While I don't agree with this perspective, even though it may be a strawman, I suspect many of the benefits of this way of thinking could apply to e.g. HPMOR fans who want to consider the actions of their favorite character rather than the deity I choose to follow. I think good fiction is invaluable: it allows for an idealized and situationally unique perspective, to which one might consider oneself an "apprentice" in a way one would balk at doing for a real person. I think holding yourself to a higher ideal is generally a great thing, even if it's not mine (and from where I'm sitting, if my assessment is correct, an honest and courageous attempt at this will reach the same ultimate endpoint one way or another).

(PS: I have cried more than once at HPMOR Harry and his interactions with dementors and phoenixes; it's better than the original, IMO.)


What if it would make your day great to see the lion because you’d run away and then feel like a hero?


I think in the lion-in-corner-of-eye example a lot of people would freeze, which also explains the tax behavior.

It makes sense from an evolutionary perspective, where if a predator doesn't spot you it'll eventually leave you alone, but the IRS doesn't work that way.


My mailbox definitely projects an "Ugh Field" for me. I know I need to check it, but it only ever brings me junk mail or problems. So every time I think "I should check the mail" another part of me is thinking "Do I have time to solve problems right now? Do I want to? No and no."


This reminds me very much of Plantinga's evolutionary argument against naturalism. https://en.wikipedia.org/wiki/Evolutionary_argument_against_naturalism#Plantinga's_1993_formulation_of_the_argument

In case that's helpful.


For social creatures who evolved in small groups, challenging the group consensus could create a sort of negative hedonic reinforcement learning. There's little to be gained from sharing your opinion, even if it's true, if everyone else is going to get upset at you for saying it. So, best not think too hard about whether what your community thinks is true is actually true, and just go with the flow.


I try not to fall too hard into the cliche of "constantly explain complex brain functionality with simple comparisons to something from my field", but with the field in question being machine learning, it's impossible to resist these comparisons, especially as of the last few years. Too many subsets of our functionality seem so analogous to the way many artificial neural networks work!

This example also gets more interesting if we add other fundamental ML concepts, e.g. learning rates (what magnitudes of events correspond to what kinds of belief updates? are there some areas gradients are passed to more strongly than others, and if so, what changes and modulates this?), weight freezing (at some point we have to learn to recognize basic objects and patterns - at what point(s) is this most adaptable, and which parts of it are immutable as an adult?), and of course backprop and the other hyperparameters, which are already interesting enough to contrast on their own. This also reminds me of a DeepMind paper I saw yesterday, https://deepmind.com/blog/article/grid-cells, in which they construct an ANN similar to grid cells: https://en.wikipedia.org/wiki/Grid_cell
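
Two of those knobs are easy to make concrete. A hand-rolled toy SGD step (my own sketch, nothing brain-specific) with per-parameter learning rates and weight freezing:

```python
# Hand-rolled SGD step (a toy, not a brain model): each parameter gets its
# own learning rate, and frozen parameters are skipped entirely.
def sgd_step(params, grads, lrs, frozen):
    return [p if is_frozen else p - lr * g
            for p, g, lr, is_frozen in zip(params, grads, lrs, frozen)]

params = [1.0, 2.0, 3.0]
grads  = [0.5, 0.5, 0.5]        # the same gradient everywhere...
lrs    = [0.1, 1.0, 0.1]        # ...but some "areas" update more strongly
frozen = [False, False, True]   # ...and a "mature" weight no longer adapts

print(sgd_step(params, grads, lrs, frozen))
```

The middle parameter moves ten times as far as the first, and the frozen one not at all, despite identical gradients.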

Feb 1, 2022·edited Feb 1, 2022

I pretty much agree with everything you said.

One of 5 or so places in the brain that can get a dopamine burst when a bad thing happens (opposite of the usual) is closely tied to inferotemporal cortex (IT). I talked about it in "Example 2C" here - https://www.lesswrong.com/posts/jrewt3rLFiKWrKuyZ/big-picture-of-phasic-dopamine#Example_2C__Visual_attention Basically, as far as I can tell, IT is "making decisions" about what to attend to within the visual scene, and it's being rewarded NOT for "things are going well in life", but rather for "something scary or exciting is happening". So from IT's own narrow perspective, noticing the lion is very rewarding. (Amusingly, "noticing a lion" was the example in my blog post too!)

Turning to look at the lion is a type of "orienting reaction", I think. I'm not entirely sure of the details, but I think orienting reactions involve a network of brain regions one of which is IT. The superior colliculus (SC) is involved here too, and SC is ALSO not part of the "things are going well in life" RL system—in fact, SC is not even in the cortex at all, it's in the brainstem.

So yeah, basically, looking at the lion mostly "isn't reinforceable", or to the extent that it is "reinforceable", it's being reinforced by a different reward signal, one in which "scary" is good, as far as I understand right now.

Deciding to open an email, on the other hand, has basically nothing to do with IT or the superior colliculus, but rather involves high-level decision-making (dorsolateral prefrontal cortex maybe?), and that brain region DOES get driven by the main "things are going well in life" reward signal.

Feb 1, 2022·edited Feb 1, 2022

Not sold on the "visual-cortex-is-not-a-reinforcement-learner" conclusion. If the objective is to maximize total reward (the reinforcement learning objective), then surely having your day ruined by spotting a tiger is better than ignoring the tiger and having your day much more thoroughly ruined by being eaten by said tiger (i.e., the visual cortex is "clever" and incurs some small cost now in order to save you a big cost later). Total reward is the same reason humans will do any activities with delayed payoffs.
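
With invented payoffs, that argument is a one-line sum over time steps:

```python
# Invented payoffs: a small hedonic hit now vs. a huge hit later.
def total_reward(rewards, gamma=1.0):
    """Sum of (possibly discounted) rewards over time steps."""
    return sum(gamma ** t * r for t, r in enumerate(rewards))

spot_tiger   = [-5, 0]       # ruined afternoon, then life goes on
ignore_tiger = [0, -1000]    # pleasant hour, then eaten

assert total_reward(spot_tiger) > total_reward(ignore_tiger)
```

Even with substantial discounting (e.g. gamma = 0.1), spotting the tiger still wins.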


Relevant webcomic:

https://i.imgur.com/kFqkPEb.jpeg

"That."

"That?"

"That is why we're screwed. That number. That number will doom us."

"I hate that number. Can we run away from it?"

"No. It's a number. It represents an immutable fact."

"Can we fight it?"

"No. <...> What?"

"Sorry. Evolving a threat response over a half-million years on the African savannah hasn't really left me with any good mechanisms for dealing with a threatening number."

"That is also why we're screwed."


Betting money on things seems like one way to push your brain more toward the less-hedonically-reinforceable regime.

Feb 1, 2022·edited Feb 2, 2022

I think all of the supposed discrepancies with modeling the brain as a hedonic reinforcement learner can be explained with standard ML and economics.

If you do a lot of research on epistemic facts related to your political beliefs, the first order consequence is often that you spend hours doing mildly unpleasant reading, then your friends yell at you and call you a Nazi.

In the case of doing your taxes or the lion, that unpleasantness is weighed against the much larger unpleasantness of being sued by the IRS and/or eaten alive by a lion. So there's a normal tradeoff between costs (filing taxes is boring, seeing lions is scary) and benefits (not being sued or devoured).

But in the case of political beliefs, the costs are internalized (your friends hate you) and the benefits are diffuse (1 vote out of 160 million for different policy outcome). So it's no wonder that people aren't motivated to have a scout mindset.
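
With invented magnitudes, the private cost-benefit sum is stark:

```python
# Invented magnitudes: the private return on epistemic effort in politics.
hours_of_reading   = -10         # mildly unpleasant research
social_cost        = -50         # friends yell at you, call you a Nazi
policy_value       = 1_000_000   # value to you if the better policy wins
p_your_vote_pivots = 1 / 160_000_000

expected_benefit = p_your_vote_pivots * policy_value
net = hours_of_reading + social_cost + expected_benefit
print(round(net, 2))  # ≈ -59.99: heavily negative, so scout mindset doesn't pay
```

The concentrated costs swamp the diffuse benefit by four orders of magnitude, on these numbers.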


"This question - why does the brain so often confuse what is true vs. what I want to be true?"

Going back to first principles of natural selection one would presume the human brain to be:

(a) **well adapted** to discern true facts that have positive impacts on reproductive fitness (e.g. identifying actual lions, learning which hunting or gathering techniques work best, etc.);

(b) **well adapted** to engage in useful self-deceptions that also have positive impacts on reproductive fitness (e.g., believing that your tribe's socially constructed superstitions and political rules are "true" so that you fit in and get along);

(c) **not adapted** for determining true facts that might make your life more financially profitable, enjoyable, and stress-free but that don't have any direct impact on your hunter-gatherer reproductive fitness (e.g., realizing that you shouldn't procrastinate on your taxes, or shouldn't worry about things you can't change).

Sadly, the human brain evolved to make more surviving humans, not to make us happy or successful in a modern capitalist economy. I think the happy/high-achieving people are probably those who are more successful at somehow tricking their brains into moving category (c) issues into category (a). If only you could convince yourself that the search for absolute truth is a life-or-death hunt that will keep you from starving to death and instead allow you to have sex with the hot cave woman: https://youtu.be/gSYmJur0Npw?list=PLVVuOIA1lowEKOVGZgf4_GsmH51Lu4511&t=68

Feb 1, 2022·edited Feb 1, 2022

I thought motivated reasoning was.... Reasoning with a motivation. Reasoning, but, WITH AN AGENDA!! DUN DUN DUNN... Like, you want something, so you spend a lot of time entertaining counterfactuals which seem like they could be pieces of plans you could make which lead to you getting the thing you want, as opposed to Perfect Unmotivated Reasoning: reasoning done entirely for the sake of Truth, and not utility.

I thought motivated reasoning meant reasoning with ulterior motives.


"Maybe thinking about politics - like doing your taxes - is such a novel modality that the relevant brain networks get placed kind of randomly on a bunch of different architectures, and some of them are reinforceable and others aren’t. Or maybe evolution deliberately put some of this stuff on reinforceable architecture in order to keep people happy and conformist and politically savvy. "

I wonder if it is not simpler to just consider a dichotomy between tasks we have evolved for (e.g., learning to speak) and those we have not (e.g., learning to read), rather than the epistemic/behavioural dichotomy. Detecting threatening animals is clearly an evolved ability, while doing one's taxes is not at all. This could mean that "doing one's taxes" will not be done automatically, and will depend on many factors, including of course the pleasantness of the task (and also the tendency to ignore the future, trust in the group, anxiety, etc.).


I think you're thinking about this too much from a model where everybody is a good-faith Mistake Theorist.

In a mistake theory model, it's a mystery why people fail to update their beliefs in response to evidence that they're wrong. If the only penalty for being wrong is the short term pain of realising that you'd been wrong, then what you've written makes sense.

I think that most people tend to be conflict theorists at heart, though, using mistake theory as a paper-thin justification for their self interest. When I say "Policy X is objectively better for everybody", what I mean is "Policy X is better for me and people I like, or bad for people I hate, and I'm trying to con you into supporting it".

There's no mystery, in this model, why people are failing to update their "Policy X is objectively better" argument based on evidence that Policy X is objectively worse; they never really cared whether Policy X was objectively better in the first place, they just want Policy X.

Feb 1, 2022·edited Feb 1, 2022

If this is true, then maybe Cognitive Behavioral Therapy is just using your "basically epistemic" brain regions to retrain your "basically behavioral" brain regions. Like, trick yourself into ending up in a better hedonic state than you used to, so the reinforcement learner updates toward "yay". That could explain why it's so crazy effective despite being "just" talk therapy.

I know less about brains and psychology than just about every other commenter out here, am I on to something here?


I think it is wrong-footed to proceed as if reasoning always directly connects your brain to the phenomenon that you are reasoning about. Instead, the process is social. You decide what to believe based on who you believe. You decide who to believe mostly based on who you would like to get approval from (or to get imaginary approval from). If in your imagination you would rather hear Joe Rogan say you're a great guy than hear Anthony Fauci say you're a great guy, then when they differ you will work harder to discredit what Fauci says than what Rogan says. And conversely. That is what puts the motivation into motivated reasoning.


How did AlphaStar learn to overcome the fear of checking what's covered by the fog of war?

Feb 1, 2022·edited Feb 1, 2022

I think you can go a level deeper into this idea - there obviously aren't *actually* parts of the brain that are just magically epistemic; really, just about everything in our CNS learns how to do its function to some degree or another. E.g., even the lowest levels of your visual cortex learn how the visual field fits together, how to handle lines of different orientations and kinds, etc. Probably most of that kind of learning is wrapped up and finalized by the time you're past the toddler stage, but it means that even these basic "epistemic" functions formed under a reward function of some kind. We would hope that reward function is something like surprise minimization (and/or some free-energy whatsit that I don't understand), but the brain is messy and full of poorly enforced boundaries, so the reward function could also be partially coming from extrinsic signals unrelated to its nominal function - perhaps of the form "if I become aware of X, I feel really bad suddenly."

The upshot of all of which *could be* that particular kinds of bad/traumatic upbringing in very early life could predispose someone towards motivated reasoning, by mis-training "epistemic" parts of the brain. That approaches the kind of thing that could be testable!


Just a thought, maybe one of many causes for this phenomenon:

Shifting a worldview isn't zero cost - when you shift your worldview many posteriors need to update (which means every situation and corresponding action which might have been 'memoized' before needs to undergo an update process which takes extra time and energy as it is compared against the new worldview).

I use the term worldview here but I guess it applies to anything you've taken as an assumption and might need to shift. I think energy and response time costs might play a part alongside hedonic reward.
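
The memoization point can be sketched as a toy (all names and costs invented):

```python
from functools import lru_cache

# Invented toy: conclusions are memoized against a background assumption.
worldview = {"lions_are_dangerous": True}

@lru_cache(maxsize=None)
def best_action(situation):
    # stand-in for expensive deliberation, cached after the first call
    if situation == "yellow blob" and worldview["lions_are_dangerous"]:
        return "flee"
    return "carry on"

best_action("yellow blob")       # deliberated once, then served from cache

def shift_worldview(key, value):
    # changing an assumption invalidates every memoized conclusion,
    # which is the non-zero cost described above
    worldview[key] = value
    best_action.cache_clear()

shift_worldview("lions_are_dangerous", False)
print(best_action("yellow blob"))  # recomputed under the new worldview
```

Until the cache is cleared, the old conclusion keeps being served even though the assumption behind it has changed - which is also a decent picture of why stale beliefs linger.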

Feb 1, 2022·edited Feb 1, 2022

This is also something I care a lot about, but maybe it serves our social animal nature more than our predictive fitness. Hedonic states are evolutionarily wired to confer at least four functions: they feel good, "energize" behavior, confer unique psychosocial strengths (motivation, humor, creativity, willpower, etc.), and confer resilience to stress. These resources are highly useful in the buffeting winds of the social world, so hedonic states become a meta-goal independent of their original evolutionary function. Originally, evolution creates a cybernetic system where anticipation of positive outcomes generates hedonic states to motivate behavior, but when people can co-opt the benefits of hedonic states for social functions, social evolutionary logic suddenly motivates individuals toward a positivity bias that generates hedonic resilience for its own sake.

You can extend this logic to other hedonic states like social connection or social status, and people suddenly appear in relief as highly motivated to maintain tribal ideologies built on ingroup biases and attribution biases and positivity biases, all to generate hedonic resilience. Suddenly the "madness" of doing the same thing and expecting a different result, of maintaining beliefs that are costly, of staying with ingroups that drag us down, or of competing in self-defeating games makes sense: people are trapped on a treadmill of attempting to generate short-term hedonic resilience as a social resource, particularly as unlinking from predictive accuracy generates negative consequences and chronic stress, creating even more deficits of hedonic resilience and more need, cyclically. A pernicious spiral dynamic for social animals.


The beautiful part of this particular rabbit hole is when you realise motivated reasoning is basically the same concept as original sin, which is a combination of social optimisation and the psychological neoteny that allows societies to scale beyond the Dunbar number.


I think this is a bit unfair on motivated reasoning.

We train people at motivated reasoning for years; "here's the conclusion, now argue for it" is basically the standard school writing assignment.

Then we get into the real world, and discover the vast majority of people don't care about the same things we care about, and if we need to convince them of something that is important to us, we have to argue for it from a set of values that were not used to arrive at it.

And truth isn't even that valuable for most people, and is in many cases actively harmful to them compared to white lies. The weird behavior isn't motivated reasoning; it's being able to accept the truth even when bad things are constant sources of suffering.


Having separate reinforcement and epistemic learners would be the elegant solution. There's also the ugly hack, which is to make "there might be a lion" even scarier than "there is a lion" so that checking is hedonically rewarded with "at least now I know".

Successful horror movie directors can confirm evolution went for the ugly hack, as usual.

Feb 1, 2022·edited Feb 1, 2022

This seems very wrong to me. Visual systems can clearly learn. A veteran spy agency field agent, veteran NFL quarterback, NBA point guard, infantry commander, F1 driver, pilot, whatever, can all see things and read a scene much better after years of experience than they could beforehand.

The problem with your contrived example is that the negative hedonic nudge from getting spooked by something that looks like a lion but isn't is a tiny signal, dwarfed by the gigantic evolved-in prior that things that look like lions need to be properly identified and avoided. It isn't that the visual system learns any differently; you're just comparing it to realms where signal strength is unclear and there may be no instinctual priors at all. You can throw inputs at a model with randomly initialized weights and watch it very quickly update its predictions, then throw exactly the same inputs at another model with exactly the same learning algorithm but weights already stuck in a local minimum, and it won't budge at all. The learning algorithms are still the same. Human brains aren't randomly initialized models: for "is that a lion," we have very strong priors; for "when should I do my taxes," we don't.
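
That last point is easy to demo on a 1-D toy loss surface, with a stationary point standing in for the stuck weights: the same update rule behaves completely differently depending on where it starts.

```python
# Toy 1-D "loss landscape": f(x) = x**4 - x**2, minima near x = ±0.707.
def grad(x):
    return 4 * x ** 3 - 2 * x     # derivative of f

def descend(x, lr=0.05, steps=200):
    """One fixed learning rule, applied from any starting weight."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(descend(0.3))  # a fresh "random" init glides to a minimum near 0.707
print(descend(0.0))  # at x = 0 the gradient is exactly zero: it never budges
```

Same algorithm, same hyperparameters; only the initialization differs.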


In my long-running commentary theme of "missing the point" of what you're talking about:

I.

The "I'm right about politics and the other side sabotaged our program" example is a fairly poor proxy for your current argument, because it is true. You have endless think tanks writing articles about how to sabotage the other side for decades, bragging about it every chance they get, along with funding official and unofficial controlled opposition, abusing law enforcement, selling political favors for censorship actions by media companies, etc.

All to get their ideas across and suppress the other side's ideas. In that light, believing it falls on the rational side of thinking, rather than the wrongthink side you describe of running away from a picture of a lion on a cereal box in a grocery store.

When the FBI, Republican senators, fundraisers, think tanks, and the like were all colluding against the civil rights movement on a multi-pronged front, and the FBI had a floor-to-ceiling stack of papers tracking the peaceful anti-war activist, reverend, and civil rights leader MLK in order to harass him, blackmail him, and eventually assassinate him.

Along with many thousands and thousands of other prominent activists over a multi-decade span of time... I'd say it is pretty reasonable to think that one side is using legal, illegal, official, and unofficial channels and every funding mechanism they have to push their ideology.

When you go through the events on the day of Malcolm X's assassination and see how the FBI had contact with and was tracking him, and how they and the police uncharacteristically pulled their protection/monitoring team from the event hall that day, when they'd been showing up in force at all his other events for months... it doesn't take a genius to see the kinds of sabotage you've dismissed as poor-quality thinking.

If that's what they were doing then...what is happening now? Does anyone think that after Snowden exposed the NSA etc. that their spying was actually reduced? Nay, it has certainly advanced to a degree far higher and more sophisticated than what they could do before as technology has progressed.

The idea of GOP sabotage of government is so well understood that it has existed in parody for decades: when Bush appointed anti-UN extremist Bolton to the US's top UN post, or put his veterinarian etc. in charge of a large bureaucracy... or when Trump literally failed to appoint new heads for hundreds of government offices, causing a general slowdown and disruption, etc.

Or when shows like Parks and Recreation can parody the genuine perspective of the very widespread right-wing rural middle-aged white guy who wants to "starve the beast" and intentionally mismanage the bureaucracy in order to show how bad it is... what is a person to think? That they're being an irrational fool?

II.

On the main subject of the post and its intended topic: I doubt there is any physical element of the brain we could identify that tracks whether our ideas and learned reactions to the world are correct or incorrect. This is because "reality" is not represented in the brain - there is no "truth" there for our ideas to form around or avoid, in proper or improper brain processes.

A large part of this is somewhat random in terms of what any individual experiences and learns... not to mention that, in evolutionary terms, the main thing causing this confusion with lions in your example is representational depictions of reality. In our history as a species, if you saw a lion it was probably a lion, not a realistic image of one.

If the occasional child got a bit frightened at night as the firelight flickered on the cave-art depiction of a lion... well, that too is an advantage, as you WANT to instil a generalised fear of lions in the children as you pass on the stories of your people and their knowledge of the landscape and how to survive. If a tiny handful of people end up "misfiring" in their learning process in that context, it is not going to be very harmful very often, and evolution will have little to say in terms of selecting brains for or against any strange learning errors which pop up.

Evolution has the tools of fecundity and death: expose them all to reality and let god sort 'em out, to paraphrase the more common "kill 'em all and let god sort 'em out."


The perspective isn't primarily neurological, but if you haven't already read The Righteous Mind by Jonathan Haidt, that has a lot to say about motivated reasoning surrounding things like politics to the extent that politics often involves moral commitments and involvement in a community defined by its moral commitments.

A very poor explanation is something like politics = religion, because they both define communities of people with similar values, and saying your party's policies don't work is a lot like saying that your religion's sacraments don't work.


The underlying intuition here about reinforcement learning is incorrect.

> Plan → higher-than-expected hedonic state → do plan more

No, it's: higher (relative to other actions) hedonic future state *conditioned on current state*. The conditioning is crucial. Conditioned on there being a lion, RL says you should run away because it's better than not running away.

It gets tricky with partial observability, because you don't know the state on which you have to condition. So instead, says RL theory (not so much practice, which is a shame), you can condition on your belief-state, *but only if it's the Bayesian belief-state*. If you're not approximately Bayesian, you get into the kind of trouble the post mentions.

But being Bayesian is the RL-optimal thing to do. You get to the best belief state possible: if there's a lion, you want to believe there's a lion, litany-of-Tarski style. The visual cortex could, in principle, be incentivized to recognize lions through RL.
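
With invented numbers, the Bayesian belief-state step looks like this: update the belief with Bayes' rule, then pick the action with the better expected reward given that belief.

```python
# Invented numbers throughout: belief update on a glimpse, then act on it.
p_lion = 0.01                     # prior: lions are rare
p_blob_if_lion    = 0.9           # chance of a yellow blob given a lion
p_blob_if_no_lion = 0.05          # false alarms happen

# P(lion | blob), by Bayes' rule
posterior = (p_blob_if_lion * p_lion) / (
    p_blob_if_lion * p_lion + p_blob_if_no_lion * (1 - p_lion))

# Expected reward of each action, conditioned on the belief state
value_look   = posterior * (-5)    + (1 - posterior) * (-1)   # scare vs. bother
value_ignore = posterior * (-1000) + (1 - posterior) * 0      # eaten vs. nothing

print(round(posterior, 3))        # ≈ 0.154: probably not a lion...
assert value_look > value_ignore  # ...but looking is still the better bet
```

Even a ~15% lion probability makes looking dominate, because the downside of not looking is so large.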

I suspect people don't open IRS letters not because their RL is fundamentally broken, but because their reward signal is broken. They truly dislike IRS letters, and the pain of opening one truly exceeds its expected value to them. People probably also underestimate the probability and cost of a bad IRS letter, but that's poor estimation, not poor deduction from the estimate.

Perhaps it's easier to see in organizations, where you can tell the components (individuals) apart. It's sometimes hard to tell apart the bearer-of-bad-news from the instigator-of-bad-news. This disincentivizes bearers, who might be mistaken for instigators. With enough data, you can learn to tell them apart. Until you do, disincentivizing bearers to the extent that they really could be instigators is the optimal thing to do.


Everybody does some amount of motivated reasoning, but there's probably systematic variation in the propensity for it, and it's probably possible to measure that and from it infer some ideas about the evolutionary tradeoffs involved. I slightly suspect it comes down to whether your ancestors won more of their bread by dealing with things or by dealing with people, the latter relatively increasing motivated reasoning -- so a verbal tilt should correlate with motivated reasoning. Less motivated reasoning may have made great^20 grandpa a more competent farmer, but more motivated reasoning may have made him less likely to reach taboo conclusions and be shunned or burned as a heretic.

In the context of Gwern's Algernon argument (https://www.gwern.net/Drug-heuristics#loopholes), all three of the loopholes might apply to the project of reducing motivated reasoning.

1. The environment is different -- people aren't getting burned as heretics so much anymore. They just get kicked off of Twitter and fired from woke corporations.

2. Value discordance. We value truth more highly than the blind idiot god of inclusive-fitness-maximizing.

3. Evolutionary restrictions. The brain is such a rube-goldberg kludge that it's not clear evolution could have eliminated motivated reasoning even if that were strongly beneficial to inclusive fitness.


I see Steve Byrnes has already commented, but in case you haven't/aren't planning to read his stuff on this topic, I'd recommend checking it out. E.g. https://www.alignmentforum.org/posts/frApEhpyKQAcFvbXJ/reward-is-not-enough


Interesting question.

Perhaps "action planning" is the primary domain of reinforcement learning, whereas perception is primarily achieved via self-supervision / free energy minimization / etc?

... and "turn my head 45 degrees to the right" perhaps isn't an action plan but rather part of free energy minimization.


Perhaps this is far too basic, but what if we look at decision-making algorithms as sorted into "survival" vs. "thrival," where the survival decision tree overrides the thrival decision tree?

So, turning one's head upon seeing a yellow blob in the periphery might threaten to ruin one's hedonic state (anti-thriving), but surviving to see more hedonic days rather than fewer overrides the hedonistic/thriving decision tree?


Thanks Scott, really good discussion. The idea that the brain uses two types of reinforcement learning (Epistemic and Hedonic) is really, really important. It has deep implications for the structure of the brain and the ways we should try to model it. What you call the epistemic network reinforces associations based on correctly predicting the outcomes of agent actions, even if those actions have negative outcomes. Hedonic reinforcement works as a controller of behaviour, reinforcing possible agent actions based on the predicted hedonic states associated with the epistemic outcomes. To model it, you have to see them as two separate networks working in tandem, one caring only about the (Bayesian-approximate) accuracy of its predictions, and the other caring only about the hedonic consequences of those predictions.

Your point, which had never occurred to me, is that this implies there will be cases where agents will actively avoid seeking new information where experience has taught them that this would have negative hedonic effects. It's a real insight I think. But if it's true, and I think it is, then you have to assume that the epistemic/hedonic dichotomy is an architectural principle of all animal minds.

Another interesting thing is that when you try to model a reinforcement learning neural network that operates according to this epistemic/hedonic architectural principle, it's very difficult, and you end up with something that doesn't look much like the sort of models I see in the ML literature.
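
A minimal sketch of the two-learners-in-tandem idea (entirely invented for illustration, not drawn from any actual model): one unit is scored only on prediction error, the other only on the reward that follows attending.

```python
import random
random.seed(0)

# Invented toy: two learners sharing one percept stream.
w_epistemic = 0.0   # weight for predicting "lion" from the visual blob
w_hedonic = 0.0     # weight gating whether to attend to the prediction

for _ in range(1000):
    is_lion = random.random() < 0.3
    blob = 1.0 if is_lion else random.gauss(0.0, 0.2)

    # Epistemic update: pure error correction (LMS rule), pleasant or not
    pred = w_epistemic * blob
    w_epistemic += 0.1 * (float(is_lion) - pred) * blob

    # Hedonic update: attending to lions feels bad, so the gate drifts shut
    attend = w_hedonic * pred > -0.5
    if attend:
        reward = -5.0 if is_lion else 0.0
        w_hedonic += 0.01 * reward

print(round(w_epistemic, 2))  # stays near the accurate weight (~0.9 here)
print(w_hedonic < 0)          # → True: it has "learned" to look away
```

The epistemic weight stays accurate because its update never sees the reward; meanwhile the hedonic gate learns to avoid attending to lions - exactly the information-avoidance pattern described above.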


The way I think about it is that there are 4 levels of feedback loop that biological systems use to process information.

Level one is evolution itself: does it kill you or not? Individual organisms don't process this, but life as a whole gains information this way.

Level two is reflex: You have various sensors for light, chemicals, touch etc, and you automatically react by like, flinching away from pain or a dangerous chemical stimulus.

Level three is reinforcement learning, to either gain reactions to stimuli that are non-reflexive, or to gain control over reflexes and not react. This is like when you learn to like a flavor because despite the bad taste, it has provided you with good nutrition in the past, or something like that.

Level 4 is memetic and abstract. This is having an information system that is able to learn from communication, observation, and reasoning, to a greater or lesser extent. It enables us to act on phrases like "If you go into those woods, a tiger will eat you."

Each of these plays into the lower levels given enough time and evolution. Sufficient repetition of abstract memes will train you into automatic responses to certain stimuli even if you have no actual experience, e.g. flinching away from poisons that you have never tasted yourself.

Things like paying your taxes stay extremely abstract for most people and thus are not particularly good at compelling action.

Expand full comment

Re the visual cortex part - in humans, there’s cortisol to help out - if you see something scary enough, the cortisol faucet goes on and later recall becomes more difficult/memories harder to access. That seems to me like the biochemical cheat for the situation described - if the visual cortex does sometimes learn by reinforcement, but in situations of heightened threat that would clear the “don’t look” bar, cortisol makes the cortex not retain it.

Expand full comment

The brain trains on magnitude and acts on sign.

That is to say, there are two different kinds of "module" that are relevant to this problem as you described, but they're not RL and other; they're both other. The learning parts are not precisely speaking reinforcement learning, at least not by the algorithm you described. They're learning the whole map of value, like a topographic map. Then the acting parts find themselves on the map and figure out which way leads upward toward better outcomes.

More precisely then: The brain learns to predict value and acts on the gradient of predicted value.
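A minimal sketch of that principle (the function names and all numbers are mine, purely illustrative):

```python
# Sketch: learning strength scales with the *magnitude* of an outcome's
# value, while action selection only needs the *direction* toward the
# best available option. All numbers are illustrative.

def learning_strength(value, base_rate=0.1):
    # Intense events (very good or very bad) are learned strongly;
    # near-zero-value "mundane static facts" barely register.
    return base_rate * abs(value)

def choose(options):
    # Act on the gradient: pick whichever available option has the
    # highest predicted value, whether comparing two goods or two evils.
    return max(options, key=options.get)

print(learning_strength(-100))  # big shock: strong learning signal
print(learning_strength(0.01))  # mundane observation: barely learned
print(choose({"jump off cliff": -50, "face the lion": -90}))
```

Note that `choose` works identically for two negative options (pick the lesser evil) and two positive ones, which is the "acts on sign" half of the claim.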

The learning parts are trying to find both opportunities and threats, but not unimportant mundane static facts. This is why, for example, people are very good at remembering and obsessing over intensely negative events that happened to them -- which they would not be able to do in the RL model the post describes! We're also OK at remembering intensely positive events that happened to us. But ordinary observations of no particular value mostly make no lasting impression. You could test this by a series of 3 experiments, in each of which you have a screen flash several random emoji on screen, and each time a specific emoji is shown to the subject, you either (A) penalize the subject such as with a shock, or (B) reward the subject such as with sweet liquid when they're thirsty, or (C) give the subject a stimulus that has no significant magnitude, whether positive or negative, such as changing the pitch of a quiet ongoing buzz that they were not told was relevant. I'd expect subjects in both conditions A and B to reliably identify the key emoji, whereas I'd expect quite a few subjects in condition C to miss it.

By learning associations with a degree of value, whether positive or negative, it's possible to then act on the gradient in pursuit of whatever available option has the highest value. This works reliably and means we can not only avoid hungry lions and seek nice ripe bananas, but we also compare two negatives or two positives and choose appropriately: like whether you jump off a dangerous cliff to avoid the hungry lion, or whether you want to eat the nice ripe banana yourself or share it with your lover to your mutual delight. The gradient can be used whether we're in a good situation or a bad one. You could test this by adapting the previous experiment: associate multiple emoji with stimuli of various values (big shock, medium shock, little shock, plain water, slightly sweet water, sweeter water, various pitch changes in a background buzz), show two screens with several random emoji, and the subject receives the effect of the first screen unless they tap the second. I'd expect subjects to learn to act reliably to get the better of the two options, regardless of sign, and to be most reliable when the magnitude difference is large.

For an alternative way of explaining this situation, see Fox's comment, which I endorse.

OK, now to finally get around to motivated reasoning. The thoughts that will be promoted to your attention for action are those that are predicted to lead to the best value. You can roughly separate that into two aspects as "salience = probability of being right * value achieved if right". Motivated reasoning happens when the "value achieved if right" dominates the "probability of being right". And well, that's pretty much always, in abstract issues where we don't get clear feedback on probabilities. The solution for aspiring skeptics is to heap social rewards on being right and on using methods that help us be more right. Or to stick to less abstract claims. You could test this again by making the emojis no longer a certainty of reward/penalty, but varying probabilities.
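The salience formula above is easy to play with directly; a toy sketch (the numbers are mine, purely hypothetical):

```python
# Toy version of the formula "salience = probability of being right *
# value achieved if right". All numbers are hypothetical.

def salience(p_right, value_if_right):
    return p_right * value_if_right

# A dull, probably-true belief vs. a comforting long shot.
boring_truth = salience(0.9, 1.0)
wishful_idea = salience(0.05, 100.0)

# Motivated reasoning: the huge "value if right" dominates the low
# probability, so the wishful thought wins the contest for attention.
print(boring_truth, wishful_idea)
```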

Source: I trained monkeys to do neuroscience experiments.

Expand full comment

"Maybe thinking about politics - like doing your taxes - is such a novel modality...." ???

Dunno, chimpanzees and all the other non-solitary primates seem to be quite keen on politics, so this modality must have been around for the past 15 million years, or even longer, if you are willing to stretch the meaning of the word. Richard Wrangham's "The Goodness Paradox" strongly supports the idea that we have a very fine, evolved grasp of politics. It's not about being happy or conformist but yes, being savvy enough to viciously cut the other guys off at the knees at just the right moment and yet look quite saintly while doing it, is just what Mr Darwin ordered. The life of Aspies was always precarious on the veldt.

Expand full comment
Feb 2, 2022·edited Feb 2, 2022

I dunno. These models seem so wildly oversimplified that trying to tease out the logic strikes me as about as productive as Aristotle trying to theorize about how to improve metallurgy using only the idea that everything is made of Fire, Air, Water and Earth in varying proportions.

I mean, you've described feedback structure that could be implemented with a dozen transistors. What are the other 79,999,999,988 neurons for? Not to mention even the feedback structures we know at the most basic biochemical or sensory processing level are already far more complex. We've got delayed feedback, feedback that is added and subtracted and differentiated. We have edge and motion detecting feedback circuits. We have feedback that is integrated, often with nonlinear kernels, and for all we know there's FFTs being taken and all kinds of feedback circuits operating in transform space, too.

That's not even to mention the biggest elephant in the room, which is that we have no idea what the intermediate metrics of cognitive success actually are. Sure, optimizing survival of the related gene group is the ultimate metric of success -- but how does that translate to more immediate measures of optimal individual decision and action, the kinds of things that can be optimized by an individual brain in real time? The fact that the bookstores and libraries groan with pop-psych self-help books, and professional libraries with slightly more polysyllabic versions of the same, is pretty clear evidence that we have at best a tiny handful of scattered clues.

Expand full comment

I’ve always thought of that motivated reasoning as a way to economize compute power. Your brain says: Something bad may happen, I can’t do anything about it, my avoidance of detecting it doesn’t much change the odds of it happening, so I’m not going to think about it because if I’m going to have any hope I need to explore other stuff. The bit about not wanting to see the lion because it will give you a bad day I think only practically works if the lion future is competing with another future with a higher goal. “The tribe is starving. Janice remembered gathering some berries by the river a ways back. She took our remaining food to give her strength to run. There are lions there now and she may get eaten, but if she doesn’t gather the berries and remember exactly where they were to get back in time the tribe dies.” So the lion looker module gets downgraded against the berry finder module.

But at the edges where these things are fuzzy, as they usually are in real life, with lower stakes and overlapping error bars, this probably becomes procrastination/willful ignorance.

Expand full comment
founding

When I see a picture of a large spider, I get really creeped out, even though it is just a picture. Isn't this a lowering of my hedonic state? If I close the book with the spider picture, it gets removed from my visual cortex, thus causing my hedonic state to increase. The same is true if I saw an actual spider and freaked out and ran away. Running away removes it from my cortex, increasing my hedonic state. Why does this need to be reinforcement learning, isn't it just an evolved instinctual reaction to scary animals?

Expand full comment

As a neuroscientist studying reinforcement learning, I feel like I ought to have something for you here. As you know, we broadly think that reinforcement learning in the brain is mediated by dopamine. So one way of framing the question "which parts of the brain are / aren’t reinforceable" is "which parts of the brain do / do not receive dopamine input." Interestingly, the brain regions with the most dense dopamine innervation are the striatum (input nucleus of the basal ganglia, dealing a lot with movement and motivated behaviors) and to a lesser degree the prefrontal cortex (dealing with executive function / decision-making). Notably, sensory cortices do *not*, by and large, receive dopamine input. It strikes me that this relatively cartoonish diagram of the dopamine system maps on quite well to your intuition that "behavioral" regions should be reinforceable and "epistemic" regions should not, with executive regions somewhere in between.

Expand full comment

"But suppose you see a lion, and your visual cortex processes the sensory signals and decides “Yup, that’s a lion”. Then you have to freak out and run away, and it ruins your whole day. That’s a lower-than-expected hedonic state! If your visual cortex was fundamentally a reinforcement learner, it would learn not to recognize lions (and then the lion would eat you). So the visual cortex (and presumably lots of other sensory regions) doesn’t do hedonic reinforcement learning in the same way."

I think this logic is wrong. Standard RL algorithms learn to maximize *long-term* summed reward, not just the reward from the immediate next state. (If they only learned to maximize the immediate next reward, they'd be awful at most tasks.) So a visual cortex that learned via RL would learn to recognize lions because, even though it leads to a worse state immediately, it avoids a huge punishment down the road. I agree with you that sensory cortex isn't implementing RL, but this is not a good argument for why not.
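The long-term-reward point can be checked with a toy calculation (the discount factor and rewards here are made-up numbers, not from the post):

```python
# A standard RL agent scores a trajectory by its discounted return,
# not by the immediate reward alone. Rewards and gamma are made up.
GAMMA = 0.9

def discounted_return(rewards, gamma=GAMMA):
    """Sum of gamma**t * r_t over a reward sequence."""
    return sum(gamma**t * r for t, r in enumerate(rewards))

# "Recognize the lion": small immediate cost (a ruined day), no mauling.
recognize = [-1.0, 0.0, 0.0]
# "Fail to recognize the lion": feels fine now, huge punishment later.
ignore = [0.0, 0.0, -100.0]

print(discounted_return(recognize))  # -1.0
print(discounted_return(ignore))     # about -81, far worse
```

So even a purely reward-maximizing visual cortex would prefer the "ruined day" branch, which is the commenter's point.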

Expand full comment

Rather than a purely "is reinforceable" vs "isn't reinforceable" distinction, I suspect the difference has more to do with the relevant timescales for reinforcement. In the foot injury case, we'd have a very fast gait-correction reinforcement loop trying to minimize essentially instantaneous pain. In the lion country case it sounds like something slightly longer timescale -- we make a plan to go to lion country and then learn maybe a few hours later that the plan went poorly, so we shouldn't make such plans again. In the taxes case it's much longer term: it might take years before the IRS manages to garnish your wages, though you'll still eventually likely get real consequences. With politics, on the other hand, the cost/reward is often so diffuse and long-term that I suspect the only reason anyone is ever right about difficult policy issues is because the cognitive processes that correctly evaluate them happen to be useful for other reasons. The vision example I think is a mistake of timescale; a vision system which learned not to see something unpleasant would get a much worse net reward when you don't avoid the lion you could have seen and subsequently get mauled.

I'm coming at this from the ML side so I'm out of my depth biologically, but perhaps we have different relevant biological RL processes with different timescales? Eg, pain for ultra-short timescale reinforcement, dopamine for short-to-medium timescale reinforcement, and some higher-level cognitive processes for medium-to-long timescale reinforcement.
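One crude way to see the proposed timescale split is to model each loop as a learner with its own discount factor (the delays and gammas below are assumptions, not biology):

```python
# Each hypothesized loop is crudely modeled by a discount factor; the
# same delayed penalty looks very different at different timescales.

def discounted_value(reward, delay_steps, gamma):
    return reward * gamma ** delay_steps

penalty = -100.0

# Pain-like loop: steep discounting; only near-instant consequences count.
print(discounted_value(penalty, 1, 0.5))      # -50.0
# The same fast loop facing a tax penalty many steps away:
print(discounted_value(penalty, 1000, 0.5))   # effectively 0
# A slow cognitive loop with gentle discounting still feels it:
print(discounted_value(penalty, 1000, 0.999))
```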

Expand full comment

One observation: You can't avoid a lion trying to eat you by refusing to look at it. But you might be able to avoid another lecture from your neighbor Ug Bob about how you haven't made the proper blood sacrifices to Azathoth in a while, if you refuse to make eye contact with him and keep walking.

That is to say, huge parts of our brains developed in order to process social reality. And social reality, unlike physical reality, actually does change based on what information you have (and what information other people know you have, or what information you know other people know you have...). So controlling the timing of when you are seen by people around you to acquire certain information likely does have some degree of survival benefit. And the parts of our brains that learned how to do that are probably the ones that are involved in reading letters from the IRS today.

Expand full comment

I think the major problem here is that from an individual perspective the "system" you are learning about is so frigging noisy and generally large/complex that "the answer" to any given scenario is only valid across a large population. Maybe we can slowly increase the percentage from say 60 to 70 to 80 to 90 to even 95 some day but 5% of all people is a catastrophically large number.

Expand full comment

Perhaps the visual cortex is more of a reinforcement learner than you're giving it credit for. Children have to be taught to look both ways before they cross the street. I know that when I started working in a lab, highly-significant translucent specks in a vial of water didn't stand out to me the way they do now. In the lion example, as soon as the yellowish blob was seen - or even as soon as the hunter stepped out onto the Savannah where they knew lions existed - they'd be having worse anxiety not checking if the blob is a lion than they would if they looked directly at the blob.

It takes a lot of learning to carve out significant features from our visual field, build habits about what to pay attention to and what to ignore, and learn to coordinate our bodies in order to execute these sensory routines. That learning seems to happen via reward mechanisms, either self-administered or delivered by others.

In the lion case, it seems like anxiety is the reinforcement learning that forces people to look at the yellow blob. So how do people come to have that anxiety? I suspect it is socially reinforced. Children are punished for being careless and rewarded for being perceptive, and warned about specific dangers until their brain punishes them with more anxiety for carelessness than for caution.

I notice that with the taxes and email examples, these are often activities done in solitude, and where the direct dangers afflict only adults. It may be that there's not enough social reinforcement to instill sufficient anxiety for not doing one's taxes or checking one's email in order to overcome the natural unpleasantness of checking Quickbooks or gmail. Therefore, a whole class of problems becomes a source of akrasia.

Another explanation for these two examples in particular is that it might be that the threat of bankruptcy or losing one's job isn't actually as bad as it seems. Perhaps people are, in effect if not in conscious intention, checking to see if they can keep their job without checking their email, or somehow get out of paying their taxes (perhaps long enough to see a bump in their income).

So maybe we don't need to resort to a whole alternative brain architecture for special cases where reinforcement learning seems to predict some unrealistic behavior. Maybe instead, it's simpler to assume reinforcement learning, and figure out the simplest explanation for how reinforcement learning might produce the outcomes we observe. Then you could test this explanation empirically, and try testing several iterations of more complex explanations. If you try hard and fail to find an empirically verifiable reinforcement-based explanation for a behavior that seems contrary to the 100%-reinforcement hypothesis, then you start thinking that there must be some alternative brain architecture.

Expand full comment
(Banned)
Feb 2, 2022·edited Feb 2, 2022

Humans don't have "decision-making algorithms." Humans are not computers. It's really embarrassing to read stuff written by people who don't understand that.

Expand full comment

You should really read The Master and His Emissary. It does a lot of deep diving and presents many satisfying approaches to answering the question you have. The author is a psychiatrist and also has a great deal of expertise when it comes to studying the brain physically, both in humans and in monkeys. This makes him able to use hard science to approach the issue while still recognizing that humans are not computers. It's an incredible book.

Expand full comment
Feb 2, 2022·edited Feb 2, 2022

Hi Scott. You might be interested in the works of Robert Trivers and Hugo Mercier on the hidden role of persuasion in shaping reasoning. My colleague Moshe Hoffman and I also address this in a chapter of our upcoming book. Lmk if it’d be helpful for me to send further details on anything.

Expand full comment

This seems like a back-propagation error, or rather, back-propagation seems like the mechanism which decides how much to update “avoid looking for lions” vs “avoid lion-infested savannas” vs “carry a pointier stick”.

There was a paper posted to lw recently showing that predictive coding can implement back-prop; maybe this is a place where evolution's handy trick of simulating back-prop using local information breaks down.

Expand full comment

It does not seem clear that reinforcement learning is being "misapplied", as there might be no alternative: human reinforcement learning is simply far from optimal, since it does not recurse through the decision tree at all the way reinforcement learning algorithms do, but instead generalizes from memorized valenced experiences with a patchwork of adaptive computational fragments, like a babbling large language model. To accurately compute the utility of doing your taxes, you'd have to descend very deep in the decision tree until you arrive at the relatively narrow rewarding branch in the far future—a computation that is infeasible in our wetware. So a better title may have been "Motivated Reasoning As Suboptimal Reinforcement Learning Messing Up Our Epistemology".
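The infeasibility point can be illustrated with a toy depth-limited lookahead (the chain structure, costs, and rewards are mine, purely hypothetical):

```python
# A depth-limited planner never sees a reward deeper in the tree than
# its horizon, so "do your taxes" (payoff far in the future, cost at
# every step) evaluates as pure cost to a shallow searcher.

def chain_value(depth_remaining, steps_to_reward, step_cost=-1.0, reward=100.0):
    """Value of walking down a chain of states: pay step_cost per step,
    collect the reward only if the search horizon actually reaches it."""
    if depth_remaining == 0:
        return 0.0
    if steps_to_reward == 1:
        return step_cost + reward
    return step_cost + chain_value(depth_remaining - 1, steps_to_reward - 1,
                                   step_cost, reward)

# The refund/relief sits 10 steps deep in the tree.
print(chain_value(depth_remaining=3, steps_to_reward=10))   # -3.0: all cost
print(chain_value(depth_remaining=12, steps_to_reward=10))  # 90.0: worth it
```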

Expand full comment

As much as I wanted to believe this when I first read it (because it seems more parsimonious at first glance than my current model of this sort of thing), I'm going to roll to disbelieve, because this model does *not* predict that people will feel better after admitting the thing they're avoiding isn't going to go away, and that the ugh field will disappear.

ISTM if the issue were purely reinforcement learning, resistance shouldn't disappear upon admitting what you're avoiding admitting.

Now, is it something that affects AI design? Not going to argue one way or the other. But for humans this seems to be missing whatever element(s) predict the collapse of avoidance once you "admit" that what you're doing isn't working. ISTM the Litany of Gendlin wouldn't do anything useful if knowledge-avoidance were as simple as reinforcement learning.

Expand full comment

"why does the brain so often confuse what is true vs. what I want to be true?"

Are you applying this question to yourself or other people? I'm not able to relate to this question if I apply it to myself, but it seems plausible when I apply it to others.

Introspecting while keeping the intent of the question and applying it personally, I think I would rephrase the question as "why does my lizard-brain/base-animal cognition so often win against my metacognition?"

I think that rephrased question is answered by the basic fact that metacognitively I have no motivation. The reason I do anything is for base animal motivations. I mean, even the reason I use metacognition is from the fear of making a mistake or the gratification of being right.

Using the tax example (and assuming I didn't do my taxes), I think the reason for me not doing my taxes would be that my base-cognition would fear the act of doing taxes to the point of shutting down my metacognition in that domain. My motivation would be shifted to playing video games, data binging, or other distractions (to keep my metacognition from generating possible futures that my base-cognition fears).

Expand full comment

I really don't buy the "Then you have to freak out and run away, and it ruins your whole day. That's a lower-than-expected hedonic state!" logic. You know what's an even lower-than-expected hedonic state? Getting mauled by a lion (or run over by a truck). It's so much lower that in many cases, it's permanently zero. I don't buy that a behaviour that involves *not* investigating potential existential threats would survive long enough to be selected for. Especially since such threats are just as relevant today as they ever were, if not more so (how many lions did a primitive man encounter, and how many trucks could a modern man walk in front of?)

Expand full comment

This post made me wonder, given the good logic of having a bad day after seeing a lion, why do we seem to get so much enjoyment from seeing lions (and large animals in general) up close? People love zoos and for a sizeable number the attraction seems sufficient to sink a lot of money into safaris etc. It feels like this is more than just enjoyment of nature's majesty a la an ocean view or a sunset?

Is this some kind of vestigial love of the hunt, at odds with (and perhaps made all the more thrilling by) the countervailing fear of a mauling?

Most people outright don't like spiders, I wonder if we simply haven't evolved the same excuse-making-fascination with them that we have with big cats?

Expand full comment

Regarding "not turning the head", I see a big positive value in self-reflection and/or therapy. In a behavioural scenario, you want to avoid bad outcomes. But in an epistemic scenario, you want to say "seeing a lion is bad, but noticing it quickly is positive and I did a good job". Reinforcing this kind of thinking may help people to rescue some trapped priors.

Expand full comment

I feel like you’re creating 2 cases out of a single algorithm. Let us say your brain is only optimizing the sum of net future rewards (Return). Now if you miss a lion in your peripheral vision, there’s a chance you’ll die, so your brain will focus on that. Once you see the lion, your hedonic state will drop for a short while, but compared to the chance of outright dying, the reward is much higher when you spot the lion than when you don’t, so it makes sense that your brain chooses that option. On the other hand, what is the net reward for knowing you are wrong and changing your prior? Your hedonic state increases a bit because of the internal satisfaction of knowing you are right, but this can be smaller than the decrease from feeling like an idiot. In fact, the smaller the chance that that piece of information affects you directly, the smaller the lifetime reward for correcting yourself on that piece of information. This way I don’t think the dichotomy you claim exists.

Expand full comment

This might be a better example than the lion:

https://en.m.wikipedia.org/wiki/The_Fox_and_the_Grapes

Expand full comment

I think this is likely to come off as very naive, but I work in a very different field so please excuse me.

I find this kind of question - where we can think about thought in some way that combines decision theory, psychiatry, potentially artificial intelligence, and different models of consciousness - to be like *the most important and interesting* thing. Are there any groups of people that do research on this? Could I come at this kind of problem as a practicing psychiatrist?

Expand full comment
Feb 2, 2022·edited Feb 2, 2022

“This question - why does the brain so often confuse what is true vs. what I want to be true?”

I somewhat regularly find the need to talk myself out of something. If my dream job comes along and it falls through at the last minute, I comfort myself by saying, “That’s a sign that it would have sucked.” And that’s more comforting to believe than that it would have been everything I dreamed it could be. So I go with that.

But I know I’m doing it. Do most people not know “on the inside” that they are doing that?

When someone goes to Destin, FL and not St Barths or they buy a Camry and not a Lexus and say it’s just as good or say their super successful friend isn’t really living his best life - they know (at some level) that they are just telling themselves comforting lies, right?

Expand full comment
Feb 2, 2022·edited Feb 2, 2022

I've thought for a while that something like this might happen around the "cognitive dissonance" feeling. That's an unpleasant feeling, and you get it more if you look for info that goes against current belief. Thus, this punishment would breed confirmation bias.

Expand full comment

There are two things that make this more complex imho:

Immediate feedback is much more efficient at reinforcement than delayed feedback. So people avoiding things to escape pain is clearly happening, but much, much more for immediate pain than for future pain. The more distant and abstract the pain (or gain) is, the less reinforcing effect it will have. People will certainly avoid walking on an injured foot because it hurts, maybe not look in a dark corner that may hide a lion, but certainly not hesitate to do their taxes for fear of having to pay more than they can afford in 6 months.

On the other hand, there is a kind of meta reinforcement system active for abstract/delayed stuff: you directly enjoy (or hate) some mental activities, not directly linked to practical consequences. I constantly delay doing my taxes because I hate all administrative paperwork, not because I fear bad surprises. If anything, it's not doing them / rushing them that has led to bad surprises (and will probably do so in the future)... On the other hand, my enjoying math and logic games certainly helped me get my degrees easily... But I did it because I enjoyed that kind of mental activity, not because I saw that doing more of it helped with my studies...

Delayed reinforcement demands a conscious effort, and doesn't work so well (for me at least; and I usually have a relatively long time preference, so I would be surprised if, on average, future reward/punishment had much reinforcement effect).

Meta reward, where the mental activity itself activates the brain's reward system, seems a stronger effect. Then it remains to be seen why some mental activities are enjoyed more than others, and how and why that varies across people...

Expand full comment

Me too!

Here is my take as to why.

In a post-rational world full of valence-charged facts, it is impossible to have a rational (or even civil) discussion. Being immersed in our post-rational era means we must acknowledge ‘lived experience’ without being able to opine on how it impacts our common goal of ‘equal opportunity for all’ or how we can work as a pluralistic society with differing definitions of “The Good.” Makes me wonder what will happen to the humanities like literature and theatre studies (founded on fiction and using empathy-like mechanisms for exploring differing definitions of The Good), but that’s for another time.

The greatest danger is believing only your opponents are infected with Humanistic Truth. While I hate to admit it, I am basically intuitive, not rational. Just like all people, practically all the time, I make hasty judgments. These are not based on reason. My false reason comes from the hidden operations of cognitive predispositions and a two-track brain. Instead of my mind acting as a judge weighing the facts, it acts as a press secretary seeking to justify my beliefs.

That’s right, “humans are not designed to practice reason. We are designed to make arguments that aim to support our preconceived conclusions, not yours,” The Righteous Mind: Why Good People are Divided by Politics and Religion.

Once a social value has been mapped onto the biological disgust mechanism in social interactions (e.g., valence-charged facts), then it is impossible to have a reasoned discussion about any topic (from racism, inequality, BREXIT, or the election of President Trump). We need a new narrative to allow us to deploy our critical thinking skills and address the growing inequity in our society. I find the best new narratives use allegory and metaphor deployed through humor that must be grasped and not explained.

Watch this video to see what I mean.

https://youtu.be/Ev373c7wSRg

Expand full comment

I think Hansonian signalling/far mode is at work, as usual. People hear you say you believe X. Belief X indicates that you value Y. They value Y, so they socialize with you more. The system that caused you to say X is an adaptation to this. It really just doesn't enter into the question whether X is true.

Suppose someone says "I believe the solution to violent crime is that we need more creativity in public schools" or "I believe the solution to violent crime is that we need more religion in public schools". This does actually vaguely correspond to an empirically testable statement. But that's really far from the point - in a deep sense, nobody cares if it is true or not, they (the speaker and listeners) just care about what it indicates about the person who has said it. If it's false, well, at least this person is right-thinking and therefore someone they want to socialize with.

Expand full comment

Does it really ruin your day to turn your head and see a lion? Or does it save the day to be warned against the lion so you can go on living? Does it ruin your day to see that you're not going to make the tax deadline unless you get down to working on your taxes, or are you glad to remember that you can avert a fine? Does it really hurt you to find out that you were wrong about politics, or does it please you to find out why you were wrong and why you can now confidently make a better choice? When I'm learning new things, it's painful, but afterward I'm so glad I went through the pain.

Expand full comment

To me the difference between the lion case and the taxes case is something like - how quickly are you going to get feedback on your decision/beliefs? In the lion case, you can't actually avoid learning in short order if there is a lion, because it will probably eat you. In the taxes case, you can avoid it for a pretty long time! Short-term bias is a pretty normal factor in how humans make decisions and it seems pretty applicable here too.

Expand full comment
Feb 2, 2022·edited Feb 2, 2022

Wait. I'm firmly in "the brain's architecture is complex and heterogeneous and it's probably a requirement for intelligence" camp, but there's no need to assume additional complexity and specialization to solve this particular problem.

Predictive coding framework, as I understand it, says the brain tries to fit expectation (top-down signals from its model of the world) to reality (bottom-up signals from its inputs), and ultimately, to minimize the mismatch between the two. This alone basically covers the "behavioral"/"epistemic" distinction already. Visual cortex tells you that your model is wrong (a yellow blob where you don't expect it), which triggers an action to resolve the mismatch (by checking more closely). That's the only mechanism you need. There's no distinct "hedonic reinforcement learning", there are only "hedonic inputs" from your body, which get processed the same way all inputs do. And there's no distinct "what is true" discovery in the brain, it's literally all about wanting its internal model of the world to be true - and to achieve that, it chooses from different, but potentially equally valid strategies like changing the model, acting so that the inputs change to a better match, or disregarding inputs and substituting its own.

The reason this procedure may cause problems is threefold:

1. That the inputs are imprecise. (Turning your head towards a yellow blob is the right action in clear daylight, but on a cloudy night it will likely fail to penetrate the darkness and resolve the confusion, not to mention all the false alarms you'll need to process. In terms of detail, clarity, and readability, signals from your body, stress included, are more like the latter case.)

2. That overruling the inputs requires learned experience. (Children do in fact need to specifically learn that closing their eyes does not get rid of the distressing object they see. But we're practicing seeing all day, every day, from the day we are born, we've had plenty of time to find and learn correct heuristics. For taxes, not so much, and for many stressful situations in general, taking your mind off the source of distress and waiting until you calm down is in fact the correct action to take.)

3. That the brain optimizes for the inputs, and not directly for acquiring the best possible model of reality. (So, it will decide to check your taxes if and only if it concludes it will help you reduce your stress. The rational argument that resolving the source of your stress is the obviously correct action to take is merely one of the pieces of evidence to consider, actual practical experience with stress input is another. Say, I'm a depressed ADHD sufferer with a lot of experience in being late on payments, and my learned strategy of "wait for a reminder, hopefully get around to reading it right away, make a single money transfer for the right amount" does in fact reduce stress compared to "search for the bill, check your bank statement to see if you've paid it, manually count how much you're overdue, add penalty/interest, etc. etc.")
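That mismatch-minimization loop can be sketched in a few lines. This is a deliberately toy model, not a claim about neural implementation: one scalar "model", one scalar "world", and an arbitrary threshold for choosing between the two strategies (update the model vs. act on the world):

```python
def minimize_mismatch(model, world, steps=20):
    """Toy predictive-coding loop: each step, reduce |world - model|
    either by updating the model (perception) or by changing the
    world (action). The strategy choice here is arbitrary: act when
    the mismatch is large, update beliefs when it is small."""
    for _ in range(steps):
        error = world - model
        if abs(error) < 1e-3:
            break                    # prediction matches input: done
        if abs(error) > 1.0:
            world -= 0.5 * error     # act: push the inputs toward the model
        else:
            model += 0.5 * error     # perceive: push the model toward the inputs
    return model, world

m, w = minimize_mismatch(model=0.0, world=4.0)
# m and w converge: the mismatch ends up below the threshold
```

Both branches reduce the same error signal, which is the point: "changing the model" and "acting so the inputs change" are interchangeable strategies from the loop's perspective.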

Feb 2, 2022·edited Feb 2, 2022

People often say things like "Thank God I noticed that lion in time!". I think the premise that noticing and escaping from a lion is a negative experience could use examination.

Expanding a little more, a reinforcement learning system can't be solely defined by the method. The reward schedule is a part of the system that can't be separated from it.
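To make that concrete, here is a minimal sketch (made-up rewards, toy epsilon-greedy bandit): the identical learning rule produces opposite behavior under two reward schedules, so whether "noticing the lion" gets reinforced depends entirely on the schedule, not the method.

```python
import random

def train_bandit(rewards, episodes=2000, eps=0.1, lr=0.1, seed=0):
    """Same epsilon-greedy learning rule every time; only the
    reward schedule differs. Arm 0 = 'ignore', arm 1 = 'notice'."""
    rng = random.Random(seed)
    q = [0.0, 0.0]
    for _ in range(episodes):
        if rng.random() < eps:
            a = rng.randrange(2)                 # explore
        else:
            a = max((0, 1), key=lambda i: q[i])  # exploit
        q[a] += lr * (rewards[a] - q[a])         # incremental value update
    return q

# Schedule A: noticing the lion is scored by momentary hedonics (bad).
q_hedonic = train_bandit({0: 0.0, 1: -1.0})
# Schedule B: noticing the lion is scored by survival value (good).
q_survival = train_bandit({0: -1.0, 1: 1.0})
# Under A the learner comes to prefer ignoring; under B, noticing.
```

Same algorithm, same hyperparameters; only the reward table changes, and the learned preference flips.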


I get a particular kick out of having opinions that contrast to some sort of orthodoxy or are rare. The fitness motivation for rare beliefs is that they have a high potential to be unusually valuable. Everyone says there's a lion to the north but I am motivated and got up three hours early to find out there's five to the south and we should probably go east or west. I want to be that guy

But rare opinions are baseline low-hedonic. Being, or being perceived to be, the only person who is wrong is worse than being the only person who is right is good

What's my special kick then? I am unusually confident in my ability to be the only right person. Instead of fear of non-conformity I feel the appeal of trying to demonstrate that I have a unique value that could save us from five lions

Regardless of the quality of my actual opinions a key ability I have gained in this respect is to contrast my opinion against people not in my sub-tribe and also against people who are in my sub-tribe. I can direct the motivation to some degree as I choose

Not in my sub-tribe are people who still fear the north lion. But there's also people now in my sub-tribe who agree with me and say "this is why we should keep going north"

Here's where the hedonic motivation loses a lot of steam for most everyone. My desire to be specially correct can also be directed against the "go north" people who agree with me. I'm like "actually with five south lions and one north we should go north by northwest". Now I have something that is prospectively valuable to the whole tribe but is also prospectively agreeable to both sub-tribes. Here's a hedonic (and hopefully practical) north star - finding the rare belief that could be attractive to two opposed sub-tribes

It's notable that this general willingness to be of unusual value is perhaps unprecedentedly reinforced by Western culture. We are the red-nosed reindeer looking for foggy nights. The underlying hedonic motivator has been overlaid with a large dose of stories about how there is a very delayed and potentially more optimal social outcome

It's notable that Rudolph's situation is not hedonically ideal for him. You can only be so happy. If it's rare, you should not typically be able to negotiate routine hedonic losses to reach the rare outcome, even if the group's sum of well-being is improved on net. I think it takes some combination of social encouragement plus unconventional personal characteristics to get there as often as you'd like.

In my case I have both a very high and a very low self-regard. I'm very confident in my ability to be right in opposition to a chosen group, so I am on average less sensitive to near-term low-hedonic experiences related to being different. On the other hand, I have something like low overall self-regard, or a low expectation of group regard, so I have a lower expectation of, and ability to experience, near-term high-hedonic group-acceptance states that aren't exceptional. I have a lower barrier to the path to unusual, prospectively high-hedonic states, and I have a stronger desire or need to achieve them to feel I have truly gained a secure status within the group


I'm no expert on cognitive science, but I happen to have written a longish blog post last year from a game theory perspective looking at similar ideas in the context of your post on trapped priors: https://ingx24.wordpress.com/2021/05/27/the-trapped-priors-model-is-incomplete-guys-its-time-for-some-game-theory/


I am not very attracted to the explanation that taxes and politics are novel problems compared to finding lions in the wild. Early humans were probably exposed to politics-like situations that posed threats to them: "Grug thinks Bagug is spreading lies about Grug to tribe", "What if red-rock tribe is about to sack our village to take berries?", "What if Bagug hunts bigger game than Grug and thus surpasses Grug in social capital within tribe?" (I guess this last one would probably be less eloquently put by our paleolithic ancestors). Not only would a bunch of political situations be present in their lives; given the current data on the levels of violence between tribes of early humans, I think it is fair to assume that those political threats were even more immediate and lethal than our current political quarrels. Illustratively, Grug is much less safe having an Ugh moment about the red-rock tribe coming to sack his village for berries than some college student is having an Ugh moment about going out to vote for who he deems the most fiscally responsible candidate in his local senate election.

For me the much more obvious explanation is that we feel like not paying our taxes in time or losing some election are not problems in the present but in the future. I want to reiterate that I am agreeing with the main idea of motivated reasoning as mis-applied reinforcement learning; I just feel that which problems get put in the Ugh category and which are dealt with by surges of adrenaline is easily explainable by time preference. I mean, when the SWAT team breaks down your door after 10 years of unpaid taxes you'll probably not have much time to have an Ugh moment, but those 10 years were a really sweet 10 Ugh years.


Love is healthy confirmation bias. We choose a spouse, we have children, we love, protect, and nurture them. If our family life is a source of happiness, we assume the person we married is uniquely compatible as a spouse and co-parent. It's probably not true, but it can't be proven otherwise.


> Motivated reasoning is the tendency for people to believe comfortable lies, like “my wife isn’t cheating on me” or “I’m totally right about politics, the only reason my program failed was that wreckers from the other party sabotaged it”.

I think a large part of the reason people believe things like these is that they are crony beliefs, that is, beliefs one holds because they are socially useful, not because they are useful in connection with the non-social physical world. <https://meltingasphalt.com/crony-beliefs/>

So if someone thinks their wife isn't cheating on them, they might come across as more confident, which will help them socially.

Or if they have the same political views as all their friends, they won't get ostracised for being a non-conformist.


"But suppose you see a lion, and your visual cortex processes the sensory signals and decides “Yup, that’s a lion”. Then you have to freak out and run away, and it ruins your whole day. That’s a lower-than-expected hedonic state! If your visual cortex was fundamentally a reinforcement learner, it would learn not to recognize lions (and then the lion would eat you)."

I find this sentence quite baffling. I think you are conflating reinforcement with goodness or pleasantness. But I don't see any reason why reinforcement on a neurological level needs to have any kind of emotional valence tied to it. Brain regions are reinforcement learners that are being reinforced to do something USEFUL, not necessarily something that feels hedonically pleasurable. The visual cortex isn't reinforced to make you see butterflies and rainbows; it is reinforced to turn complex noisy data into compressed, regularized, USEFUL data that other brain regions can do useful things with. Pleasure/goodness arises as a meta-heuristic to direct behavior in useful directions like survival and reproduction. From a biological perspective pleasure is a tool, not an end.

A visual cortex that recognizes a lion and saves your life will be reinforced because that outcome was useful to your biological imperative for survival and reproduction. Evolution doesn't give a damn that it made you feel bad to perceive that lion, it cares that you located and evaded a threat and survived to reproduce.
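A toy sketch of that point (hypothetical one-feature detector, squared-error loss, all numbers made up): the training signal is purely epistemic accuracy, and the hedonic cost of a positive detection never appears in the loss, so the detector learns to report lions regardless of how unpleasant the news is.

```python
def train_detector(data, epochs=200, lr=0.1):
    """data: list of (feature, is_lion) pairs. The update signal is
    prediction error only; the displeasure caused by a detection
    never enters the computation."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            pred = w * x + b
            err = pred - y        # epistemic error, not displeasure
            w -= lr * err * x     # gradient step on squared error
            b -= lr * err
    return w, b

# Big yellow blob (feature near 1.0) => lion; small blob => no lion.
data = [(1.0, 1), (0.9, 1), (0.1, 0), (0.0, 0)]
w, b = train_detector(data)
detect = lambda x: w * x + b > 0.5
# The detector flags lion-like inputs despite the "bad news".
```

Nothing in the loss would change if seeing a lion ruined your day; only misclassifying one would.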


The Elephant in the Brain¹ explains Motivated Reasoning quite differently, and more disturbingly:

Your brain has many parts that operate semi-independently, and many of them are not observable to the "you" that's reading this.

One of them we can call your Moral Center. It makes you genuinely want to do The Right Thing, as defined by some rules evolution probably generated to solve eons of iterated prisoner's dilemmas.

Another part, which has no name, rationally, unsentimentally and subconsciously determines what would be in the best interest for you to do. Sometimes that is to do good things for other people, so they'll reward you. Other times it's to do terrible things to other people so you can take their stuff.

Things like the latter don't sit well with the Moral Center. So your brain has another module, sometimes called the Press Secretary, which comes up with a good enough sounding reason for doing the terrible thing that the Moral Center, which isn't very smart, will accept. So you do the terrible thing, reap the rewards, and feel good about your superior morals.

And that's how at least one kind of Motivated Reasoning works.

¹ https://amazon.com/Elephant-Brain-Hidden-Motives-Everyday/dp/0197551955/


You might be interested in a paper by Firestone and Scholl (2016), "Cognition does not affect perception: Evaluating the evidence for 'top-down' effects". One of their arguments is similar to yours: perception is essentially epistemic, and so it would be really, really Bad Design if cognition were able to influence it in any substantive way.


But the visual cortex doesn't learn to see lions, it learns to *see*. And learning to see -- that is, learning to see *everything*, trees, lions, refrigerators -- could, quite plausibly, be achieved by hedonic reinforcement. Which would come not from the emotional valence of the objects in the visual field, but from the satisfaction of building a good working model of the correspondence between the visual field and haptic space.


You might find this article interesting https://onlinelibrary.wiley.com/doi/10.1111/japp.12577


I think you might be understating the importance of social relations to this question. None of these calculations are made at the individual level. IOW I can't wonder "is the blur a lion?" without wondering, probably below the level of awareness, what other people think about the blur. If I yell "lion!" and it's not, will people make fun of me and cancel my invite to the reindeer games? Or maybe it's a lion but I know all self-respecting members of our party know that the Great Leader eliminated the lion threat last year. Is it worth yelling "lion!" if I then lose all my friends and allies? (Sure, if the lion is running toward me, but otherwise, maybe not.) This is the root idea of Dan Kahan's work on this, of course: That people value good relations with others in their group far more than they value epistemic accuracy. So there may be a good deal more simple reinforcement happening here, on the order of "say there is no lion"-->all my friends cheer and all my enemies scowl-->feel good.


You seem to be crediting Roko Mijic with the term “ugh field”, but I believe the term predates him.

Roko says in that article:

> Credit for this idea goes to Anna Salamon and Jennifer Rodriguez-Müller. Upvotes go to me, as I wrote the darn article

This also aligns with my having first heard the term at a CFAR workshop about a decade ago.
