335 Comments
Comment deleted
author

I claim that you've made a dozen bad life decisions relating to this in the past week, and that several forms of therapy are built on top of it (though they wouldn't frame it in those terms).

Comment deleted
author

The ugh fields post linked above has a lot of examples of bad life decisions. In terms of therapy, more speculative, and I'll write more on it later, but I think a lot of blocks in therapy relate to something like "It is easier and more fun to come up with a theory of why I shouldn't have to feel bad about this problem, than to solve it", which I think is this same idea of reward operating on epistemics instead of action.

That is, if there's a problem (you are poor), and your brain can solve it by either coming up with some kind of theory in which it's rewarding (rich people suck, they've all sold out, my poverty actually makes me a good person) or by working hard to get more money, and your brain is indifferent between getting reward by epistemic changes or by real-world changes, making the real-world changes is going to be a really tough sell. My guess is it's much more complicated than this, but I do think this is part of the dynamic.
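
A toy sketch of that indifference (invented numbers, not a claim about real reward circuitry): if the same reward signal can be satisfied either by changing beliefs or by changing the world, a naive reward maximizer picks whichever is cheaper.

```python
options = {
    # name: (effort cost, felt reward once done) -- all numbers are made up
    "rationalize_poverty": (1.0, 5.0),  # "rich people suck" -- cheap, feels fine
    "earn_more_money":     (8.0, 6.0),  # real-world fix -- costly, feels a bit better
}

def net_reward(name):
    effort, reward = options[name]
    return reward - effort

print({name: net_reward(name) for name in options})
print("naive reward maximizer picks:", max(options, key=net_reward))
# -> rationalize_poverty, unless the epistemic route is made less rewarding
#    or the real-world route less costly
```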

Comment deleted (Feb 1, 2022 · edited)

Ok, but you didn't say it was "untethered from the physical nuts and bolts of MY daily life", you said it was from "daily life". A reasonable reading of this is that you think it doesn't apply to the daily life of any significant number of people.


Can people really stop feeling something because they have a theory? Or is your model really that people *cannot* stop feeling bad and that's *why* they have to solve the real problem?


Regarding procrastination and ugh fields, here's an interesting and relevant question: Do animals procrastinate? I don't know the answer, but it certainly seems quite possible to study it. For instance, squirrels gathering nuts for winter: Are they at all like students studying, i.e. gathering information, for an exam? Do squirrels, like students, often laze around till the deadline is near, then put out a burst of effort that exhausts them and sometimes does not permit gathering of a sufficient store of the resource in question? Maybe squirrels' behavior with nuts isn't the best thing to study, since the nuts don't land on the ground at a steady rate all fall but instead all drop over a short span of time. But there have to be some situations where animals really do have the option to procrastinate. And if there aren't, we could create one artificially in the lab with rats.

Anyone here know anything about this?


We all know that grasshoppers procrastinate and ants don't.


> The ugh fields post linked above has a lot of examples of bad life decisions. In terms of therapy, more speculative, and I'll write more on it later

If you want to do a post on ugh fields I would greatly appreciate it.


This is the kind of comment that makes me really curious about what a deleted post used to say.


It was just a cleverly worded cheap shot, inaccurately implying this discussion doesn’t have practical value.

founding

I thought your original comment was a little hostile, but I really appreciated your follow-up comments to me (that you've since deleted) and also wanted to reply that I'm sympathetic to what you described as part of your impulse to comment, tho I think having discussions like those in this post is a perfectly fine hobby to have :)


To remove any mystery here, I’ll say that part of it was a feeling of “Don’t these fellas have any real problems?”


I don't think it was a cheap shot. I think your underlying point was important and sound, and it resonated with me. It *is* a little worrying when the best and brightest young folks are seriously debating how many angels can dance on the head of a pin *and* seem at least somewhat unaware that the question is a priori frivolous. One can't help thinking someone ought to say something. Recall the wry popular quote: "If everybody contemplates the infinite instead of fixing the drains, many of us will die of cholera." There's a lot of fixing the drains stuff that still needs to get done in 2022.

If I had to guess, I'd say you regretted that you expressed it in a tone that came closer to contempt than bemusement, which unnecessarily antagonizes and derails your point. Fair enough, but I could wish you had rephrased it rather than eliminated it.


The tone was way off base. I don’t want to ruin an apology with an explanation.


Cleverly worded cheap shots are deletable? I assume that includes sardonic, obscenity-free ad hominem. What a world we live in where judiciously phrased, non-vulgar insults are considered outside the bounds of reasoned discussion. We are all snowflakes now, aren't we?


I’ve been trying to eliminate snark from my conversations. Serves no purpose if you’re seriously trying to communicate.

founding

Gunflint deleted their own comment. I didn't consider it "outside the bounds of reasoned discussion" – I, and several others, directly discussed their comment!


Ok, great. Sorry for assuming.

Feb 1, 2022·edited Feb 1, 2022

I'm pretty sensitive to the price of pasta, potatoes, flour, and rice. I'm somewhat aware of the price of meat. But I don't live on milk. It is something I buy rather sporadically, if I get a craving for something I don't eat regularly - rice pudding, or cold cereal.

I suspect that milk consumption in the US is excessive due to PR from the dairy industry driven by...motivated reasoning.

Anyway, I was wondering if Scott has talked to anyone who is diagnosed with schizophrenia recently, and what he's observed or assumes to be their relationship with "motivated reasoning". Sub-question is, do people with schizophrenia go to therapy?

Comment deleted (Feb 1, 2022 · edited)

Something I found interesting as a difference in perspective between generations that may get lost in time: I've heard/read people scorn skim milk as unpleasant, watery, and blue colored. At the same time, products of a family farm are idealized as wholesome, rich, organic, unprocessed. But someone I knew who grew up on a farm in the 1940s didn't like whole milk; he considered it greasy and distasteful. Why? Conditioning. On the farm, the fat was always skimmed off, otherwise you wouldn't have butter and cream.


Possibly they didn’t like it because it wasn’t homogenized. Whole milk is actually worse to drink when the fat floats in greasy globs instead of being smoothly mixed in.


I haven't ever seen non-homogenized milk in a grocery store.


Me neither. If he lived on a farm he may have drunk milk from his own cows, and that milk wouldn’t necessarily be homogenized. That’s just what comes to mind when I hear milk described as “greasy.”


You can buy non-homogenised milk as a specialty item in a few shops. I've had it.


It's probably got a lot to do with what you're accustomed to. Skim milk is more watery than whole milk, but which you actually like better is subjective. If you're used to a certain level of thickness, different levels are probably going to taste wrong.

(On a related note, sugar-sweetened soda gives me heartburn but diet soda doesn't. After drinking only diet soda for a while, regular soda now tastes too syrupy for me.)


Certainly from the few people I know with schizophrenia, many of them go to therapy - there's a lot of useful technique to be learned re: how to calm down enough to not act on the more dangerous of the unshared experiences, how to survive the anhedonia side effects of the antipsychotics, etc.


>how to calm down enough to not act on the more dangerous of the unshared experiences

I think this advice could be harmful to anyone who takes it at face value. There's an inherent runaway positive feedback loop. If a consumer discloses that they are concerned about anything, then they have conceded there might be a danger, that they aren't sure of the level, and irrevocably put judgement of it in the hands of others who cannot read minds. Sharing with someone who has professional responsibilities to take action, on balance, might be worse even than talking to a lay friend.

I can't imagine any way to fix the dichotomy between suffering alone from an illness and overreaction/misunderstanding other than coming up with an objective way to determine states of mind based on biological markers. If there is one thing I would like to see in my lifetime it's something analogous to blood sugar and A1C tests, only for depression, suicidality, aggression, psychosis, etc.

Obviously it could be used in harmful ways and many (especially paranoid) people would be fearful of a neo-eugenicist movement, but I think one has to come to terms with how harmful and intractable it is to *not* have any objective measurement to base life and death decisions on.

And I do not believe that good treatments can be developed for anything that does not even have a real definition based on measurement.

founding

Doing one's taxes is "untethered from the physical nuts and bolts of daily life"?

Comment deleted (Feb 1, 2022 · edited)
founding

You did qualify that your comment was "probably unfair" but I guess I'm still confused why you bothered commenting at all.

Comment deleted (Feb 1, 2022 · edited)

What is the connection between locksmithing and going toward strangers in the dark?


I think he's saying locksmiths had to approach strangers in the dark (who were maybe locked out of their car, or are maybe hoping to rob him) and this is a scary action

Feb 1, 2022·edited Feb 2, 2022

In a nutshell, for me at the very least, the Catholic faith solves this problem. From a skeptical outside perspective, ignore the deistic epistemology entirely and focus on the phenomenology.

One benefit of faith is that it makes your utility function non-monotonic and flexible. It does this by putting a presumably conscious entity at the peak of the hierarchy of values (think virtue ethics here). So you get an ultimate determiner of value that is pretty rigid but can decide between competing values. If Christ isn't this for you, well, I enjoy sharing this experience with 1 billion + people

¯\(°_o)/¯

24 hour later edit:

A few responders to this post seem to be operating on an assumption that no one is capable of being very intelligent, well-informed, intellectually honest, and religious simultaneously. While I don't agree with this perspective, even though it may be a strawman, I suspect many of the benefits of this way of thinking could apply to e.g. HPMOR fans who want to consider the actions of their favorite character rather than the deity I choose to follow. I think good fiction is invaluable: it allows for an idealized and situationally unique perspective, to which one might consider themselves an "apprentice" in a way that they would balk at doing for a real person. I think holding yourself to a higher ideal is generally a great thing, even if it's not mine (and from where I'm sitting, if my assessment is correct, an honest and courageous attempt at this will have the same ultimate endpoint one way or another).

(PS I have cried more than once at HPMOR Harry and his interactions with dementors and phoenixes; it's better than the original IMO)


Can you say more here? Having trouble relating this to the post.

Feb 1, 2022·edited Feb 1, 2022

I can say more, but I did preface it by saying nutshell. I'll try some additional bullet points for now and if more clarity is needed I can provide it later.

- utility functions that aren't monotonic are susceptible to "Dutch booking", i.e. a form of exploiting cycles in values (see the money-pump sketch below)

- having a rigid hierarchy of values is basically deontology, which can lead you to ratting out a friend in a Kantian murder mystery because you aren't clever enough to be evasive rather than lie; if lying is always wrong, and saving your friend is usually right given the circumstances, this can lead to ineffective consumption of cognitive resources on intractable problems (also a critique of much utilitarianism)

- if you assume virtue ethics, then you have the company of roughly a billion people also asking the question "what would Jesus do?"
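
A minimal money-pump sketch of the "exploiting cycles in values" point; the goods, the preference cycle, and the trading fee are all invented for illustration:

```python
# An agent with cyclic preferences A > B > C > A will pay a small fee for each
# "upgrade" and can be walked around the cycle indefinitely.
prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y
fee = 0.01

holding, wealth = "C", 10.0
for offer in ["B", "A", "C"] * 3:    # offer upgrades around the cycle three times
    if (offer, holding) in prefers:  # the agent strictly prefers each offer
        holding, wealth = offer, wealth - fee
print(holding, round(wealth, 2))     # back to "C", but poorer on every lap
```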


I mean if being tractable is the only thing you care about, why not just base your ethics around something simple, like maximizing the amount of hydrogen in the universe or increasing the amount of entropy? Most people care more about their ethics actually being right than they do about them being easy to follow.


Right, this is why I asked to ignore deistic epistemology. If you want Catholic apologetics I'm not your man, I'll just say my personal process of developing my faith has included a lot of wrestling with God which is also an ongoing process at times.


I agree with Paul above, and I don't see how your comment addresses his point. Also, your initial argument sounded something like "Catholicism simplifies ethics", but this comment sounds like even that might not be the case.


Tractable perhaps, but Catholic ethics is anything but a simplification of the problem. Tbh, if you think that, I question whether you're making a good-faith effort to understand my position.

As to bringing it back to the original point, I suspect having an overarching moral/ethical/ideological system is an enormous help with motivational and decisional paralysis, which to me does seem to address the gist of the problem with "motivated reasoning" and reinforcement learners.


Well, if you are in a reflective equilibrium such that you are indifferent between choosing a utility function that maximizes H2 and one that maximizes human happiness, then you're probably better off going with the first. The problem is that that is not where we find ourselves: we prefer happiness to H2, and there is nothing we really can, not to mention should, do about it. Whereas having a simple, tractable utility function (doing what God says maximizes utility) that still seems to conform to human values (God says to do things I generally agree are good) would, initially, seem to solve the problem quite neatly, by being both tractable and ethical.

Of course, there are plenty of other objections to that paradigm (“how do I actually know what God wants me to do here? If God’s desires are isomorphic to what I already think of as good, why is this elaborate utility function necessary, and if they aren’t, then doesn’t that create more problems?”). But I think this particular objection is answerable.

Feb 2, 2022·edited Feb 2, 2022

What's the answer then? I'm not asking to be tedious. I don't see a good answer.


My point wasn’t to provide a good answer, but merely to explain why this objection didn’t make sense. I think if you work it out, the claim that “Christianity (or any deistic religion with certain qualities) solves certain self-referential problems in ethics and motivation” could work, but you would have to do far more work to get there than OP did.


> having a rigid hierarchy of values is basically deontology, which can lead you to ratting out a friend in a Kantian murder mystery because you aren't clever enough to be evasive rather than lie; if lying is always wrong, and saving your friend is usually right given the circumstances, this can lead to ineffective consumption of cognitive resources on intractable problems (also a critique of much utilitarianism)

I think you need to be clearer about what rigid means. Utilitarianism doesn't approximate deontology so long as utility is finite, because a sufficient number of specks has more negative utility than torture.


The problem of hell and comparing infinities is something I wrestled with for a long time, and it has led me into some difficulties concerning heresy. I don't think there's a consensus on the topic even among theologians far more interested, expert, and intelligent than an amateur like me, and I defer to them.


I'd be quite interested in hearing your thoughts about the problem of hell.

When I was a Christian, I simply didn't believe that hell existed. It just felt completely against my personal experience of the divine. The Benevolent God, whose existence I directly felt and saw, would never have created such a place. It contradicted everything on multiple levels. So I just never even thought about it as a philosophical problem.

Later I understood that all my religious experiences were not my communication with God, but a glimpse into my own values and ethics. And this allowed me to think about the problem of hell and be really terrified. Not of the cruelty of God, of course, but of people who simultaneously claim that their ethics comes from their religion and that somebody being eternally tortured for a finite amount of evildoing is part of this religion. If one's ethics can accept hell, it can justify literally anything.

How do you deal with the existence of hell? This is a mode of thinking that has never been possible for me, so I'm very curious.


I spent the last few years as agnostic and anti-religion and recently converted to Christianity. One of my big gripes with Christianity was the idea of hell until I came to the conclusion that popular understanding of heaven and hell doesn’t reflect what’s actually spelled out in scripture.

The way I understand it now is that God is Love and Hell isn't a place, it is the absence of love. God doesn't send anyone to hell but we instead choose Love or its absence.

Closest analogy I can come up with: you’re dating the perfect significant other. You lie, apologize, they forgive you. You cheat, apologize, they forgive you. But you continue cheating and lose the remorse. They call off the wedding and you are stuck for the rest of your life with the agony of losing the perfect spouse. That is Hell.

Important to note that this also means a deeply Loving/selfless Hindu, Zoroastrian, Atheist, etc already knows God and will spend eternity with God


I don't really have any original or personal thoughts on the topic. What I'll say is, for most things I seem to side with Thomists, but on this question I do "dare to hope", since it's the most legitimate seeming option that's available to me.

https://www.youtube.com/watch?v=dmsa0sg4Od4


I'm going to post this here and rely on some security through obscurity with my Google profile; I will come back and delete it once someone comments reminding me to do so (please do this if you've got the link saved). I don't want to share early drafts that I haven't at least presented to clergy, but this is my most sincere answer to how I (currently and after attempting to study the issues, but without any claim to expertise) deal with squaring my faith with my understanding of the world.

https://docs.google.com/document/d/1ItOgM-zI5Ybtp_4TTg4xB23BdNryQhIF-z5hUlX4MxE/edit?usp=drivesdk


> a sufficient number of specks has more negative utility than torture.

Does the majority of utilitarians actually agree with this? I seem to recall that there was plenty of argument on LW on this, and no clear consensus emerged.


I wonder if this also ties in to self-identity. I remember reading about if kids are praised for being smart, they can start to self-identify as smart and so become reluctant to try new things (since they will initially be bad at them, which clashes with their self-identity). Maybe that was bunk but I think I can see it a little bit in my own life.

If we start to self-identify as being right, or smart, it might make it more painful to update our beliefs because being wrong in the past could clash with that identity. I could see the same thing happening with virtue too (although I've read another Christian say the view is more that everyone is a sinner, which if practiced might prevent that).

For now, I try to think of the truth as a model. The world is too complex to totally understand, but we imagine simplified models in order to navigate it and find patterns. It would be unusual to be upset at finding a better layout for a model train set, since in that context the goal is usually to *have* the best model rather than to *be the best at building models*.


I like this train of thought. One thing I've found helpful is to not only think, what would Jesus do? But also, what would my "saint self" do. This approach has pros and cons, but fortunately the Catholic approach makes both methods isomorphic.


Adding another 2c to this:

I once heard someone (I think Leah Libresco, but I'm not highly confident in this attribution) describe the "communion of saints" as an attempt to provide cross-temporal Bayesian agreement on theological questions. I really like this framing, as it justifies the variety and sometimes imperfect nature of recognized saints. Few if any saints were virtuous their entire lives, and some of the most notable were quite sinful (a great example here is the apostle Paul, who prior to his conversion persecuted Christians).

Another great benefit, imo, is that Christ was incarnate in a particular time and place, which means that though his example was indeed perfect, it has limited utility in informing people in other contexts. So saints, while not perfect, at least by the end led sufficiently holy lives for the Church to confidently state that they're in heaven, which gives a living faith a great set of examples.


The "everyone is a sinner" thing is really good for solving that sort of problem. My own experience with Christianity was that it solved a lot of circular reasoning.

Suppose I'm depressed. I could solve the problems that are exacerbating that depression, but then I'd have to think hard about depressing things, which I obviously don't want to do. The actual truth is that, because of the aforementioned debilitating depression, at this moment I do indeed suck and am doing bad things with my life. But if everyone's a sinner and Jesus loves me anyways, then I have motivation to examine my life with clear eyes. No matter what I find, it won't decrease Christ's opinion of me (even though it will temporarily reduce my own opinion of myself). Then I can make the changes that allow me to love myself like Christ does.

Buddha's four noble truths (notably the one about "everything sucks all the time just cause") can achieve a similar end


I'm a little confused -- what does the problem have to do with ethics ? I suppose you could say that the fear of opening your tax book because you know you'll find taxes in it, could be labeled as "sloth", but it's a bit of a stretch.


When you write "and religious", my question is, what is "religious"? It's kind of a cliche that people supposedly like to say they're spiritual but not religious. I don't know if you are using that sense of "religious" or even what the cliche really means.

I'm not religious in the sense of belonging to any community, attending regular services, preferring one type of Christianity, and so on. That doesn't seem directly related to being intelligent or honest.

I was raised by people who I think were essentially atheists but probably would've abhorred the label. I think, but am not sure, that they came from their respective very religious backgrounds and were turned off by hypocrisy and bad behavior of religious people. I had no bar mitzvah or baptism or confirmation or anything. But I was given a Bible and I read a fair amount of it, plus later on stuff by C.S. Lewis, a biography of a saint, etc.

Recently, I've been talking to someone with a Catholic background, and it perplexes me that they seem to not be very familiar with the Bible, to believe in all sorts of new-agey things that seem to me in conflict with Catholicism, but they don't seem to be consciously hostile to it either.

When I started reading about the Inquisition, while I don't accept the *premises*, the theological debate over whether and which magic or astrology is compatible with Christianity makes some sense to me within a closed system. Normal people are oblivious or think of witch trials and/or Monty Python, though, right?

Conversely, contemporary magical beliefs like the law of attraction completely confuse me as to why someone would accept them, and not even see a conflict with traditional beliefs. But I don't think it's normal to analyze things like a Talmud scholar.

I guess where I'm meandering to is that religious (or Christian) can mean many things, and I'm doubtful that there is even an intersection between what it is logical for it to mean and what it is normal for it to mean.


I'd go with a pretty standard usage: attending church and confession with some regularity, and deferring to the hierarchy on spiritual matters.

Feb 5, 2022·edited Feb 5, 2022

There you go, that's exactly what I'm talking about. Sure, that is "standard", but from what I read it excludes most Catholics! And my anecdote is not inconsistent with that.

Also, I notice you don't even include "reading the Bible".

"Most Catholics worldwide disagree with church teachings on divorce, abortion and contraception and are split on whether women and married men should become priests, according to a large new poll released Sunday and commissioned by the U.S. Spanish-language network Univision."

https://www.washingtonpost.com/national/pope-francis-faces-church-divided-over-doctrine-global-poll-of-catholics-finds/2014/02/08/e90ecef4-8f89-11e3-b227-12a45d109e03_story.html


What if it would make your day great to see the lion because you’d run away and then feel like a hero?

author

I think this is the wrong level on which to engage with the example.


Is this the way to like scary movies? Not “I like to be scared,” but “I like the sequence of feeling scared and then realizing I’m totally safe/heroically safe at home”?


I definitely think part of the appeal of horror (whether film, game, or attraction) is the ability to feel fear in a controlled environment where no ACTUAL danger can happen to you*. Of course, people have their own personal tolerances for this, and some people have their risk-assessment feedback set to the point that even simulated peril is unacceptable.

*Provided nothing goes severely wrong


I've heard this a lot, but never with any evidence I find convincing. I think it might be the kind of thing that sounds nice so no one wants to reject it.

My experience with horror is more like spicy food. It's literally activating pain receptors, but so detached from pain, and so mixed up with a particular sensory context, that the ostensibly bad stimulus becomes more of a unique overtone creating a richer more textured flavor. (Yes, I know some people eat spicy food just to show off their pain tolerance, but I don't think many horror film junkies are like this)


Another interpretation is that humans are actually pretty bad at distinguishing emotions, and so, for example, heightened emotional states are equally confused. Being excited is not that different from being terrified.


See also: roller coasters


This could also be an example of reinforcement. The thinking is: 'You were in danger, and you did something to avoid harm, so you should feel pretty darn good about whatever it was. Here are some endorphins'.


It's similar to hot food. Your pain sensors get triggered, but nothing bad happens to you. But the pain triggers the fight-or-flight response, which makes you feel awake and fit.

So over time your brain associates this kind of pain with the fun feeling of being awake and fit, but no danger, so you start to like the pain.


The "bad day" example is meant as a hypothetical scenario in which a singular bad experience would worsen the lion detection circuitry if reinforcement learning was applied to it.


Right, but it hinges on the detection actually being negative, whereas noticing a danger before it hurts you is generally exciting and stimulating, a positive experience.


It's entirely hypothetical. For this example it does not matter what would likely happen, but what *could* happen. I'd agree that it is not a particularly good example.


I think in the lion-in-corner-of-eye example a lot of people would freeze, which also explains the tax behavior.

It makes sense from an evolutionary perspective, where if a predator doesn't spot you they'll eventually leave you alone, but the IRS doesn't work that way.


arguably the IRS does work that way, but they have more object permanence than your average predator


You made my day. That's one of the things I love about this community. Interesting, funny and (because?) so on-point


Tax advice: if you can’t see the IRS, the IRS can’t see you, so invest in an eyepatch and stop paying taxes.


You can probably avoid the IRS by playing dead. By that I mean, faking your own death and moving to Brazil.


+1 for the freeze hypothesis


The IRS is kind of weird. US law can subject you to criminal penalties for not submitting an honest tax return, but not for refusing to send the IRS money. However, if you *do* refuse to pay them, the IRS does get to take your stuff to get what it's owed plus interest and penalties.


My mailbox definitely projects an "Ugh Field" for me. I know I need to check it, but it only ever brings me junk mail or problems. So every time I think "I should check the mail" another part of me is thinking "Do I have time to solve problems right now? Do I want to? No and no."


Huh, this is an unrealized advantage of not having street delivery and instead only having a PO box. Lots of my packages (which I ordered and want/am looking forward to) go to the PO box, so the only way to get them is to pretty regularly check my mail.

This in no way outweighs the inconvenience of needing to drive 15 minutes whenever I have something I need/just to check the mail, but it _is_ an advantage I suppose.


I get my packages delivered to my office, which makes my mailbox even worse. Not even a chance of a cool package!


Not exactly applicable, but you made me think of this:

Napoleon had a policy of ignoring incoming letters for three weeks. Most things "requiring" his attention didn't actually need him to do anything. Thus, many things just resolved themselves by the time he got around to reading the letter.


I am writing this comment in large part to avoid looking at my inbox.


This reminds me very much of Plantinga's evolutionary argument against naturalism. https://en.wikipedia.org/wiki/Evolutionary_argument_against_naturalism#Plantinga's_1993_formulation_of_the_argument

In case that's helpful.


I find Plantinga's argument strange, in the sense that in most situations, having a belief closely aligned with the observed phenomenon seems clearly advantageous. To my knowledge, systematic discrepancies between belief and phenomenon correspond to relatively rare cases where both (1) it is difficult to determine what is true and (2) there is an imbalance between the costs of the different ways of being wrong (i.e. if you are not sure whether A or B is true, and wrongly believing A is much better than wrongly believing B, then believing A is better, even if B is slightly more probable).


The crux of the argument is that evolution should plausibly produce adaptive behaviors, but not truthful beliefs. It might be that truthful beliefs are adaptive, but not necessarily so, and evolution would only reinforce truthful beliefs that are adaptive but not ones that aren't. So if there are truthful beliefs that aren't adaptive, can we trust that our minds can find those truths? How can we be sure that truthful beliefs tend to be more adaptive than non-truthful beliefs when the thing we use to make that determination, our mind, is the very thing we're trying to determine the accuracy of? If the problem was that you were not sure whether a scientific instrument was reading accurately, you can't determine that by using the instrument in question alone.

Really the argument boils down to: if the human mind is the creation (regardless of method of creation, could still be evolution) of a rational mind trying to create more rational minds, then it makes sense that the human mind is rational. If the human mind is the product of blind forces that are only aimed at increasing fitness, then we can expect the human mind to be very fit for reproduction but have no reason to believe it is necessarily rational. But if the human mind is irrational, then we have no reason to believe it is the product of blind forces because that belief is also the product of an irrational mind.

It certainly seems that our minds are rational, in that we can understand concepts and think about them and come to true beliefs in a way that, say, a chicken can't. Given that data point (the human mind seems able to come to true beliefs), the "designed to be able to do that" model fits more parsimoniously than the "maybe if you select only for fitness you'll get rational minds in some cases" model. It's not exactly a knock-down argument, and you can certainly disagree with it rationally.


I have a vague idea of an argument for why intelligence isn't adaptive past a certain point. Intelligent people are afraid of small probabilities and unknown unknowns. But these are where intellect is least useful, and analysis is most sensitive to assumptions. People who aren't as intelligent and aren't as imaginative, are more likely to just do stuff until they are stopped or killed. That could be systematically better for a population even if not for an individual. Because it explores phase space that is inaccessible to anyone who must *understand* what they are doing to do it.

Suppose there are 1,000 people facing some threat to all of them, and they can each do something with a 1% chance of success and a 99% chance of death. If they're all intelligent, then they won't do the thing until it becomes clear that the alternative is certain death. At which point it might be too late. But if they're fearless and oblivious, and all do the thing, then 10 people will survive, and carry on their risk-taking genes.
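
Just to make the arithmetic of that hypothetical explicit (the numbers are the invented ones above):

```python
# 1,000 individuals; each attempt succeeds with p = 0.01 and is fatal otherwise.
n, p = 1000, 0.01
expected_survivors = n * p
print(expected_survivors)  # 10.0
# The gamble looks terrible to any single cautious individual (99% death),
# but at the population level roughly ten risk-takers survive to carry on
# their genes -- provided the alternative really is that everyone dies.
```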

Someone smarter than me could put it more rigorously, but intuitively I feel that analytic pessimists do not take optimal amounts of risk from an evolutionary perspective.

Empirically, evolution isn't producing runaway intelligence, so there must be some logical reason it doesn't increase fitness, right?


This assumes a risk distribution, not too unreasonably, like the one in which we evolved, where it is easy for me to get myself killed, possible for me to get my entire band wiped out, but nearly impossible for me to wipe out all humans or all life. We have changed our environment.


It's still easy enough to get oneself killed, and easier to wipe out humanity, than thousands of years ago. What the modern world has reduced, I think, is the effect of natural disasters that aren't global. And it's increased the individual benefit of being analytical, but, I think, only to a point.


“ easier to wipe out humanity” was my point, though maybe not much of one.

Evolution is slow. Why isn’t intelligence more culturally adaptive? Or maybe it is, and we’re not noticing?


It could also be the case that intelligence does increase fitness but that the cost of a bigger brain outweighs that fitness increase. That would be analogous to the explanation I'd use for why men don't have bigger muscles and why women don't have bigger breasts, despite the obvious fitness advantages.


After thinking about this, I don't understand either:

(1) Why it's "obvious" that those things would be better, or;

(2) How decomposing a reduction in fitness into a small increase and a bigger decrease is meaningful rather than an arbitrary choice applicable to everything.


I can't see any other explanation than sexual selection for the existence of D cups, and I can't see any other explanation than natural selection for the persistence of A cups despite this sexual selection, and to me these things both seem obvious.

If some variation affects fitness through multiple mechanisms, I think that understanding those mechanisms separately will give a better understanding of that variation. Experimentally, those mechanisms can be manipulated separately.


Using a heuristic that seeks truth without trying to filter on adaptivity might be more adaptive in the long run. It’s hard enough trying to figure out what's true. Is figuring out what is adaptive easier?


"Is figuring out what is adaptive easier?" It is very easy. All living organisms do that very efficiently by dying when badly adapted.


I thought that properly, something being "adaptive" means that it spreads through a population over time. Leading to death or preventing biological reproduction doesn't necessarily prevent a progression of changing statistics. I like that way of defining evolutionary fitness because it avoids mixing in human values and remains abstract and general.


I was answering the part about "figuring out" what is adaptive, which I thought was misleading, because organisms do not "figure it out"; that, of course, does not in the least prevent adaptation from happening.

And yes, of course, adaptation optimizes the number of produced copies of genes, and not whether the individual bearing these genes dies or not, though the two are usually quite strongly related!


"The crux of the argument is that evolution should plausibly produce adaptive behaviors, but not truthful beliefs."

I completely agree with the argument, but I think it has little consequence, in the sense that I expect evolution has produced "reasonably" true beliefs in most cases (and no beliefs at all in many situations in our modern lives!) because true beliefs are generally adaptive.

There are many examples where evolution has produced beliefs that are not perfectly true, because when there is uncertainty it is often better to be wrong in one direction (e.g., predator detection), but I know of no examples of "very false" beliefs constructed by evolution.


"I know of no examples of "very false" beliefs constructed by evolution."

If there were such, they would have to be well insulated from analysis somehow, thus, one would not expect to be aware.


I don't think so. We have many "slightly false" beliefs that are not insulated from analysis at all. For example, evolution gave us the belief that spiders are dangerous, but we are very well able to determine that this is false for many spider species. The beliefs produced by evolution are expectations or emotions; it is perfectly possible to analyse them (though it can of course be difficult to act rationally even when we are conscious that a given belief is false).


I was assuming there's an important semantic distinction between "slightly" and "very". What does "very false" mean, as opposed to "slightly false"?

Feb 2, 2022·edited Feb 2, 2022

Yes, "a wizard did it" is a very parsimonious explanation, to a certain type of person.


> It might be that truthful beliefs are adaptive, but not necessarily so, and evolution would only reinforce truthful beliefs that are adaptive but not ones that aren't.

I've seen this claim often, but it's never accompanied by a plausible example that would make it convincing. It's basically asserting that there's no known reason why evolved adaptations would correspond to truth, and thus concludes that evolved adaptations may in fact imply false beliefs.

Not knowing the reason does not entail such a reason doesn't exist, so that argument just doesn't follow though. It's a premature conclusion at best.


The crux of the argument is that divine creation should plausibly produce behaviors that are in line with God's will, but not truthful beliefs. It might be that truthful beliefs are in line with God's will, but not necessarily so, and divine creation would only reinforce truthful beliefs that are in line with God's will but not ones that aren't. So if there are truthful beliefs that aren't in line with God's will, can we trust that our minds can find those truths? How can we be sure that truthful beliefs tend to be more in line with God's will than non-truthful beliefs when the thing we use to make that determination, our mind, is the very thing we're trying to determine the accuracy of? If the problem was that you were not sure whether a scientific instrument was reading accurately, you can't determine that by using the instrument in question alone.

Really the argument boils down to: if the human mind is the result of an evolutionary process which selects for rationality, then it makes sense that the human mind is rational. If the human mind is the product of intelligent design carrying out some inscrutable divine plan, then we can expect the human mind to be very fit for carrying out God's plan but have no reason to believe it is necessarily rational. But if the human mind is irrational, then we have no reason to believe it is the product of intelligent design because that belief is also the product of an irrational mind.

It certainly seems that our minds are rational, in that we can understand concepts and think about them and come to true beliefs in a way that, say, a chicken can't. Given that data point (the human mind seems able to come to true beliefs), the "evolved to be able to do that" model fits more parsimoniously than the "maybe God wills there to be rational minds in some cases" model. It's not exactly a knock-down argument, and you can certainly disagree with it rationally.


This is actually a pretty good counterargument I haven't heard of before. At least in the case of evolution you have good reason to believe that reproductive ability correlates with truthfinding. There is no reason to believe an omnipotent entity wants to create rational minds.


The main objection, is that it is at least plausible that a rational mind could be designed by a rational mind: it is harder to see how a rational mind could come about through irrational processes that are not aimed at rationality in any case. You can certainly object that a rational mind might design an irrational mind if they wanted to, but at least it isn't mysterious where the rationality came from if they designed a rational mind.


"My preferred theory is plausible and yours isn't, because I say so" isn't an argument.


I never said that. I said one side was at least plausible on its face, but the other side is a bit harder to see if it is plausible. Do you disagree that a rational mind designing another rational mind is plausible? Do you disagree that it's more obviously plausible than a non-rational process that isn't aimed at producing rationality producing one?


As for the rules portion of his argument, I think Plantinga's argument makes more sense as a counter-argument against 1970s-era rules-based AI. Which is fair enough, since that kind of AI is no longer seen as an effective tool with which to model human behavior anyways.

I think Plantinga is a bit binary in terms of labeling things 'rational' vs. 'irrational.'


Don't put too much by my summary: I don't think he'd necessarily use the labels "rational" and "irrational" in his formal argument, I just grabbed them as easy words to use in a short explanation of the argument. Obviously what it means to be "rational" in this sense requires a lot of defining of terms.


Yes, yes! I totally agree! I think that is the usual mistake that mathematician/philosophers make when dealing with living things: they consider that "very slightly false" is identical to false, whereas in biology, "truthfulness" is better considered quantitatively, with "very slightly false" in practice very similar to "correct".


Yeah, in his actual argument, Plantinga deliberately avoids the binaries of 'true', 'false', 'rational' and 'irrational'. He talks about a type of belief he calls a "defeater": a belief which makes other beliefs seem less likely, but in some other way than by directly disproving them.


It's always difficult to determine truth if truth means correspondence to reality. We can check predictiveness and usefulness but not correspondence.


I go with structural realism : predictiveness and usefulness probably imply some kind of correspondence.

https://plato.stanford.edu/entries/structural-realism/


I'm thinking something like: This post takes first steps towards some kind of taxonomy of neurological skepticism. The Plantinga argument relies on (something like) reinforcement learning governing the whole brain. But if this post is right, that's not the case; the lion country plan part is, but the visual cortex isn't. So you could insulate naturalism from Plantingean skepticism if you could securely place it in the kind of thinking not governed by reinforcement learning.


Which I suppose is the whole point of the scientific method: to insulate the truth-deriving system from anybody's personal hedonic reinforcement.


For social creatures who evolved in small groups, challenging the group consensus could create a sort of negative hedonic reinforcement learning. There's little to be gained from sharing your opinion, even if it's true, if everyone else is going to get upset at you for saying it. So, best not think too hard about whether what your community thinks is true is actually true, and just go with the flow.

Feb 2, 2022·edited Feb 2, 2022

That would certainly explain a lot.

The Emperor's New Clothes being the canonical example.


I try not to fall too hard into the cliche of "constantly explain complex brain functionality with simple comparisons to something from my field", but with the field in question being machine learning, it's impossible to resist these comparisons, especially as of the last few years. Too many subsets of our functionality seem so analogous to the way many artificial neural networks work!

This example also gets more interesting if we add other fundamental ML concepts in, e.g. learning rates (what magnitude of events correspond to what kinds of belief updates? are there some areas gradients are passed to more strongly than others, and if so what changes and modulates this?), weight freezing (at some point we have to learn to recognize basic objects and patterns - at what point(s) is this most adaptable, and which parts of it are immutable as an adult?), and of course backprop and other hyper-parameters, which are already interesting enough to contrast on their own. This also reminds me of a deepmind paper I saw yesterday https://deepmind.com/blog/article/grid-cells in which they construct an ANN similar to grid cells https://en.wikipedia.org/wiki/Grid_cell
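
To make the learning-rate and weight-freezing analogies concrete, here is a minimal sketch (assuming PyTorch; the layer sizes and the choice of what to freeze are arbitrary stand-ins, not a model of any actual brain region):

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),  # stand-in for early, "learned in childhood" processing
    nn.Linear(32, 2),              # stand-in for later, still-plastic decision-making
)

# "Weight freezing": gradients no longer update the early layer.
for param in model[0].parameters():
    param.requires_grad = False

# "Learning rate": how far each update moves whatever is still trainable.
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)

x, target = torch.randn(8, 16), torch.randint(0, 2, (8,))
loss = nn.CrossEntropyLoss()(model(x), target)
loss.backward()
optimizer.step()  # only the unfrozen, later layer moves
```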


Can I ask a tangentially related question? How often does it become a practical concern that you have to teach your models to ignore some of the data?


There is a relevant concept of "catastrophic forgetting" when fine-tuning a model where it is a serious concern that training on recent data may "unlearn" knowledge gained from earlier data, losing the capacity for generalization.

Another very different but also relevant concept is the idea of "negative class" for classifiers where e.g. if you would train an image classifier to distinguish between different types of butterflies, you might also want to include a class of random non-butterfly images to teach the classifier what images should be ignored.
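
A toy illustration of the catastrophic-forgetting point (a made-up one-dimensional problem in plain numpy, not a real fine-tuning pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

def sgd_fit(w, X, y, lr=0.5, steps=200):
    # online logistic regression with no replay of earlier data
    for _ in range(steps):
        i = rng.integers(len(X))
        p = 1 / (1 + np.exp(-(X[i] @ w)))
        w = w + lr * (y[i] - p) * X[i]
    return w

def accuracy(w, X, y):
    return np.mean(((X @ w) > 0) == y)

# Task A: positives live at x > 0. Task B: positives live at x < 0.
X_a = rng.normal(size=(200, 1)); y_a = (X_a[:, 0] > 0).astype(float)
X_b = rng.normal(size=(200, 1)); y_b = (X_b[:, 0] < 0).astype(float)

w = np.zeros(1)
w = sgd_fit(w, X_a, y_a)
print("task A accuracy after training on A:", accuracy(w, X_a, y_a))  # high
w = sgd_fit(w, X_b, y_b)
print("task A accuracy after training on B:", accuracy(w, X_a, y_a))  # collapses
```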


I think that the decreasing learning rate concept is strongly related to the explore vs exploit tradeoff which has a solid basis in game theory (multi-armed bandit model) and applied in reinforcement learning, and also observed in biology ("you can't teach an old dog new tricks") and changes in human behavior as we age.

I mean, a decreasing learning rate is adaptive (according to game theory) so we should expect to see that in evolved systems, and we do.
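
A minimal epsilon-greedy sketch of that explore/exploit story (the payout rates and decay schedule are made up; this is just the textbook bandit recipe):

```python
import random

payout = {"A": 0.3, "B": 0.6}            # true (hidden) reward probabilities
estimates = {"A": 0.0, "B": 0.0}
counts = {"A": 0, "B": 0}

for t in range(1, 2001):
    epsilon = 1.0 / t ** 0.5             # exploration shrinks as experience grows
    if random.random() < epsilon:
        arm = random.choice(["A", "B"])           # explore
    else:
        arm = max(estimates, key=estimates.get)   # exploit current best guess
    reward = 1.0 if random.random() < payout[arm] else 0.0
    counts[arm] += 1
    # incremental mean: the effective learning rate 1 / counts[arm] decays too
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates, counts)  # B ends up heavily favored, its estimate near 0.6
```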

Feb 1, 2022·edited Feb 1, 2022

I pretty much agree with everything you said.

One of 5 or so places in the brain that can get a dopamine burst when a bad thing happens (opposite of the usual) is closely tied to inferotemporal cortex (IT). I talked about it in "Example 2C" here - https://www.lesswrong.com/posts/jrewt3rLFiKWrKuyZ/big-picture-of-phasic-dopamine#Example_2C__Visual_attention Basically, as far as I can tell, IT is "making decisions" about what to attend to within the visual scene, and it's being rewarded NOT for "things are going well in life", but rather for "something scary or exciting is happening". So from IT's own narrow perspective, noticing the lion is very rewarding. (Amusingly, "noticing a lion" was the example in my blog post too!)

Turning to look at the lion is a type of "orienting reaction", I think. I'm not entirely sure of the details, but I think orienting reactions involve a network of brain regions one of which is IT. The superior colliculus (SC) is involved here too, and SC is ALSO not part of the "things are going well in life" RL system—in fact, SC is not even in the cortex at all, it's in the brainstem.

So yeah, basically, looking at the lion mostly "isn't reinforceable", or to the extent that it is "reinforceable", it's being reinforced by a different reward signal, one in which "scary" is good, as far as I understand right now.

Deciding to open an email, on the other hand, has basically nothing to do with IT or superior colliculus, but rather involves high-level decision-making (dorsolateral prefrontal cortex maybe?), and that brain region DOES get driven by the main "things are going well in life" reward signal.


Offering a clarification here: I don't believe that the IT cortex receives a dopamine burst when bad things happen. The paper you linked to in the Less Wrong post (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC38733/) correctly identifies that IT is an input to and output of the *basal ganglia*, which is a loop from cortex -> striatum -> nigra -> thalamus -> cortex. But that's not the same as saying that IT receives lots of dopamine input (it does not). So there isn't really a problem here in terms of dopamine training higher-order visual areas/rewarding when bad things happen. (The tail of the striatum, and potentially other aversive hotspots, is another question. Those regions tend to drive avoidance and not approach, so it's not really fair to say noticing the lion is rewarding in this case, despite phasic dopamine in these areas.)


Thanks! In my defense I didn't say that IT has a dopamine burst. What I said was, well, maybe it's a bit confusing out of a particular context. So here's the context.

There are a bunch of parallel cortico-basal ganglia-thalamocortical loops, throughout the cerebrum. I think for certain purposes, we should treat "one single loop" as a unit that is trained (by RL or supervised learning) to do one particular thing. See https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.379.6154&rep=rep1&type=pdf . In that paper, referring to their Fig. 5, they talk about "column-wise thinking" as the traditional way of thinking, where you say what does the striatum do? what does the cortex do? Etc. They are proposing an alternative, namely "row-wise thinking": What does this loop do? What does that loop do? And that's where this particular comment is coming from.

It turns out that IT and the tail of the caudate are looped up into the same cortico-basal ganglia-thalamocortical loops—IT is the cortex stop and tail of caudate is the striatum stop of the same loops. Therefore (in this way of thinking) if the tail of the caudate is getting aversive dopamine bursts, that's basically a mechanism for manipulating what IT will do.

You're still welcome to disagree with that perspective of course, but hopefully at least you understand it a bit now.


Yes, thank you for the clarification! Sorry if I misinterpreted your original comment. I'm a big fan of the "row-wise" thinking you espouse. However, I don't think there's good evidence either way for what effect DA has on the cortical sites to which BG loops return. Indeed, I don't think there's a good account for why these are loops at all! Certainly, the easiest way to think about it is that cortex is providing the "state" input to the RL algorithm, and DA is training the weights of corticostriatal synapses to compute value (or threat, in the case of the caudate tail). But there has been interesting work in RL asking how can we harness more information from RPE signals to train better state representations (see e.g. speculation in Dabney et al., Nature, 2020). Of course, in ML, you just backpropagate this information so it's no big deal. In the brain, backpropagation is tricky or impossible, so maybe the BG gets around this limitation by somehow forward-propagating the info all the way around the BG loop. But there isn't really a plausible account of how that would happen either. By the way, this is much more general than just IT cortex; primary visual and auditory cortex project to tail of striatum, motor and sensory cortex to dorsolateral striatum, etc. See Hintiryan et al., 2016 and Hunnicutt et al., 2016 for much more than you wanted to know.

tl;dr I think we don't yet know nearly enough to say what effect, if any, DA has on sensory cortices that provide input to the striatum.


Re. "IT is "making decisions" about what to attend to within the visual scene, and it's being rewarded NOT for "things are going well in life", but rather for "something scary or exciting is happening": I would be interested in anything ": Do you know of any brain systems which relate this to aesthetics (cast as preferences about what we attend to), curiosity, or the fun of problem-solving?

Feb 1, 2022·edited Feb 1, 2022

Not sold on the "visual-cortex-is-not-a-reinforcement-learner" conclusion. If the objective is to maximize total reward (the reinforcement learning objective), then surely having your day ruined by spotting a tiger is better than ignoring the tiger and having your day much more ruined by being eaten by said tiger. (i.e.: visual cortex is "clever" and has incurred some small cost now in order to save you a big cost). Total reward is the same reason humans will do any activities with delayed payoffs.
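
For what it's worth, the back-of-the-envelope version of that argument, with invented numbers and the usual discounted-return objective:

```python
gamma = 0.95               # discount factor
ruined_afternoon = -10.0   # cost of a day spent scared because you saw the tiger
eaten = -1000.0            # cost of not having seen it

return_if_noticed = ruined_afternoon
return_if_ignored = gamma * eaten      # the much worse outcome arrives one step later
print(return_if_noticed, return_if_ignored)  # -10.0 vs -950.0
```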

author

Sounds like a plausible experiment would be something like: have somebody show you pictures of circles. You report whether it looks like a circle or a square (be honest!) Each time you say circle, the person gives you an electric shock. See whether, after long enough, you start genuinely seeing squares.

My guess is this never happens - do you guess the opposite?

Expand full comment
Comment deleted
Expand full comment
author

I'm not sure why you're bringing up trolley problems. Phil and I have different scientific theories about how the brain works; it's hardly unfair to test them by experiment.

Expand full comment
deletedFeb 1, 2022·edited Feb 1, 2022
Comment deleted
Expand full comment

I'd sign up for it. I'm curious what would happen: I don't think I would start seeing squares, but it would be nice if that was confirmed.

Expand full comment

What if this reinforcement learning could only happen during a limited developmental window? Would you be willing to sign up your toddler?

Expand full comment
author

I just did the experiment on myself (using "hitting myself very hard" as a stand-in for electric shocks) and got the predicted result.

I don't think this was necessary (except in order to win this argument) though - I think it's a *good* thought experiment, in the sense that just clarifying what the experiment would be makes everyone agree on how it would turn out. Essentially I was asking Phil "How does your theory survive the fact that in this particular concrete situation we both agree X would happen?" It's *possible* he could say "I disagree that would be the result", but then I would have learned something interesting about his model!

Expand full comment
deletedFeb 2, 2022·edited Feb 2, 2022
Comment deleted
Expand full comment

That's the wrong timescale for your visual system to change. Changing so fast would have been unstable. A small learning rate doesn't mean the objective isn't a reinforcement-learning one.

Expand full comment

Trolley problems are actually extremely common and can be found almost everywhere - the law, the medical system, engineering. It's probably the most useful thought experiment of all time.

The thing everyone in the world discussed for two years straight ("should we lock down the economy to stop the virus?") was essentially a form of the problem.

> the overwhelming majority of cases, it's just that the person making the analogy wants what they want and presenting the situation as a trolley problem is an effective method by which to emotionally manipulate their audience into agreeing with them.

I don't agree. A feature of the problem is that there's *no* obviously correct answer. Ethical arguments can be advanced for either course of action.

Expand full comment
deletedFeb 2, 2022·edited Feb 2, 2022
Comment deleted
Expand full comment

I would totally sign up for it, as long as the shock was non-fatal and left no permanent damage. Color me abnormal, I guess.

Expand full comment

I take the unrealistic aspect of a trolley problem to be knowing exactly what happens given each choice, and only being in doubt about the ethical weight.

Expand full comment

I'm pretty sure this was an episode of Star Trek-- four or five lights?

Expand full comment

Itself based on O'Brien's torture of Smith in Nineteen Eighty-Four.

Expand full comment

Now I'm wondering whether and to which extent the famous basketball gorilla experiments would give different results in live action than on video.

http://theinvisiblegorilla.com/gorilla_experiment.html

Expand full comment

This reminds me of that Star Trek: The Next Generation episode where Picard is tortured for days by Cardassians, who are trying to get him to say the wrong number when asked "How many lights are there". At the end of the episode, he tells Riker that he actually saw the number they wanted him to say.

Expand full comment

I thought it's kind of mainstream science that evolution has 'programmed' us to see things that aren't there if it's good for us. For example, here's a quote from Why Buddhism is True:

"Suppose you’re hiking through what you know to be rattlesnake terrain, and suppose you know that only a year ago, someone hiking alone in this vicinity was bitten by a rattlesnake and died. Now suppose there’s a stirring in the brush next to your feet. This stirring doesn’t just give you a surge of fear; you feel the fear that there is a rattlesnake near you. In fact, as you turn quickly toward the disturbance and your fear reaches its apex, you may be so clearly envisioning a rattlesnake that, if the culprit turns out to be a lizard, there will be a fraction of a second when the lizard looks like a snake. This is an illusion in a literal sense: you actually believe there is something there that isn’t there; in fact, you actually “see” it.

These kinds of misperceptions are known as “false positives”; from natural selection’s point of view, they’re a feature, not a bug. Though your brief conviction that you’ve seen a rattlesnake may be wrong ninety-nine times out of a hundred, the conviction could be lifesaving the other one time in a hundred. And in natural selection’s calculus, being right 1 percent of the time in matters of life or death can be worth being wrong 99 percent of the time, even if in every one of those ninety-nine instances you’re briefly terrified."

Seeing a square when there's really a circle is a pretty extreme example (which I guess is why it got called a 'trolley problem'). But seeing a rattlesnake when there's a weird-shaped stick in the grass (and the penalties for false positives and false negatives are vastly different) seems pretty plausible to me.
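The quoted calculus is easy to make explicit. A back-of-the-envelope sketch (the cost numbers are placeholders I made up, not from the book):

```python
# All costs are placeholder numbers, not from the book.
p_snake = 0.01               # the stirring really is a rattlesnake 1 time in 100
cost_false_alarm = 1         # a moment of needless terror
cost_missed_snake = 10_000   # a fatal bite

# Policy A: briefly "see" a snake and jump back every time.
expected_cost_jumpy = (1 - p_snake) * cost_false_alarm + p_snake * 0

# Policy B: withhold the percept until you're sure.
expected_cost_calm = p_snake * cost_missed_snake

print(expected_cost_jumpy, expected_cost_calm)  # 0.99 vs 100.0: the jumpy policy wins
```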

Expand full comment

This is not an adequate comparison. "Genuinely seeing a square" would do nothing to change the electric shocks; *claiming* you saw one would, which is certainly what you would have learned to do.

I think Phil's disagreement comes rather from thinking on a different time scale. The total reward maximization model is correct, but it works in evolutionary time: a visual cortex that correctly identifies objects is "reinforced" by evolution because this maximizes total reward. This is precisely what gives rise to the *epistemic* architecture. On the lifetime of a person, however, the visual cortex does not do any reinforcement learning (at least after infancy).

Expand full comment

Speculations about microscopic processes a billion miles away are a complete waste of time in the long run. We have the parameters of the Standard Model that explains the properties of the atoms and molecules that enable the construction of the cells the visual cortex is filled with, yet we can't get our most powerful computers to accurately predict the behavior of a single molecule of H2O. Considering the fact that a single cell is built from hundreds or thousands of molecules far more complex, and that the behavior of everything at the core is a probabilistic quantum mechanical information process, it might be wiser to listen to those with knowledge about God the Creator and life's purpose than to scientists whose god is random mutation.

Expand full comment

Relevant webcomic:

https://i.imgur.com/kFqkPEb.jpeg

"That."

"That?"

"That is why we're screwed. That number. That number will doom us."

"I hate that number. Can we run away from it?"

"No. It's a number. It represents an immutable fact."

"Can we fight it?"

"No. <...> What?"

"Sorry. Evolving a threat response over a half-million years on the African savannah hasn't really left me with any good mechanisms for dealing with a threatening number."

"That is also why we're screwed."

Expand full comment

Betting money on things seems like one way to push your brain more toward the less-hedonically-reinforceable regime.

Expand full comment
Feb 1, 2022·edited Feb 2, 2022

I think all of the supposed discrepancies with modeling the brain as a hedonic reinforcement learning model can be explained with standard ML and economics.

If you do a lot of research on epistemic facts related to your political beliefs, the first order consequence is often that you spend hours doing mildly unpleasant reading, then your friends yell at you and call you a Nazi.

In the case of doing your taxes or the lion, that unpleasantness is modulated by the much larger unpleasantness of being sued by the IRS and/or eaten alive by a lion. So there's a normal tradeoff between costs (filing taxes is boring, seeing lions is scary) and benefits (not being sued or devoured).

But in the case of political beliefs, the costs are internalized (your friends hate you) and the benefits are diffuse (1 vote out of 160 million for different policy outcome). So it's no wonder that people aren't motivated to have a scout mindset.
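Roughly, as a back-of-the-envelope expected-value sketch (every number here is an assumption, purely for illustration):

```python
# Every number here is an assumption, purely for illustration.
electorate = 160_000_000
value_of_better_policy = 1_000_000_000   # total societal benefit if the better policy wins
p_my_vote_decides = 1 / electorate       # crude proxy for one vote's influence
my_share_of_benefit = value_of_better_policy / electorate

expected_gain_from_being_right = p_my_vote_decides * my_share_of_benefit
cost_of_friends_yelling_at_me = 100      # private, immediate, fully internalized

print(expected_gain_from_being_right)    # ~4e-8: effectively zero
print(cost_of_friends_yelling_at_me)     # dwarfs the epistemic payoff
```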

Expand full comment
Comment deleted
Expand full comment

People are pretty good at avoiding sources of stress, especially ones that offer no obvious benefit. I can't really imagine why anyone should go become a scout given the tradeoffs. Maybe it's a noble sacrifice you make for the greater good?

Expand full comment
deletedFeb 1, 2022·edited Feb 1, 2022
Comment deleted
Expand full comment

I don't think that's obvious. The scout mindset is almost definitionally being less confident about your knowledge. Soldiers seem confident too, that's kind of the point.

Expand full comment

> Confidence comes from competence

I think that's only one route to confidence. Plenty of people are very confident without being competent specifically because they have the soldier mindset.

Expand full comment

If scout mindset were that effective, I think evolution would have made it more common.

Expand full comment
Feb 2, 2022·edited Feb 2, 2022

I don't think even Julia Galef would claim that a scout mindset makes you more likely to have babies (which is all 'effectiveness' means from an evolutionary perspective)

Expand full comment

Yes, and I'll emphasize that this was the case prior to the invention of birth control as well.

Expand full comment

Because if you become good at predicting social and economic trends that everyone else thinks are wild pie-in-the-sky nonsense but actually happen, you get words like "disrupter" and "avant-garde" attached to you, attain social status and financial success, and ultimately get to have a kid with Grimes before she divorces you and decides she's a Marxist now.

Expand full comment
Feb 1, 2022·edited Feb 1, 2022

Now this I could believe with the caveat that you have to already be a billionaire to get "disrupter" or "avant garde". The rest of us just get "crazy" or "weird".

And of course then there's views that aren't pie in the sky, just ordinary run-of-the-mill views of the opposite tribe. Those get the label "Nazi" or "Marxist" depending on which tribe you defect to.

Expand full comment

>The rest of us just get "crazy" or "weird".

If you're a Bitcoin millionaire, people aren't going to call you "crazy" (unless you start trying to argue for the AOC to be abolished, cannibalism to be legalized, or something else so far outside the Overton Window that no amount of money can shield you from social censure). "Crazy" requires you to flap your arms, jump off the barn, and break your legs. If you actually FLY, you're an eccentric visionary. I realize it's really really easy to imagine that social classes are an iron-clad law of the universe and people will still treat you as Joe Schmoe from Buffalo even if your bank account hits 7+ figures, but (in America, at least), like 70% of social class is attached to your net worth.

>Politics

I hope you aren't looking for someone on a random blog's comment section to give a complete solution to solve political disagreement. I was just making the point that there are, in fact, reasons to break from the herd besides pure altruism.

Expand full comment

There are some, very limited, reasons to break from the herd in specific highly limited circumstances.

Expand full comment
author

The title of this post is "Motivated Reasoning As Mis-Applied Reinforcement Learning". I'm not missing it, I'm trying to explain what it is.

Expand full comment
Feb 1, 2022·edited Feb 2, 2022

I'm not trying to focus on the title, just the argument being made. I believe that all of the supposed discrepancies with modeling the brain as a hedonic reinforcement learning model can be explained with standard ML and economics.

I have a prior: being eaten alive by a lion would be extremely painful and end my life before I can procreate. I've assigned 100% probability to that prior. The brain is able to backpropagate to my visual cortex that a False Positive on a lion is scary but low-cost (let's say loss value = 1.0), while a False Negative on a lion is life-ending (let's say loss value = 10000.0). Given those incentives, your visual cortex optimizes lion recognition for high recall and reasonable precision.

I don't see why there's any contradiction between hedonic reinforcement learning (learn not to get eaten alive, extremely low hedonic state) and what the visual cortex does.
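Here's a minimal sketch of that incentive structure in ordinary ML terms (the data, class weights, and training loop are my own toy illustration, not anything from the post): give a plain classifier a loss that penalizes missed lions far more than false alarms and its operating point slides toward high recall.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a 1-D "lion-ness" feature; class 1 = lion present, class 0 = no lion.
x = rng.normal(size=500)
y = (x + rng.normal(scale=1.5, size=500) > 1.5).astype(float)

# Asymmetric costs as in the comment above: a missed lion is ~10,000x worse
# than a false alarm (illustrative numbers).
w_miss, w_false_alarm = 10_000.0, 1.0
sample_w = np.where(y == 1, w_miss, w_false_alarm)

# Weighted logistic regression trained by plain gradient descent.
a, b = 0.0, 0.0
for _ in range(2000):
    z = np.clip(a * x + b, -30, 30)
    p = 1 / (1 + np.exp(-z))
    a -= 1e-4 * np.mean(sample_w * (p - y) * x)
    b -= 1e-4 * np.mean(sample_w * (p - y))

# The heavy penalty on misses pushes the decision threshold way down:
# nearly everything vaguely lion-shaped gets flagged (high recall, modest precision).
pred = 1 / (1 + np.exp(-np.clip(a * x + b, -30, 30))) > 0.5
recall = (pred & (y == 1)).sum() / max((y == 1).sum(), 1)
print(f"recall on lions: {recall:.2f}")
```

The asymmetric weighting, not anything special about the learner, is what produces the jumpy, false-positive-prone detector.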

Expand full comment

His point is that sometimes in life we do the thing that never happens with our eyes: our eyes always see the lion, even if we really do not want the lion to be there. Your explanation of why that is makes sense: if we don't see the lion, we die. Yet in other areas of our lives there is a problem that really will negatively impact us if we don't do something about it, and instead of seeing the problem our minds work hard to find a way to convince ourselves the problem doesn't exist. So why does the mind do that? That's the question, not why our eyes always see the lion. It's why our minds sometimes try to tell us the lion we see isn't really there because we don't want it to be.

Expand full comment
Feb 1, 2022·edited Feb 1, 2022

Is that the point? The post seems to presuppose that there are "reinforceable" and "non-reinforceable" parts of the brain. I don't see any need for that. You can perfectly explain the function of the neocortex and every other part of the brain through hedonic reinforcement learning.

I originally supposed that he was asking "why does our brain work for the lion, but not for reasoning about politics?", so I answered that question with what I think is a logical, mathematical view based on internalized costs and externalized (diffuse) benefits. I think that's the answer to your question "Why does our brain not see the problem when it will negatively impact us?" My answer is that being wrong about politics *doesn't* negatively impact you because the costs/benefits of being right/wrong are shared across hundreds of millions of people. But the costs/benefits of agreeing/disagreeing with your friends accrue only to you.

Expand full comment

Because the future is uncertain, and the further away it is, the more uncertain, and while a lion in the bush will eat you *right now* almost all the things we procrastinate about are lions that may or may not eat us some time from now, assuming our fearful reasoning about the future is correct, which it often isn't. The fact that our emotional centers routinely dial down threats which are more hypothetical and uncertain is hardly surprising, and clearly adaptive. The fact that we have warring agents in our awareness that struggle to gain consensus on threat-level estimates that do not agree with each other is also unsurprising, given normal conscious experience.

Expand full comment

Love it. Great thinking. Really well put.

Expand full comment

I rephrased the first sentence of my original post to not focus on the title and instead summarize my argument. My argument is that all of the supposed discrepanices with modeling the brain as a hedonic reinforcement learning model can be explained with standard ML and economics.

Expand full comment

"This question - why does the brain so often confuse what is true vs. what I want to be true?"

Going back to first principles of natural selection one would presume the human brain to be:

(a) **well adapted** to discern true facts that have positive impacts on reproductive fitness (e.g. identifying actual lions, learning which hunting or gathering techniques work best, etc.);

(b) **well adapted** to engage in useful self-deceptions that also have positive impacts on reproductive fitness (e.g., believing that your tribe's socially constructed superstitions and political rules are "true" so that you fit in and get along);

(c) **Non-adapted** for determining true facts that might make your life more financially profitable, enjoyable and stress-free but that don't have any direct impacts on your hunter-gatherer reproductive fitness. (e.g., realizing that you shouldn't procrastinate on your taxes, or shouldn't worry about things you can't change.)

Sadly, the human brain evolved to make more surviving humans, not to make us happy or successful in a modern capitalist economy. I think the happy/high-achieving people are probably those who are more successful in somehow tricking their brains into moving category (c) issues into category (a). If only you could convince yourself that the search for absolute truth is a life-or-death hunt that will keep you from starving and instead allow you to have sex with the hot cave woman https://youtu.be/gSYmJur0Npw?list=PLVVuOIA1lowEKOVGZgf4_GsmH51Lu4511&t=68.

Expand full comment
Feb 1, 2022·edited Feb 1, 2022

I thought motivated reasoning was.... Reasoning with a motivation. Reasoning, but, WITH AN AGENDA!! DUN DUN DUNN... Like, you want something, so you spend a lot of time entertaining counterfactuals which seem like they could be pieces of plans you could make which lead to you getting the thing you want, as opposed to Perfect Unmotivated Reasoning: reasoning done entirely for the sake of Truth, and not utility.

I thought motivated reasoning meant reasoning with ulterior motives

Expand full comment

Well, yeah, it is. The problem is that you can use reason to come to lots of non-true beliefs, provided your main motivation is not finding the truth. That's what Scott's trying to puzzle out: why exactly do we succeed at tricking ourselves that a real problem isn't real just because we don't want it to be?

Expand full comment

"Maybe thinking about politics - like doing your taxes - is such a novel modality that the relevant brain networks get placed kind of randomly on a bunch of different architectures, and some of them are reinforceable and others aren’t. Or maybe evolution deliberately put some of this stuff on reinforceable architecture in order to keep people happy and conformist and politically savvy. "

I wonder if it is not simpler to just consider a dichotomy between tasks we have evolved for (e.g., learning to speak) and those we have not (e.g., learning to read), rather than the epistemic/behavioural dichotomy. Detecting threatening animals is clearly an evolved ability, while doing one's taxes is not at all. This could mean that "doing one's taxes" will not be done automatically, and will depend on many factors, including of course the pleasantness of the task (and also the tendency to ignore the future, trust in the group, anxiety, etc.).

Expand full comment
author

I don't think it's mysterious why we don't like doing our taxes, I think it's mysterious why this leads to behavior like not checking whether our taxes need to get done, or refusing to open your email because you're worried there will be a difficult task in it.

Even if you hate email, I think the "rational" action is to open the email, check if there's any message which is so important it overrides your desire not to do it, and *then* procrastinate. But that's not how most people experience this.

Expand full comment

I of course agree that the rational action is to open the email and get the information (which I find a bit hard to do...). But well, we already know that we do not act very rationally.

It seems to me that evolution has given us specific rules for situations that were both common and important during our evolution (avoiding the predator, indeed), and very general rules for all other situations. Among these general rules there is clearly the acquisition of information (primates are notoriously curious), and of course the avoidance of annoyances. But we could not get a rule like "evaluate the situation and choose the behavior according to the best predicted option". So we "choose" a behavior according to the impulse that turns out to be the strongest, and many people procrastinate on opening that email.

Because our behavior usually does not result from rational considerations, but from whichever impulse happens to be strongest.

Expand full comment

I think you're thinking about this too much from a model where everybody is a good-faith Mistake Theorist.

In a mistake theory model, it's a mystery why people fail to update their beliefs in response to evidence that they're wrong. If the only penalty for being wrong is the short term pain of realising that you'd been wrong, then what you've written makes sense.

I think that most people tend to be conflict theorists at heart, though, using mistake theory as a paper-thin justification for their self interest. When I say "Policy X is objectively better for everybody", what I mean is "Policy X is better for me and people I like, or bad for people I hate, and I'm trying to con you into supporting it".

There's no mystery, in this model, why people are failing to update their "Policy X is objectively better" argument based on evidence that Policy X is objectively worse; they never really cared whether Policy X was objectively better in the first place, they just want Policy X.

Expand full comment
author

I think there are three things: honest mistakes, honest conflicts, and bias - with this last being a state in which you "honestly believe" (at least consciously) whatever is most convenient for you.

If a rich person says the best way to help the economy is cutting taxes on rich people, or a poor person says the best way to help the economy is to stimulate spending by giving more to the poor, it's possible that they're thinking "Haha, I'm going to pull one over on the dumb people who believe me". But it also seems like even well-intentioned rich/poor people tend to be more receptive to the arguments that support their side, and genuinely believe them.

I don't think honest mistakes or honest conflicts need much of an explanation, but bias seems interesting and important and worth figuring out.

Expand full comment

Is the conventional explanation unsatisfactory? That people are more convincing when they argue for their position honestly, and so it's beneficial for them to become biased in ways that favor their interests.

Expand full comment
Feb 4, 2022·edited Feb 4, 2022

The question is, what is the mechanism? If I was offered a billion dollar incentive to sincerely believe I have three hands, I still wouldn't be able to do it. I can't alter beliefs on command. That makes the bias idea harder to explain.

Expand full comment
Feb 4, 2022·edited Feb 4, 2022

That's because such offers presumably weren't common in the ancestral environment in which our brains evolved.

As far as I can tell, we're pretty far off of being able to establish the exact ways in which high-level adaptive strategies are implemented in the brain. For these purposes, it's an extremely complex black box, which also isn't too amenable to high-precision controlled experiments.

Expand full comment
Feb 4, 2022·edited Feb 4, 2022

Selection. Economists who believe things convenient for rich people get more connections, media exposure, grants, etc.

(In the conflict theory worldview; I have no idea how econ academia works IRL)

Expand full comment

Thank you for the comments on this post. I was having a hard time figuring out what the post was about. I'm not sure why we ignore the Ugh fields. I sometimes see these balance scales in my head: on one side is the badness of not taking care of my taxes, and on the other side is the slight goodness of ignoring this problem for another day. The badness keeps building and eventually I have to open the dang letter.

Expand full comment

Then why do many people make decisions that are objectively bad for them and the people they care about, are GOOD for the people they hate, and then actively defend these decisions unto the trenches for decades?

Expand full comment

It's likely that your definitions of objectively good/objectively bad are good examples of what Melvin is describing.

Expand full comment
Feb 1, 2022·edited Feb 1, 2022

I'm referring to people who, as an example, vote for tax breaks for the wealthy and increased taxes for their own income bracket, and who, when asked WHY they do this, don't have a satisfactory answer beyond "My strawman conception of an enemy tribe wants that and I'll be damned if I ever agree with them on anything." (Yes, you can accuse this itself of being a strawman, but these people really do exist, and in sufficient numbers that calling it a "strawman" is a bit deceptive. You see them on both sides of the aisle, usually yelling at each other on social media.) This isn't behavior that can be explained purely by calculated zero-sum games, because EVEN BY THEIR OWN CALCULUS they're making decisions that damage their own side (salt-of-the-earth Real Americans who work with their hands and backs) and empower their enemies (rich, effete latte-sipping Coastal Elites) based on weird ideological principles.

I mean, you could certainly say that the above descriptions are just branding those people consciously adopt, but that doesn't explain the core issue here- I'm not sure if conflict theory is willing to believe in tactically shooting yourself in the foot.

Expand full comment

>I'm referring to poor people...

I knew exactly where you were going with it and responded accordingly. I've not met a single person who bemoans people "voting against their own interests" who wasn't entirely convinced that their preferences are derived from universal and timeless moral rightness and that anyone opposed to their preferences is stupid (poor people), evil (rich people), or both.

Expand full comment

Yes, I do in fact think people should be paid a living wage and shouldn't be condemned to a life of misery, hunger, or dependence on intoxicants as an alternative to suicide, regardless of their genetics. And I do, in fact, think people who argue that Moloch-sacrifice is good and that inquiries into bringing the number down as close to zero as we can should be sneered at are evil.

If someone kept hitting themselves in the head with a hammer, complaining loudly every time he got hit, you would, in fact, call him an idiot- or else determine that he really likes the idea of dying via TBI, but I don't pretend this is some kind of just world where people are making the rational decision to be systemically discriminated against, not make any money, get hooked on painkillers, and die in their own waste. If you think my attitude comes from a sneering condescension towards the working poor instead of being one of them, that's your problem- unless you want to say you have access to my mental states.

Expand full comment
Feb 2, 2022·edited Feb 2, 2022

>If someone kept hitting themselves in the head with a hammer, complaining loudly every time he got hit, you would, in fact, call him an idiot

No one did or said this, stop fantasizing.

>If you think my attitude comes from a sneering condescension towards the working poor instead of being one of them, that's your problem

I think your attitude comes from an unwarranted confidence in the correctness of your beliefs - which is pretty damn funny given that I knew the direction your diatribe was going to go based on how banal your "insight" is.

For my part, I assume in the overwhelming majority of cases that people make decisions for themselves for good reasons, where "good" is defined by the decision-making person him/herself. I don't presume to be able to define another person's preferences for them.

Expand full comment

This model explains why people would make this kind of mistake when it benefits them, but it doesn't explain examples like procrastinating on your taxes.

Expand full comment

I agree, which is why I think that procrastinating on your taxes has a different mechanism to maintaining self-interested political beliefs.

Expand full comment
Feb 1, 2022·edited Feb 1, 2022

If this is true, then maybe Cognitive Behavioral Therapy is just using your "basically epistemic" brain regions to retrain your "basically behavioral" brain regions. Like, trick yourself into ending up in a better hedonic state than you used to end up in -> reinforcement learning counter updates towards "yay". That could explain why it's so crazy effective despite being "just" talk therapy.

I know less about brains and psychology than just about every other commenter out here. Am I on to something?

Expand full comment
Feb 1, 2022·edited Feb 1, 2022

I don't think anybody really knows anything about brains and psychology at this point -- at least when it comes to complex behaviors and beliefs. For example, this behavioral vs. epistemic brain dichotomy sounds like one of those models that appeals to pattern-seekers but I doubt there is any way to prove it's useful or accurate with concrete data.

Consider that AI programmers will set up a self-teaching model for their programs and then set it loose to evolve a way to solve a particular problem. But once it's taught itself to solve the problem, you can't go back and figure out exactly how it learned or how it logically gets the right answer. So why would we think we could reverse-engineer our bio-brain's 3-billion-year-old self-learned chemical algorithms?

Expand full comment

I think it is wrong-footed to proceed as if reasoning always directly connects your brain to the phenomenon that you are reasoning about. Instead, the process is social. You decide what to believe based on who you believe. You decide who to believe mostly based on who you would like to get approval from (or to get imaginary approval from). If in your imagination you would rather hear Joe Rogan say you're a great guy than hear Anthony Fauci say you're a great guy, then when they differ you will work harder to discredit what Fauci says than what Rogan says. And conversely. That is what puts the motivation into motivated reasoning.

Expand full comment
founding

I'm a fan of your idea of (mostly) 'social epistemology', but surely reasoning isn't _entirely_ social, especially for things like 'doing taxes'.

Expand full comment

I'm trying to make sense of this. First, I'm going to rename '[people] who you would like to get approval from' as 'desired approvers', because it makes the following sentences more comprehensible.

Does your model imply a connection between how vividly you imagine your desired approvers, and how easily you reason? And does it imply a connection between how aware you are of which beliefs are endorsed by your desired approvers, and how easily you reason?

If so, consider people with only a nebulous sense of their desired approvers (as individuals and/or as belief-holders). Your model seems to imply that such people will tend to have more trouble with reasoning than other people do. And the rare truly isolated person, with minimal media and no religion, family, or friends, would have a great deal of trouble with reasoning. Is that what you expect? Have I misunderstood?

Expand full comment

I think that is correct. Knowledge is social. See Joseph Henrich, "The Secret of our Success"

Expand full comment

Thanks for the recommendation. I've been seeing that book and Henrich's book on WEIRD mentioned, and was leaning toward reading the latter, but now I'm motivated to read both.

Expand full comment

How did AlphaStar learn to overcome the fear of checking what's covered by the fog of war?

Expand full comment
Comment deleted
Expand full comment

I like how this explanation meshes with:

- don't increase the pressure, lower the resistance

- Leave A Line Of Retreat

- exposure therapy

- Street Epistemology

Expand full comment