I claim that you've made a dozen bad life decisions relating to this in the past week, and that several forms of therapy are built on top of it (though they wouldn't frame it in those terms).
The ugh fields post linked above has a lot of examples of bad life decisions. The therapy point is more speculative, and I'll write more on it later, but I think a lot of blocks in therapy relate to something like "It is easier and more fun to come up with a theory of why I shouldn't have to feel bad about this problem, than to solve it", which I think is this same idea of reward operating on epistemics instead of action.
That is, if there's a problem (you are poor), and your brain can solve it by either coming up with some kind of theory in which it's rewarding (rich people suck, they've all sold out, my poverty actually makes me a good person) or by working hard to get more money, and your brain is indifferent between getting reward by epistemic changes or by real-world changes, making the real-world changes is going to be a really tough sell. My guess is it's much more complicated than this, but I do think this is part of the dynamic.
Ok, but you didn't say it was "untethered from the physical nuts and bolts of MY daily life", you said it was from "daily life". A reasonable reading of this is that you think it doesn't apply to the daily life of any significant number of people.
Can people really stop feeling something because they have a theory? Or is your model really that people *cannot* stop feeling bad and that's *why* they have to solve the real problem?
Regarding procrastination and ugh fields, here's an interesting and relevant question: Do animals procrastinate? I don't know the answer, but it certainly seems quite possible to study it. For instance, squirrels gathering nuts for winter: Are they at all like students studying, i.e. gathering information, for an exam? Do squirrels, like students, often laze around until the deadline is near, then put out a burst of effort that exhausts them and sometimes does not permit gathering of a sufficient store of the resource in question? Maybe squirrels' behavior with nuts isn't the best thing to study, since the nuts don't land on the ground at a steady rate all fall but instead all drop over a short span of time. But there have to be some situations where animals really do have the option to procrastinate. And if there aren't, we could create one artificially in the lab with rats.
I thought your original comment was a little hostile, but I really appreciated your follow-up comments to me (that you've since deleted) and also wanted to reply that I'm sympathetic to what you described as part of your impulse to comment, though I think having discussions like those in this post is a perfectly fine hobby to have :)
I don't think it was a cheap shot. I think your underlying point was important and sound, and it resonated with me. It *is* a little worrying when the best and brightest young folks are seriously debating how many angels can dance on the head of a pin *and* seem at least somewhat unaware that the question is a priori frivolous. One can't help thinking someone ought to say something. Recall the wry popular quote: "If everybody contemplates the infinite instead of fixing the drains, many of us will die of cholera." There's a lot of fixing the drains stuff that still needs to get done in 2022.
If I had to guess, I'd say you regretted that you expressed it in a tone that came closer to contempt than bemusement, which unnecessarily antagonizes and derails your point. Fair enough, but I could wish you had rephrased it rather than eliminated it.
Cleverly worded cheap shots are deletable? I assume that includes sardonic, obscenity-free ad hominem. What a world we live in where judiciously phrased, non-vulgar insults are considered outside the bounds of reasoned discussion. We are all snowflakes now, aren't we?
Gunflint deleted their own comment. I didn't consider it "outside the bounds of reasoned discussion" – I, and several others, directly discussed their comment!
I'm pretty sensitive to the price of pasta, potatoes, flour, and rice. I'm somewhat aware of the price of meat. But I don't live on milk. It is something I buy rather sporadically, if I get a craving for something I don't eat regularly - rice pudding, or cold cereal.
I suspect that milk consumption in the US is excessive due to PR from the dairy industry driven by...motivated reasoning.
Anyway, I was wondering if Scott has talked to anyone who is diagnosed with schizophrenia recently, and what he's observed or assumes to be their relationship with "motivated reasoning". Sub-question is, do people with schizophrenia go to therapy?
Something I found interesting as a difference in perspective between generations that may get lost in time: I've heard/read people scorn skim milk as unpleasant, watery, and blue colored. At the same time, products of a family farm are idealized as wholesome, rich, organic, unprocessed. But someone I knew who grew up on a farm in the 1940s didn't like whole milk, considering it greasy and distasteful. Why? Conditioning. On the farm, the fat was always skimmed off, otherwise you wouldn't have butter and cream.
Possibly they didn’t like it because it wasn’t homogenized. Whole milk is actually worse to drink when the fat floats in greasy globs instead of being smoothly mixed in.
Me neither. If he lived on a farm he may have drunk milk from his own cows, and that milk wouldn’t necessarily be homogenized. That’s just what comes to mind when I hear milk described as “greasy.”
It's probably got a lot to do with what you're accustomed to. Skim milk is more watery than whole milk, but which you actually like better is subjective. If you're used to a certain level of thickness, different levels are probably going to taste wrong.
(On a related note, sugar-sweetened soda gives me heartburn but diet soda doesn't. After drinking only diet soda for a while, regular soda now tastes too syrupy for me.)
Certainly from the few people I know with schizophrenia, many of them go to therapy - there's a lot of useful technique to be learned re: how to calm down enough to not act on the more dangerous of the unshared experiences, how to survive the anhedonia side effects of the antipsychotics, etc.
>how to calm down enough to not act on the more dangerous of the unshared experiences
I think this advice could be harmful to anyone who takes it at face value. There's an inherent runaway positive feedback loop. If a consumer discloses that they are concerned about anything, then they have conceded there might be a danger, that they aren't sure of the level, and irrevocably put judgement of it in the hands of others who cannot read minds. Sharing with someone who has professional responsibilities to take action, on balance, might be worse even than talking to a lay friend.
I can't imagine any way to fix the dichotomy between suffering alone from an illness and overreaction/misunderstanding other than coming up with an objective way to determine states of mind based on biological markers. If there is one thing I would like to see in my lifetime it's something analogous to blood sugar and A1C tests, only for depression, suicidality, aggression, psychosis, etc.
Obviously it could be used in harmful ways and many (especially paranoid) people would be fearful of a neo-eugenicist movement, but I think one has to come to terms with how harmful and intractable it is to *not* have any objective measurement to base life and death decisions on.
And I do not believe that good treatments can be developed for anything that does not even have a real definition based on measurement.
I think he's saying locksmiths had to approach strangers in the dark (who were maybe locked out of their car, or are maybe hoping to rob him) and this is a scary action
In a nutshell, for me at the very least, the Catholic faith solves this problem. From a skeptical outside perspective, ignore the deistic epistemology entirely and focus on the phenomenology.
One benefit of faith is that it makes your utility function non-monotonic and flexible. It does this by putting a presumably conscious entity at the peak of the hierarchy of values (think virtue ethics here). So you get an ultimate determiner of value that is pretty rigid but can decide between competing values. If Christ isn't this for you, well, I enjoy sharing this experience with 1 billion+ people
¯\(°_o)/¯
24 hour later edit:
A few responders to this post seem to be operating on an assumption that no one is capable of being very intelligent, well-informed, intellectually honest, and religious simultaneously. While I don't agree with this perspective, even though it may be a strawman, I suspect many of the benefits of this way of thinking could apply to eg HPMOR fans who want to consider the actions of their favorite character rather than the deity I choose to follow. I think good fiction is invaluable, it allows for an idealized and situationally unique perspective, to which one might consider themselves an "apprentice" in a way that they would balk at doing for a real person. I think holding yourself to a higher ideal is generally a great thing, even if it's not mine (and from where I'm sitting, if my assessment is correct an honest and courageous attempt at this will have the same ultimate endpoint one way or another).
(PS I have cried more than once at HPMOR Harry and his interactions with dementors and phoenixes, it's better than the original IMO)
I can say more, but I did preface it by saying nutshell. I'll try some additional bullet points for now and if more clarity is needed I can provide it later.
- utility functions that aren't monotonic are susceptible to "Dutch booking", i.e. a form of exploiting cycles in values for a guaranteed loss
- having a rigid hierarchy of values is basically deontology, which can lead you to ratting out a friend in a Kantian murder mystery because you aren't clever enough to be evasive rather than lie; if lying is always wrong, and saving your friend is usually right given the circumstances, this can lead to ineffective consumption of cognitive resources on intractable problems (also a critique of much utilitarianism)
- if you assume virtue ethics, then you have the company of roughly a billion people also asking the question "what would Jesus do?"
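The Dutch-booking point in the first bullet can be made concrete: an agent whose preferences contain a cycle (it prefers A to B, B to C, and C to A) will pay a small fee for each swap it regards as an upgrade, and can be walked around the cycle indefinitely, ending where it started but poorer. A minimal sketch (all items, fees, and numbers here are hypothetical illustrations, not from the original comment):

```python
# Dutch-book sketch: an agent with cyclic preferences pays a small
# fee for every trade it sees as an upgrade, and ends up holding its
# original item with less money. Names and numbers are illustrative.

PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means "prefers x to y" (cyclic!)
FEE = 1  # fee the agent willingly pays for any preferred swap

def run_cycle(start_item, start_money, trades):
    """Walk the agent around the preference cycle `trades` times."""
    item, money = start_item, start_money
    offers = ["C", "B", "A"] * trades  # each offer beats the item currently held
    for offered in offers:
        if (offered, item) in PREFERS:  # agent prefers the offered item, so it trades
            item, money = offered, money - FEE
    return item, money

item, money = run_cycle("A", 100, trades=5)
print(item, money)  # back to "A", but 15 units poorer: prints "A 85"
```

Nothing in the loop cheats the agent on any single trade; every swap is one it genuinely prefers. The guaranteed loss comes purely from the cycle in its values.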
I mean if being tractable is the only thing you care about, why not just base your ethics around something simple, like maximizing the amount of hydrogen in the universe or increasing the amount of entropy? Most people care more about their ethics actually being right than they do about them being easy to follow.
Right, this is why I asked to ignore deistic epistemology. If you want Catholic apologetics I'm not your man, I'll just say my personal process of developing my faith has included a lot of wrestling with God which is also an ongoing process at times.
I agree with Paul above, and I don't see how your comment addresses his point. Also, your initial argument sounded something like "Catholicism simplifies ethics", but this comment sounds like even that might not be the case.
Tractable perhaps, but Catholic ethics is anything but a simplification of the problem. Tbh, if you think that, I question whether you're making a good-faith effort to understand my position.
As to bringing it back to the original point, I suspect having an overarching moral/ethical/ideological system is an extreme benefit against motivational and decisional paralysis, which to me does seem to address the gist of the problem with "motivated reasoning" and reinforcement learners.
Well, if you are in a reflexive equilibrium such that choosing a utility function involving maximizing H2 and choosing one that maximizes human happiness are indifferent, then you're probably better off going with the first. The problem is that that is not where we find ourselves: we prefer happiness to H2, and there is nothing we really can, not to mention should, do about it. Whereas having a simple, tractable utility function (doing what God says maximizes utility) that still seems to conform to human values (God says to do things I generally agree are good) would, initially, seem to solve the problem quite neatly, by being both tractable and ethical.
Of course, there are plenty of other objections to that paradigm (“how do I actually know what God wants me to do here? If God’s desires are isomorphic to what I already think of as good, why is this elaborate utility function necessary, and if they aren’t, then doesn’t that create more problems?”). But I think this particular objection is answerable.
My point wasn’t to provide a good answer, but merely to explain why this objection didn’t make sense. I think that if you work it out, the claim that “Christianity (or any deistic religion with certain qualities) solves certain self-referential problems in ethics and motivation” could work, but you would have to do far more work to get there than OP did.
>having a rigid hierarchy of values is basically deontology, which can lead you to ratting out a friend in a Kantian murder mystery because you aren't clever enough to be evasive rather than lie; if lying is always wrong, and saving your friend is usually right given the circumstances, this can lead to ineffective consumption of cognitive resources on intractable problems (also a critique of much utilitarianism)
I think you need to be clearer about what rigid means. Utilitarianism doesn't approximate deontology so long as utility is finite, because a sufficient number of specks has more negative utility than torture.
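The finite-utility point above can be made numerically: as long as each speck has some fixed, finite disutility and harms add, there is always a number of specks whose total outweighs any single torture. A toy calculation (the specific disutility values are entirely made up for illustration):

```python
# Toy numbers (entirely made up) illustrating that with finite,
# additive utilities, enough tiny harms outweigh one large harm.
SPECK = 1e-9     # disutility of one dust speck
TORTURE = 1e6    # disutility of one instance of torture

n_break_even = TORTURE / SPECK  # ~1e15 specks match the torture
n = 10 * n_break_even           # ten times that clearly exceeds it
assert n * SPECK > TORTURE      # with finite utilities, the specks win
```

The argument fails only if speck-disutility is zero or the aggregation is non-additive, which is exactly where the LW debate on this tended to go.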
The problem of hell and comparing infinities is something I wrestled with for a long time, and it has led me to some difficulties bordering on heresy. I don't think there's a consensus on the topic even among theologians far more interested, expert, and intelligent than my amateur musings, and I defer to them.
I'd be quite interested in hearing your thoughts about the problem of hell.
When I was a Christian, I simply didn't believe that hell existed. It just felt completely against my personal experience of the divine. The Benevolent God, whose existence I directly felt and saw, would never have created such a place. It contradicted everything on multiple levels. So I just never even thought about it as a philosophical problem.
Later I understood that all my religious experiences were not communication with God, but a glimpse into my own values and ethics. And this allowed me to think about the problem of hell and be really terrified. Not of the cruelty of God, of course, but of people who simultaneously claim that their ethics comes from their religion and that somebody being eternally tortured for a finite amount of evildoing is part of this religion. If one's ethics can accept hell, it can justify literally anything.
How do you deal with the existence of hell? This mode of thinking has never been possible for me, so I'm very curious.
I spent the last few years as agnostic and anti-religion and recently converted to Christianity. One of my big gripes with Christianity was the idea of hell until I came to the conclusion that popular understanding of heaven and hell doesn’t reflect what’s actually spelled out in scripture.
The way I understand it now is that God is Love and Hell isn’t a place, it is the absence of love. God doesn’t send anyone to hell; we instead choose Love or its absence.
Closest analogy I can come up with: you’re dating the perfect significant other. You lie, apologize, they forgive you. You cheat, apologize, they forgive you. But you continue cheating and lose the remorse. They call off the wedding and you are stuck for the rest of your life with the agony of losing the perfect spouse. That is Hell.
Important to note that this also means a deeply Loving/selfless Hindu, Zoroastrian, Atheist, etc. already knows God and will spend eternity with God.
I don't really have any original or personal thoughts on the topic. What I'll say is, for most things I seem to side with Thomists, but on this question I do "dare to hope", since it's the most legitimate seeming option that's available to me.
I'm going to post this here and rely on some security through obscurity with my Google profile, and I will come back and delete it once someone comments reminding me to do so (please do this if you've got the link saved). I don't want to share early drafts that I haven't at least presented to clergy, but this is my most sincere answer to how I (currently, after attempting to study the issues, but without any claim to expertise) deal with squaring my faith with my understanding of the world.
> a sufficient number of specks has more negative utility than torture.
Does the majority of utilitarians actually agree with this? I seem to recall that there was plenty of argument on LW on this, and no clear consensus emerged.
I wonder if this also ties in to self-identity. I remember reading that if kids are praised for being smart, they can start to self-identify as smart and so become reluctant to try new things (since they will initially be bad at them, which clashes with their self-identity). Maybe that was bunk, but I think I can see it a little bit in my own life.
If we start to self-identify as being right, or smart, it might make it more painful to update our beliefs because being wrong in the past could clash with that identity. I could see the same thing happening with virtue too (although I've read another Christian say the view is more that everyone is a sinner, which if practiced might prevent that).
For now, I try to think of the truth as a model. The world is too complex to totally understand, but we imagine simplified models in order to navigate it and find patterns. It would be unusual to be upset at finding a better layout for a model train set, since in that context the goal is usually to *have* the best model rather than to *be the best at building models*.
I like this train of thought. One thing I've found helpful is to not only think, what would Jesus do? But also, what would my "saint self" do. This approach has pros and cons, but fortunately the Catholic approach makes both methods isomorphic.
I once heard someone (I think Leah Libresco, but I'm not highly confident in this attribution) describe the "communion of saints" as an attempt to provide cross-temporal Bayesian agreement on theological questions. I really like this framing, as it justifies the variety and sometimes imperfect nature of recognized saints. Few if any saints were virtuous their entire lives, and some of the most notable were quite sinful (a great example here is the apostle Paul, who prior to his conversion persecuted Christians).
Another great benefit, imo, is that Christ was incarnate in a particular time and place, which means that though his example was indeed perfect, it has limited utility in informing people in other contexts. So saints, while not perfect, at least by the end led sufficiently holy lives for the Church to confidently state that they're in heaven, which is a great element for a living faith to have as examples.
The "everyone is a sinner" thing is really good for solving that sort of problem. My own experience with Christianity was that it solved a lot of circular reasoning.
Suppose I'm depressed. I could solve the problems that are exacerbating that depression, but then I'd have to think hard about depressing things, which I obviously don't want to do. The actual truth is that, because of the aforementioned debilitating depression, at this moment I do indeed suck and am doing bad things with my life. But if everyone's a sinner and Jesus loves me anyways, then I have motivation to examine my life with clear eyes. No matter what I find, it won't decrease Christ's opinion of me (even though it will temporarily reduce my own opinion of myself). Then I can make the changes that allow me to love myself like Christ does.
Buddha's four noble truths (notably the one about "everything sucks all the time just cause") can achieve a similar end
I'm a little confused -- what does the problem have to do with ethics ? I suppose you could say that the fear of opening your tax book because you know you'll find taxes in it, could be labeled as "sloth", but it's a bit of a stretch.
When you write "and religious", my question is, what is "religious"? It's kind of a cliche that people supposedly like to say they're spiritual but not religious. I don't know if you are using that sense of "religious" or even what the cliche really means.
I'm not religious in the sense of belonging to any community, attending regular services, preferring one type of Christianity, and so on. That doesn't seem directly related to being intelligent or honest.
I was raised by people who I think were essentially atheists but probably would've abhorred the label. I think, but am not sure, that they came from their respective very religious backgrounds and were turned off by hypocrisy and bad behavior of religious people. I had no bar mitzvah or baptism or confirmation or anything. But I was given a Bible and I read a fair amount of it, plus later on stuff by C.S. Lewis, a biography of a saint, etc.
Recently, I've been talking to someone with a Catholic background, and it perplexes me that they seem to not be very familiar with the Bible, to believe in all sorts of new-agey things that seem to me in conflict with Catholicism, but they don't seem to be consciously hostile to it either.
When I started reading about the Inquisition, while I don't accept the *premises*, the theological debate over whether and which magic or astrology is compatible with Christianity makes some sense to me within a closed system. Normal people are oblivious or think of witch trials and/or Monty Python, though, right?
Conversely, contemporary magical beliefs like the law of attraction completely confuse me as to why someone would accept them, and not even see a conflict with traditional beliefs. But I don't think it's normal to analyze things like a Talmud scholar.
I guess where I'm meandering to, is that religious (or Christian) can mean many things, and I'm doubtful that there is even an intersection between what is it logical for it to mean and what it is normal for it to mean.
There you go, that's exactly what I'm talking about. Sure, that is "standard", but from what I read it excludes most Catholics! And my anecdote is not inconsistent with that.
Also, I notice you don't even include "reading the Bible".
"Most Catholics worldwide disagree with church teachings on divorce, abortion and contraception and are split on whether women and married men should become priests, according to a large new poll released Sunday and commissioned by the U.S. Spanish-language network Univision."
Is this the way to like scary movies? Not “I like to be scared,” but “I like the sequence of feeling scared and then realizing I’m totally safe/heroically safe at home”?
I definitely think part of the appeal of horror (whether film, game, or attraction) is the ability to feel fear in a controlled environment where no ACTUAL danger can happen to you*. Of course, people have their own personal tolerances for this, and some people have their risk-assessment feedback set to the point that even simulated peril is unacceptable.
I've heard this a lot, but never with any evidence I find convincing. I think it might be the kind of thing that sounds nice so no one wants to reject it.
My experience with horror is more like spicy food. It's literally activating pain receptors, but so detached from pain, and so mixed up with a particular sensory context, that the ostensibly bad stimulus becomes more of a unique overtone creating a richer more textured flavor. (Yes, I know some people eat spicy food just to show off their pain tolerance, but I don't think many horror film junkies are like this)
Another interpretation is that humans are actually pretty bad at distinguishing emotions, and so, for example, heightened emotional states are equally confused. Being excited is not that different from being terrified.
This could also be an example of reinforcement. The thinking is: 'You were in danger, and you did something to avoid harm, so you should feel pretty darn good about whatever it was. Here are some endorphins'.
It's similar with hot food. Your pain sensors get triggered, but nothing bad happens to you. But the pain triggers the fight-or-flight response, which makes you feel awake and fit.
So over time your brain associates this kind of pain with the fun feeling of being awake and fit, but no danger, so you start to like the pain.
The "bad day" example is meant as a hypothetical scenario in which a singular bad experience would worsen the lion detection circuitry if reinforcement learning was applied to it.
Right, but it hinges on the detection actually being negative, whereas noticing a danger before it hurts you is generally exciting and stimulating, a positive experience.
It's entirely hypothetical. For this example it does not matter what would likely happen, but what *could* happen. I'd agree that it is not a particularly good example.
I think in the lion-in-corner-of-eye example a lot of people would freeze, which also explains the tax behavior.
It makes sense from an evolutionary perspective, where if a predator doesn't spot you they'll eventually leave you alone, but the IRS doesn't work that way.
The IRS is kind of weird. US law can subject you to criminal penalties for not submitting an honest tax return, but not for refusing to send the IRS money. However, if you *do* refuse to pay them, the IRS does get to take your stuff to get what it's owed plus interest and penalties.
My mailbox definitely projects an "Ugh Field" for me. I know I need to check it, but it only ever brings me junk mail or problems. So every time I think "I should check the mail" another part of me is thinking "Do I have time to solve problems right now? Do I want to? No and no."
Huh, this is an unrealized advantage of not having street delivery and instead only having a PO box. Lots of my packages (which I ordered and want/am looking forward to) go to the PO box, so the only way to get them is to check my mail pretty regularly.
This in no way outweighs the inconvenience of needing to drive 15 minutes whenever I have something I need/just to check the mail, but it _is_ an advantage I suppose.
Not exactly applicable, but you made me think of this:
Napoleon had a policy of ignoring incoming letters for three weeks. Most things "requiring" his attention didn't actually need him to do anything. Thus, many things just resolved themselves by the time he got around to reading the letter.
I find Plantinga's argument strange, in the sense that in most situations, having a belief closely aligned with the observed phenomenon seems clearly advantageous. To my knowledge, systematic discrepancies between belief and phenomenon correspond to relatively rare cases where both (1) it is difficult to determine what is true and (2) there is an imbalance between the costs of the different ways of being wrong (i.e. if you are not sure whether A or B is true, and wrongly believing A is much better than wrongly believing B, then believing A is better, even if B is slightly more probable).
The crux of the argument is that evolution should plausibly produce adaptive behaviors, but not truthful beliefs. It might be that truthful beliefs are adaptive, but not necessarily so, and evolution would only reinforce truthful beliefs that are adaptive but not ones that aren't. So if there are truthful beliefs that aren't adaptive, can we trust that our minds can find those truths? How can we be sure that truthful beliefs tend to be more adaptive than non-truthful beliefs when the thing we use to make that determination, our mind, is the very thing we're trying to determine the accuracy of? If the problem was that you were not sure whether a scientific instrument was reading accurately, you can't determine that by using the instrument in question alone.
Really the argument boils down to: if the human mind is the creation (regardless of method of creation, could still be evolution) of a rational mind trying to create more rational minds, then it makes sense that the human mind is rational. If the human mind is the product of blind forces that are only aimed at increasing fitness, then we can expect the human mind to be very fit for reproduction but have no reason to believe it is necessarily rational. But if the human mind is irrational, then we have no reason to believe it is the product of blind forces because that belief is also the product of an irrational mind.
It certainly seems that our minds are rational, in that we can understand concepts and think about them and come to true beliefs in a way that, say, a chicken can't. Given that data point (the human mind seems able to come to true beliefs), the "designed to be able to do that" model fits more parsimoniously than "maybe if you select only for fitness you'll get rational minds in some cases." It's not exactly a knock-down argument, and you can certainly disagree with it rationally.
I have a vague idea of an argument for why intelligence isn't adaptive past a certain point. Intelligent people are afraid of small probabilities and unknown unknowns. But these are where intellect is least useful, and analysis is most sensitive to assumptions. People who aren't as intelligent and aren't as imaginative, are more likely to just do stuff until they are stopped or killed. That could be systematically better for a population even if not for an individual. Because it explores phase space that is inaccessible to anyone who must *understand* what they are doing to do it.
Suppose there are 1,000 people facing some threat to all of them, and they can each do something with a 1% chance of success and a 99% chance of death. If they're all intelligent, then they won't do the thing until it becomes clear that the alternative is certain death. At which point it might be too late. But if they're fearless and oblivious, and all do the thing, then 10 people will survive, and carry on their risk-taking genes.
Someone smarter than me could put it more rigorously, but intuitively I feel that analytic pessimists do not take optimal amounts of risk from an evolutionary perspective.
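The toy model two comments up is easy to check with arithmetic: if each of 1,000 people independently takes a gamble with a 1% chance of survival, about 10 survive, whereas a population that all waits for certainty may get nothing. A quick sketch using the numbers from that comment (the function names are mine):

```python
import random

def expected_survivors(n_people, p_success):
    """Expected number of survivors if everyone takes the gamble."""
    return n_people * p_success

def simulate_survivors(n_people, p_success, seed=0):
    """Monte Carlo version of the same gamble: each person survives
    independently with probability p_success."""
    rng = random.Random(seed)
    return sum(rng.random() < p_success for _ in range(n_people))

print(expected_survivors(1000, 0.01))  # 10.0
print(simulate_survivors(1000, 0.01))  # close to 10 on most seeds
```

The spread matters too: a binomial(1000, 0.01) outcome is almost never zero, which is the group-selection intuition in the comment — the oblivious population reliably keeps *some* survivors.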
Empirically, evolution isn't producing runaway intelligence, so there must be some logical reason it doesn't increase fitness, right?
This assumes a risk distribution, not too unreasonably, like the one in which we evolved, where it is easy for me to get myself killed, possible for me to get my entire band wiped out, but nearly impossible for me to wipe out all humans or all life. We have changed our environment.
It's still easy enough to get oneself killed, and easier to wipe out humanity, than thousands of years ago. What the modern world has reduced, I think, is the effect of natural disasters that aren't global. And it's increased the individual benefit of being analytical, but, I think, only to a point.
It could also be the case that intelligence does increase fitness but that the cost of a bigger brain outweighs that fitness increase. That would be analogous to the explanation I'd use for why men don't have bigger muscles and why women don't have bigger breasts, despite the obvious fitness advantages.
After thinking about this, I don't understand either:
(1) Why it's "obvious" that those things would be better, or;
(2) How decomposing a reduction in fitness into a small increase and a bigger decrease is meaningful rather than an arbitrary choice applicable to everything.
I can't see any other explanation than sexual selection for the existence of D cups, and I can't see any other explanation than natural selection for the persistence of A cups despite this sexual selection, and to me these things both seem obvious.
If some variation affects fitness through multiple mechanisms, I think that understanding those mechanisms separately will give a better understanding of that variation. Experimentally, those mechanisms can be manipulated separately.
Using a heuristic that seeks truth without trying to filter on adaptivity might be more adaptive in the long run. It’s hard enough trying to figure out what's true. Is figuring out what is adaptive easier?
I thought that properly, something being "adaptive" means that it spreads through a population over time. Leading to death or preventing biological reproduction doesn't necessarily prevent a progression of changing statistics. I like that way of defining evolutionary fitness because it avoids mixing in human values and remains abstract and general.
I was answering the part about "figuring out" what is adaptive, which I thought was misleading, because organisms do not "figure it out", which of course does not in the least prevent adaptation from happening.
And yes, of course, adaptation optimizes the number of produced copies of genes, and not whether the individual bearing these genes dies or not, though the two are usually quite strongly related!
"The crux of the argument is that evolution should plausibly produce adaptive behaviors, but not truthful beliefs."
I completely agree with the argument, but I think it has little consequence, in the sense that I expect evolution has produced "reasonably" true beliefs in most cases (and no beliefs at all in many situations in our modern lives!) because true beliefs are generally adaptive.
There are many examples where evolution has produced beliefs that are not perfectly true, because when there is uncertainty it is often better to be wrong in one direction (e.g., predator detection), but I know of no examples of "very false" beliefs constructed by evolution.
I don't think so. We have many "slightly false" beliefs that are not insulated from analysis at all. For example, evolution gave us beliefs that spiders are dangerous, but we are very well able to determine that this is false for many spider species. The beliefs produced by evolution are expectations or emotions; it is perfectly possible to analyse them (though it can of course be difficult to act rationally even when we are conscious that a given belief is false).
> It might be that truthful beliefs are adaptive, but not necessarily so, and evolution would only reinforce truthful beliefs that are adaptive but not ones that aren't.
I've seen this claim often, but it's never accompanied by a plausible example that would make it convincing. It's basically asserting that there's no known reason why evolved adaptations would correspond to truth, and thus concludes that evolved adaptations may in fact imply false beliefs.
Not knowing the reason does not entail that such a reason doesn't exist, though, so the argument just doesn't follow. It's a premature conclusion at best.
The crux of the argument is that divine creation should plausibly produce behaviors that are in line with God's will, but not truthful beliefs. It might be that truthful beliefs are in line with God's will, but not necessarily so, and divine creation would only reinforce truthful beliefs that are in line with God's will but not ones that aren't. So if there are truthful beliefs that aren't in line with God's will, can we trust that our minds can find those truths? How can we be sure that truthful beliefs tend to be more in line with God's will than non-truthful beliefs when the thing we use to make that determination, our mind, is the very thing we're trying to determine the accuracy of? If the problem was that you were not sure whether a scientific instrument was reading accurately, you can't determine that by using the instrument in question alone.
Really the argument boils down to: if the human mind is the result of an evolutionary process which selects for rationality, then it makes sense that the human mind is rational. If the human mind is the product of intelligent design carrying out some inscrutable divine plan, then we can expect the human mind to be very fit for carrying out God's plan but have no reason to believe it is necessarily rational. But if the human mind is irrational, then we have no reason to believe it is the product of intelligent design because that belief is also the product of an irrational mind.
It certainly seems that our minds are rational, in that we can understand concepts and think about them and come to true beliefs in a way that, say, a chicken can't. Given that data point (the human mind seems able to come to true beliefs), the "evolved to be able to do that" model fits more parsimoniously than "maybe God wills there to be rational minds in some cases." It's not exactly a knock-down argument, and you can certainly disagree with it rationally.
This is actually a pretty good counterargument I haven't heard of before. At least in the case of evolution you have good reason to believe that reproductive ability correlates with truthfinding. There is no reason to believe an omnipotent entity wants to create rational minds.
The main objection is that it is at least plausible that a rational mind could be designed by a rational mind; it is harder to see how a rational mind could come about through irrational processes that are not aimed at rationality in any case. You can certainly object that a rational mind might design an irrational mind if they wanted to, but at least it isn't mysterious where the rationality came from if they designed a rational mind.
I never said that. I said one side was at least plausible on its face, but the other side is a bit harder to see if it is plausible. Do you disagree that a rational mind designing another rational mind is plausible? Do you disagree that it's more obviously plausible than a non-rational process that isn't aimed at producing rationality producing one?
As for the rules portion of his argument, I think Plantinga's argument makes more sense as a counter-argument against 1970s-era rules-based AI. Which is fair enough, since that kind of AI is no longer seen as an effective tool with which to model human behavior anyways.
I think Plantinga is a bit binary in terms of labeling things 'rational' vs. 'irrational.'
Don't put too much by my summary: I don't think he'd necessarily use the labels "rational" and "irrational" in his formal argument, I just grabbed them as easy words to use in a short explanation of the argument. Obviously what it means to be "rational" in this sense requires a lot of defining of terms.
Yes, yes! I totally agree! I think that is the usual mistake that mathematicians/philosophers make when dealing with living things: they consider "very slightly false" to be identical to false, whereas in biology, "truthfulness" is better considered quantitatively, with "very slightly false" in practice very similar to "correct".
Yeah, in his actual argument, Plantinga deliberately avoids the binaries of 'true', 'false', 'rational' and 'irrational'. He talks about a type of belief he calls a "defeater": a belief which makes other beliefs seem less likely, but in some other way than by directly disproving them.
I claim that you've made a dozen bad life decisions relating to this in the past week, and that several forms of therapy are built on top of it (though they wouldn't frame it in those terms).
The ugh fields post linked above has a lot of examples of bad life decisions. In terms of therapy, more speculative, and I'll write more on it later, but I think a lot of blocks in therapy relate to something like "It is easier and more fun to come up with a theory of why I shouldn't have to feel bad about this problem, than to solve it", which I think is this same idea of reward operating on epistemics instead of action.
That is, if there's a problem (you are poor), and your brain can solve it by either coming up with some kind of theory in which it's rewarding (rich people suck, they've all sold out, my poverty actually makes me a good person) or by working hard to get more money, and your brain is indifferent between getting reward by epistemic changes or by real-world changes, making the real-world changes is going to be a really tough sell. My guess is it's much more complicated than this, but I do think this is part of the dynamic.
Ok, but you didn't say it was "untethered from the physical nuts and bolts of MY daily life", you said it was from "daily life". A reasonable reading of this is that you think it doesn't apply to the daily life of any significant number of people.
Can people really stop feeling something because they have a theory? Or is your model really that people *cannot* stop feeling bad and that's *why* they have to solve the real problem?
Regarding procrastination and ugh fields, here's an interesting and relevant question: Do animals procrastinate? I don't know the answer, but it certainly seems quite possible to study it. For instance, squirrels gathering nuts for winter: Are they at all like students studying, i.e. gathering information, for an exam? Do squirrels, like students, often laze around till the deadline is near, then put out a burst of effort that exhausts them and sometimes does not permit gathering of a sufficient store of the resource in question? Maybe squirrels' behavior with nuts isn't the best thing to study, since the nuts don't land on the ground at a steady rate all fall but instead all drop over a short span of time. But there have to be some situations where animals really do have the option to procrastinate. And if there aren't, we could create one artificially in the lab with rats.
Anyone here know anything about this?
We all know that grasshoppers procrastinate and ants don't.
> The ugh fields post linked above has a lot of examples of bad life decisions. In terms of therapy, more speculative, and I'll write more on it later
If you want to do a post of ugh fields I would greatly appreciate it.
This is the kind of comment that makes me really curious about what a deleted post used to say.
It was just a cleverly worded cheap shot, inaccurately implying this discussion doesn’t have practical value.
I thought your original comment was a little hostile, but I really appreciated your follow-up comments to me (that you've since deleted) and also wanted to reply that I'm sympathetic to what you described as part of your impulse to comment, tho I think having discussions like those in this post is a perfectly fine hobby to have :)
To remove any mystery here, I’ll say that part of it was a feeling of “Don’t these fellas have any real problems?”
I don't think it was a cheap shot. I think your underlying point was important and sound, and it resonated with me. It *is* a little worrying when the best and brightest young folks are seriously debating how many angels can dance on the head of a pin *and* seem at least somewhat unaware that the question is a priori frivolous. One can't help thinking someone ought to say something. Recall the wry popular quote: "If everybody contemplates the infinite instead of fixing the drains, many of us will die of cholera." There's a lot of fixing the drains stuff that still needs to get done in 2022.
If I had to guess, I'd say you regretted that you expressed it in a tone that came closer to contempt than bemusement, which unnecessarily antagonizes and derails your point. Fair enough, but I could wish you had rephrased it rather than eliminated it.
The tone was way off base. I don’t want to ruin an apology with an explanation.
Cleverly worded cheap shots are deletable? I assume that includes sardonic, obscenity-free ad hominem. What a world we live in where judiciously phrased, non-vulgar insults are considered outside the bounds of reasoned discussion. We are all snowflakes now, aren't we?
I’ve been trying to eliminate snark from my conversations. Serves no purpose if you’re seriously trying to communicate.
Gunflint deleted their own comment. I didn't consider it "outside the bounds of reasoned discussion" – I, and several others, directly discussed their comment!
Ok, great. Sorry for assuming.
I'm pretty sensitive to the price of pasta, potatoes, flour, and rice. I'm somewhat aware of the price of meat. But I don't live on milk. It is something I buy rather sporadically, if I get a craving for something I don't eat regularly - rice pudding, or cold cereal.
I suspect that milk consumption in the US is excessive due to PR from the dairy industry driven by...motivated reasoning.
Anyway, I was wondering if Scott has talked to anyone who is diagnosed with schizophrenia recently, and what he's observed or assumes to be their relationship with "motivated reasoning". Sub-question is, do people with schizophrenia go to therapy?
Something I found interesting as a difference in perspective between generations that may get lost in time: I've heard/read people scorn skim milk as unpleasant, watery, and blue-colored. At the same time, the products of a family farm are idealized as wholesome, rich, organic, unprocessed. But someone I knew who grew up on a farm in the 1940s didn't like whole milk; he considered it greasy and distasteful. Why? Conditioning. On the farm, the fat was always skimmed off; otherwise you wouldn't have butter and cream.
Possibly they didn’t like it because it wasn’t homogenized. Whole milk is actually worse to drink when the fat floats in greasy globs instead of being smoothly mixed in.
I haven't ever seen non-homogenized milk in a grocery store.
Me neither. If he lived on a farm he may have drunk milk from his own cows, and that milk wouldn’t necessarily be homogenized. That’s just what comes to mind when I hear milk described as “greasy.”
You can buy non homogenised milk as a specialty item in a few shops. I've had it.
It's probably got a lot to do with what you're accustomed to. Skim milk is more watery than whole milk, but which you actually like better is subjective. If you're used to a certain level of thickness, different levels are probably going to taste wrong.
(On a related note, sugar-sweetened soda gives me heartburn but diet soda doesn't. After drinking only diet soda for a while, regular soda now tastes too syrupy for me.)
Certainly from the few people I know with schizophrenia, many of them go to therapy - there's a lot of useful technique to be learned re: how to calm down enough to not act on the more dangerous of the unshared experiences, how to survive the anhedonia side effects of the antipsychotics, etc.
>how to calm down enough to not act on the more dangerous of the unshared experiences
I think this advice could be harmful to anyone who takes it at face value. There's an inherent runaway positive feedback loop. If a consumer discloses that they are concerned about anything, then they have conceded there might be a danger, that they aren't sure of the level, and irrevocably put judgement of it in the hands of others who cannot read minds. Sharing with someone who has professional responsibilities to take action, on balance, might be worse even than talking to a lay friend.
I can't imagine any way to fix the dichotomy between suffering alone from an illness and overreaction/misunderstanding other than coming up with an objective way to determine states of mind based on biological markers. If there is one thing I would like to see in my lifetime it's something analogous to blood sugar and A1C tests, only for depression, suicidality, aggression, psychosis, etc.
Obviously it could be used in harmful ways and many (especially paranoid) people would be fearful of a neo-eugenicist movement, but I think one has to come to terms with how harmful and intractable it is to *not* have any objective measurement to base life and death decisions on.
And I do not believe that good treatments can be developed for anything that does not even have a real definition based on measurement.
Doing one's taxes is "untethered from the physical nuts and bolts of daily life"?
You did qualify that your comment was "probably unfair" but I guess I'm still confused why you bothered commenting at all.
What is the connection between locksmithing and going toward strangers in the dark?
I think he's saying locksmiths had to approach strangers in the dark (who were maybe locked out of their car, or are maybe hoping to rob him) and this is a scary action
In a nutshell, for me at the very least, the Catholic faith solves this problem. From a skeptical outside perspective, ignore the deistic epistemology entirely and focus on the phenomenology.
One benefit of faith is that it makes your utility function non-monotonic and flexible. It does this by putting a presumably conscious entity at the peak of the hierarchy of values (think virtue ethics here). So you get an ultimate determiner of value that is pretty rigid but can decide between competing values. If Christ isn't this for you, well, I enjoy sharing this experience with 1 billion+ people
¯\(°_o)/¯
24 hour later edit:
A few responders to this post seem to be operating on an assumption that no one is capable of being very intelligent, well-informed, intellectually honest, and religious simultaneously. While I don't agree with this perspective, even though it may be a strawman, I suspect many of the benefits of this way of thinking could apply to e.g. HPMOR fans who want to consider the actions of their favorite character rather than the deity I choose to follow. I think good fiction is invaluable; it allows for an idealized and situationally unique perspective, to which one might consider themselves an "apprentice" in a way that they would balk at doing for a real person. I think holding yourself to a higher ideal is generally a great thing, even if it's not mine (and from where I'm sitting, if my assessment is correct, an honest and courageous attempt at this will have the same ultimate endpoint one way or another).
(PS I have cried more than once at HPMOR Harry and his interactions with dementors and phoenixes; it's better than the original IMO)
Can you say more here? Having trouble relating this to the post.
I can say more, but I did preface it by saying nutshell. I'll try some additional bullet points for now and if more clarity is needed I can provide it later.
- utility functions that aren't monotonic are susceptible to "Dutch booking" ie a form of exploiting cycles in values
- having a rigid hierarchy of values is basically deontology, which can lead you to ratting out a friend in a Kantian murder mystery because you aren't clever enough to be evasive rather than lie; if lying is always wrong, and saving your friend is usually right given the circumstances, this can lead to ineffective consumption of cognitive resources on intractable problems (also a critique of much utilitarianism)
- if you assume virtue ethics, then you have the company of roughly a billion people also asking the question "what would Jesus do?"
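The Dutch-booking point in the first bullet can be made concrete with a toy money pump (this is entirely my own illustrative sketch; the option names and fee are made up): an agent with cyclic preferences A > B > C > A will pay a small fee for each "upgrade" and end up holding what it started with, strictly poorer.

```python
# Cyclic (intransitive) preferences: the key (x, y) -> x means
# "the agent prefers x over y". Note the cycle A > B > C > A.
preferences = {("A", "B"): "A", ("B", "C"): "B", ("C", "A"): "C"}

def trade(holding, offer, money, fee=1):
    """Swap to `offer` if the agent prefers it over `holding`,
    charging a small fee for the privilege."""
    if preferences.get((offer, holding)) == offer:
        return offer, money - fee
    return holding, money

holding, money = "C", 100
for offer in ["B", "A", "C"]:  # walk the preference cycle once
    holding, money = trade(holding, offer, money)

print(holding, money)  # back to "C", but 3 units poorer: C 97
```

Each individual trade looks like a strict improvement to the agent, which is exactly why cyclic preferences are exploitable.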
I mean if being tractable is the only thing you care about, why not just base your ethics around something simple, like maximizing the amount of hydrogen in the universe or increasing the amount of entropy? Most people care more about their ethics actually being right than they do about them being easy to follow.
Right, this is why I asked to ignore deistic epistemology. If you want Catholic apologetics I'm not your man, I'll just say my personal process of developing my faith has included a lot of wrestling with God which is also an ongoing process at times.
I agree with Paul above, and I don't see how your comment addresses his point. Also, your initial argument sounded something like "Catholicism simplifies ethics", but this comment sounds like even that might not be the case.
Tractable perhaps, but Catholic ethics is anything but a simplification of the problem. Tbh, if you think that, I question whether you're making a good-faith effort to understand my position.
As to bringing it back to original point, I suspect having an overarching moral/ethical/ideological system is an extreme benefit to motivational and decisional paralysis, which to me does seem to address the gist of the problem with "motivated reasoning" and reinforcement learners.
Well, if you are in a reflexive equilibrium such that choosing a utility function involving maximizing H2 and choosing one that maximizes human happiness are indifferent, then you're probably better off going with the first. The problem is that that is not where we find ourselves: we prefer happiness to H2, and there is nothing we really can, not to mention should, do about it. Whereas if having a simple, tractable utility function (doing what God says maximizes utility) still seems to conform to human values (God says to do things I generally agree are good), then that would, initially, seem to solve the problem quite neatly, by being both tractable and ethical.
Of course, there are plenty of other objections to that paradigm ("How do I actually know what God wants me to do here? If God's decrees are isomorphic to what I already think of as good, why is this elaborate utility function necessary, and if they aren't, then doesn't that create more problems?"). But I think this particular objection is answerable.
What's the answer then? I'm not asking to be tedious. I don't see a good answer.
My point wasn’t to provide a good answer, but merely to explain why this objection didn’t make sense. I think if you work it out the “Christianity (or any deistic religion with certain qualities) solves certain self referential problems in ethics and motivation” could work, but you would have to do far more work to get there than OP did.
> having a rigid hierarchy of values is basically deontology, which can lead you to ratting out a friend in a Kantian murder mystery because you aren't clever enough to be evasive rather than lie; if lying is always wrong, and saving your friend is usually right given the circumstances, this can lead to ineffective consumption of cognitive resources on intractable problems (also a critique of much utilitarianism)
I think you need to be clearer about what rigid means. Utilitarianism doesn't approximate deontology so long as utility is finite, because a sufficient number of specks has more negative utility than torture.
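The "sufficient number of specks" claim only needs the disutilities to be finite and nonzero. With made-up numbers in arbitrary units (illustrative only, not anything from the original comments), the arithmetic is:

```python
# Made-up finite disutilities, in arbitrary units: each dust speck
# carries a tiny but nonzero disutility; torture carries an enormous
# but finite one. Then some finite number of specks must outweigh
# the torture, however large we make the torture's disutility.
speck_disutility = 1                  # one speck: tiny but nonzero
torture_disutility = 1_000_000_000    # torture: enormous but finite

n_specks_to_outweigh = torture_disutility // speck_disutility + 1
print(n_specks_to_outweigh)  # 1000000001 specks outweigh one torture
```

The point is structural: only if utilities are allowed to be infinite (or lexically ordered) can the deontology-like "no number of specks ever outweighs torture" survive.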
The problem of hell and comparing infinities is something I wrestled with for a long time, and it has led me to some difficulties concerning heresy. I don't think there's a consensus on the topic among theologians more interested, expert, and intelligent than my amateur musings, and I defer to them.
I'd be quite interested in hearing your thoughts about the problem of hell.
When I was a Christian, I simply didn't believe that hell existed. It just felt completely against my personal experience of the divine. The Benevolent God, whose existence I directly felt and saw, would never have created such a place. It contradicted everything on multiple levels. So I just never even thought about it as a philosophical problem.
Later I understood that all my religious experiences were not my communication with God, but a glimpse into my own values and ethics. And this allowed me to think about the problem of hell and be really terrified. Not of the cruelty of God, of course, but of people who simultaneously claim that their ethics comes from their religion and that somebody being eternally tortured for a finite amount of evildoing is part of this religion. If one's ethics can accept hell, it can justify literally anything.
How do you deal with the existence of hell? This mode of thinking has never been possible for me, so I'm very curious.
I spent the last few years as agnostic and anti-religion and recently converted to Christianity. One of my big gripes with Christianity was the idea of hell until I came to the conclusion that popular understanding of heaven and hell doesn’t reflect what’s actually spelled out in scripture.
The way I understand it now is that God is Love and Hell isn't a place; it is the absence of love. God doesn't send anyone to hell, but we instead choose Love or its absence.
Closest analogy I can come up with: you’re dating the perfect significant other. You lie, apologize, they forgive you. You cheat, apologize, they forgive you. But you continue cheating and lose the remorse. They call off the wedding and you are stuck for the rest of your life with the agony of losing the perfect spouse. That is Hell.
Important to note that this also means a deeply Loving/selfless Hindu, Zoroastrian, Atheist, etc already knows God and will spend eternity with God
I don't really have any original or personal thoughts on the topic. What I'll say is, for most things I seem to side with Thomists, but on this question I do "dare to hope", since it's the most legitimate seeming option that's available to me.
https://www.youtube.com/watch?v=dmsa0sg4Od4
I'm going to post this here and rely on some security through obscurity with my Google profile; I will come back and delete it once someone comments reminding me to do so (please do this if you've got the link saved). I don't want to share early drafts that I haven't at least presented to clergy, but this is my most sincere answer to how I (currently and after attempting to study the issues, but without any claim to expertise) deal with squaring my faith with my understanding of the world.
https://docs.google.com/document/d/1ItOgM-zI5Ybtp_4TTg4xB23BdNryQhIF-z5hUlX4MxE/edit?usp=drivesdk
> a sufficient number of specks has more negative utility than torture.
Does the majority of utilitarians actually agree with this? I seem to recall that there was plenty of argument on LW on this, and no clear consensus emerged.
I wonder if this also ties in to self-identity. I remember reading about if kids are praised for being smart, they can start to self-identify as smart and so become reluctant to try new things (since they will initially be bad at them, which clashes with their self-identity). Maybe that was bunk but I think I can see it a little bit in my own life.
If we start to self-identify as being right, or smart, it might make it more painful to update our beliefs because being wrong in the past could clash with that identity. I could see the same thing happening with virtue too (although I've read another Christian say the view is more that everyone is a sinner, which if practiced might prevent that).
For now, I try to think of the truth as a model. The world is too complex to totally understand, but we imagine simplified models in order to navigate it and find patterns. It would be unusual to be upset at finding a better layout for a model train set, since in that context the goal is usually to *have* the best model rather than to *be the best at building models*.
I like this train of thought. One thing I've found helpful is to not only think, what would Jesus do? But also, what would my "saint self" do. This approach has pros and cons, but fortunately the Catholic approach makes both methods isomorphic.
Adding another 2c to this:
I once heard someone (I think Leah Libresco, but I'm not highly confident in this attribution) describe the "communion of saints" as an attempt to provide cross-temporal Bayesian agreement on theological questions. I really like this framing, as it justifies the variety and sometimes imperfect nature of recognized saints. Few if any saints were virtuous their entire lives, and some of the most notable were quite sinful (a great example here is the apostle Paul, who prior to his conversion persecuted Christians).
Another great benefit, imo, is that Christ was incarnate in a particular time and place, which means that though his example was indeed perfect, it has limited utility in informing people in other contexts. So saints, while not perfect, at least by the end led sufficiently holy lives for the Church to confidently state that they're in heaven, which is a great element for a living faith to have as examples.
The "everyone is a sinner" thing is really good for solving that sort of problem. My own experience with Christianity was that it solved a lot of circular reasoning.
Suppose I'm depressed. I could solve the problems that are exacerbating that depression, but then I'd have to think hard about depressing things, which I obviously don't want to do. The actual truth is that, because of the aforementioned debilitating depression, at this moment I do indeed suck and am doing bad things with my life. But if everyone's a sinner and Jesus loves me anyways, then I have motivation to examine my life with clear eyes. No matter what I find, it won't decrease Christ's opinion of me (even though it will temporarily reduce my own opinion of myself). Then I can make the changes that allow me to love myself like Christ does.
Buddha's four noble truths (notably the one about "everything sucks all the time just cause") can achieve a similar end
I'm a little confused -- what does the problem have to do with ethics ? I suppose you could say that the fear of opening your tax book because you know you'll find taxes in it, could be labeled as "sloth", but it's a bit of a stretch.
When you write "and religious", my question is, what is "religious"? It's kind of a cliche that people supposedly like to say they're spiritual but not religious. I don't know if you are using that sense of "religious" or even what the cliche really means.
I'm not religious in the sense of belonging to any community, attending regular services, preferring one type of Christianity, and so on. That doesn't seem directly related to being intelligent or honest.
I was raised by people who I think were essentially atheists but probably would've abhorred the label. I think, but am not sure, that they came from their respective very religious backgrounds and were turned off by hypocrisy and bad behavior of religious people. I had no bar mitzvah or baptism or confirmation or anything. But I was given a Bible and I read a fair amount of it, plus later on stuff by C.S. Lewis, a biography of a saint, etc.
Recently, I've been talking to someone with a Catholic background, and it perplexes me that they seem to not be very familiar with the Bible, to believe in all sorts of new-agey things that seem to me in conflict with Catholicism, but they don't seem to be consciously hostile to it either.
When I started reading about the Inquisition, while I don't accept the *premises*, the theological debate over whether and which magic or astrology is compatible with Christianity makes some sense to me within a closed system. Normal people are oblivious or think of witch trials and/or Monty Python, though, right?
Conversely, contemporary magical beliefs like the law of attraction completely confuse me as to why someone would accept them, and not even see a conflict with traditional beliefs. But I don't think it's normal to analyze things like a Talmud scholar.
I guess where I'm meandering to, is that religious (or Christian) can mean many things, and I'm doubtful that there is even an intersection between what is it logical for it to mean and what it is normal for it to mean.
I'd go with a pretty standard usage: attend church and confession with some regularity, and deferring to the hierarchy on spiritual matters.
There you go, that's exactly what I'm talking about. Sure, that is "standard", but from what I read it excludes most Catholics! And my anecdote is not inconsistent with that.
Also, I notice you don't even include "reading the Bible".
"Most Catholics worldwide disagree with church teachings on divorce, abortion and contraception and are split on whether women and married men should become priests, according to a large new poll released Sunday and commissioned by the U.S. Spanish-language network Univision."
https://www.washingtonpost.com/national/pope-francis-faces-church-divided-over-doctrine-global-poll-of-catholics-finds/2014/02/08/e90ecef4-8f89-11e3-b227-12a45d109e03_story.html
What if it would make your day great to see the lion because you’d run away and then feel like a hero?
I think this is the wrong level on which to engage with the example.
Is this the way to like scary movies? Not “I like to be scared,” but “I like the sequence of feeling scared and then realizing I’m totally safe/heroically safe at home”?
I definitely think part of the appeal of horror (whether film, game, or attraction) is the ability to feel fear in a controlled environment where no ACTUAL danger can happen to you*. Of course, people have their own personal tolerances for this, and some people have their risk-assessment feedback set to the point that even simulated peril is unacceptable.
*Provided nothing goes severely wrong
I've heard this a lot, but never with any evidence I find convincing. I think it might be the kind of thing that sounds nice so no one wants to reject it.
My experience with horror is more like spicy food. It's literally activating pain receptors, but so detached from pain, and so mixed up with a particular sensory context, that the ostensibly bad stimulus becomes more of a unique overtone creating a richer more textured flavor. (Yes, I know some people eat spicy food just to show off their pain tolerance, but I don't think many horror film junkies are like this)
Another interpretation is that humans are actually pretty bad at distinguishing emotions, and so, for example, heightened emotional states are easily confused with one another. Being excited is not that different from being terrified.
See also: roller coasters
This could also be an example of reinforcement. The thinking is: 'You were in danger, and you did something to avoid harm, so you should feel pretty darn good about whatever it was. Here are some endorphins'.
It's similar to hot food. Your pain sensors get triggered, but nothing bad happens to you. The pain still triggers the fight-or-flight response, which makes you feel awake and fit.
So over time your brain associates this kind of pain with the fun feeling of being awake and fit, but no danger, so you start to like the pain.
The "bad day" example is meant as a hypothetical scenario in which a singular bad experience would worsen the lion detection circuitry if reinforcement learning was applied to it.
Right, but it hinges on the detection actually being negative, whereas noticing a danger before it hurts you is generally exciting and stimulating, a positive experience.
It's entirely hypothetical. For this example it does not matter what would likely happen, but what *could* happen. I'd agree that it is not a particularly good example.
I think in the lion-in-corner-of-eye example a lot of people would freeze, which also explains the tax behavior.
It makes sense from an evolutionary perspective, where if a predator doesn't spot you they'll eventually leave you alone, but the IRS doesn't work that way.
arguably the IRS does work that way, but they have more object permanence than your average predator
You made my day. That's one of the things I love about this community. Interesting, funny and (because?) so on-point
Tax advice: if you can’t see the IRS, the IRS can’t see you, so invest in an eyepatch and stop paying taxes.
You can probably avoid the IRS by playing dead. By that I mean, faking your own death and moving to Brazil.
+1 for the freeze hypothesis
The IRS is kind of weird. US law can subject you to criminal penalties for not submitting an honest tax return, but not for refusing to send the IRS money. However, if you *do* refuse to pay them, the IRS does get to take your stuff to get what it's owed plus interest and penalties.
My mailbox definitely projects an "Ugh Field" for me. I know I need to check it, but it only ever brings me junk mail or problems. So every time I think "I should check the mail" another part of me is thinking "Do I have time to solve problems right now? Do I want to? No and no."
Huh, this is an unrealized advantage of not having street delivery and instead only having a PO box. Lots of my packages (which I ordered and want/am looking forward to) go to the PO box, so the only way to get them is to pretty regularly check my mail.
This in no way outweighs the inconvenience of needing to drive 15 minutes whenever I have something I need/just to check the mail, but it _is_ an advantage I suppose.
I get my packages delivered to my office, which makes my mailbox even worse. Not even a chance of a cool package!
Not exactly applicable, but you made me think of this:
Napoleon had a policy of ignoring incoming letters for three weeks. Most things "requiring" his attention didn't actually need him to do anything. Thus, many things just resolved themselves by the time he got around to reading the letter.
I am writing this comment in large part to avoid looking at my inbox.
This reminds me very much of Plantinga's evolutionary argument against naturalism. https://en.wikipedia.org/wiki/Evolutionary_argument_against_naturalism#Plantinga's_1993_formulation_of_the_argument
In case that's helpful.
I find Plantinga's argument strange, in the sense that in most situations, having a belief closely aligned with the observed phenomenon seems clearly advantageous. To my knowledge, systematic discrepancies between belief and phenomenon correspond to relatively rare cases where both (1) it is difficult to determine what is true and (2) there is an imbalance between the costs of the different ways of being wrong (i.e., if you are not sure whether A or B is true, and wrongly believing A is much better than wrongly believing B, then believing A is better, even if B is slightly more probable).
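The asymmetric-cost point can be made concrete with a toy expected-payoff calculation. All the probabilities and payoffs below are invented numbers purely for illustration, not anything from the literature:

```python
# Toy illustration: when errors have asymmetric costs, the belief that
# maximizes expected payoff need not be the most probable one.
# All numbers below are made up for illustration.

p_A = 0.45  # probability that A is actually true
p_B = 0.55  # probability that B is actually true (B slightly more probable)

# Payoffs: being right costs nothing either way, but wrongly believing B
# is far more costly than wrongly believing A (e.g., missing a predator).
payoff = {
    ("believe A", "A true"): 0,
    ("believe A", "B true"): -1,    # mildly wrong
    ("believe B", "B true"): 0,
    ("believe B", "A true"): -100,  # catastrophically wrong
}

ev_believe_A = p_A * payoff[("believe A", "A true")] + p_B * payoff[("believe A", "B true")]
ev_believe_B = p_B * payoff[("believe B", "B true")] + p_A * payoff[("believe B", "A true")]

print(ev_believe_A)  # -0.55
print(ev_believe_B)  # -45.0
```

Even though B is the more probable hypothesis, the expected payoff of believing A is higher, which is exactly the kind of systematic belief/reality discrepancy selection could favor.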
The crux of the argument is that evolution should plausibly produce adaptive behaviors, but not truthful beliefs. It might be that truthful beliefs are adaptive, but not necessarily so, and evolution would only reinforce truthful beliefs that are adaptive but not ones that aren't. So if there are truthful beliefs that aren't adaptive, can we trust that our minds can find those truths? How can we be sure that truthful beliefs tend to be more adaptive than non-truthful beliefs when the thing we use to make that determination, our mind, is the very thing we're trying to determine the accuracy of? If the problem was that you were not sure whether a scientific instrument was reading accurately, you can't determine that by using the instrument in question alone.
Really the argument boils down to: if the human mind is the creation (regardless of method of creation, could still be evolution) of a rational mind trying to create more rational minds, then it makes sense that the human mind is rational. If the human mind is the product of blind forces that are only aimed at increasing fitness, then we can expect the human mind to be very fit for reproduction but have no reason to believe it is necessarily rational. But if the human mind is irrational, then we have no reason to believe it is the product of blind forces because that belief is also the product of an irrational mind.
It certainly seems that our minds are rational, in that we can understand concepts and think about them and come to true beliefs in a way that, say, a chicken can't. Given that data point (human mind seems able to come to true beliefs), the "designed to be able to do that" model fits more parsimoniously than the "maybe if you select for only fitness you'll get rational minds in some cases." It's not exactly a knock down argument, and you can certainly disagree with it rationally.
I have a vague idea of an argument for why intelligence isn't adaptive past a certain point. Intelligent people are afraid of small probabilities and unknown unknowns. But these are where intellect is least useful, and analysis is most sensitive to assumptions. People who aren't as intelligent and aren't as imaginative, are more likely to just do stuff until they are stopped or killed. That could be systematically better for a population even if not for an individual. Because it explores phase space that is inaccessible to anyone who must *understand* what they are doing to do it.
Suppose there are 1,000 people facing some threat to all of them, and they can each do something with a 1% chance of success and a 99% chance of death. If they're all intelligent, then they won't do the thing until it becomes clear that the alternative is certain death. At which point it might be too late. But if they're fearless and oblivious, and all do the thing, then 10 people will survive, and carry on their risk-taking genes.
Someone smarter than me could put it more rigorously, but intuitively I feel that analytic pessimists do not take optimal amounts of risk from an evolutionary perspective.
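The 1,000-people hypothetical above can be sanity-checked with a quick simulation. The numbers are just the ones from the hypothetical (1,000 people, 1% individual success rate); nothing here is empirical:

```python
# Quick check of the hypothetical: 1,000 people each face a do-or-die
# gamble with a 1% individual chance of survival. If everyone attempts
# it, the expected number of survivors is 1000 * 0.01 = 10.
import random

random.seed(0)  # for reproducibility

def avg_survivors(population=1000, p_success=0.01, trials=1000):
    """Average number of survivors per trial if everyone takes the gamble."""
    total = 0
    for _ in range(trials):
        total += sum(1 for _ in range(population) if random.random() < p_success)
    return total / trials

avg = avg_survivors()
print(avg)  # close to the expected value of 10
```

So the fearless-and-oblivious population reliably keeps about 10 carriers of its risk-taking genes, whereas a population that unanimously waits may get zero.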
Empirically, evolution isn't producing runaway intelligence, so there must be some logical reason it doesn't increase fitness, right?
This assumes a risk distribution, not too unreasonably, like the one in which we evolved, where it is easy for me to get myself killed, possible for me to get my entire band wiped out, but nearly impossible for me to wipe out all humans or all life. We have changed our environment.
It's still easy enough to get oneself killed, and easier to wipe out humanity, than thousands of years ago. What the modern world has reduced, I think, is the effect of natural disasters that aren't global. And it's increased the individual benefit of being analytical, but, I think, only to a point.
“easier to wipe out humanity” was my point, though maybe not much of one.
Evolution is slow. Why isn’t intelligence more culturally adaptive? Or maybe it is, and we’re not noticing?
It could also be the case that intelligence does increase fitness but that the cost of a bigger brain outweighs that fitness increase. That would be analogous to the explanation I'd use for why men don't have bigger muscles and why women don't have bigger breasts, despite the obvious fitness advantages.
After thinking about this, I don't understand either:
(1) Why it's "obvious" that those things would be better, or;
(2) How decomposing a reduction in fitness into a small increase and a bigger decrease is meaningful rather than an arbitrary choice applicable to everything.
I can't see any other explanation than sexual selection for the existence of D cups, and I can't see any other explanation than natural selection for the persistence of A cups despite this sexual selection, and to me these things both seem obvious.
If some variation affects fitness through multiple mechanisms, I think that understanding those mechanisms separately will give a better understanding of that variation. Experimentally, those mechanisms can be manipulated separately.
Using a heuristic that seeks truth without trying to filter on adaptivity might be more adaptive in the long run. It’s hard enough trying to figure out what's true. Is figuring out what is adaptive easier?
"Is figuring out what is adaptive easier?" It is very easy. All living organisms do that very efficiently by dying when badly adapted.
I thought that properly, something being "adaptive" means that it spreads through a population over time. Leading to death or preventing biological reproduction doesn't necessarily prevent a progression of changing statistics. I like that way of defining evolutionary fitness because it avoids mixing in human values and remains abstract and general.
I was answering the part about "figuring out" what is adaptive, which I thought was misleading, because organisms do not "figure it out" — which of course does not in the least prevent adaptation from happening.
And yes, of course, adaptation optimizes the number of copies of genes produced, not whether the individual bearing those genes dies or not, though the two are usually quite strongly related!
"The crux of the argument is that evolution should plausibly produce adaptive behaviors, but not truthful beliefs."
I completely agree with the argument, but I think it has little consequence, in the sense that I expect evolution has produced "reasonably" true beliefs in most cases (and no beliefs at all in many situations in our modern lives!) because true beliefs are generally adaptive.
There are many examples where evolution has produced beliefs that are not perfectly true, because when there is uncertainty it is often better to be wrong in one direction (e.g., predator detection), but I know of no examples of "very false" beliefs constructed by evolution.
"I know of no examples of "very false" beliefs constructed by evolution."
If there were such, they would have to be well insulated from analysis somehow; thus, one would not expect to be aware of them.
I don't think so. We have many "slightly false" beliefs that are not insulated from analysis at all. For example, evolution gave us the belief that spiders are dangerous, but we are very well able to determine that this is false for many spider species. The beliefs produced by evolution are expectations or emotions; it is perfectly possible to analyse them (though it can of course be difficult to act rationally even when we are conscious that a given belief is false).
I was assuming there's an important semantic distinction between "slightly" and "very". What does "very false" mean, as opposed to "slightly false"?
Yes, "a wizard did it" is a very parsimonious explanation, to a certain type of person.
> It might be that truthful beliefs are adaptive, but not necessarily so, and evolution would only reinforce truthful beliefs that are adaptive but not ones that aren't.
I've seen this claim often, but it's never accompanied by a plausible example that would make it convincing. It's basically asserting that there's no known reason why evolved adaptations would correspond to truth, and thus concludes that evolved adaptations may in fact imply false beliefs.
Not knowing the reason does not entail that such a reason doesn't exist, so that argument just doesn't follow. It's a premature conclusion at best.
The crux of the argument is that divine creation should plausibly produce behaviors that are in line with God's will, but not truthful beliefs. It might be that truthful beliefs are in line with God's will, but not necessarily so, and divine creation would only reinforce truthful beliefs that are in line with God's will but not ones that aren't. So if there are truthful beliefs that aren't in line with God's will, can we trust that our minds can find those truths? How can we be sure that truthful beliefs tend to be more in line with God's will than non-truthful beliefs when the thing we use to make that determination, our mind, is the very thing we're trying to determine the accuracy of? If the problem was that you were not sure whether a scientific instrument was reading accurately, you can't determine that by using the instrument in question alone.
Really the argument boils down to: if the human mind is the result of an evolutionary process which selects for rationality, then it makes sense that the human mind is rational. If the human mind is the product of intelligent design carrying out some inscrutable divine plan, then we can expect the human mind to be very fit for carrying out God's plan but have no reason to believe it is necessarily rational. But if the human mind is irrational, then we have no reason to believe it is the product of intelligent design because that belief is also the product of an irrational mind.
It certainly seems that our minds are rational, in that we can understand concepts and think about them and come to true beliefs in a way that, say, a chicken can't. Given that data point (human mind seems able to come to true beliefs), the "evolved to be able to do that" model fits more parsimoniously than the "maybe God wills there to be rational minds in some cases." It's not exactly a knock down argument, and you can certainly disagree with it rationally.
This is actually a pretty good counterargument I haven't heard of before. At least in the case of evolution you have good reason to believe that reproductive ability correlates with truthfinding. There is no reason to believe an omnipotent entity wants to create rational minds.
The main objection is that it is at least plausible that a rational mind could be designed by a rational mind; it is harder to see how a rational mind could come about through irrational processes that are not aimed at rationality in any case. You can certainly object that a rational mind might design an irrational mind if they wanted to, but at least it isn't mysterious where the rationality came from if they designed a rational mind.
"My preferred theory is plausible and yours isn't, because I say so" isn't an argument.
I never said that. I said one side was at least plausible on its face, but the other side is a bit harder to see if it is plausible. Do you disagree that a rational mind designing another rational mind is plausible? Do you disagree that it's more obviously plausible than a non-rational process that isn't aimed at producing rationality producing one?
As for the rules portion of his argument, I think Plantinga's argument makes more sense of a counter-argument against 1970s era rules-based AI. Which is fair enough, since that kind of AI is no longer seen as an effective tool with which to model human behavior anyways.
I think Plantinga is a bit binary in terms of labeling things 'rational' vs. 'irrational.'
Don't put too much by my summary: I don't think he'd necessarily use the labels "rational" and "irrational" in his formal argument, I just grabbed them as easy words to use in a short explanation of the argument. Obviously what it means to be "rational" in this sense requires a lot of defining of terms.
Yes, yes! I totally agree! I think that is the usual mistake that mathematicians/philosophers make when dealing with living things: they treat "very slightly false" as identical to false, whereas in biology, "truthfulness" is better considered quantitatively, with "very slightly false" in practice very similar to "correct".
Yeah, in his actual argument, Plantinga deliberately avoids the binaries of 'true', 'false', 'rational' and 'irrational'. He talks about a type of belief he calls a "defeater": a belief which makes other beliefs seem less likely, but in some other way than by directly disproving them.