Not in any way which fails to make progress in the current discussion though. By framing rationality as that pattern of behaviour/belief which conforms to the existing reasons for action, we can understand what debates about the importance of rationality consist in, namely, which facts constitute reasons for action/belief and which don't (recall the above where I maintain the we have sequestered a subset of candidate reasons and labelled them "rational" reasons).
To worry about buck passing is to confuse the question of what rationality is (sensitivity to reasons for action), and the question of what the rational act is in a given case (what ever the weight of our reasons point towards). Debates about "rationality" as such and what counts as rational pertain to the first question. Here we have discussions about whether desires constitute reasons, whether moral facts (if such there be) constitute reasons etc. One can debate these questions without any view on what the actual reasons are. And it is these questions we are debating when Pinker and Gardner are hashing things out.
However, I'm pretty sure that the summary of his thought is that if you do everything according to the reasons you have, then you will ipso facto be rational. The point is just that the fact that it's rational does not itself give an additional reason to do it - it's just a summary of the reasons we already have.
Even though I'm an academic philosopher, and this is very close to the kind of stuff I work on, I find this sort of thing very difficult going, so I can't guarantee I'm reporting it correctly.
Well, after the "reason" part you still have to actually *do* the thing, which requires willpower. Could rationality have something to do with willpower? I mean, they're surely not the same thing, but I think they might have something to do with each other (rationalists are more likely to go "I broke up with him because it was logical to do so" instead of "I knew it was the right decision but couldn't bring myself to do it").
Rationality recognizes the utility of personal traits/attributes, but doesn't require them.
It's the logical thing in some global context, but it's also logical that, locally, it doesn't apply to me: my emotional attachment and my fear of experiencing loss and loneliness mean that first I need to logically strengthen my resolve (and my ability to do emotionally hard things, ask for help, find strategies that can somehow work around this problem, etc.).
Systematic winning is "applied intelligence" (if intelligence is building correct models)
The only addendum needed is to notice that winning is subjective. Money and happiness are very useful, but maybe someone values having less happiness but also fewer migraines.
The problem is often a lot of unstated assumptions which are different between the sides of the argument. E.g. a while back I got into an argument, and about a day afterwards I decided that what it was really about was how one should discount future expectations. That was never mentioned during the argument, but it led to totally different choices of action. If you strongly discount future expectations, then "live for the moment" is the rational thing to do.
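To make that unstated assumption concrete, here's a toy calculation; the payoff streams and the two discount rates are numbers I invented purely for illustration, not anything from the argument above:

```python
# Toy illustration of how the discount rate alone can flip the "rational" choice.
# The payoff streams and discount rates below are invented for the example.

def discounted_value(payoffs, rate):
    """Present value of a stream of yearly payoffs at a given discount rate."""
    return sum(p / (1 + rate) ** t for t, p in enumerate(payoffs))

live_for_the_moment = [10, 0, 0, 0, 0]    # enjoy a big payoff now, nothing later
invest_in_the_future = [0, 0, 0, 0, 30]   # sacrifice now for a bigger payoff later

for rate in (0.05, 0.50):
    now = discounted_value(live_for_the_moment, rate)
    later = discounted_value(invest_in_the_future, rate)
    winner = "live for the moment" if now > later else "invest in the future"
    print(f"discount rate {rate:.0%}: {winner}")
```

With a 5% rate the patient option wins; at 50% "live for the moment" really is the better choice, even though nothing else about the situation changed.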
At various points over the past couple years I've wondered how different the musical Rent would be if it were about the covid pandemic instead of the AIDS pandemic (or tuberculosis, as in the original opera, La Boheme). I came to the conclusion that the central line would nearly need to be completely reversed: "There's only us there's only this / forget regret, or life is yours to miss / no other day no other way / no day but today" - probably useful advice when there's a slow-spreading pandemic that cuts you down to a few years of life expectancy if you get it, but probably bad advice when there's a fast-spreading pandemic where the regret and thinking about tomorrow can save you during the few tough weeks.
I don’t really get this though because your comparison is of the already HIV positive with those who don’t have COVID, right? (I’ve never seen Rent.) It seems like in both scenarios “no day but today” is terrible advice for anyone not yet positive.
Edit to add: I guess I’m asking to hear more about your reinterpretation of Rent! Why doesn’t it have a lot of characters dying of Covid or with Long Covid or something who sing No Day But Today?
Being HIV positive is a part of someone's lifelong identity, while being covid positive is just a fact you (usually) just have to deal with for a week. If you're HIV positive in the early 1990s (the characters have AZT, but they don't yet know that this means they now have 50 years of life ahead of them), the important part is not to dwell on expected death a couple years from now, but to find love and friendship and fun where you can in this brief time. But if you're covid positive in 2021, the important thing is to stay home for a few days now so that you can have fun with your friends next week, rather than having them all home sick in bed once you've recovered.
I think "No Day But Today" doesn't seem like it captures the experience of dying of covid - you're not temporarily well looking for some fun before you inevitably become bedridden, but you're either already bedridden, or pretty confident you'll get better. If someone is dying of covid, they can maybe sing "Goodbye Love", and you can maybe sing "Your Eyes" to them, just as when Mimi appears to be dying of AIDS. But you're definitely not going to sing "Out Tonight" or "Light My Candle", because you're on a hospital bed with a ventilator, not out looking for fun.
Long covid may be a more apt comparison, but I suspect it ties in to a different set of philosophical questions about the nature of urban life. I don't think long covid gives people a sense of impending mortality, the way HIV did in the 80s and 90s, but it sounds more like it gives a sense of impairment combined with (probably?) a standard life expectancy. I bet there would be a totally different and interesting musical about that. (I haven't actually seen La Boheme, so I don't know the comparison about how those characters interpreted living with consumption/tuberculosis, while not knowing the germ theory.)
Rationality is whatever helps us improve the reliability of the conclusions of "slow thought."
I'm using "slow thought" in the Kahneman sense, contrasted with "fast thought." Rationality can help "fast thought" only indirectly. Each kind of thought is useful in different contexts.
"Slow thought" goes by reasoning, and reasoning from incorrect assumptions or using incorrect tools is easy to mislead. Even in the absence of bad actors, incorrect thinking leads to stupidity. With bad actors, careless reasoning is easy to exploit (looking at you, QAnon). Rationality is trying to remove errors in "slow thought."
I think the Pinker/Gardner debate is talking past each other because they don't seem to be directly addressing the difference between fast and slow thought, and each has picked one of those to champion - which is silly, because they're tools for very different purposes.
I think rationality is also about when to use slow thinking rather than fast thinking (or intuition in Scott’s phrase). A good example of this is using the EMH to guide your investments rather than an investment strategy which is vulnerable to various human biases. Actually I wish more realistic cases of rationality were used as examples rather than slightly esoteric ones like Newcomb's paradox, which nobody will ever encounter in real life and which ends up in various debates about practicality.
Careful here. Intuition can be either slow or fast thought. It's not-quite orthogonal. Intuition is what you use when either you don't have time to think things through, or you don't have the data to do so. It's not the same as "that thing's not worth thinking through thoroughly" and it's not the same as habit. For an example of intuition as slow thought, look up the discovery of the benzene ring structure in chemistry https://en.wikipedia.org/wiki/August_Kekul%C3%A9
There are various mechanisms for thought, and many of them can be either slow or fast thought, though logic, calculation, etc. seem to be entirely slow thought.
A little-appreciated facet of Kahneman's ideas is that he pointed out a constant back-and-forth between Systems 1 and 2. E.g. he pointed out that the "instinct" of the expert who can reliably smell the truth or not (in his field), clearly a form of System 1 thinking, is the *result of* years to decades of training and painful System 2 thought, which at some point produces a strange emergent phenomenon where a trained System 1 can on occasion outperform System 2, even in the expert.
And I guess going the other way, peeling back the murk surrounding System 1 so that you can trace or mimic its machinations by System 2, is what psychology and psychiatry are all about.
Hmm So figuring out some problem in your sleep and getting the answer during the morning shower is part of system 1? (fast thinking) That doesn't smell right to me, but what do I know. :^)
It is, actually. System 1 isn't *only* characterized by speed, it's also not especially conscious, and not especially effortful. It's what goes on outside the spotlight of conscious awareness. So when Kekule went to sleep and dreamed of a snake biting its tail (they say) and awoke having solved the structure of benzene, that was System 1 at work. Had he sat down and worked it out logically and consciously, that would've been System 2.
Huh, so system 1 is all the stuff underneath 'consciousness'. Well then I'm going to agree with the evil Gardner*: rational system 2 is what we do, but it's not as important as system 1. AFAIK system 1 is where my 'smarts' are. Though clearly trained by system 2. (Years of reading physics books and doing problems.)
*Yes, Howard Gardner... Let me just say that when I hear Gardner, I have nothing but positive thoughts. Martin Gardner is a hero to me, and the next gardener I think of is Samwise Gamgee... (Well, I started reading "The Fellowship of the Ring" again.)
I think it's a bit more complicated than that, and I am sure I am not doing it justice, but roughly -- yeah. One of the points Kahneman makes is that System 1 is what keeps us alive and chugging along, most of the time. We don't have to work out ballistic physics to duck when something is thrown at our heads, or to slam on the brakes when all the taillights turn abruptly red in front of us. We trundle through life making a million judgments and decisions a day, effortlessly and rapidly, and almost always correctly. It's only at rare intervals, typically when confronted with some new and strange problem, that we fire up System 2 and start burning serious glucose.
I think what he and his coworkers spend their time looking at is the dividing line: under what circumstances do we recognize the need for effortful, conscious, careful thought (System 2), and when do we understand that the heuristic, seat-of-the-pants System 1 is good enough? Clearly there are cases where we screw it up, we rely on "intuition" when we should be thinking things out, and still other cases where we consciously reason ourselves into absurdity or even horror, which our "intuitive common sense" would've helped us avoid. The general question of how we decide which system to use, and how and why we make mistakes in that decision, is fascinating and complex.
I'm just adding that they point out that the two systems are not rigidly separated, and certain decision-making processes and patterns can get transferred back and forth. Skilled workers in any field build a System 1 that lets them get stuff done correctly much faster and with less effortful thought than a newbie (consider the analogy of how much less energy efficient a newbie is than a skilled athlete at, say, swimming or skiing). But we also spend a lot of time and effort using System 2 to try to figure out what the hell System 1 is doing, when it seems to be leading us wrong. ("Why do I always date the wrong kind of guy?")
Hmm, I make the analogy that thinking slow = rational thought. But I agree that not recognizing the fast/slow distinction leads to some needless disagreements.
OK you can call them different names, slow is rational, conscious thought, fast is everything going on underneath... it doesn't have to be fast, so I agree fast can be a confusing descriptor.
System 2 builds models. System 1 interprets them. This is why a chess master can walk by a chess board and, off the cuff, say, "mate in 4," using only system 1.
Reminds me of the saying, "Neurotics build castles in the sky. Psychotics live in them. Psychiatrists collect the rent."
Hmm, I haven't read much Pinker or Gardner. Given that this article doesn't really define what it means to be rationalist, I'm going to fall back on my intuitions.
No, no, I joke! But I do have a couple of comments:
Intuition has multiple meanings or nuances; Scott is only addressing one of them, the "intuition is really a reaction to a complex set of observations, and the observer isn't aware of how that complex set leads them to the conclusion they reached" sense. But there is another meaning of intuition, which is "the ability to tap into knowledge which is not available to the intellect," such as through prayer or meditation, for the sufficiently attained or lucky. Now, it may be possible to train an AI to do that, or to interrogate a bunch of world-class meditators about how they did it, but it seems unlikely in the near to medium term future, at best.
Does Pinker address the Spock as strawman (strawVulcan?) rationality? I mean, Gardner seems to be making the "what does your rationality have to say about emotions, eh?" argument.
And I think that's a valid argument, in that most self-described rationalists I deal with (mostly libertarians, unfortunately) seem to have a really hard time dealing with emotions, and in particular to recognizing when they are buying into emotionalist appeals.
Lastly, rationality seems to be about intellect. And as Galef points out, emotions are often the source of our goals and desires. I mean, rationality can tell me whether one banana is better than another banana, but it can't tell me whether I like bananas more than oranges. Sure, it might be able to tell me which one is better for me, but it can't tell me which one I like more.
Love the post, but one aspect I think Scott might have missed... the culture of it.
I think when Gardner says he doesn't like rationalism, he's at least partially saying, "I don't like this culture of people who are into math and board games, like or at least don't dislike polyamory, celebrate weird holidays like solstice but not in a way I am familiar with, are mostly white, too often male... etc." He doesn't see rationalism as systematized winning, but as this culture.
Yes, but there is still a broad popular conception of a "rational person": the kind of person that is good at math, that likes arguing, that favors the "left brain" over the "right brain". Or even just Ben Shapiro saying "facts don't care about your feelings". The point is that there is a common, intuitive idea of a rational person, and that draws a visceral reaction, and then people argue based on that reaction.
+1 to this idea. Like most political arguments, I think a lot of arguments for or against "rationality" are motivated primarily by tribal affiliations, and are only superficially about discovering truth or whatever.
I wonder if you're overthinking this a bit. I agree that
> Everybody follows some combination strategy of mostly depending on heuristics, but [sometimes] using explicit computation
In my view, describing oneself as a "rationalist" or "non-rationalist" is just a way to say which side it's better to err on, in close cases. (Compare "trusting" vs "distrustful" — everyone agrees that you should trust *some* claims. But a generally trusting person errs on the side of trusting more claims).
If we wanted to be more jargon-y, we could rephrase the above by saying: "everyone agrees to sometimes use heuristics; a rationalist uses the meta-heuristic of using explicit calculation in cases where the best strategy is unclear whereas a non-rationalist uses the meta-heuristic of relying on heuristics in those cases"
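For what it's worth, here is that meta-heuristic written out as a deliberately cartoonish decision procedure; the two deciders, the toy inputs, and the 0.8 confidence threshold are stand-ins I invented, not anything from the post:

```python
# A deliberately cartoonish version of the "which side do you err on?" framing above.
# The two deciders and the 0.8 confidence threshold are invented stand-ins.

def decide(problem, heuristic, explicit_model, lean_rationalist, threshold=0.8):
    guess, confidence = heuristic(problem)   # fast answer plus a self-rated confidence
    if confidence >= threshold:
        return guess                         # clear case: everyone just trusts the heuristic
    # unclear case: this is where the two temperaments err on different sides
    return explicit_model(problem) if lean_rationalist else guess

# Toy stand-ins: the heuristic eyeballs the sign of the sum; the explicit model
# actually weighs the components.
heuristic = lambda xs: (sum(xs) > 0, 0.6)
explicit_model = lambda xs: sum(x * w for x, w in zip(xs, (0.2, 0.5, 0.3))) > 0

print(decide([3, -1, -1], heuristic, explicit_model, lean_rationalist=True))   # False
print(decide([3, -1, -1], heuristic, explicit_model, lean_rationalist=False))  # True
```

Same inputs, same heuristic, same explicit model; the only difference is which way each temperament jumps when the heuristic isn't confident.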
But 'overthinking' is exactly what makes rationalism valuable! Much of the time you're just stating the obvious (or at least, the already-known) in a needlessly verbose show-your-working way, but occasionally showing that working produces new insights. It makes sense for Scott to show his working when defending/investigating the working-showing mentality :P
That said, I think you've pretty much nailed the TL;DR
I'm not sure if this is actually relevant, but your final paragraph reminds me of an argument I've had several times about the "pie rule". ( https://en.wikipedia.org/wiki/Pie_rule )
There's some game that's unfair, and someone suggests making it fair by allowing one player to assign a handicap to one side and then the other player to choose which side to play, given that handicap. This incentivizes the first player to pick a handicap that makes the game as fair as possible.
The person making this suggestion often argues that if you are good at playing the game, then you should automatically also be good at picking a handicap that would be fair. And thus, the pie rule is meta-fair in the sense that the winner of the-game-with-pie-rule should always be the same as the winner of the-game-starting-from-a-hypothetical-fair-position.
I disagree.
I think the intuition for their claim is something like: One possible way of picking winning moves is to consider every position you could move to, estimate your degree of advantage in each one, and then pick the move that leads to the largest advantage. And if you can determine the degree of advantage of an arbitrary game-position, then you can also pick a fair handicap just by choosing the advantage that is closest to zero, instead of the largest advantage.
But that is not the ONLY possible way of picking winning moves. You could pick winning moves even if you can only RANK positions, and cannot score them on any absolute scale. You could even pick moves by applying some heuristic to the moves themselves, rather than to the resulting game-positions.
If you just have a black box that outputs the best move, you can't use that black box to pick a fair handicap.
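To make the ranking-vs-scoring distinction concrete, here's a toy sketch; the "game" is invented (a position is just a number, and positive means the player to move is ahead):

```python
# Toy sketch of the distinction above. A "position" here is just a number,
# where positive means the player to move is ahead. All values are invented.

handicaps = [-7, -3, -1, 2, 6]   # candidate starting positions on offer

# Player A has an absolute evaluator: position -> signed advantage.
evaluate = lambda pos: pos       # in this toy game the position IS the advantage

best_for_A = max(handicaps, key=evaluate)                        # biggest edge: 6
fairest_for_A = min(handicaps, key=lambda p: abs(evaluate(p)))   # closest to even: -1

# Player B only has a comparator: "is x better for me than y?" (a ranking, no scale).
better = lambda x, y: x > y

best_for_B = handicaps[0]
for p in handicaps[1:]:
    if better(p, best_for_B):
        best_for_B = p
# B finds the best position (6) just as reliably as A does, but there is no way
# to ask a pure comparator "which of these is closest to even?", so B has no
# principled way to recover the fair handicap (-1).

print(best_for_A, fairest_for_A, best_for_B)   # 6 -1 6
```

Both players are equally good at "winning", but only the one with an absolute evaluation can cut the pie.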
This sounds a little bit like the idea that "skill at X" and "skill at studying X" are not (necessarily) the same.
X and studying-X are fundamentally related in a way where if you had a complete understanding of the nature of X that allowed you to predict exactly what would happen in any given situation, then that should make you good both at X and at studying-X.
But this doesn't rule out the possibility that there could be lesser strategies that make you good at one but not at the other. A black box that makes money doesn't (necessarily) give you a theory of how money-making works. A pretty-good-but-not-perfect theory of money-making won't necessarily let you outperform the black box.
But even if, because of the uncertainties you mention, one cannot assign a fair handicap, it seems like this pie division method will at least provide a fairer game, probably immediately, but if not at least after some iterations.
There are situations where the pie rule is useful, but it merely gives someone an *incentive* to make the game fair; it doesn't bestow the *capability* to make the game fair. If you add the pie rule to a game, you haven't solved the fairness problem; you have merely delegated that task to someone else.
In most cases, if you are designing a game, you are better able to pick a fair starting position than most of your players will be. If you can't pick one, then you probably can't expect your players to be able to pick one, either.
The pie rule is potentially useful for players who understand the game better than the game's designer, or if you have procedurally-generated content such that the player is handicapping a unique scenario that the designer never saw.
Spot on. I play a lot of games, some with asymmetric starting factions, some of which seem imbalanced - weaker than others. I consider myself a game designer and yet I recognise it's a very hard task to come up with a fair house rule that makes the weaker faction stronger while not disturbing many other balances that exist within the game.
Why is it hard? Just introduce the element of chance, and shazam! the weaker guys have just as good a chance as the stronger guys. But why would you want to do this? Isn't the *purpose* of games to prove out which people are better at Skills A and B, whatever A and B might be? We have basketball games to see which set of five guys are better at throwing a ball through a hoop, as well as passing it to each other, and strategy encompassing the same. Why would we want to introduce a rule that arbitrarily makes it easier for people with less skill in those areas to win?
On the other hand, I can see why we might change the rules so that Skill C becomes part of the winning mix. We could prohibit any speech during a basketball game, and now the skill of nonverbal communication starts to become essential for winning. Is that the kind of thing you mean?
We're not talking about games like basketball, so much as games like Fox and Hounds https://en.wikipedia.org/wiki/Fox_games or Tafl, traditionally, or modern games like Gaia Project or Cosmic Encounter. What's the purpose of games? Whole books could be written on that. Having fun with friends? Mental exercise? And yes, one part of it is strategic challenge, "can I beat you?" (Or "beat the game" for cooperative or solo games.) But there's many aspects to that, especially with a large number of asymmetric factions or board states such that it's impractical to prepare for all combinations in advance: it rewards being able to quickly analyse a new strategic space, and even more so if new hidden or random information is added to the game partway through. There's delight in exploring interesting rules interactions ("spotting a combo" that nobody else has seen yet), finding a tactical move to satisfy the constraints of the actions of several other players, and so on. All of which may mean that if the factions are significantly asymmetric, it's very hard to predict the consequences of a proposed house rule or balancing tweak.
The point of the pie rule is not only to make the game fair, it's to forestall a certain kind of dispute. In the ur-example, maybe the pie-cutter has poor motor control, and the two halves end up being different. If the pie-cutter also suggested the rule "I'll cut, you pick", they can't complain about the process, because it's all on them. (I mean, maybe they didn't know they had such poor motor control, but that's not the opponent's fault.)
Let me just state, as an aside, that the meta-rational view here is that you need to understand why you're playing the game.
MOST games (and this includes most of the games of life) are played to be played. If you prioritize winning over everything else, you'll soon find no-one else wants to play with you.
So obsessing over details like above (and the comments below) is missing the point! You tweak the rules, by hook or by crook, by principled means or by random experimentation, to ensure that most people continue to consider the game worth playing. If you miss that point, you miss everything.
(Of course this IS the point that is missed by many "1st order" rationalists, from economists to military strategists! Step zero in rationality is to get the goal correct; and if you believe that the goal is "win the game" rather than "keep the game going forever", then you're being very dumb in a very complicated way.
This point has been made by a variety of rather different thinkers, from Jordan Peterson to David Chapman. Yet it still seems to be a kind of esotericum that should have, but has not, made its way into the "Rationality Community".)
> You tweak the rules, by hook or by crook, by principled means or by random experimentation, to ensure that most people continue to consider the game worth playing.
That's exactly what we're doing. It just so happens that one of the heuristics that people commonly use to predict which rules will be worth playing is "fair games tend to be better than unfair ones".
(You really think that "obsessing over details" is going to make someone a WORSE game designer? Do you avoid painters or engineers who "obsess over details" of their craft?)
But you see, that's exactly where we differ. You think "fairness" is the most important thing; it's not! And so you obsess over the wrong details; you obsess over whether changes make the game more fair (whatever that means), but the point is not an abstract one (is this more fair in the eyes of Ahura Mazda), it is an empirical one, namely do enough of the parties involved in the game want to continue playing vs they would prefer to bow out.
For example: a person sometimes fights with his wife. Maybe they are basically a great match, but she has one particular small habit that leads to fights, like she frequently mis-remembers what happened in some situation in the past.
OK, we can treat this "rationally", go to the mat over these fights, drag up evidence and testimony, and generate a divorce within a year.
Or we can treat it "fairly", keeping score and allowing her to win exactly half the arguments (while each side secretly seethes), and generating a divorce within three years.
OR both parties can realize that what matters is keeping the game of marriage going, not winning the game of marriage, so he accepts that she is just that way and lets her say whatever mistaken memory she has, she accepts that his dress is somewhat sloppy but whatever, it's just not that important, and they have a happy life and a 50th yr wedding anniversary.
The same holds for business, for international negotiation, even for high stakes sports (where the rules are constantly tweaked, not for "fairness", but to ensure that everyone involved finds it worth continuing to play).
I feel like I'm saying "this motor would be 20% more efficient if you used this material instead of that material" and you're saying "you fool, why are you discussing efficiency instead of asking whether you need an engine at all!"
There certainly exist situations where you don't need the engine at all. But there are also SOME situations where you DO want an engine, and in those situations it probably matters how efficient it is, so learning how to make engines more efficient is still a useful skill. It's a reasonable thing to study and discuss.
Also, I am (mostly) using the word "fair" as a technical term of art that means "starting from this position, if you don't know which person is playing which side of the board, you would predict equal payouts for all sides on average."
Yes, exactly, that's precisely my point (as it is The Last Psychiatrist's point).
And yes, there are times when it is appropriate to optimize the engine. (And my day job involved and involves large amounts of this, both as physics and as optimizing computation in various ways.) But there are also times when it is appropriate to ask the meta-question...
And that's what this post was about. There are (at least) three levels
- Tribal idiot (which is where Gardner mostly seems to be)
- Rationalist (useful, very powerful, but ultimately short-sighted)
- Metarationalist (use the tools of rationality but think at a more abstract level about the goals to which those tools should be applied)
And yes, who's to say that my goals are more meta-rational than your goals? Well, that gets us into different discussion, one that's too far afield to pursue here.
I am a few days late, but I really wanted to give you a high five for "Of course this IS the point that is missed by many "1st order" rationalists, from economists to military strategists! Step zero in rationality is to get the goal correct; and if you believe that the goal is "win the game" rather than "keep the game going forever", then you're being very dumb in a very complicated way."
The question of goal setting, and how often that goal is simply "keep the game going," is such a huge aspect to human existence, possibly life's existence in general, but totally overlooked by most economists using game theory. Especially considering how often 'Step Zero - Choose Goal' is in fact Step N-zillion in some earlier processes. Every game is a subgame within a veritable tessellation of games. Which is why normal people are vastly better at Ultimatum Games than economists, for example: the logical "winning" move from within the current game is highly illogical within the context of the ur-game. One sees it in politics as well, assuming one team wants to win over the other once and for all when really the correct goal is to keep the game going.
Isn't he explicitly talking about Tooby & Cosmides' ecological rationality? This isn't "an argument about heuristics. It’s the old argument from cultural evolution: tradition is the repository of what worked for past generations."
The individual action of engaging in rationality clears the dust out of the intuition pipes. Then the next time the angel flies by to deliver an inspiration, they have a direct shot.
This is flippant but in terms of discussing the symbiosis of rationality/nonrationality, it’s a necessary point. lots more to say but busy now - this was a good read.
Most explicit anti-rationalism I encounter boils down to "think locally & politically, because abstract first-principles thinking sometimes leads to socially undesirable conclusions." Of course that's mostly a complaint about how some perceived rationalists behave (antirational "read the room" vs rational "just asking questions") rather than a principled objection to rationalism in the abstract, but then that's exactly what a rationalist would say...
I agree that Pinker is being knowingly glib by equating the mere act of reasoning with the explicit practice of rationality. I haven't read his book (yet), but Gardner's objection brings to my mind Elizabeth and Zvi's Simulacra levels.
As in, committed, self-identified rationalists (and chauvinistic monists, for that matter) appear to make something of a cultural fetish of the object level. Instead of also applying rationality to the higher simulacra levels - how people think, and how people wish to influence the ways others think, and so on - they make it their mission to oppose and reduce the chaos produced by these considerations. Julia Galef's book is pretty much about that.
Gardner seems to be against this project of narrowing down discourse toward the object level, deeming it both impossible and potentially harmful in the process. At the very least, the project shouldn't cloak itself with the purity of mere reasoning. The moment you have a mission, you're playing the game.
(The fact that all the simulacra levels do eventually add up on the object level in the end is both true and mostly irrelevant - about as useful as pointing out all the hormones and gene expression to a man in the throes of love.)
#1 Don't knock drug-fueled orgies until you've tried them. The risk/reward ratio is probably favorable compared to many other edgy-but-socially-acceptable activities like general aviation.
#2 When a guy jumps out of the bushes (or in my case, the front seat of a cab in Panama City) and sticks a gun in your face, you have a remarkable amount of time to perform rational analysis. Adrenaline is a magic time dilator. In my case, after what seemed like an eternity of internal debate (but was actually half a second) in which I contemplated a half-dozen courses of action, the conclusion was "jump out of the moving cab". The comedown was harsh though. Adrenaline is one hell of a drug.
I've never been a big fan of the term "rationality" for a lot of the reasons described in this post -- it seems to carry the connotation (warranted or not) of opposing intuition and heuristics even when those may be valuable. I appreciate the bit about "truth seeking" and "study of truth seeking", which I find to be much more clarifying language so I'll stick with that here rather than trying to say anything about rationality.
I tend to think of truth-seeking as a fundamentally very applied endeavor, and also one that can be taught without necessarily being systematized. Not just a purely intuitive knack, but not necessarily best served by a formal discipline studying truth-seeking either.
As an example, I think one good way to learn effective truth seeking is to study a specific scientific field. A major part of a scientific education is learning how to seek more accurate models of the world. One learns tools, from formal mathematics to unwritten cultural practices, which help to separate truth from seductive but incorrect ideas.
Then, there are of course also people who study how science is carried out (eg, Kuhn & other philosophers of science). Tellingly, most practicing scientists pay relatively little attention to this, other than as a form of navel-gazing curiosity.
Rather than rocks vs geology, I think science vs study of science is a better analogy to truth-seeking vs study-of-truth-seeking. And as with science, I am skeptical that the study of truth-seeking has much to say that will actually improve the practice of truth-seeking, compared to engaging directly with the practice of truth-seeking. Though perhaps 2000 years of follow-up research will prove me wrong.
On the contrary, we spent a thousand years between Aristotle and Galileo, roughly, studying truth seeking per se, and leaving almost all actual progress in acquisition of the truth to the Islamic world. It was only when we abandoned the study of truth seeking and decided to just seek the truth any old way we could -- just start measuring stuff, and start trying to formulate practical rules of thumb and dumb crude laws that would predict what would happen next -- that progress took off in the 1600s. Since that day the study of truth seeking has trundled along in the wake of actual truth acquisition (at least about the natural world), debating post facto how it was all done. All very interesting from the metaphysical point of view, but no actual physicist gives two bits for it. Not a one of us is trained in philosophical methods of inquiry, or cares to be, or would not find it tedious and useless.
Agree with this: human rhetoric is a very strong force, and a good practitioner of it can convince you of anything absent empirical evidence. I think this is actually one of the best reasons for being a rationalist: don’t let yourself be fooled by rhetoric, insist on evidence. And if evidence cannot be produced at the moment for practical reasons (as with, say, string theory), then keep an open mind but don’t commit to these ideas.
Being a rationalist only protects you against some forms of that. The kicker is the axioms and rules of inference. Look up Pascal arguing that one should believe in (the catholic) god. There are LOTS of other examples. When you say "insist on evidence" you're arguing for empiricism rather than rationality. They work well together, but they aren't the same thing.
I think a rationalist should be an empiricist, for the reasons I gave. Any subject that relies mainly on arguments is going to be dominated by the people with the best rhetorical skills, not the people with the best understanding; that’s a fundamental rule of human society. Of course extrapolation from evidence is usually fine, but you still need to be less trusting of it than of direct evidence. There are hundreds of examples of areas where the accepted view derived from “logical analysis” was completely wrong until people actually did the experiment. The replication crisis in social science is one of the more recent ones. If being rational means anything, it means being careful about not being fooled by BS, because humans are very good at BS.
To be honest, what I've seen of the soi-disant rationalist community is that it has more in common with the medieval scholasticism I am describing than post-Enlightenment empiricism. The empiricist *specifically distrusts* purely logical argument.
Feynman was especially clear and cogent on this point: he said (paraphrasing) that the way you do good science is that when you come up with a hypothesis or chain of logic you immediately distrust it, the more so if it's your own or you find it appealing and convincing, and then you set about diligently trying to disprove it every way you can, through direct measurement. You share the idea with your friends and even more importantly your enemies, specifically so that they can try to disprove it in ways to which you might be blind, out of ignorance or wishful thinking. Only after a lot of genuine effort has gone into trying to disprove your hypothesis and it has all failed do you start to (very cautiously) consider that it might be true -- at least until the next measurement comes along.
It's not *exactly* throwing human reasoning power out the window, but it's freighting it with such a load of skepticism that even the most "obvious" points in a chain of logic require experimental test. So very different from assigning the probability of a conclusion based on the persuasiveness of the logical argument leading to it.
It's also of course not a way of thinking one can afford to indulge in all aspects of life. Necessarily in many areas we *do* judge the probability of conclusions being correct by only the persuasiveness of the argument, because we just can't test everything for ourselves (and some stuff isn't amenable to experimental test anyway). But while the empiricist accepts this necessity, he does not consider it a virtue.
Hmm... The Islamic world a hotbed of truth acquisition in the Middle Ages? Well, sort of, and up till approx. CE 1200 maybe. But the deference to "the historical importance of Islam for science" today is somewhat overdone. The argument tends to forget the importance of what went on in Byzantium: 1000 years of practical science (among other things) after the fall of the Western Roman Empire. Some might even argue that the Renaissance was triggered by the fall of Byzantium in 1453, when scholars had to flee to Italy and other places to continue their work. By that time, the theologians had long since defeated the "philosophers" in the Islamic world, i.e., Islam had stagnated from a "discover new science" point of view.
There's an even more important point which is that the argument kinda elides who exactly was doing this truth acquisition (or at least truth preservation).
As far as I can tell, for the most part it was being done by the remnants of prior classical culture, by people inheriting the mindset and culture of the previous hellenic overlords, not by those imbued with what we might call the "desert" culture.
If one wants to use this argument as a claim that "my tribe/religion/ethnicity/whatever is tres awesome", be very careful, because it's not at all clear that it actually proves what one wants it to prove... (Even apart from the general incoherence of imagining that you can somehow bask in the reflected glory of what people belonging to your claimed affiliation did 1000 years ago.)
Eh, the point was historical. Any supposed agenda in this case is in your eyes, not mine.
The implicit reference is to the conflict between the philosophers and the theologians within medieval Islam. Although “who won” is still a matter of debate today, not least due to the prestige given to “science” everywhere in our time, a traditional opinion is that the (medieval) theologians defeated the (medieval) philosophers within Islam.
30 October 1662. William of Manchester, who at the time was the Lord High Grand Inquisitor (Hidden) of the League of Empyrical Gentlemen. All else follows from his fateful discussion with friends over low-quality brown ale one miserably cold and drizzly September afternoon in a pub the name of which is still a League secret.
"Newcomb’s Paradox is a weird philosophical problem where (long story short) if you follow an irrational-seeming strategy you’ll consistently make $1 million, but if you follow what seem like rational rules you’ll consistently end up with nothing."
If you follow what seem like the rational rules, you'll consistently end up with just 1 thousand dollars. (The expected-value arithmetic is sketched below.)
Also, the transhumanists from the future will keep running simulations of you all the time, giving you 1 thousand dollars in each simulation, just to laugh at you. The guy who chooses 1 million will only be simulated once, to verify this hypothesis, but then it would be boring.
Would you rather be a rich guy who is dead, or an average guy who lives forever as a meme?
In general, simulation hypothesis seems to explain why people are so crazy. Crazy people are more fun to simulate.
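Going back to the thousand-vs-million correction above, here is the naive expected-value arithmetic behind it, using the standard payoffs and two predictor accuracies I picked purely for illustration. (Whether this calculation is even the right one to do is, of course, the whole point of the paradox.)

```python
# Naive expected-value arithmetic for Newcomb's problem. Standard payoffs:
# the opaque box holds $1,000,000 iff the predictor foresaw one-boxing, and the
# transparent box always holds $1,000. The accuracies are illustrative choices.

def expected_payout(one_box, accuracy):
    if one_box:
        # paid only when the predictor correctly foresaw one-boxing
        return accuracy * 1_000_000
    # two-boxing: always get the $1,000, plus the $1,000,000 only when the
    # predictor wrongly expected one-boxing
    return 1_000 + (1 - accuracy) * 1_000_000

for accuracy in (0.99, 0.55):
    print(accuracy, round(expected_payout(True, accuracy)), round(expected_payout(False, accuracy)))
# 0.99 -> one-boxing $990,000 vs two-boxing $11,000
# 0.55 -> one-boxing $550,000 vs two-boxing $451,000
```

Even a barely-better-than-chance predictor makes one-boxing look good on this accounting; the dispute is over whether conditioning on your own choice like this is legitimate.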
Karl Popper's epistemology would be very useful here (it would always be useful, and it's almost never used!!)
In Popperian terms: effective truth seeking is basically just guessing plus error correction. Rationality is actively trying to correct errors in ideas, even very good ideas. Irrationality is assuming good ideas don't have errors (or that errors exist but can't be corrected). Antirationality is actively preventing error correction.
A fictional example: an isolated island community has a sacred spring and a religious belief that no other water source can be used for drinking water. This is a useful belief because water from the spring is clean and safe, and many other water sources on the island have dangerous bacteria. One day, the spring dries up. Rationalists on the island try other water sources; some get sick in the process but they make progress overall. Irrationalists try prayers and spells to make the spring come back. Antirationalists declare the rationalists to be heretics and murder them all.
I think people who are "against" rationalism (and who aren't antirationalists like Khomeini) tend to be in the "good ideas have errors but it's vain/hubristic to think we can improve them" camp. Often trying to improve on established ideas leads you off of a local maximum (only drink from the sacred spring). But being trapped at a local maximum is bad, even if the path to a better point is treacherous. And external circumstances can make good ideas stop working (the spring drying up) anyway.
Thank you for beating me to mentioning Popper. When I read essays like this I'm always looking for anything interesting that Popper didn't adequately cover, and I seldom find anything. For me, "Conjectures and Refutations" highlighted many interesting concepts beyond the title (which essentially does completely explain "rational" thinking). First is that I realized that the scientists I worked with didn't really know what "science" was. Which is interesting. (You can debate it, etc, but the point is it is more nebulous than the "scientific" community would have you believe.) The other thing I found interesting is that Popper is quite ok with tradition. Is it ok to do something because that's the way it's always been done? (Perhaps study "science" in the traditional way?) Sure is! That's a perfectly sensible heuristic, it turns out. The time to consider rejecting tradition is if you can create a hypothesis that contradicts tradition that can be challenged but then survives that challenge (and all subsequent ones). It's all quite messy, sure, but Popper is some of the clearest thinking on rational thinking I've yet seen. He was concerned with separating woo from sensible "science" (whatever that might be) and I think his ideas work. If that doesn't sound compelling then I apologize for my bad interpretation and encourage you to study and criticize his work directly.
I feel like there's at least a semantic rhyme here with the idea in linguistics that, _by definition_, a native speaker of a given language has a kind of tacit "knowledge" of the rules of that language, even if they can't articulate those rules. Modern linguistics -- the kind that Pinker practices -- is the enterprise of first taking those rules and making them explicit, and then moving up one layer of abstraction higher, to try to understand why it is that certain kinds of rules exist, and other kinds do not. And the flavor of psycholinguistics that I studied in college tries to ground those questions about the rules in actual brain structures.
As a side note, I actually think Pinker is pretty deeply wrong about linguistics, and in a way that challenges his own claims to being uber-rational. The Johns Hopkins linguistics department is the home of "optimality theory", which posits that the "rules" of a language are actually like a set of neural nets for recognizing and generating patterns -- or, more to the point, they're _like_ a set of computational neural nets, because they are _actual networks of human neurons_. Once you adopt this frame, you can see how a given situation could result in different "rules" for generation giving you conflicting answers, and then you think about how different networks can tamp down the activity of other networks. Hence the concept of "optimality theory". The actual language produced by a given mind is the optimized output of those interacting rule networks. And we get linguistic drift as new minds end up having different balances between rules, and ultimately creating new ones.
I got to sit in on graduate seminars with both Chomsky and Pinker in my time at JHU, and while they're both clearly brilliant, they seemed committed to a purely algebraic, functional model in which rules operate deterministically, like clockwork. This seems to fly in the face of what we know about how neurons work -- it seems, dare I say it, irrational.
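For anyone unfamiliar with the mechanism being described a couple of comments up, here's a cartoon of the classical ranked-constraint version of Optimality Theory; the input, candidates, and constraints are textbook-style toys I made up, not a real analysis, and this ignores the neural-network framing entirely:

```python
# Cartoon of classical Optimality Theory: candidate outputs for an input compete
# against a ranked list of violable constraints, and the winner is the candidate
# with the best violation profile, compared constraint by constraint from the
# top of the ranking down. Input, candidates, and constraints are toy stand-ins.

INPUT = "tat"
candidates = ["tat", "ta", "at"]

# Each constraint maps a candidate to a violation count (0 = satisfied),
# listed here from highest-ranked to lowest-ranked.
constraints = [
    ("NoCoda", lambda c: 1 if c.endswith("t") else 0),    # syllables shouldn't end in a consonant
    ("Onset",  lambda c: 1 if c.startswith("a") else 0),  # syllables should start with a consonant
    ("Max-IO", lambda c: len(INPUT) - len(c)),            # don't delete material from the input
]

def profile(candidate):
    # Violation counts in ranking order; tuples compare lexicographically,
    # which is exactly the "higher-ranked constraint wins" evaluation.
    return tuple(check(candidate) for _, check in constraints)

winner = min(candidates, key=profile)
print(winner, profile(winner))   # "ta": deleting the final /t/ beats violating NoCoda
# Rerank Max-IO above NoCoda and "tat" wins instead, which is the sense in which
# "different balances between rules" yield different grammars.
```

The connectionist claim is roughly that something like this competition is implemented by interacting networks rather than by a symbolic ranking, but the ranked-constraint version is the easiest way to see how conflicting "rules" get resolved into one output.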
Chomsky, Pinker, and the whole UG crowd premised decades of work on the "poverty of the stimulus" argument which is a perfect example of intuition over rationality. The argument goes "children learn language even though it sort of seems like they don't hear enough input sentences to figure out what the rules of language should be".
That has always bothered me. Like, what single shred of evidence did anyone ever collect to establish a default assumption about how many input sentences a child should need to learn a language? Founding an entire branch of linguistics based on nothing more than a gut feeling is certainly bold, I'll give that to Chomsky.
I think this explanation undervalues the extent to which having an explicit model of the world helps you develop better intuitions. For instance, the guy who just has a knack for geology may be able to better find diamonds than the geologist, but I bet if you find a kid with a knack for rocks and *teach them geology*, they'll be able to find diamonds better than either of them. Intuition is not a black box, the brain doesn't do intuition versus models; models feed from intuition, intuition feeds from models.
Well, personally I would consider that claim in your final sentence to be excellent evidence of a low IQ, all by itself. I would react to it the same way I would to a claim that the stars rule our destinies and horoscopes are the key to happiness. It's perhaps acceptable if you were raised in a 13th century Russian shtetl where only the rabbi knew how to read, but not otherwise.
Pinker is not a devoted member of the LW rationalist community! He is just an academic who, by pure coincidence, has some similar ideas.
"Pinker cited the rationalist community as an example of a group of good reasoners" a quote from lesswrong.
Maybe devotion is a big word, but his ideas were definitely shaped by them.
That doesn't follow--I could cite someone as an example of a great dancer even if my ideas of what it meant to dance well predated that person's birth.
Whether he shares the ideas coincidentally or through inspiration, in both cases he's advocating for a form of rationality that Rationalists believe in, and that I can disagree with without being anti-rational. That, I think, is my point.
Sure, I agree that you are not ipso facto anti-rational for disagreeing with Steven Pinker lol
Would you mind saying why Pinker is irrational?
I couldn't agree more. Oftentimes being perfectly rational doesn't pay off.
"The Elephant in the Brain" explicitly says readers may be worse off as a result of reading the book, because evolution selected us not to be fully rational. Robin Hanson continues on that here: https://www.overcomingbias.com/2022/02/rationality-as-rules.html
There are also major benefits to being irrational as a bargaining position.
"I'm willing to blow up this whole deal if you don't give me what I want, and I don't care that this will damage us both" is a fantastic bargaining position. "I am willing to take any deal as long as it's marginally better than my BATNA" is a terrible one.
It's a fantastic bargaining position until someone calls your bluff and also you never want to develop any soft power (in politics)/genuine interpersonal relationships (in life) ever.
It's a great position the first time you use it, if you already have some goodwill built up using soft skills. It's a terrible position to use repeatedly, because your potential bargaining partners will need to adjust their strategy if it's used repeatedly, and they will develop a strategy specifically to beat yours. Most likely, they will find a third party to work with and cut you out of any deal, if possible. If that's not possible, they will look for ways to make your own strategy cause you more harm, or perhaps just use an even more damaging tactic, such as fighting/war.
Nobody thinks North Korea (or Iran, or Syria) is in a better position because it's willing to be irrational. That only makes sense if you assume their current level of development is the normal one. Compare North Korea to South Korea and realize how much NK has missed out on due to their irrationality.
That is not *obviously* a stupid conclusion, though. There are plenty of highly functional delusions, e.g. if each of us was constantly consciously aware of our #1 existential problem -- mortality -- we would probably be mostly frozen in fear or apathetic with despair all the time, useless. Most people in very long-term marriages conclude that a certain amount of blindness to, or psychological denial about, the characteristics of their spouses is highly functional. So he may be quite correct that the idea is (1) incorrect but also (2) extremely useful. That can happen.
I don't think I would be frozen in fear or apathetic with despair.
Good for you. Let us know how it works out the moment you actually get the final diagnosis. If you nod calmly and continue annotating your grocery list, you will be one of the remarkable few.
I'm pretty sure there's a large spectrum of possibilities between "would be frozen in fear or apathetic with despair from the knowledge that you are eventually going to die" vs "wouldn't have much reaction at all to receiving a terminal diagnosis".
Let me introduce you to this concept called rhetorical hyperbole. It helps get the point across concisely without requiring a boring scholarly exegesis that runs to many paragraphs. If you agree that the problem of undeniable mortality is one that *would* occupy your mind to an unhealthy degree -- regardless of whether you would literally exist in the extreme state -- then you already agree with the point and are merely adding a quibbling footnote, which to be sure would be reasonable -- if this were a scholarly debate and not Internet ephemera.
I think it would occupy my mind to a healthy degree. Insofar as it occupies one's mind, one has two options that wouldn't be available if one ignored it. First, one can try to avert whatever it is that one expects one is going to die by. Second, one can make decisions that take into account the fact that one has limited time left.
I’m watching my dad die right now. And honestly there seems to be an incredible freedom realizing that death is nothing to fear because it’s going to happen to me no matter what. All I can choose is how I’ll spend the little precious time I have. And cowering in fear just seems like a waste because all it does is ruin what’s left.
I don't think it's about any delusion. For me, the problem just doesn't register emotionally, for some reason. Usually. I had periods where it did, and it sucked.
Yah, that's called denial. It gets a bad rap, but denial is a powerful and useful psychological tool: it allows us to focus on the problems of the moment without being overwhelmed by the (perhaps much bigger, perhaps even insoluble) problems of the future. Another much less pejorative word for it is "focus." Psychological focus is a form of healthy denial. Unhealthy denial is what we call obsession or blindness et cetera.
I'm just pointing out that there's far more to our functioning, even mental functioning, than our conscious beliefs, and for that reason we have ways of essentially shunting the conscious beliefs to one side -- through denial or functional delusion, among other things -- when they get in the way of getting stuff done. So when Gardner says "this may be wrong but it's still very useful" that isn't self-contradictory nonsense. It may be correct. It may also be wrong, of course. But it isn't *obviously* wrong.
Rationality is about doing and believing what one has reason to do and believe. I take it then that no one really opposes rationality, but rather there exist disputes as to what counts as a reason to do or believe something. Members of one class of potential reasons have, for unfortunate historical reasons, been lumped together under the banner of "rational" reasons. Then, the debate about rationality is whether there exist reasons beyond those which commonly bear the title of "rational" reasons.
I think by saying "rationality is doing...what you have reason to do", you're passing the buck ( https://www.lesswrong.com/posts/rw3oKLjG85BdKNXS2/passing-the-recursive-buck ), in the sense of defining rationality by means of the almost-identical word "reason"
Not in any way which fails to make progress in the current discussion, though. By framing rationality as that pattern of behaviour/belief which conforms to the existing reasons for action, we can understand what debates about the importance of rationality consist in, namely, which facts constitute reasons for action/belief and which don't (recall the above, where I maintain that we have sequestered a subset of candidate reasons and labelled them "rational" reasons).
To worry about buck passing is to confuse the question of what rationality is (sensitivity to reasons for action) with the question of what the rational act is in a given case (whatever the weight of our reasons points towards). Debates about "rationality" as such and what counts as rational pertain to the first question. Here we have discussions about whether desires constitute reasons, whether moral facts (if such there be) constitute reasons, etc. One can debate these questions without any view on what the actual reasons are. And it is these questions we are debating when Pinker and Gardner are hashing things out.
Personally, I'm not sure we need both words.
Niko Kolodny wrote a paper on whether or not we have reasons to be rational (which he presented as his job talk that got him hired at Berkeley, if I recall correctly), concluding that the answer is no: http://www.mapageweb.umontreal.ca/laurier/textes/Phi-6330-H12/Kolodny-Why-rationality-05.pdf
However, I'm pretty sure that the summary of his thought is that if you do everything according to the reasons you have, then you will ipso facto be rational. The point is just that the fact that it's rational does not itself give an additional reason to do it - it's just a summary of the reasons we already have.
Even though I'm an academic philosopher, and this is very close to the kind of stuff I work on, I find this sort of thing very difficult going, so I can't guarantee I'm reporting it correctly.
Well, after the "reason" part you still have to actually *do* the thing, which requires willpower. Could rationality have something to do with willpower? I mean, they're surely not the same thing, but I think they might have something to do with each other (rationalists are more likely to go "I broke up with him because it was logical to do so" instead of "I knew it was the right decision but couldn't bring myself to do it").
Rationality recognizes the utility of personal traits/attributes, but doesn't require them.
It's the logical thing in some global context, but it's also logical that, locally, it doesn't apply to me, because my emotional attachment, fear of experiencing loss and loneliness, etc., mean that first I need to logically strengthen my resolve (and my ability to do emotionally hard things, ask for help, find strategies that can somehow work around this problem, etc.)
Systematic winning is "applied intelligence" (if intelligence is building correct models)
The only addendum needed is to notice that winning is subjective. Money and happiness are very useful, but maybe someone values having less happiness but also fewer migraines.
The problem is often a lot of unstated assumptions which are different between the sides of the argument. E.g., a while back I got into an argument, and about a day afterwards I decided that what it was really about was how one should discount future expectations. That was never mentioned during the argument, but it led to totally different choices of action. If you strongly discount future expectations, then "live for the moment" is the rational thing to do.
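(To make that concrete with a toy sketch, made-up numbers and all: the very same two options rank differently depending on the discount factor you assume.)

```python
# Toy sketch, not from the original comment: how the discount factor flips the choice.
def present_value(payoffs, discount):
    # Sum of payoff_t * discount**t over time steps t = 0, 1, 2, ...
    return sum(p * discount ** t for t, p in enumerate(payoffs))

live_for_the_moment = [10] + [0] * 20   # big payoff now, nothing later (made-up numbers)
invest_in_future    = [3] + [2] * 20    # smaller payoff now, steady payoffs later

for discount in (0.5, 0.95):
    a = present_value(live_for_the_moment, discount)
    b = present_value(invest_in_future, discount)
    winner = "live for the moment" if a > b else "invest in the future"
    print(f"discount {discount}: {winner} ({a:.1f} vs {b:.1f})")
```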
At various points over the past couple of years I've wondered how different the musical Rent would be if it were about the covid pandemic instead of the AIDS pandemic (or tuberculosis, as in the original opera, La Boheme). I came to the conclusion that the central line would need to be almost completely reversed: "There's only us there's only this / forget regret, or life is yours to miss / no other day no other way / no day but today" - probably useful advice when there's a slow-spreading pandemic that cuts you down to a few years of life expectancy if you get it, but probably bad advice when there's a fast-spreading pandemic where the regret and thinking about tomorrow can save you during the few tough weeks.
I don’t really get this though because your comparison is of the already HIV positive with those who don’t have COVID, right? (I’ve never seen Rent.) It seems like in both scenarios “no day but today” is terrible advice for anyone not yet positive.
Edit to add: I guess I’m asking to hear more about your reinterpretation of Rent! Why doesn’t it have a lot of characters dying of Covid or with Long Covid or something who sing No Day But Today?
Being HIV positive is a part of someone's lifelong identity, while being covid positive is just a fact you (usually) just have to deal with for a week. If you're HIV positive in the early 1990s (the characters have AZT, but they don't yet know that this means they now have 50 years of life ahead of them), the important part is not to dwell on expected death a couple years from now, but to find love and friendship and fun where you can in this brief time. But if you're covid positive in 2021, the important thing is to stay home for a few days now so that you can have fun with your friends next week, rather than having them all home sick in bed once you've recovered.
I think "No Day But Today" doesn't seem like it captures the experience of dying of covid - you're not temporarily well looking for some fun before you inevitably become bedridden, but you're either already bedridden, or pretty confident you'll get better. If someone is dying of covid, they can maybe sing "Goodbye Love", and you can maybe sing "Your Eyes" to them, just as when Mimi appears to be dying of AIDS. But you're definitely not going to sing "Out Tonight" or "Light My Candle", because you're on a hospital bed with a ventilator, not out looking for fun.
Long covid may be a more apt comparison, but I suspect it ties in to a different set of philosophical questions about the nature of urban life. I don't think long covid gives people a sense of impending mortality, the way HIV did in the 80s and 90s, but it sounds more like it gives a sense of impairment combined with (probably?) a standard life expectancy. I bet there would be a totally different and interesting musical about that. (I haven't actually seen La Boheme, so I don't know the comparison about how those characters interpreted living with consumption/tuberculosis, while not knowing the germ theory.)
Rationality is whatever helps us improve the reliability of the conclusions of "slow thought."
I'm using "slow thought" in the Kahneman sense, contrasted with "fast thought." Rationality can help "fast thought" only indirectly. Each kind of thought is useful in different contexts.
"Slow thought" goes by reasoning, and reasoning from incorrect assumptions or using incorrect tools is easy to mislead. Even in the absence of bad actors, incorrect thinking leads to stupidity. With bad actors, careless reasoning is easy to exploit (looking at you, QAnon). Rationality is trying to remove errors in "slow thought."
I think the Pinker/Gardner debate is talking past each other because they don't seem to be directly addressing the difference between fast and slow thought, and each has picked one of those to champion - which is silly, because they're tools for very different purposes.
Definitely agree with the talking past each other point.
I think rationality is also about knowing when to use slow thinking rather than fast thinking (or intuition, in Scott’s phrase). A good example of this is using the EMH to guide your investments rather than using an investment strategy which is vulnerable to various human biases. Actually, I wish more realistic cases of rationality were used as examples rather than slightly esoteric ones like Newcomb’s paradox, which nobody will ever encounter in real life and which ends up in various debates about practicality.
Careful here. Intuition can be either slow or fast thought. It's not-quite orthogonal. Intuition is what you use when either you don't have time to think things through, or you don't have the data to do so. It's not the same as "that thing's not worth thinking through thoroughly" and it's not the same as habit. For an example of intuition as slow thought, look up the discovery of the benzene ring structure in chemistry https://en.wikipedia.org/wiki/August_Kekul%C3%A9
There are various mechanisms for thought, and many of them can be either slow or fast thought, though logic, calculation, etc. seem to be entirely slow thought.
A little-appreciated facet of Kahneman's ideas is that he pointed out a constant back-and-forth between Systems 1 and 2, e.g. he pointed out that the "instinct" of the expert who can reliably smell the truth or not (in his field), clearly a form of System 1 thinking, is the *result of* years to decades of training and painful System 2 thought, which at some point results in some kind of strange emergent phenomenon where a trained System 1 can on occasion outperform System 2, even in the expert.
And I guess going the other way, peeling back the murk surrounding System 1 so that you can trace or mimic its machinations by System 2, is what psychology and psychiatry are all about.
Hmm So figuring out some problem in your sleep and getting the answer during the morning shower is part of system 1? (fast thinking) That doesn't smell right to me, but what do I know. :^)
It is, actually. System 1 isn't *only* characterized by speed, it's also not especially conscious, and not especially effortful. It's what goes on outside the spotlight of conscious awareness. So when Kekule went to sleep and dreamed of a snake biting its tail (they say) and awoke having solved the structure of benzene, that was System 1 at work. Had he sat down and worked it out logically and consciously, that would've been System 2.
Huh, so system 1 is all the stuff underneath consciousness. Well then I'm going to agree with the evil Gardner*: rational system 2 is what we do, but it's not as important as system 1. AFAIK system 1 is where my 'smarts' are. Though clearly trained by system 2. (Years of reading physics books and doing problems.)
*Yes Howard Gardner... Let me just say that when I hear Gardener, I have nothing but positive thoughts. Martin Gardener is a hero to me, and the next gardener I think of is Samwise Gamgee... (Well I started reading "The Fellowship of the Ring" again.)
I think it's a bit more complicated than that, and I am sure I am not doing it justice, but roughly -- yeah. One of the points Kahneman makes is that System 1 is what keeps us alive and chugging along, most of the time. We don't have to work out ballistic physics to duck when something is thrown at our heads, or to slam on the brakes when all the taillights turn abruptly red in front of us. We trundle through life making a million judgments and decisions a day, effortlessly and rapidly, and almost always correctly. It's only at rare intervals, typically when confronted with some new and strange problem, that we fire up System 2 and start burning serious glucose.
I think what he and his coworkers spend their time looking at is the dividing line: under what circumstances do we recognize the need for effortful, conscious, careful thought (System 2), and when do we understand that the heuristical seat-of-the-pants System 1 is good enough? Clearly there are cases where we screw it up, we rely on "intuition" when we should be thinking things out, and still other cases where we consciously reason ourselves into absurdity or even horror, which our "intuitive common sense" would've helped us avoid. The general question of how we decide which system to use, and how and why we make mistakes in that decision, is fascinating and complex.
I'm just adding that they point out that the two systems are not rigidly separated, and certain decision-making processes and patterns can get transferred back and forth. Skilled workers in any field build a System 1 that lets them get stuff done correctly much faster and with less effortful thought than a newbie (consider the analogy of how much less energy efficient a newbie is than a skilled athlete at, say, swimming or skiing). But we also spend a lot of time and effort using System 2 to try to figure out what the hell System 1 is doing, when it seems to be leading us wrong. ("Why do I always date the wrong kind of guy?")
I feel like (intuit) system 1 is using the product of extreme chunking which has been built up by system 2 over time.
Hmm, I make the analogy that thinking slow = rational thought. But I agree that not recognizing the fast/slow distinction leads to some needless disagreements.
OK you can call them different names, slow is rational, conscious thought, fast is everything going on underneath... it doesn't have to be fast, so I agree fast can be a confusing descriptor.
System 2 builds models. System 1 interprets them. This is why a chess master can walk by a chess board and, off the cuff, say, "mate in 4," using only system 1.
Reminds me of the saying, "Neurotics build castles in the sky. Psychotics live in them. Psychiatrists collect the rent."
Hmm, I haven't read much Pinker or Gardner. Given that this article doesn't really define what it means to be rationalist, I'm going to fall back on my intuitions.
No, no, I joke! But I do have a couple of comments:
Intuition has multiple meanings or nuances; Scott is only addressing one of them: "Intuition is really a reaction to a complex set of observations, and the observer isn't aware of how that complex set leads them to the conclusion they reached." But there is another meaning of intuition, which is "the ability to tap into knowledge which is not available to the intellect," such as through prayer or meditation, for the sufficiently attained or lucky. Now, it may be possible to train an AI to do that, or to interrogate a bunch of world-class meditators about how they did it, but it seems unlikely in the near to medium term future, at best.
Does Pinker address the Spock as strawman (strawVulcan?) rationality? I mean, Gardner seems to be making the "what does your rationality have to say about emotions, eh?" argument.
And I think that's a valid argument, in that most self-described rationalists I deal with (mostly libertarians, unfortunately) seem to have a really hard time dealing with emotions, and in particular to recognizing when they are buying into emotionalist appeals.
Lastly, rationality seems to be about intellect. And as Galef points out, emotions are often the source of our goals and desires. I mean, rationality can tell me whether one banana is better than another banana, but it can't tell me whether I like bananas more than oranges. Sure, it might be able to tell me which one is better for me, but it can't tell me which one I like more.
I think Yudkowsky's "systematized winning" has some significant overlap with your final idea if you focus more on the "systematized" part.
Love the post, but there's one aspect I think Scott might have missed... the culture of it.
I think when Gardner says he doesn't like rationalism, he's at least partially saying, "I don't like this culture of people who are into math and board games, like or at least don't dislike polyamory, celebrate weird holidays like solstice but not in a way I am familiar with, are mostly white, too often male... etc." He doesn't see rationalism as systematized winning, but as this culture.
I don't think Gardner knows about "the rationalist community". He's responding only to Pinker's book.
Yes, but there is still a broad popular conception of a "rational person". The kind of person that is good at math, that likes arguing, that favors "right brain" instead of "left brain". Or even just Ben Shapiro saying "facts don't care about your feelings". The point is that there is a common, intuitive idea of a rational person, and that draws a visceral reaction and then people argue based on such reaction.
It sounds like you're saying, "rationalism looks suspiciously like a flag to a lot of people" - is that fair?
Appreciate you steel-manning my argument :)
I think you mean left brain.
+1 to this idea. Like most political arguments, I think a lot of arguments for or against "rationality" are motivated primarily by tribal affiliations, and are only superficially about discovering truth or whatever.
The type of humor in this essay is I suspect one of the things people miss about older Scott posts. Been a while since it was this concentrated.
+1
I wonder if you're overthinking this a bit. I agree that
> Everybody follows some combination strategy of mostly depending on heuristics, but [sometimes] using explicit computation
In my view, describing oneself as a "rationalist" or "non-rationalist" is just a way to say which side it's better to err on, in close cases. (Compare "trusting" vs "distrustful" — everyone agrees that you should trust *some* claims. But a generally trusting person errs on the side of trusting more claims).
If we wanted to be more jargon-y, we could rephrase the above by saying: "everyone agrees to sometimes use heuristics; a rationalist uses the meta-heuristic of using explicit calculation in cases where the best strategy is unclear whereas a non-rationalist uses the meta-heuristic of relying on heuristics in those cases"
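(If you'll forgive a toy rendering of that framing in code, purely as a sketch rather than anyone's actual decision procedure:)

```python
# Toy sketch of the framing above (not anyone's actual decision procedure):
# everyone uses both heuristics and explicit calculation; the label mostly
# describes which way you err when it's unclear which tool fits.
def decide(problem, temperament, clearly_fast, clearly_slow, heuristic, calculate):
    if clearly_fast(problem):        # e.g. ducking a thrown object
        return heuristic(problem)
    if clearly_slow(problem):        # e.g. choosing a mortgage
        return calculate(problem)
    # The contested middle ground is where "rationalist" vs "non-rationalist" bites:
    return calculate(problem) if temperament == "rationalist" else heuristic(problem)
```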
But 'overthinking' is exactly what makes rationalism valuable! Much of the time you're just stating the obvious (or at least, the already-known) in a needlessly verbose show-your-working way, but occasionally showing that working produces new insights. It makes sense for Scott to show his working when defending/investigating the working-showing mentality :P
That said, I think you've pretty much nailed the TL;DR.
I'm not sure if this is actually relevant, but your final paragraph reminds me of an argument I've had several times about the "pie rule". ( https://en.wikipedia.org/wiki/Pie_rule )
There's some game that's unfair, and someone suggests making it fair by allowing one player to assign a handicap to one side and then the other player to choose which side to play, given that handicap. This incentivizes the first player to pick a handicap that makes the game as fair as possible.
The person making this suggestion often argues that if you are good at playing the game, then you should automatically also be good at picking a handicap that would be fair. And thus, the pie rule is meta-fair in the sense that the winner of the-game-with-pie-rule should always be the same as the winner of the-game-starting-from-a-hypothetical-fair-position.
I disagree.
I think the intuition for their claim is something like: One possible way of picking winning moves is to consider every position you could move to, estimate your degree of advantage in each one, and then pick the move that leads to the largest advantage. And if you can determine the degree of advantage of an arbitrary game-position, then you can also pick a fair handicap just by choosing the advantage that is closest to zero, instead of the largest advantage.
But that is not the ONLY possible way of picking winning moves. You could pick winning moves even if you can only RANK positions, and cannot score them on any absolute scale. You could even pick moves by applying some heuristic to the moves themselves, rather than to the resulting game-positions.
If you just have a black box that outputs the best move, you can't use that black box to pick a fair handicap.
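(A rough sketch of that distinction in code, with hypothetical interfaces just to make the point concrete:)

```python
# Hypothetical interfaces, only to make the distinction concrete.
def best_move_via_scores(position, legal_moves, apply_move, score):
    # One way to play well: score every successor position and pick the maximum.
    return max(legal_moves(position), key=lambda m: score(apply_move(position, m)))

def fairest_handicap(candidate_handicaps, score):
    # The same scorer can also pick the handicap whose advantage is closest to zero.
    return min(candidate_handicaps, key=lambda h: abs(score(h)))

def fairest_handicap_from_move_oracle(candidate_handicaps, best_move):
    # A black box that only returns the best move gives you nothing to minimize:
    # it never says *how large* anyone's advantage is.
    raise NotImplementedError("a move oracle provides no advantage estimate")
```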
This sounds a little bit like the idea that "skill at X" and "skill at studying X" are not (necessarily) the same.
X and studying-X are fundamentally related in a way where if you had a complete understanding of the nature of X that allowed you to predict exactly what would happen in any given situation, then that should make you good both at X and at studying-X.
But this doesn't rule out the possibility that there could be lesser strategies that make you good at one but not at the other. A black box that makes money doesn't (necessarily) give you a theory of how money-making works. A pretty-good-but-not-perfect theory of money-making won't necessarily let you outperform the black box.
But even if, because of the uncertainties you mention, one cannot assign a fair handicap, it seems like this pie division method will at least provide a fairer game, probably immediately, but if not at least after some iterations.
There are situations where the pie rule is useful, but it merely gives someone an *incentive* to make the game fair; it doesn't bestow the *capability* to make the game fair. If you add the pie rule to a game, you haven't solved the fairness problem; you have merely delegated that task to someone else.
In most cases, if you are designing a game, you are better able to pick a fair starting position than most of your players will be. If you can't pick one, then you probably can't expect your players to be able to pick one, either.
The pie rule is potentially useful for players who understand the game better than the game's designer, or if you have procedurally-generated content such that the player is handicapping a unique scenario that the designer never saw.
Spot on. I play a lot of games, some with asymmetric starting factions, some of which seem imbalanced - weaker than others. I consider myself a game designer and yet I recognise it's a very hard task to come up with a fair house rule that makes the weaker faction stronger while not disturbing many other balances that exist within the game.
Why is it hard? Just introduce the element of chance, and shazam! the weaker guys have just as good a chance as the stronger guys. But why would you want to do this? Isn't the *purpose* of games to prove out which people are better at Skills A and B, whatever A and B might be? We have basketball games to see which set of five guys are better at throwing a ball through a hoop, as well as passing it to each other, and strategy encompassing the same. Why would we want to introduce a rule that arbitrarily makes it easier for people with less skill in those areas to win?
On the other hand, I can see why we might change the rules so that Skill C becomes part of the winning mix. We could prohibit any speech during a basketball game, and now the skill of nonverbal communication starts to become essential for winning. Is that the kind of thing you mean?
We're not talking about games like basketball, so much as games like Fox and Hounds https://en.wikipedia.org/wiki/Fox_games or Tafl, traditionally, or modern games like Gaia Project or Cosmic Encounter. What's the purpose of games? Whole books could be written on that. Having fun with friends? Mental exercise? And yes, one part of it is strategic challenge, "can I beat you?" (Or "beat the game" for cooperative or solo games.) But there's many aspects to that, especially with a large number of asymmetric factions or board states such that it's impractical to prepare for all combinations in advance: it rewards being able to quickly analyse a new strategic space, and even more so if new hidden or random information is added to the game partway through. There's delight in exploring interesting rules interactions ("spotting a combo" that nobody else has seen yet), finding a tactical move to satisfy the constraints of the actions of several other players, and so on. All of which may mean that if the factions are significantly asymmetric, it's very hard to predict the consequences of a proposed house rule or balancing tweak.
The point of the pie rule is not only to make the game fair, it's to forestall a certain kind of dispute. In the ur-example, maybe the pie-cutter has poor motor control, and the two halves end up being different. If the pie-cutter also suggested the rule "I'll cut, you pick", they can't complain about the process, because it's all on them. (I mean, maybe they didn't know they had such poor motor control, but that's not the opponent's fault.)
Let me just state, as an aside, that the meta-rational view here is that you need to understand why you're playing the game.
MOST games (and this includes most of the games of life) are played to be played. If you prioritize winning over everything else, you'll soon find no-one else wants to play with you.
So obsessing over details like above (and the comments below) is missing the point! You tweak the rules, by hook or by crook, by principled means or by random experimentation, to ensure that most people continue to consider the game worth playing. If you miss that point, you miss everything.
(Of course this IS the point that is missed by many "1st order" rationalists, from economists to military strategists! Step zero in rationality is to get the goal correct; and if you believe that the goal is "win the game" rather than "keep the game going forever", then you're being very dumb in a very complicated way.
This point has been made by a variety of rather different thinkers, from Jordan Peterson to David Chapman. Yet it still seems to be a kind of esotericum that should have, but has not, made its way into the "Rationality Community".)
> You tweak the rules, by hook or by crook, by principled means or by random experimentation, to ensure that most people continue to consider the game worth playing.
That's exactly what we're doing. It just so happens that one of the heuristics that people commonly use to predict which rules will be worth playing is "fair games tend to be better than unfair ones".
(You really think that "obsessing over details" is going to make someone a WORSE game designer? Do you avoid painters or engineers who "obsess over details" of their craft?)
But you see, that's exactly where we differ. You think "fairness" is the most important thing; it's not! And so you obsess over the wrong details; you obsess over whether changes make the game more fair (whatever that means), but the point is not an abstract one (is this more fair in the eyes of Ahura Mazda), it is an empirical one, namely whether enough of the parties involved in the game want to continue playing or would prefer to bow out.
For example: a person sometimes fights with his wife. Maybe they are basically a great match, but she has one particular small habit that leads to fights, like she frequently mis-remembers what happened in some situation in the past.
OK, we can treat this "rationally", go to the mat over these fights, drag up evidence and testimony, and generate a divorce within a year.
Or we can treat it "fairly", keeping score and allowing her to win exactly half the arguments (while each side secretly seethes), and generating a divorce within three years.
OR both parties can realize that what matters is keeping the game of marriage going, not winning the game of marriage, so he accepts that she is just that way and lets her say whatever mistaken memory she has, she accepts that his dress is somewhat sloppy but whatever, it's just not that important, and they have a happy life and a 50th yr wedding anniversary.
The same holds for business, for international negotiation, even for high stakes sports (where the rules are constantly tweaked, not for "fairness", but to ensure that everyone involved finds it worth continuing to play).
I feel like I'm saying "this motor would be 20% more efficient if you used this material instead of that material" and you're saying "you fool, why are you discussing efficiency instead of asking whether you need an engine at all!"
There certainly exist situations where you don't need the engine at all. But there are also SOME situations where you DO want an engine, and in those situations it probably matters how efficient it is, so learning how to make engines more efficient is still a useful skill. It's a reasonable thing to study and discuss.
Also, I am (mostly) using the word "fair" as a technical term of art that means "starting from this position, if you don't know which person is playing which side of the board, you would predict equal payouts for all sides on average."
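(If it helps, here's a Monte Carlo sketch of that technical sense of "fair", assuming you have some way to simulate the game from a given starting position; the simulator and payout format are my own placeholders:)

```python
import random

def estimated_unfairness(start_position, play_game, n_trials=1000):
    # play_game(position, players) is an assumed simulator returning a dict of
    # payouts per player name. Players are assigned to sides at random, so
    # "you don't know which person is playing which side of the board."
    totals = {"A": 0.0, "B": 0.0}
    for _ in range(n_trials):
        players = ["A", "B"]
        random.shuffle(players)
        payouts = play_game(start_position, players)
        for name, value in payouts.items():
            totals[name] += value
    # A result near zero means the position is fair in the sense described above.
    return abs(totals["A"] - totals["B"]) / n_trials
```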
Yes, exactly, that's precisely my point (as it is The Last Psychiatrist's point).
And yes, there are times when it is appropriate to optimize the engine. (And my day job involved and involves large amounts of this, both as physics and as optimizing computation in various ways.) But there are also times when it is appropriate to ask the meta-question...
And that's what this post was about. There are (at least) three levels
- Tribal idiot (which is where Gardner mostly seems to be)
- Rationalist (useful, very powerful, but ultimately short-sighted)
- Metarationalist (use the tools of rationality but think at a more abstract level about the goals to which those tools should be applied)
And yes, who's to say that my goals are more meta-rational than your goals? Well, that gets us into a different discussion, one that's too far afield to pursue here.
I am a few days late, but I really wanted to give you a high five for "Of course this IS the point that is missed by many "1st order" rationalists, from economists to military strategists! Step zero in rationality is to get the goal correct; and if you believe that the goal is "win the game" rather than "keep the game going forever", then you're being very dumb in a very complicated way."
The question of goal setting, and how often that goal is simply "keep the game going," is such a huge aspect to human existence, possibly life's existence in general, but totally overlooked by most economists using game theory. Especially considering how often 'Step Zero - Choose Goal' is in fact Step N-zillion in some earlier processes. Every game is a subgame within a veritable tessellation of games. Which is why normal people are vastly better at Ultimatum Games than economists, for example: the logical "winning" move from within the current game is highly illogical within the context of the ur-game. One sees it in politics as well, assuming one team wants to win over the other once and for all when really the correct goal is to keep the game going.
Isn't he explicitly talking about Tooby & Cosmides' ecological rationality? This isn't "an argument about heuristics. It’s the old argument from cultural evolution: tradition is the repository of what worked for past generations."
Who? Pinker or Gardner?
Pinker. I have not read the whole book but I did ask him via email if he would be mentioning Tooby & Cosmides' ecological rationality and he said yes. As an EP I'd be expecting him to be supporting it. https://www.researchgate.net/publication/231982306_Evolutionary_Psychology_Ecological_Rationality_and_the_Unification_of_the_Behavioral_Sciences
Also see their hypothesis of emotions as a superordinate programme (if you don't know it already). https://www.cep.ucsb.edu/emotion.html
and Hoffman's "fitness beats truth" hypothesis
https://pubmed.ncbi.nlm.nih.gov/33231784/
I nominate that Hoffman book to be reviewed by Scott.
The individual action of engaging in rationality clears the dust out of the intuition pipes. Then the next time the angel flies by to deliver an inspiration, they have a direct shot.
This is flippant but in terms of discussing the symbiosis of rationality/nonrationality, it’s a necessary point. lots more to say but busy now - this was a good read.
Most explicit anti-rationalism I encounter boils down to "think locally & politically, because abstract first-principles thinking sometimes leads to socially undesirable conclusions." Of course that's mostly a complaint about how some perceived rationalists behave (antirational "read the room" vs rational "just asking questions") rather than a principled objection to rationalism in the abstract, but then that's exactly what a rationalist would say...
I agree that Pinker is being knowingly glib by equating the mere act of reasoning with the explicit practice of rationality. I haven't read his book (yet), but Gardner's objection brings to my mind Elizabeth and Zvi's Simulacra levels.
As in, committed, self-identified rationalists (and chauvinistic monists, for that matter) appear to make something of a cultural fetish of the object level. Instead of also applying rationality to the higher simulacra levels - how people think, and how people wish to influence the ways others think, and so on - they make it their mission to oppose and reduce the chaos produced by these considerations. Julia Galef's book is pretty much about that.
Gardner seems to be against this project of narrowing down discourse toward the object level, deeming it both impossible and potentially harmful in the process. At the very least, the project shouldn't cloak itself with the purity of mere reasoning. The moment you have a mission, you're playing the game.
(The fact that all the simulacra levels do eventually add up on the object level in the end is both true and mostly irrelevant - about as useful as pointing out all the hormones and gene expression to a man in the throes of love.)
Do they say they're not, uh, "playing the game"?
#1 Don't knock drug-fueled orgies until you've tried them. The risk/reward ratio is probably favorable compared to many other edgy-but-socially-acceptable activities like general aviation.
#2 When a guy jumps out of the bushes (or in my case, the front seat of a cab in Panama City) and sticks a gun in your face, you have a remarkable amount of time to perform rational analysis. Adrenaline is a magic time dilator. In my case, after what seemed like an eternity of internal debate (but was actually half a second) in which I contemplated a half-dozen courses of action, the conclusion was "jump out of the moving cab". The comedown was harsh though. Adrenaline is one hell of a drug.
I've never been a big fan of the term "rationality" for a lot of the reasons described in this post -- it seems to carry the connotation (warranted or not) of opposing intuition and heuristics even when those may be valuable. I appreciate the bit about "truth seeking" and "study of truth seeking", which I find to be much more clarifying language so I'll stick with that here rather than trying to say anything about rationality.
I tend to think of truth-seeking as a fundamentally very applied endeavor, and also one that can be taught without necessarily being systematized. Not just a purely intuitive knack, but not necessarily best served by a formal discipline studying truth-seeking either.
As an example, I think one good way to learn effective truth seeking is to study a specific scientific field. A major part of a scientific education is learning how to seek more accurate models of the world. One learns tools, from formal mathematics to unwritten cultural practices, which help to separate truth from seductive but incorrect ideas.
Then, there are of course also people who study how science is carried out (eg, Kuhn & other philosophers of science). Tellingly, most practicing scientists pay relatively little attention to this, other than as a form of navel-gazing curiosity.
Rather than rocks vs geology, I think science vs study of science is a better analogy to truth-seeking vs study-of-truth-seeking. And as with science, I am skeptical that the study of truth-seeking has much to say that will actually improve the practice of truth-seeking, compared to engaging directly with the practice of truth-seeking. Though perhaps 2000 years of follow-up research will prove me wrong.
On the contrary, we spent a thousand years between Aristotle and Galileo, roughly, studying truth seeking per se, and leaving almost all actual progress in acquisition of the truth to the Islamic world. It was only when we abandoned the study of truth seeking and decided to just seek the truth any old way we could -- just start measuring stuff, and start trying to formulate practical rules of thumb and dumb crude laws that would predict what would happen next -- that progress took off in the 1600s. Since that day the study of truth seeking has trundled along in the wake of actual truth acquisition (at least about the natural world), debating post facto how it was all done. All very interesting from the metaphysical point of view, but no actual physicist gives two bits for it. Not a one of us is trained in philosophical methods of inquiry, or cares to be, or would not find it tedious and useless.
Agree with this; human rhetoric is a very strong force, and a good practitioner of it can convince you of anything absent empirical evidence. I think this is actually one of the best reasons for being a rationalist: don’t let yourself be fooled by rhetoric, insist on evidence. And if evidence cannot be produced at the moment for practical reasons (say, like string theory), then keep an open mind but don’t stake anything on these ideas.
Being a rationalist only protects you against some forms of that. The kicker is the axioms and rules of inference. Look up Pascal arguing that one should believe in (the catholic) god. There are LOTS of other examples. When you say "insist on evidence" you're arguing for empiricism rather than rationality. They work well together, but they aren't the same thing.
I think a rationalist should be an empiricist for the reasons I gave. Any subject that relies mainly on arguments is going to be dominated by the people with the best rhetorical skills, not the people with the best understanding; that’s a fundamental rule of human society. Of course extrapolation from evidence is usually fine, but you still need to trust it less than direct evidence. There are hundreds of examples of areas where the accepted view derived from “logical analysis” was completely wrong until people actually did the experiment. The replication crisis in social science is one of the more recent ones. If being rational means anything, it means being careful about not being fooled by BS, because humans are very good at BS.
To be honest, what I've seen of the soi-disant rationalist community is that it has more in common with the medieval scholasticism I am describing than post-Enlightenment empiricism. The empiricist *specifically distrusts* purely logical argument.
Feynman was especially clear and cogent on this point: he said (paraphrasing) that the way you do good science is that when you come up with a hypothesis or chain of logic you immediately distrust it, the more so if it's your own or you find it appealing and convincing, and then you set about diligently trying to disprove it every way you can, through direct measurement. You share the idea with your friends and, even more importantly, your enemies, specifically so that they can try to disprove it in ways to which you might be blind, out of ignorance or wishful thinking. Only after a lot of genuine effort has gone into trying to disprove your hypothesis and it has all failed do you start to (very cautiously) consider that it might be true -- at least until the next measurement comes along.
It's not *exactly* throwing human reasoning power out the window, but it's freighting it with such a load of skepticism that even the most "obvious" points in a chain of logic require experimental test. So very different from assigning the probability of a conclusion based on the persuasiveness of the logical argument leading to it.
It's also of course not a way of thinking one can afford to indulge in all aspects of life. Necessarily in many areas we *do* judge the probability of conclusions being correct by only the persuasiveness of the argument, because we just can't test everything for ourselves (and some stuff isn't amenable to experimental test anyway). But while the empiricist accepts this necessity, he does not consider it a virtue.
Hmm... The Islamic world a hotbed of truth acquisition in the Middle Ages? Well, sort of, and up till approx. CE 1200, maybe. But the deference to "the historical importance of Islam for science" today is somewhat overdone. The argument tends to forget the importance of what went on in Byzantium: 1000 years of practical science (among other things) after the fall of the West-Roman empire. Some might even argue that the Renaissance was triggered by the fall of Byzantium in 1453, when scholars had to flee to Italy and other places to continue their work. By that time, the theologians had long since defeated the "philosophers" in the Islamic world of yesterday, i.e., Islam had stagnated from a "discover new science" point of view.
There's an even more important point which is that the argument kinda elides who exactly was doing this truth acquisition (or at least truth preservation).
As far as I can tell, for the most part it was being done by the remnants of prior classical culture, by people inheriting the mindset and culture of the previous hellenic overlords, not by those imbued with what we might call the "desert" culture.
If one wants to use this argument as a claim that "my tribe/religion/ethnicity/whatever is tres awesome", be very careful, because it's not at all clear that it actually proves what one wants it to prove... (Even apart from the general incoherence of imagining that you can somehow bask in the reflected glory of what people belonging to your claimed affiliation did 1000 years ago.)
Eh, the point was historical. An eventual agenda in this case is in your eyes, not mine.
The implicit reference is to the conflict between the philosophers and the theologians within medieval Islam. Although “who won” is still a matter of debate today, not least due to the prestige given to “science” everywhere in our time, a traditional opinion is that the (medieval) theologians defeated the (medieval) philosophers within Islam.
"Since that day"
What day? When in the 1600s? Who specifically was doing that abandoning?
30 October 1662. William of Manchester, who at the time was the Lord High Grand Inquisitor (Hidden) of the League of Empyrical Gentlemen. All else follows from his fateful discussion with friends over low-quality brown ale one miserably cold and drizzly September afternoon in a pub the name of which is still a League secret.
I can't tell the difference between "the study of truth seeking" and "philosophy". Is rationalism philosophy? If not, what's the difference?
Minor correction:
"Newcomb’s Paradox is a weird philosophical problem where (long story short) if you follow an irrational-seeming strategy you’ll consistently make $1 million, but if you follow what seem like rational rules you’ll consistently end up with nothing."
If you follow what seem like the rational rules, you'll consistently end up with just 1 thousand dollars.
Also, the transhumanists from the future will keep running simulations of you all the time, giving you 1 thousand dollars in each simulation, just to laugh at you. The guy who chooses 1 million will only be simulated once, to verify this hypothesis, but then it would be boring.
Would you rather be a rich guy who is dead, or an average guy who lives forever as a meme?
In general, simulation hypothesis seems to explain why people are so crazy. Crazy people are more fun to simulate.
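(To spell out the payoffs behind the correction above, here's a toy simulation; the 99% predictor accuracy is my own assumption, while the box amounts are the standard ones from the usual statement of the problem:)

```python
import random

def newcomb_payoff(strategy, predictor_accuracy=0.99):
    # Box A always holds $1,000; box B holds $1,000,000 iff the predictor
    # predicted that you would take only box B.
    predicted_one_box = (random.random() < predictor_accuracy) == (strategy == "one-box")
    box_b = 1_000_000 if predicted_one_box else 0
    return box_b if strategy == "one-box" else 1_000 + box_b

# Averaged over many runs, one-boxing nets roughly $990,000 and two-boxing
# roughly $11,000, i.e. the "rational-seeming" strategy consistently ends up
# with little more than the thousand dollars.
for strategy in ("one-box", "two-box"):
    average = sum(newcomb_payoff(strategy) for _ in range(100_000)) / 100_000
    print(strategy, round(average))
```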
Karl Popper's epistemology would be very useful here (it would always be useful, and it's almost never used!!)
In Popperian terms: effective truth seeking is basically just guessing plus error correction. Rationality is actively trying to correct errors in ideas, even very good ideas. Irrationality is assuming good ideas don't have errors (or that errors exist but can't be corrected). Antirationality is actively preventing error correction.
A fictional example: an isolated island community has a sacred spring and a religious belief that no other water source can be used for drinking water. This is a useful belief because water from the spring is clean and safe, and many other water sources on the island have dangerous bacteria. One day, the spring dries up. Rationalists on the island try other water sources; some get sick in the process but they make progress overall. Irrationalists try prayers and spells to make the spring come back. Antirationalists declare the rationalists to be heretics and murder them all.
I think people who are "against" rationalism (and who aren't antirationalists like Khomeini) tend to be in the "good ideas have errors but it's vain/hubristic to think we can improve them" camp. Often trying to improve on established ideas leads you off of a local maximum (only drink from the sacred spring). But being trapped at a local maximum is bad, even if the path to a better point is treacherous. And external circumstances can make good ideas stop working (the spring drying up) anyway.
Thank you for beating me to mentioning Popper. When I read essays like this I'm always looking for anything interesting that Popper didn't adequately cover, and I seldom find anything. For me, "Conjectures and Refutations" highlighted many interesting concepts beyond the title (which essentially does completely explain "rational" thinking). First is that I realized that the scientists I worked with didn't really know what "science" was. Which is interesting. (You can debate it, etc., but the point is it is more nebulous than the "scientific" community would have you believe.) The other thing I found interesting is that Popper is quite OK with tradition. Is it OK to do something because that's the way it's always been done? (Perhaps study "science" in the traditional way?) Sure is! That's a perfectly sensible heuristic, it turns out. The time to consider rejecting tradition is if you can create a hypothesis that contradicts tradition, that can be challenged, but then survives that challenge (and all subsequent ones). It's all quite messy, sure, but Popper is some of the clearest thinking on rational thinking I've yet seen. He was concerned with separating woo from sensible "science" (whatever that might be) and I think his ideas work. If that doesn't sound compelling then I apologize for my bad interpretation and encourage you to study and criticize his work directly.
Is rationality "trying correct errors" or is it "the process for more effective error correction"? Or am I being a pedant?
Error correction in the sense of being less wrong, or in the sense of winning more? Because they are unlikely to lead you in the same direction.
I feel like there's at least a semantic rhyme here with the idea in linguistics that, _by definition_, a native speaker of a given language has a kind of tacit "knowledge" of the rules of that language, even if they can't articulate those rules. Modern linguistics -- the kind that Pinker practices -- is the enterprise of first taking those rules and making them explicit, and then moving up one layer of abstraction higher, to try to understand why it is that certain kinds of rules exist and other kinds do not. And the flavor of psycholinguistics that I studied in college tries to ground those questions about the rules in actual brain structures.
As a side note, I actually think Pinker is pretty deeply wrong about linguistics, and in a way that challenges his own claims to being uber-rational. The Johns Hopkins linguistics department is the home of "optimality theory", which posits that the "rules" of a language are actually like a set of neural nets for recognizing and generating patterns -- or, more to the point, they're _like_ a set of computational neural nets, because they are _actual networks of human neurons_. Once you adopt this frame, you can see how a given situation could result in different "rules" for generation giving you conflicting answers, and then you think about how different networks can tamp down the activity of other networks. Hence the concept of "optimality theory". The actual language produced by a given mind is the optimized output of those interacting rule networks. And we get linguistic drift as new minds end up having different balances between rules, and ultimately creating new ones.
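(For anyone who hasn't run into optimality theory, the classic tableau logic can be cartooned roughly like this; the toy constraints are my own invention, and this is the algebraic version rather than the neural-network framing described above:)

```python
# Cartoon of classic Optimality Theory evaluation: candidate outputs compete
# against a ranked list of violable constraints, and the winner is the candidate
# whose violation counts are best when compared constraint-by-constraint in rank order.
def evaluate(candidates, ranked_constraints):
    def violations(candidate):
        return tuple(constraint(candidate) for constraint in ranked_constraints)
    return min(candidates, key=violations)  # tuple comparison = ranked evaluation

# Hypothetical toy constraints for an input like /bat+z/:
def no_voiced_after_voiceless(c):
    return c.count("tz")                       # penalize a voiced "z" right after "t"

def be_faithful_to_the_suffix(c):
    return 0 if c.endswith(("z", "s")) else 1  # penalize dropping the suffix entirely

print(evaluate(["batz", "bats", "bat"],
               [no_voiced_after_voiceless, be_faithful_to_the_suffix]))  # -> "bats"
```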
I got to sit in on graduate seminars with both Chomsky and Pinker in my time at JHU, and while they're both clearly brilliant, they seemed committed to a purely algebraic, functional model in which rules operate deterministically, like clockwork. This seems to fly in the face of what we know about how neurons work -- it seems, dare I say it, irrational.
Chomsky, Pinker, and the whole UG crowd premised decades of work on the "poverty of the stimulus" argument which is a perfect example of intuition over rationality. The argument goes "children learn language even though it sort of seems like they don't hear enough input sentences to figure out what the rules of language should be".
That has always bothered me. Like, what single shred of evidence did anyone ever collect to establish a default assumption about how many input sentences a child should need to learn a language? Founding an entire branch of linguistics based on nothing more than a gut feeling is certainly bold, I'll give that to Chomsky.